Energy and the Human Journey: Where We Have Been; Where We Can Go

By Wade Frazier

 

Version 1.2, published May 2015.  Version 1.0 published September 2014.

Note to Readers: This essay is more easily navigated with a browser other than Internet Explorer, such as Firefox.  This essay has internal links within itself and to other essays on my website, with external links largely to Wikipedia and scientific papers.  I have published this essay in other formats: .pdf format (10.7 megabytes) and .pdf format without visible links (the closest experience to reading a book), to honor different ways of digesting this essay, but this html version comprises the online textbook that I intended this essay to be.

Dedication

Acronyms Used in This Essay

Summary and Purpose

This Essay’s Tables and Timelines

Energy and the Industrialized World

The Toolset of Mainstream Science

The Orthodox Framework and its Limitations

Energy and Chemistry

Timelines of Energy, Geology, and Early Life

The Formation and Early Development of the Sun and Earth

Early Life on Earth

The Cryogenian Ice Age and the Rise of Complex Life

Speciation, Extinction, and Mass Extinctions

The Cambrian Explosion

Complex Life Colonizes Land

Making Coal, the Rise of Reptiles, and the Greatest Extinction Ever

The Reign of Dinosaurs

The Age of Mammals

Mid-Essay Reflection

The Path to Humanity

Tables of Key Events in the Human Journey

Humanity’s First Epochal Event(s?): Growing our Brains and Controlling Fire

Humanity’s Second Epochal Event: The Super-Predator Revolution

Humanity’s Third Epochal Event: The Domestication Revolution

Epochal Event 3.5 – The Rise of Europe

Humanity’s Fourth Epochal Event: The Industrial Revolution

Epochal Event 4.5 – The Rise of Oil and Electricity

The Postwar Boom, Peak Oil, and the Decline of Industrial Civilization

What Running out of Energy Looks Like

My Adventures and Those of My Fellow Travelers

Humanity’s Fifth Epochal Event: Free Energy and an Abundance-Based Political Economy

The Sixth Mass Extinction or the Fifth Epochal Event? 

What Has Not Worked So Far, and What Might

Footnotes

 

 

Dedication

This essay is dedicated to the memory of Mr. Professor and Brian, two great men whom it was an immense privilege to know and who spent their lives in a quest for healing this world.  I miss them.

 

Acronyms Used in This Essay

A number of acronyms in this essay are not commonly used and at least one is unique to my work.

They are:

BYA – Billion Years Ago

MYA – Million Years Ago

KYA – Thousand Years Ago

PPM – Parts Per Million

FE – Free Energy

GC – Global Controller

EROI – Energy Return on Investment

UP – The Universal People

LUCA – Last Universal Common Ancestor

ATP – Adenosine Triphosphate

GOE – Great Oxygenation Event

BIF – Banded Iron Formation

ROS – Reactive Oxygen Species

PETM – Paleocene-Eocene Thermal Maximum

 

Summary and Purpose

Chapter summary:

  • My background
  • Essay summary
  • My strategy for manifesting that energy event for humanity's and the planet's benefit.

I was born in 1958.  NASA recruited my father to work in Mission Control during the Space Race, and I was trained from childhood to be a scientist.  My first professional mentor invented as Nikola Tesla did, and among his many inventions was an engine hailed by a federal study as the world’s most promising alternative to the internal combustion engine.  In 1974, as that engine created a stir in the USA’s federal government, I began dreaming of changing the energy industry.  In that same year, I had my cultural and mystical awakenings.  During my second year of college, I had my first existential crisis, and a paranormal event changed my studies from science to business.  I still held my energy dreams, however, and in 1986, eight years after that first paranormal event, I had a second one that suddenly caused me to move up the coast from Los Angeles to Seattle, where I landed in the middle of what is arguably the greatest attempt yet made to bring alternative energy to the American marketplace.  The company sold the best heating system that has ever been on the world market, and it installed that system for free on customers’ homes by using the most ingenious marketing plan that I ever saw.  That effort was killed by the local electric industry, which saw our technology as a threat to its revenues and profits, and my wild ride began.  The owner of the Seattle business left the state to rebuild his effort.  I followed him to Boston and soon became his partner.  My partner's experiences in Seattle radicalized him.  I use "radical" in its original sense of "going to the root": radicals seek a fundamental understanding of events, so they aim for the root rather than hacking at branches, although my partner's radicalization was more economic than political.  He never saw the energy industry the same way again after his radicalization (also called "awakening") in Seattle, but more radicalization lay ahead of him.

The day after I arrived in Boston, we began to pursue what is today called free energy, or new energy: abundant and harmlessly produced energy generated with almost no operating cost.  Today's so-called free energy is usually generated by harnessing the zero-point field, but not always, and our original effort was not trying to harness it.  While we were in Boston, we attracted the interest of a legendary and shadowy group, which offered a sum in the millions for the rights to our fledgling technology.  I have called that group the Global Controllers, and others have different terms for them.  However, they are not the focus of my writings and efforts.  I regard them as a symptom of our collective malaise, not a cause.  Our fate is in our hands, not theirs.  Our efforts also caused great commotion within New England’s electric industry and attracted attempts by the local authorities to destroy our business.  They were probably trying to protect their economic turf and were not consciously acting on the Global Controllers’ behalf, which was probably also the case in Seattle.

In 1987, we moved our business to Ventura, California, where I had been raised, before the sledgehammer in Boston could fall on us.  We moved because I had connected us with technologies and talent that made our free energy ideas potentially feasible.  Our public awareness efforts became highly successful and we were building free energy prototypes.  In early 1988, our efforts were targeted by the local authorities, again at the behest of energy interests, both local and global.  My radicalization began with a surprise raid in which the authorities blatantly stole our technical materials, mere weeks after those same authorities had assured us that we were not doing anything illegal.  A few months later, my partner was offered about a billion dollars by that shadowy global group to cease our operations; the CIA delivered that offer.  Soon after my partner refused their offer, he was arrested with a million-dollar bail, and our nightmare began.  The turning point of my life came when I became the defense’s key witness and the prosecution made faces at me while I was on the witness stand, trying to intimidate me.  It helped inspire me to sacrifice my life in an attempt to free my partner.  Incredibly, my gesture worked, in the greatest miracle that I ever witnessed.  I helped free my partner, but my life had been ruined by the events of 1988, and in 1990 I left Ventura and never returned.  I had been radicalized ("awakened"), and I then spent the next several years seeking an understanding of what I had lived through and of why the world worked so starkly differently from how I had been taught that it did.  I began the study and writing that culminated in publishing my first website in 1996, which was also when I briefly rejoined my former partner after he was released from prison, where the courts had fraudulently placed him and where prison officials repeatedly put him in position to be murdered.  The Global Controllers then raised their game to new, sophisticated levels and I nearly went to prison.

As I discovered the hard way, contrary to my business school indoctrination, there is little that resembles a free market in the USA, particularly in its energy industry, and there has never been a truly free market, a real democracy, a free press, an objective history, a purely pursued scientific method, or any of the other imaginary constructs that our dominant institutions promote.  They may all be worthy ideals, but none has existed in the real world.  Regarding free markets in the energy industry, reality has effectively been inverted: the world’s greatest effort of organized suppression keeps alternative energy technology of any significance from reaching public awareness and use. 

Soon after I moved from Ventura, I met a former astronaut who had been hired by NASA with a Mars mission in mind and who was investigating the free energy field.  We eventually became colleagues and co-founded a non-profit organization intended to raise public awareness of new energy.  A few days after we began planning the organization’s first conference in 2004, the first speaker that we recruited for it was murdered, and my astronaut colleague immediately and understandably moved to South America, where he spent the rest of his life.  In the spring of 2013, I spent a few days with my former free energy partner and, like my astronaut colleague, he had also been run out of the USA, after mounting an effort around high-MPG carburetor technology.  The federal government attacked soon after a legendary figure in the oil industry contacted my partner, who had also attracted the attention of the sitting American president.  Every American president since Ronald Reagan knew my partner by name, but they proved to be rather low-ranking in the global power structure.

My astronaut colleague investigated the UFO phenomenon early in his adventures on the frontiers of science and nearly lost his life immediately after refusing an "offer" to perform classified UFO research for the American military.  It became evident that the UFO and free energy issues were conjoined.  A faction of the global elite demonstrated some of their exotic and sequestered technologies, including free energy and antigravity technologies, to a close fellow traveler.  My astronaut colleague was involved with the same free energy inventor that some around me were: a man who invented a solid-state free energy prototype that not only produced a million times the energy that went into it but also produced antigravity effects.  I eventually understood the larger context of our efforts and encountered numerous fellow travelers, who reported similar experiences: having their technologies seized or otherwise suppressed, being incarcerated and/or surviving murder attempts, and other outrages inflicted by global elites as they maintained their tyrannical grip over the world economy and, hence, humanity.  It was no conspiracy theory, but what my fellow travelers and I learned at great personal cost, a cost that was regularly fatal.

I continued to study and write and became my astronaut colleague’s biographer.  My former partner is the Indiana Jones of the free energy field, but I eventually realized that while it was awe-inspiring to witness his efforts, one man with a whip and fedora cannot save humanity from itself.  I eventually took a different path from both my partner and astronaut colleague, and one fruit of that direction is this essay.  Not only was the public largely indifferent to what we were attempting, but those attracted to our efforts usually either came for the spectacle or were opportunists who betrayed us at the first opportunity.  As we weathered attacks from the local, state, national, and global power structures, such treacherous opportunities abounded.  I witnessed dozens of attempts by my partner’s associates to steal his companies from him (1, 2, 3, 4), and my astronaut colleague was twice ejected from organizations that he founded, by the very people that he invited to help him.  During my radicalizing years with my partner, I learned that personal integrity is the world’s scarcest commodity, and it is the primary reason why humanity is in this predicament.  The antics of the global elites are of minor importance; the enemy is us.

I eventually realized that there were not enough heroes on Earth to get free energy over the hump of humanity’s inertia and organized suppression.  Soon after I completed my present website in 2002, one of R. Buckminster Fuller’s pupils called my writings “comprehensivist” and I did not know what he meant.  I then read some of Fuller’s work and saw the point.  My writings since then have been more consciously comprehensivist (also called “generalist”) in nature. 

This essay is intended to draw a comprehensive picture of life on Earth, the human journey, and energy's role.  The references that support this essay are usually to works written for non-scientists or those of modest academic achievement so that non-scientists can study the same works without needing specialized scientific training.  I am trying to help form a comprehensive awareness in a tiny fraction of the global population.  Between 5,000 and 7,000 people is my goal.  My hope is that the energy issue can become that tiny fraction's focus.  Properly educated, that group might be able to help catalyze an energy effort that can overcome the obstacles.  That envisioned group may help humanity in many ways, but my primary goal is manifesting those technologies in the public sphere in a way that nobody risks life or livelihood.  I have seen too many wrecked and prematurely ended lives (1, 2) and plan to avoid those fates, for both myself and the group’s members.

Here is a brief summary of this essay.  Ever since life first appeared more than three billion years ago, about a billion years after the Sun and Earth formed, organisms have continually invented more effective methods to acquire, preserve, and use energy.  Complex life appeared after three billion years of evolution and, pound-for-pound, it used energy 100,000 times as fast as the Sun produced it.  The story of life on Earth has been one of evolutionary events impacted by geophysical and geochemical processes, and in turn influencing them.  During the eon of complex life that began more than 500 million years ago, there have been many brief golden ages of relative energy abundance for some fortunate species, soon followed by increased energy competition and a relatively stable struggle for energy, until mass extinction events cleared biomes and set the stage for another golden age for organisms adapted to the new environments.  Those newly dominant organisms were often marginal or unremarkable members of their ecosystems before the mass extinction.  That pattern has characterized the journey of complex life over the past several hundred million years.  Intelligence began increasing among some animals, which provided them with a competitive advantage.

The oldest stone tools yet discovered are about 3.3-3.4 million years old, likely made by australopiths, which may have led to the appearance of the Homo genus.  About 2.6 million years ago, when our current ice age began, our ancestors began making Oldowan culture stone tools, which was soon followed by the control of fire, and the human journey’s First Epochal Event(s?) transpired.  The human evolutionary line’s brain then grew dramatically.  About two million years later, the human line evolved to the point where behaviorally modern humans appeared, left Africa, and conquered all inhabitable continents.  Their expansion was fueled by driving most of Earth’s large animals to extinction.  That Second Epochal Event was also the beginning of the Sixth Mass Extinction.  After all the easy meat was extinct and the brief Golden Age of the Hunter-Gatherer ended, population pressures led to the Third Epochal Event: domesticating plants and animals.  That event led to civilization, and many features of the human journey often argued to be human nature, such as slavery and the subjugation of women, were merely artifacts of the energy regime and societal structure of agrarian civilizations.  Early civilizations were never stable; their energy practices were largely based on deforestation and agriculture, usually on the deforested soils, and such civilizations primarily collapsed due to their unsustainable energy production methods. 

As the Old World’s civilizations continually rose and fell, Europe's peoples rediscovered ancient teachings that contained the first stirrings of a scientific approach.  Europeans used energy technologies from that ancient period, borrowed novel energy practices from other Old World civilizations, and achieved the technological feat of turning the world’s oceans into a low-energy transportation lane.  Europeans thereby began conquering the world.  During that conquest, one imperial contender turned to fossil fuels after its woodlands were depleted by early industrialization.  England soon industrialized by using coal and initiated humanity’s Fourth Epochal Event.  England quickly became Earth’s dominant imperial power, riding on the power of coal.  As Europeans conquered Earth, elites, who first appeared with the first civilizations, could begin thinking in global terms for the first time, and a global power structure began developing.  As we learned the hard way, that power structure is very real, but almost nobody on Earth has a balanced and mature perspective regarding it, as people either deny its existence or obsess about it, seeing it as the root of our problems when it is really only a side-effect of humanity’s current stage of political-economic evolution, which has always been based on its level of energy usage. 

Today, industrialized humanity is almost wholly dependent on the energy provided by hydrocarbon fuels that were created by geological processes operating on the remains of organisms, and humanity is mining and burning those hydrocarbon deposits about a million times as fast as they were created.  We are reaching peak extraction rates but, more importantly, we have already discovered all of the easily acquired hydrocarbons.  We are currently seeking and mining Earth’s remaining hydrocarbon deposits, which are of poor energetic quality.  It is merely the latest instance of humanity's depleting its energy resources, in which the dregs were mined after the easily acquired energy was consumed.  The megafauna extinctions created the energy crisis that led to domestication and civilization, and the energy crisis of early industrialization led to using hydrocarbon energy, and the energy crisis of 1973-1974 attracted my fellow travelers and me to alternative energy.  However, far more often over the course of the human journey, depleting energy resources led to population collapses and even local extinctions of humans in remote locations.  Expanding and collapsing populations have characterized rising and falling polities during the past several thousand years, ever since the first civilizations appeared.

Today, humanity dominates Earth and is not only depleting its primary energy resources at prodigious rates, but it is also driving species to extinction at a rate that rivals the greatest mass extinctions in Earth’s history.  Humans may cause Earth’s greatest mass extinction, which may take humanity with it.  Today, humanity stands on the brink of the abyss, and almost nobody seems to know or care.  Humanity is a tunnel-visioned, egocentric species, and almost all people are only concerned about their immediate self-interest and are oblivious of what lies ahead.  Not all humans are so blind, and biologists and climate scientists, among others intimately familiar with the impacts of global civilization, are terrified by what humanity is inflicting onto Earth.  Also, those who realize that we are quickly coming to the Hydrocarbon Age’s end are beating the drums of doom and I cannot blame them.  We are in a “race of the catastrophes” scenario, and several manmade trends threaten our future existence.

Even the ultra-elites who run Earth from the shadows readily see how their game of chicken with Earth may turn out.  Their more extreme members advocate terraforming Mars as their ultimate survival enclave if their games of power and control make Earth uninhabitable.  But the saner members, who may now be a majority of that global cabal, favor the dissemination of those sequestered technologies.  I am nearly certain that members of that disenchanted faction are those who gave my close friend an underground technology demonstration and who would quietly cheer our efforts when I worked with my former partner.  They may also be subtly supporting my current efforts, of which this essay comprises a key component, but I have not heard from them and am not counting on them to save the day or help my efforts garner success.  It is time for humanity to reach the level of collective sentience and integrity required to manifest humanity’s Fifth Epochal Event, which will initiate the Free Energy Epoch.  Humanity can then live, for the first time, in an epoch of true and sustainable abundance.  It could also halt the Sixth Mass Extinction and humanity could turn Earth into something resembling heaven.  With the Fifth Epochal Event, humanity will become a space-faring species, and a future will beckon that nobody on Earth today can truly imagine, just as nobody on Earth could predict how the previous Epochal Events transformed the human journey (1, 2, 3, 4).

Also, each Epochal Event was initiated by a small group of people, perhaps even by one person for the earliest events, and even the Industrial Revolution and its attendant Scientific Revolution had few fathers.  However, I came to realize that there is probably nobody else on Earth like my former partner, and even Indiana Jones cannot save the world by himself.  With the strategy that I finally developed, I do not look for heroes, because I know that there are not enough currently walking Earth.  I am attempting something far more modest.  The greatest triumph of the ultra-elites running Earth today is making free energy technology and the resulting epoch of abundance unimaginable; all of today’s dominant ideologies assume scarcity at the foundation of their frameworks.  That is largely why my former partner and my astronaut colleague were voices in the wilderness, like ducks in a shooting gallery that never knew where the next shot would come from.  The most damaging shots were usually fired by their “allies,” right into their backs, which nobody could have convinced me of in 1985.  But after watching similar scenarios play out dozens of times, I finally had to admit the obvious, and my partner admitted it to me in 2013.

I noticed several crippling weaknesses in all alternative energy efforts that I was involved with or witnessed.  Most importantly, when my partner mounted his efforts, people participated primarily to serve their self-interest.  While the pursuit of mutual self-interest is the very definition of politics, self-interested people were easily defeated by organized suppression, although the efforts usually self-destructed before suppression efforts became intense.  Another deficiency in all mass free energy efforts was that most participants were scientifically illiterate and did not see much beyond the possibility of reducing their energy bills or becoming rich and famous.  Once the effort was destroyed (and they always are, if they have any promise), the participants left the alternative energy field.  Also, many lives were wrecked as each effort was defeated, so almost nobody was able or willing to try again.  Every time that my partner rebuilt his efforts, it was primarily with new people; few individuals lasted for more than one attempt.

I realize that almost nobody on Earth today can pass the integrity tests that my fellow travelers were subjected to, and I do not ask that of anybody whom I will attempt to recruit into my upcoming effort.  It will be a non-heroic approach of “merely” achieving enough heart-centered sentience and awareness that a world of free energy and abundance can at least be imagined by a sizeable group, one that will not stay quiet about it but will also not proselytize.  If they can truly understand this essay’s message, they will probably not know anybody else in their daily lives who can. 

Those recruits will simply be singing a song of practical abundance that will attract those who have been listening for that song for their entire lives.  Once enough people know the song by heart and can sing it, and have attracted a large enough audience that can approach the free energy issue in a way that risks nobody’s life and will not be easy for the provocateurs and the effort’s “allies” to wreck, then it will be time to take action, but in a way never tried before. 

That is my plan, and this essay is intended to form the foundation of my efforts to educate and amass the “choir” that will sing the abundance song.  I am looking for singers, not soldiers, and the choir will primarily sing here.  My approach takes the lamb’s path, not the warrior’s.  That “choir” may help only a little or it may help a lot, but it will not harm anybody.  This effort could be called trying the enlightenment path to free energy, an abundance-based global political economy, and a healed humanity and planet.  I believe that the key is approaching the issue as creators instead of victims, from a place of love instead of fear.  Those goals may seem grandiose to the uninitiated, and people in this field regularly succumb to a messiah complex and harbor other delusions of grandeur, but I also know that those aspirations are attainable if only a tiny fraction of humanity can help initiate that Fifth Epochal Event, just as tiny fractions initiated the previous Epochal Events.  This essay is designed to begin the training process.  Learning this material will be a formidable undertaking.  This material is not designed for those looking for quick and easy answers, but is intended to help my readers attain the levels of understanding that I think are necessary for assisting with this epochal undertaking.

 

This Essay’s Tables and Timelines

In order to make this essay easier to understand, I created some tables and timelines, and they are:

Timeline of Significant Energy Events in Earth's and Life's History

Abbreviated Geologic Time Scale

Timeline of Earth’s Major Ice Ages

Timeline of Earth’s Major and Minor Mass Extinction Events

Early Earth Timeline before the Eon of Complex Life

Timeline of Key Biological Innovations in the Eon of Complex Life

Timeline of Humanity’s Evolutionary Heritage

Human Event Timeline Until Europe Began Conquering Humanity

Human Event Timeline Since Europe Began Conquering Humanity

Table of Humanity’s Epochs

 

Energy and the Industrialized World

There are greater contrasts in humanity’s collective standard of living than ever before.  As of 2014, Bill Gates had topped the list of the world’s richest people in nearly all of the previous 20 years.  In 2000, his net worth was about $100 billion, or about the same as the collective wealth of the poorest hundred million Americans or the poorest half of humanity.  Although Gates and other high-technology billionaires can live surprisingly egalitarian lifestyles, for one person to possess the same level of wealth as billions of people collectively is a recent phenomenon.  In 2014, about 30,000 children died each day because of their impoverished conditions. 

Ever since I was thrust into an urban hell soon after graduating from college, I have been a student of wealth, poverty, and humanity’s problems.  My teenage dreams of changing humanity’s energy paradigm have had a lifelong impact.  It took me many years to gain a comprehensive understanding of how energy literally runs the world and always has.  A good demonstration of that fact is to consider the average day of an average American professional, who is a member of history’s most privileged large demographic group and lives in Earth’s most industrialized nation.  A typical day in my life during the winter before I wrote this essay can serve as an example.

When I worked 12-hour days and longer during that winter, which was the busiest time of my year, I often fasted and needed less sleep, so I often awoke before 5:00 A.M.  In 2014, as I write this, I live in a fairly large house.  When I fast, my body generates less heat, so I feel cold rather easily; I wear thermal underwear under my work attire and have other strategies for staying warm, especially in the winter.  I programmed our furnace to begin operation shortly before I awoke, so that my day started in a warm environment.  I also have a space heater in my home office, so that the rest of the house can stay cold while I work in warmth.

That winter, my first tasks when arising were turning on my computer and drinking a glass of orange juice, which raised my blood sugar.  After some hours of reading about world events, answering emails, and working on my writings, I took a hot shower, dressed, and walked to a bus stop.  I read a book while awaiting the bus that took me to downtown Bellevue, where I worked in a high-rise office building for an Internet company. 

When I arrived at my office, I turned on my lights and computer.  When I was eating, I put the food that I brought to work in a refrigerator under my desk.  During my work day, I interacted with many people in my air-conditioned, high-technology office environment.  My cellular telephone was never far away.  The view from my office window of the Cascade Mountains was pleasant.  My computer interfaced with our distant data centers and the world at large via the Internet.  When my workday was finished, I rode the bus home.  In the winter, the furnace was programmed to stop running when my wife and I left for work and to come on shortly before we arrived home, so we never came home to a cold house.  In the evening, we might watch a movie on a DVD on our wide-screen plasma TV.  When I am not fasting, I usually eat dinner; the food in my refrigerator is mostly purchased at a cooperative grocery store that has an enormous produce section, with food grown locally and imported from as far away as New Zealand, China, and Israel.[1]  We have a high-tech kitchen, with a “smart” stove, refrigerator, and other appliances.

When I resumed my career in 2003, I became an early riser and consequently went to bed by 9:00 PM on most nights, and often read fantasy literature before I turned out the lights and snuggled into bed (with two comforters in the winter to keep us warm as we sleep).

That was a typical winter’s day in early 2013.  During that day, around 80 times the calories that fueled my body were burned to support my activities.  Those dying children often succumbed to hunger and diseases of poverty, and the daily energy that supported their lives was less than 1% of what I enjoyed that day.  How did energy serve my daily activities?  How did that disparity between the dying children and me come to be?  This essay will address those questions.
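As a rough sanity check of that 80-fold figure, here is a back-of-the-envelope calculation using round public numbers (roughly 3×10^8 BTU of primary energy per American per year, and a 2,500-kilocalorie daily diet), not figures drawn from this essay’s sources:

$$\frac{3\times 10^{8}\ \text{BTU/year}}{365\ \text{days/year}} \times 0.252\ \frac{\text{kcal}}{\text{BTU}} \approx 2.1\times 10^{5}\ \text{kcal/day}, \qquad \frac{2.1\times 10^{5}\ \text{kcal/day}}{2.5\times 10^{3}\ \text{kcal/day}} \approx 80.$$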

 

The Toolset of Mainstream Science

Humanity is Earth's leading tool-using species, and our tools made us.  Twigs, sticks, bones, and other organic materials were undoubtedly used as tools by our protohuman ancestors, but the only tools to survive for millions of years to be studied today are made of stone; the oldest discovered so far are about 3.3-to-3.4 million years old.[2]  Humanity’s tools have become increasingly sophisticated since then.  The Industrial Revolution was accompanied by the Scientific Revolution, and the synergy between scientific and technological advances has been essential and impressive, even leaving aside the many technologies and related theories that have been developed and sequestered in the above-top-secret world.

The history of science is deeply entwined with the state of technology.  Improving technology allowed for increasingly sophisticated experiments, and advances in science spurred technological innovation.  While many scientific practices and outcomes have been evil, such as vivisection and nuclear weapons, many others have not been destructive to humans or other organisms.  The 20th century saw great leaps in technological and scientific advancement.  My grandfather lived in a sod hut as a child, his son helped send men to the moon, and his grandson pursued world-changing energy technologies and still does.  Relativity and quantum theory ended the era of classical physics and, with their increasingly sophisticated toolset, scientists began to investigate phenomena at galactic and subatomic scales.  Space-based telescopes, electron microscopes, mass spectrometers, atomic clocks, supercolliders, computers, robots that land on distant moons and planets, and other tools allowed for explorations and experiments that were not possible in earlier times.

Intense organized suppression has existed in situations in which scientific and technological advances can threaten economic empires, but many areas of science are not seen as threatening, and reconstructing Earth’s distant past and the journey of life on Earth is one of those nonthreatening areas.  I have never heard of a classified fossil site or a Precambrian specialist being threatened or bought out in order to keep him/her silent.  There is more controversy with human remains and artifacts, but I am skeptical of popular works that argue for technologically advanced ancient civilizations and related notions.  Something closer to “pure science” can be practiced regarding those ancient events without the threat of repercussions or the enticements of riches and Nobel Prizes.  Much of this essay’s subject matter deals with areas in which the distortions of political-economic racketeering have been muted and the theories and tools have been relatively unrestricted.

Mass spectrometers measure the mass of atoms and molecules, and they have become increasingly refined since the first ones were built in the early 20th century.  Today, samples that can only be seen with microscopes can be tested, and masses can be measured down to a billionth of a gram.[3]  Elements have different numbers of protons and neutrons in the nuclei of their atoms, and each nuclear variation of an element is called an isotope.  Unstable isotopes decay into other elements, and the decay products are called “daughter isotopes.”  Scientific investigations have determined that radioactive decay rates are quite stable and are primarily governed by the dynamics within the decaying atom.  The dates determined by radioactive dating have been correlated to other observed processes, and the data has become increasingly robust over the years.[4]
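The arithmetic behind radiometric dating is worth a quick sketch; the symbols here are the standard textbook ones, not anything specific to the studies cited in this essay.  If a parent isotope decays at rate λ, the number of parent atoms declines as

$$N(t) = N_0 e^{-\lambda t}, \qquad t_{1/2} = \frac{\ln 2}{\lambda},$$

and, in the simplest case in which none of the daughter isotope was present when the rock formed, measuring today’s daughter-to-parent ratio D/P yields the age:

$$t = \frac{1}{\lambda}\ln\!\left(1 + \frac{D}{P}\right).$$

Real methods, such as the isochron techniques, correct for any daughter isotope that was initially present, but the principle is the same.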

The ability to weigh various isotopes, at increasing levels of precision, with mass spectrometers has provided a gold mine of data.  Scientists are continually inventing new methods and ways to use them, new questions are asked and answered, and some examples of methods and findings follow.

Carbon has two primary stable isotopes: carbon-12 and carbon-13.  Carbon-14 is the famous unstable isotope used for dating recently deceased life forms, but testing carbon’s stable isotopes has yielded invaluable information.  Carbon is the backbone of all of life’s structures, and life processes often have a preference for using carbon-12, which is lighter than carbon-13 and hence takes less energy to manipulate.  Scientists have been able to test rocks in which the “fossils” are nothing more than smears and determine that those smears resulted from life processes, because the smears are enriched in carbon-12 relative to carbon-13, beyond what would be the case if life had not been involved.[5]  This has also helped date the earliest life forms.  Life’s preference for lighter isotopes is evident for other key elements, such as sulfur and nitrogen, and scientists regularly make use of that preference in their investigations.[6]
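Such enrichments are conventionally reported as “delta” values: the deviation, in parts per thousand, of a sample’s isotope ratio from that of an agreed standard.  For carbon-13, the definition is

$$\delta^{13}\mathrm{C} = \left(\frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{standard}}}-1\right)\times 1000\text{‰}.$$

Biologically processed carbon typically runs in the rough neighborhood of −20‰ to −30‰, while inorganic marine carbonates sit near 0‰, and that kind of contrast is what is read from those fossil smears.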

The hydrological cycle circulates water through Earth’s oceans, atmosphere, and land.  The energy of sunlight drives it, and that sunlight is primarily captured at the surface of water bodies, the oceans in particular.  The hydrological cycle’s patterns have changed over the eons as Earth’s surface has changed its continental configurations and temperature.  Today’s global weather system generally begins with sunlight hitting the atmosphere, and the equator’s air receives the most direct radiation and becomes warmest.  That air rises and cools, which reduces the water vapor that it can hold, so the vapor falls as rain.  That is why tropical rainforests are near the equator.  The rising equatorial air creates high-pressure dry air that pushes toward the poles, and at about 30° latitude that air cools and sinks to the ground.  That dry air not only brings no precipitation, but it absorbs moisture from the land that it hits, forming the world’s great deserts.  The high pressure at the ground at about 30° latitude pushes air back toward the tropics, and Earth’s rotation deflects those winds in opposite directions in the northern and southern hemispheres, creating the trade winds that pick up moisture as they approach the equator.  The pole-ward sides of the mid-latitudes’ dry temperate regions also have low pressures and wet climates, and dry high-pressure zones exist at the poles.  As clouds pass over land, mountains force them upward and they lose their moisture as precipitation.[7]  As that water makes its way back to the oceans to begin the cycle again, it provides the freshwater for all land-based ecosystems.  Below is a diagram of those dynamics.  (Source: Wikimedia Commons)

A water molecule containing oxygen-16 (the most common oxygen isotope) will be lighter than a water molecule containing oxygen-18 (both are stable isotopes), so it takes less energy to evaporate an oxygen-16 water molecule than an oxygen-18 water molecule.  Also, after evaporation, oxygen-18 water will tend to fall back to Earth more quickly than oxygen-16 water will, because it is heavier.  As a consequence, air over Earth’s poles will be enriched in oxygen-16 – the colder Earth’s surface temperature, the less oxygen-18 will evaporate and be carried to the poles – and scientists have used this enrichment to reconstruct a record of ocean temperatures.  Also, the oxygen-isotope ratio in fossil shellfish (as their life processes prefer the lighter oxygen isotope) has been used to help determine ancient temperatures.  During an ice age, because proportionally more oxygen-16 is retained in ice sheets and does not flow back to the oceans, the ocean’s surface becomes enriched in oxygen-18 and that difference can be discerned in fossil shells.  Sediments are usually laid down in annual layers, and in some places, such as the Cariaco Basin off of Venezuela's coast, undisturbed sediments have been retrieved and analyzed, which has helped determine when ice sheets advanced and retreated during the present ice age.[8]
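For readers who prefer code to notation, below is a minimal sketch of how such a “delta” value is computed from measured isotope ratios, the same convention used for carbon above.  It assumes the commonly cited oxygen-18/oxygen-16 ratio of the VSMOW ocean-water standard, and the sample measurement is invented for illustration:

    # Illustrative sketch of the "delta" notation used in isotope studies.
    # R_VSMOW is the commonly cited 18O/16O ratio of the VSMOW ocean-water
    # standard; the sample ratio below is a made-up example.

    R_VSMOW = 0.0020052  # 18O/16O of Vienna Standard Mean Ocean Water

    def delta_per_mil(r_sample: float, r_standard: float) -> float:
        """Return the delta value in parts per thousand (per mil)."""
        return (r_sample / r_standard - 1.0) * 1000.0

    # A hypothetical fossil-shell measurement, slightly enriched in
    # oxygen-18, as expected for shells grown in glacial-age seawater:
    print(round(delta_per_mil(0.0020092, R_VSMOW), 2))  # about +2.0 per mil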

Mass spectrometers have been invaluable for assigning dates to various rocks and sedimentary layers, as radioactive isotopes and their daughter isotopes are tested, including uranium-lead, potassium-argon, carbon-14, and many other tests.[9]  Also, the ratios of elements in a sample can be determined, which can tell where it originated.  Many hypotheses and theories have arisen, fallen, and been called into question or modified by the data derived from those increasingly sophisticated methods, and a few examples should suffice to give an idea of what is being discovered.

The moon rocks retrieved by Apollo astronauts are still being tested, as new experiments and hypotheses are devised.  In 2012, a study was published that tested moon rocks for their ratios of titanium-50 to titanium-47 (both stable isotopes), and it called into question the hypothesis that the Moon was formed by a planetary collision with Earth more than four billion years ago.  The titanium ratio was so much like Earth’s that the collision hypothesis became problematic, as it implies that very little of the hypothesized colliding body became part of the Moon.  The collision hypothesis will probably survive, but it may end up significantly different from today’s version.  Meteorites have been dated as well as moon rocks, their ages agree with the age of Earth that geologists have derived, and meteorite dates provide more evidence that our solar system probably developed from an accretion disk.

In the Western Hemisphere, the Anasazi and Mayan civilization collapses of around a thousand years ago, and the Mississippian civilization collapse of 500 years ago, have elicited a great deal of investigation.  From New Age ideas that the Anasazi and Mayan peoples “ascended” to the Eurocentric conceit that the Mississippian culture was European in origin, many speculations arose that have been disproven by the evidence.  It is now known that the Anasazi and Mayan culture collapses were influenced by epic droughts, but drought was only the proximate cause.  The ultimate cause was that those civilizations were not energetically sustainable, and the unsustainable Mississippian culture was in decline long before Europeans invaded North America.  The Anasazi used logs to build the dwellings that today are famous ruins.  Scientists have used strontium ratios in the wood to determine where the logs came from, as well as dating the wood with tree-ring analysis and analyzing pack rat middens, and a sobering picture emerged.  The region was already arid, but agriculture and deforestation desertified the region around Chaco Canyon, which was the heart of Anasazi civilization.  By the time Anasazi civilization collapsed, timber was being hauled to Chaco Canyon from mountains more than 70 kilometers away (the strontium ratios could trace each log to the particular mountain that it came from).  When the epic droughts delivered their final blows, Anasazi civilization collapsed into a morass of starvation, warfare, and cannibalism, and the forest has yet to begin to recover, nearly a millennium later.[10]

Another major advance happened in the late 20th century: the ability to analyze DNA.  DNA’s double-helical structure was discovered in 1953.  In 1973, the first amino acid sequence for a gene was determined.  In 2003, the entire human genome was sequenced.  The chimpanzee genome was sequenced in 2005, the orangutan’s in 2011, and the gorilla’s in 2012.  The comparisons of human and great ape DNA have yielded many insights, but the science of DNA analysis is still young.  What has yielded far more immediately relevant information has been studying human DNA.  The genetic bases of many diseases have been identified.  Hundreds of falsely convicted Americans have been released from prison, nearly 20 of them from death row, due to DNA evidence proving their innocence.  Human DNA testing has provided startling insights into humanity's past.  For instance, it appears that after the ice sheets receded 16,000 to 13,000 years ago, humans repopulated Europe, and for all the bloody history of Europe over the millennia since then, there have not really been mass population replacements in Europe by invasion, migration, genocide, and the like.  Europeans just endlessly fought each other and honed the talents that helped them conquer humanity.  There were some migrations of Fertile Crescent agriculturalists into Europe, but other than hunter-gatherers being displaced or absorbed by the more numerous agriculturalists, there do not appear to have been many population replacements.  In 2010, a study suggested that male farmers from the Fertile Crescent founded the paternal line for most European men as they mated with the local women.  DNA testing has demonstrated that all of today’s humans are descended from a founder population of about five-to-ten thousand people, a few hundred of whom left Africa around 60-50 thousand years ago and conquered Earth.  The Neanderthal genome has been sequenced, as well as genomes of other extinct species, and for a brief, exuberant moment, some scientists thought that they could recover dinosaur DNA, Jurassic-Park-style.  Although dinosaur DNA is unrecoverable, organic dinosaur remains have been recovered, and even some proteins have been sequenced, which probably no scientist believed possible in the 1980s.[11]

Since 1992, scientists have discovered planets in other star systems by using a variety of methods that reflect their ever-improving toolset, especially space-based telescopes.  Before those discoveries, there was controversy over whether planets were rare phenomena, but scientists now admit that planets are typical members of star systems.  Extraterrestrial civilizations are probably visiting Earth, so planets hosting intelligent life may not be all that rare.

Those interrelated and often mutually reinforcing lines of evidence have made many scientific findings difficult to deny.  The ever-advancing scientific toolset, the ingenuity of the scientists developing and using it, and particularly the multidisciplinary approach that scientists and scholars are increasingly using have made for radical changes in how we view the past.  Those radical changes will not end any time soon, and what follows will certainly be modified by new discoveries and interpretations, but I have tried to stay largely within the prevailing findings, hypotheses, and theories, while also poking into the fringes and leading edges somewhat.  Any mistakes in fact or interpretation in what follows are mine.

 

The Orthodox Framework and its Limitations

In the West, the conception of the physical universe and humanity’s ability to manipulate it has changed remarkably in the past few thousand years, which has been a tiny fraction of humanity’s journey on Earth.  Thousands of years ago, the Greek philosophers Democritus and Leucippus argued that the universe consisted of atoms and the void, and Pythagoras taught that Earth orbited the Sun.[12]  Greeks also invented the watermill during the same era.  Hundreds of years later, a Greek mathematician and engineer, Heron of Alexandria, described the first steam engine and windmill and is typically credited as the inventor, but the actual inventors are lost to history.  Western science and technology did not significantly advance for the next millennium, however, until ancient Greek writings were reintroduced to the West via captured Islamic libraries.  The reintroduction of Greek teachings, and the pursuit of their energy technologies, ultimately led to the Industrial and Scientific Revolutions.

Scientific practice is ideally a process of theory and experimentation that can lead to new theories.  There are three general aspects of today's scientific process, which developed from a method proposed by John Herschel and which Charles Darwin used to formulate his theory of evolution.[13]  First, facts are adduced.  Facts are phenomena that everybody can agree on, ideally produced under controlled experimental conditions that can be reproduced by other experimenters.  Hypotheses are then proposed to account for the facts by using inductive (also called abductive) logic.  The hypotheses are usually concerned with how the universe works, whether the subject is star formation or evolution.  If a hypothesis survives the fact-gathering process – often by predicting facts that later experiments verify – then the hypothesis may graduate to the status of a theory.[14]  Scientific theories ideally can be falsified, which means that they can be proven erroneous.  The principle of hypothesis and falsification is primarily what distinguishes science from other modes of inquiry.

A typical dynamic has been the relegation of hypotheses and theories to oblivion without a fair hearing, as the pioneer dies in obscurity or is martyred, only to be vindicated many years later.  The man who first explained the dynamics behind the aurora borealis, Kristian Birkeland, died in obscurity in 1917, with his work attacked and dismissed.  It was not until Hannes Alfvén won the 1970 Nobel Prize that Birkeland’s work was finally vindicated.  Endosymbiotic theory, the widely accepted theory of how mitochondria, chloroplasts, and other organelles came to be, was first proposed in 1905, quickly dismissed, and not revived until the late 1960s.

When a new hypothesis appears, particularly a radical one, even if it is not a lone pioneer suffering martyrdom, the old guard usually attacks the new hypothesis and the situation turns into bitter feuds and armed camps all too often, such as the rise of the asteroid impact hypothesis regarding the dinosaurs’ demise.[15]  To a degree, those withering attacks are supposed to be how science works.  Doubt instead of faith is the guiding principle of science.[16]  Until a scientist’s bright idea is tested against the real world, it is just a bright idea.  Only hypotheses that have survived numerous attempts to falsify them graduate to becoming theories.  It can be argued that the “attack mode” that science has adopted toward new hypotheses has formed a structural bias so that all scientific pioneers will be attacked by their peers; it is simply the nature of the profession.  Only scientists who can weather the attacks from their peers will survive long enough to see their hypotheses receive a fair hearing.  That “shark tank” environment, particularly with lucrative prizes and tenured academic berths awaiting the winners, has arguably set back science’s progress considerably.

With what I know has been suppressed by private interests, often with governmental assistance, mainstream science is largely irrelevant regarding many important issues that could theoretically be within its purview.  Paradoxically, scientists can also fall for fashionable theories and get on bandwagons.[17]  Scientific practice is subject to human foibles, just as all human endeavors are.  There can be self-reinforcing bias in that the prevailing hypotheses can determine what facts are adduced, and potential facts thus escape inquiry, particularly when entire lines of inquiry are forbidden by organized suppression and the excesses of the national security state, as well as the indoctrination that scientists are subject to, as all people are.

Early in the 20th century, radical theories were proposed that remade scientists’ view of the universe.  Along with relativity and quantum theory, a primary pillar of today’s physics is the notion that everything in the universe is a form of energy, as summarized by Einstein’s equation, E = mc².  Although the notion is still challenged in unorthodox corners, today’s prevailing hypothesis is that the universe came into existence in an instant called the Big Bang, that stars are the energy centers of the observable universe, and that nuclear fusion powers them.  When the Big Bang supposedly happened, there was no matter, only energy.  Only when the universe had sufficiently expanded and cooled, less than a second after the Big Bang, did matter begin to appear, and matter is considered to be a relatively low-energy state of that energy.[18]  This essay hews fairly closely to today’s orthodox perspective for much of its length.  However, there will be limitations, and some of them follow.

In the early days of science, it had a quasi-religious stature among its practitioners, and 19th-century scientists were prone to calling their hypotheses and theories “laws,” often appending their names to the “laws” as soon as possible, like imperialist “explorers” of the era appending European names to landmarks that they encountered during their conquests.  Brian O’Leary, one of the two men to whose memory this essay is dedicated, was a former astronaut, Ivy League professor, and political activist who explored the frontiers of science, and he stated that there are no “laws” of physics, only theories.  The term “law” is lodged deeply in the scientific lexicon, although by the 20th century scientists had stopped calling new hypotheses and theories laws.  Modest scientists readily admit that the so-called “laws” of science are not the “laws” of the universe, but rather human ideas about what those laws might be, if there are any laws at all.  As Einstein and his colleagues readily admitted, the corpus of scientific fact and theory barely says anything at all about how the universe works.  Sometimes, paradigms shift and scientists see the universe with fresh eyes.  The ideals and realities of scientific practice are often at odds.  Ironically, when scientists reach virtual unanimity on a theory, it can be a sign that the theory is about to radically change, and many if not most scientists will go to their graves believing the theory that they were originally taught, no matter how much evidence weighs against it.

A key tension in mainstream science has long been the conflict between specialists, on the one hand, and generalists and multidisciplinarians, on the other.  The specialist’s motto might be, “The devil is in the details.”  Deductive reasoning is their specialty and reductionist principles often guide their investigations, in which breaking down phenomena into their most basic components is the goal.  The generalist’s motto might be, “I seem to see a pattern here.”  Generalists often use inductive reasoning and tend to think holistically, usually in terms of systems, and they recognize emergent properties arising from higher levels of systems complexity, which can be something new and not necessarily inherent in lower levels of complexity or predictable by analyzing those lower levels.  New hypotheses often come from generalists and their inductive reasoning, and the best of them usually have some flash of insight that leads them to their breakthroughs, which is called intuition or the Creative Moment.  I found that it is a close cousin to psychic ability, if not the same thing. 

Specialists are often those on the ground, getting their hands dirty and doing the detailed work that forms the bedrock of scientific practice.  Without their efforts, science as we know it would not exist.  However, mainstream science has long suffered from the tunnel vision that overspecialization encourages, and R. Buckminster Fuller thought that the epidemic overspecialization and naïveté of mainstream scientists in his time was a ruling class tactic to keep scientists controlled and unable to see the forest for the trees.[19]  That has been slowly changing in my lifetime, so that collaborative efforts are drawing from multiple disciplines and achieving synthetic views that were not feasible in earlier times, and patterns are newly recognized that were invisible in a scientific world filled with isolated specialists.  Many paradigmatic breakthroughs in science and technology were made by non-professionals, specialists working outside of their field of professional expertise, and generalists traversing disciplinary boundaries.[20]  Scientific training today attempts to prevent that overspecialized tunnel vision, and today’s practicing scientists ideally get deep into the details and then pull back and try to see context, connections, and patterns.  A comprehensivist tries to understand the details well enough to refrain from making unwarranted generalizations while also striving for that big-picture awareness.  There are also top-down and bottom-up ways to approach analyses; each can provide critical insight, and scientists and other analysts often try to use both.[21]

Another key set of tensions exists among theorists, empiricists, and inventors.  Theorists attempt to account for scientific data and ideally predict data yet to be adduced, which tests the validity of their hypotheses and theories.  Empiricists often produce that scientific data.  Inventors create new technologies and techniques.  Albert Einstein is the quintessential example of a theorist: he never performed experiments relating to his theories but accounted for experimental results and predicted them.  Michelson and Morley, who performed the experiment whose results various scientists wrestled with for a generation before Einstein proposed his special theory of relativity, never suspected that their experiment would lead to the theories that it did.  The most important experiments in science’s history were often those producing unexpected results, and they were usually called failures.  Einstein’s general theory of relativity had little experimental support when he proposed it (it explained Mercury’s orbit, but that was the only evidence for it at the time), but it has been confirmed numerous times since then.  Einstein expected that his theories would eventually be falsified by experimental evidence, but that the best parts of them would survive in the new theories. 

The Wright brothers were typical inventors.  Before they flew, theorists pronounced powered heavier-than-air flight “impossible,” mainstream science ignored or ridiculed them for five years after they first flew, and the Smithsonian Institution tried to deny the Wright brothers their rightful precedence for decades.  The theorists were spectacularly wrong, the empiricists had abandoned their primary principle of observation, and it was up to the inventors to finally open their eyes and minds, years after the public had witnessed the new technology working.  Brian O’Leary told me that the scientific establishment’s collective blindness and denial is worse in the early 21st century than it was in the Wright brothers’ time.

I have encountered numerous technologies that theorists denounced as “impossible” and empiricists ignored as if they did not exist, while the inventors themselves were not exactly sure why their inventions worked, only that they did.  Such inventions often threaten to upend the very foundations of scientific disciplines, which is primarily why they have been ignored as they have, if they are not actively suppressed.[22]  When their breakthroughs threatened the dominance of the industrial/professional rackets, the risks could become deadly. 

The findings of mainstream science can be particularly persuasive when lines of evidence from numerous disciplines independently converge, which has become increasingly common as scientific investigations have become more interdisciplinary.  DNA testing is clearly showing descent relationships, and ghost ancestors are being reconstructed via genetic testing.  Numerous dating methods are used today, and more are regularly invented.  Typically, a new technique will emerge from obscurity, often pioneered by a lonely scientist.  For instance, dendrochronology, the reading of tree rings, was developed as a dating science by the dogged efforts of an astronomer who labored in obscurity for many years.  He was a fortunate pioneer; when he died after nearly 70 years of effort, he had lived to see dendrochronology become a widely accepted dating method.  Eventually, a new method can break past the inertia and active suppression, sometimes even when the breakthrough threatens powerful interests.  Then the newly accepted method can be seen as a panacea for all manner of seemingly insoluble problems, in the euphoric, bandwagon phase.  Yesterday’s heresy can become tomorrow’s dogma.  Then the early victories may not seem as triumphant as previously hailed, and a “morning after” period of sobering up arrives.  The history of science is filled with fads that faded to oblivion, sometimes quickly, while advances that survived the withering attacks are eventually seen in a more mature light, in which their utility and their limitations are both acknowledged.  DNA and molecular clock analyses have largely passed through those phases in recent years.  In the 1980s, the idea of room-temperature superconductors had its brief, frenzied day in the sun when high-temperature superconductors were discovered.  Cold fusion had a similar trajectory, although the effect seems real, and MIT manipulated its data to try to make the effect vanish.  A scientist who spoke out against MIT’s apparent fraud was murdered years later, at the same time as a series of events that I was close to, which may have been related.  After the bolide impact hypothesis broke through a taboo that had lasted for more than a century, some scientists tried explaining all mass extinctions with bolide impacts.  Today, the bolide event that ended the dinosaurs’ reign is the only impact event widely accepted as responsible for a mass extinction, and even that event is still under siege by scientists who propose other dynamics for the dinosaurs’ extinction.

In the dating sciences, the tests have all had their issues and refinements.  The equipment has become more sophisticated, problems have been resolved, and precision has been enhanced.  While there are continuing controversies, dating techniques have advanced just like many other processes over the history of science and technology.  In 2014, dates determined for fossils and artifacts are generally accepted with confidence only when several different samples are independently tested, and by different kinds of tests if possible.  When thermoluminescence, carbon-14, and other tests produce similar dates, and those dates agree with stratigraphic evidence, paleomagnetic evidence, current measurements of hotspot migration rates across tectonic plates, and the genetic and other evidence introduced in the past generation, those converging lines of evidence produce an increasingly robust picture not only of what happened, but of when it happened.
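
To make the logic concrete, below is a minimal sketch in Python of the principle behind one radiometric method, carbon-14 dating.  The 5,730-year half-life is the standard textbook figure; the 25% sample fraction is invented for illustration and does not come from any actual study.

    # A minimal sketch of the radiometric dating principle, using carbon-14.
    # The half-life is the standard textbook value; the 25% figure below is
    # an invented illustration, not real data.
    import math

    HALF_LIFE_C14 = 5730.0  # years

    def age_from_fraction(remaining_fraction):
        # Solve remaining_fraction = (1/2) ** (age / half_life) for age.
        return HALF_LIFE_C14 * math.log(1.0 / remaining_fraction, 2)

    # A sample retaining 25% of its original carbon-14 is two half-lives old:
    print(age_from_fraction(0.25))  # 11460.0 years

Every radiometric method follows that same exponential logic; what differ are the isotopes, their half-lives, and the laboratory challenges of measuring the remaining fractions.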

In the 1990s, I found the dating issue enthralling and saw it assailed by fringe theorists, and by catastrophists in particular.  A couple of decades later, I reached the understanding that, like all sciences, dating has its limitations, and the enthusiasm for a new technique can become a little too exuberant, but dating techniques and technologies have greatly improved in my lifetime.  Dating the Cambrian Period’s beginning to 541 million years ago, and using 100,000-year increments to place the dates, may seem a conceit, as if scientists could really place that event with such precision, but over the years my doubts have diminished.  When moon rocks and meteorites can be tested, and the findings support not only Earth’s age as previously determined by myriad methods, but also the prevailing theories for the solar system’s and Moon’s formation, call me impressed.  Controversies will persist over various finds and the methods used, and scientific fraud certainly occurs, but taken as a whole, those converging lines of independently tested evidence make it increasingly unlikely that the entire enterprise is a mass farce, delusion, or even a conspiracy, as many from the fringes continue to argue.  There is still a Flat Earth Society, and it is not a parody.  I have looked into fringe claims for many years and few of them have proven valid; even if many had been, their potential importance to the human journey was often minor to trifling.  As the story that this essay tells comes closer to today’s humanity, orthodox controversies become more heated and fringe claims proliferate.

Quite often, the pioneers of science and technology receive no credit at all, not even posthumous vindication, as others steal their work and become rich and famous.  But if private and governmental interests do not suppress the data and theory, as is regularly achieved regarding alternative energy and other disruptive technologies, the data will usually win in the end.  But the data does not always win.  The expedient but misleading tale of Louis Pasteur’s triumph in explaining the origins of life, which microbiology students are still taught in college, is an example of false credit attributed to a figure who may have also marched the discipline off in the wrong direction, from which it has yet to recover.  Another problem has been fabricated “discoveries” that become uncritically accepted by the mainstream, while science’s ideal of skepticism completely disappears, as powerful interests promote industrial waste as “medicine,” for instance, as was done with fluoridation.  It was also done with tobacco smoking, and medical authorities even promoted asbestos cigarette filters, in one of many “believe it or not” episodes in the history of science and medicine.  Mercury was sold as “medicine” until my lifetime, and it is still found in vaccines, whose theoretical and empirical foundations seem shaky.  Lead received a similar clean bill of health from industrially funded laboratories, the conflicts of interest were surreal, and the public was completely unaware of who was really managing such public health issues and why.  Similar situations exist today.

Perhaps the most significant challenge to mainstream science is the fact that numerous advanced technologies already exist on Earth, including free energy and antigravity technologies, but they are actively kept from public awareness and use.  They and other exotic technologies developed in the above-top-secret world operate on principles that make the physics textbooks resemble cave drawings.

Although some scientists have challenged the Second Law of Thermodynamics, which grew out of Carnot’s work, and even the First, tapping the zero-point field, as some fellow travelers did, does not violate the “laws of physics” at all; it merely harnesses an energy source that mainstream science does not recognize, even though its greatest minds have posited its existence.  For that reason, my astronaut colleague called such energy “New Energy,” and we co-founded an organization in 2003 with that name.  However, when my partner and I began to build a business in 1987 around “New Energy,” he called it “free electricity” in ads, and we used the term “free energy” before we knew anything about the field or our professional ancestors.  I used the term “free energy” for many years before I heard the term “new energy,” and I will probably always use “free energy” (“FE”), largely because I grew up with it and it is still commonly used in the field.  My partner’s shared savings programs were also the closest thing to truly “free” energy that has ever been on the world market.

Thousands of scientists and inventors have independently pursued FE technologies, but all such efforts, if they had promise or garnered any success, have been suppressed by a clandestine and well-funded effort of global magnitude.  However, this essay will lay most of that aside until near the essay’s end, other than to note that one of Einstein’s protégés, David Bohm, theorized that space is anything but empty.[23]  Einstein also stated that his general theory of relativity resurrected the idea of an ether that his special theory of relativity supposedly rendered obsolete.[24]  According to Bohm’s computation, the energy existing in “empty space” is unimaginably vast, as one cubic centimeter of it contains more energy than is contained in all the mass of the known universe.  One of Fuller’s pupils not only subscribed to the notion that “empty” space is not empty, but he helped build technologies that harnessed that energy source, and his life’s story, like my former partner’s, is hard to believe, but has impressive evidence for its validity.  According to him, the recently discovered Higgs boson is part of an effort to “rebrand” what has been called the zero-point field and other names over the years, which is the field that FE technology often harnesses.[25]  I have encountered dozens of instances of scientists with theories that challenge the Standard Model of particle physics, and their primary upshot is a “new” energy source, which is often called zero-point energy.[26]  But, black projects[27] and “leading edge” theory aside (theory that is far older than I am), technologies have been publicly available for many years whose operation upends some of science’s oldest theories.[28]  “White science” (AKA "Establishment" or "mainstream" science) has great defects, especially when its pursuit conflicts with deeply entrenched economic and political interests.

Although the greatest physicists were arguably mystical in their orientation, they rarely explored the nature of consciousness in the way that modern human potential efforts have.  When I was 16 years old, it was demonstrated to me, very dramatically, that everybody inherently possesses psychic abilities, which falsifies today’s materialistic theories of consciousness.  Millions of people had similar experiences during the last decades of the 20th century when performing such exercises.  They are usually life-changing events and are available to nearly anybody who devotes the time to experiencing them, but a politically active arm of mainstream science, known as organized “skepticism,” has waged a holy war against such evidence for longer than I have been alive.  The scientific establishment’s warriors often denigrate such phenomena as “pseudoscience,” a term that they greatly abuse when attacking ideas and phenomena outside of their ability to investigate or that conflict with their materialistic assumptions.  Far too often, when scientists discuss materialism, they compare it to organized religion, particularly its fundamentalist strains, as if those were the only two alternatives, when they are on opposite ends of a spectrum in one way and two sides of the same coin in others.  Ironically, organized skepticism is largely composed of anti-scientists who try to deny that such abilities of consciousness are even worthy of scientific investigation.  That they defend materialism with flawed logic, dishonesty, and dirty tricks is one thing, but all too often, as I performed the studies that led to this essay, I saw mainstream scientists trust the “skeptics” for their pronouncements on the validity of “paranormal” phenomena.  That would be like asking a Wall Street executive in the 1950s for his opinion of communism.

I was also regularly dismayed by orthodox scientific and academic works that dealt with the human brain, consciousness, human nature, UFOs, FE technology, and the like, in which the authors accepted declassified government documents at face value (as in not wondering what else remained classified, for starters) or looked no further than 19th-century investigations.[29]  Direct personal experience is far more valuable than all of the experimental evidence that can be amassed; there is no substitute for it, as that is where knowledge comes from.  Armchair scientists who accept the skeptics’ word for it have taken the easy way out and rely on highly unreliable “investigators” to tell them about the nature of reality.  They consequently do not have informed opinions, or perhaps more accurately, they have disinformed opinions.  The holy warriors’ efforts aside, the scientific data is impressive regarding what has been called “psi” and other terms, and it clearly demonstrates abilities of consciousness that are still denied and neglected by mainstream science.[30]  Brian O’Leary advocated scientific testing of paranormal phenomena, but he was a voice in the wilderness.

Not all mainstream scientists relegate consciousness to a mere byproduct of chemistry.  John von Neumann’s interpretation of quantum mechanics is that consciousness is required for the wavefunctions that describe fields at the subatomic level to collapse into observable particles.[31]  He was not the only scientist whose theories required consciousness to exist in order for the physical universe to become observable.  The greatest physicists knew that materialism was a doctrine built on unprovable assumptions, which amounts to a faith.[32]  It can be quite revealing when mainstream scientists deal with phenomena that challenge the tenets of their faith.  Forthcoming quantum physicists regard the controversy over the implications of quantum theory as “our skeleton in the closet.”[33]  To the end of his life, Einstein was very uncomfortable with the implications of quantum theory, and his disquiet was ahead of its time.[34]  French physicist Alain Aspect performed a state-of-the-art test of Bell’s inequality that helped establish the reality of quantum entanglement, which Einstein derided to his grave as “spooky action at a distance.”  When they met and Aspect proposed the experiment, John Bell’s first question was, “Do you have tenure?”[35]  That paradox at the heart of quantum physics was avoided by the Copenhagen interpretation, which focuses on getting the right answers for quantum predictions and avoids the implications for reality that the quantum enigma presents.[36]  Einstein and Schrödinger were not satisfied with a framework that made accurate predictions but avoided grappling with what was really happening.

White science still has almost nothing to say about the nature of consciousness.  However, Black Science (covert, largely privatized, and the province where that advanced technology is sequestered) is somewhat familiar with the nature of consciousness and considers it to be far more than a byproduct of chemistry.  The assumption that the entire universe is a manifestation of consciousness is not only unassailable by White Science, but it is probably a foundational assumption of Black Science and mystics.

The battle between materialists and religious orders over the years, in which materialist evolutionists grapple with creationists and intelligent design proponents, seems to be a feud between two fundamentalist camps.  Nowhere in such battles are the abilities or wisdom of accomplished mystics found.  The nature and role of consciousness, both in this dimension and beyond it, are likely far too subtle to be profitably engaged by the level of debate that predominates today.  Scientists such as Einstein were awestruck by the evident intelligence behind the universe’s design, but that did not mean that they believed in a God with a flowing beard.  As this essay will explore later, those issues are not merely fodder for idle philosophical pursuit, but at their root lies the crux of the current conundrum that humanity finds itself in, as we race toward our self-destruction.

White Science does not really know what energy is; it can only describe its measurable effects.[37]  At its root, there are two primary components of our universe: energy and consciousness.  Our universe may have begun as pure energy (and even if it did not, all matter appears to be comprised of energy), and consciousness may be required for our universe to exist at all, which may be part of the quantum paradox.  Energy and matter may be manifestations of consciousness, and large brains could be simply more refined “transducers” for more complex consciousness to manifest in physical reality.  In summary, everything physical is made of energy and our consciousness is all that we know, but the greatest physicists admitted that the nature of consciousness is not something that today’s science is equipped to study.  There is evidence that evolution is not purely the province of chance mutations, but that organisms can affect their evolution at the genetic level.[38]

The greatest scientists readily admitted that the theories and data of physics, that hardest of the hard sciences, drew highly limited descriptions of reality, and those scientists were usually, to one extent or another, mystics.  If textbook science falls far short of explaining reality, what can be said within its framework that is useful?  Plenty.  Our industrialized world is based on textbook science and feats such as putting men on the Moon were performed within the parameters of textbook science.  With the waning of overspecialization and overreliance on reductionism in the last decades of the 20th century, multidisciplinary works have proliferated and will tend to dominate the references for this essay.  I have found them not only very helpful for my own understanding, but they are appropriate references for a generalist essay.  I have also avoided scientific terminology when feasible.  For example, I use “seafloor” instead of “benthic,” and if a non-specialized term will suffice for a scientific concept, I will often use it.

The mainstream theory is that matter consists of elementary particles (which are all forms of energy), and their interaction with the Higgs field is responsible for all mass.  Almost all mass in the known universe consists of protons in hydrogen atoms; those protons are in turn comprised of quarks, and electrons and neutrinos are the other first-generation fundamental particles.  Protons have a positive electric charge, electrons a negative charge, and neutrinos no net charge.  The simplest atom consists of one proton in the nucleus and one electron in “orbit” around it, which is the most common hydrogen atom.  Today, mainstream science recognizes four forces in the universe: gravity, electromagnetism, and the strong and weak forces in an atom’s nucleus.  Gravity attracts matter to matter, and it is thought to be responsible for the formation of stars, planets, and galaxies.  But the universe seems to be built from processes, not objects.

The Standard Model of particle physics is complex, but the preceding presentation is largely adequate for this essay’s purpose.  It can be helpful to be aware, however, that the physics behind FE and antigravity technologies will probably render the Standard Model obsolete.  If FE, antigravity, and related technologies finally come in from the shadows, the elusive Unified Field may come with them, and the Unified Field might well be consciousness, which will help unite the scientist and the mystic; that field may be divine in nature.  But that understanding is not necessary to relate the story that White Science tells today of how Earth developed from its initial state to today’s, when complex life is under siege by an ape that quickly spread across the planet like a cancer once it achieved the requisite intelligence, social organization, and technological prowess.

With the above limitations acknowledged, this essay will explore the earthly journeys of life and humanity, and energy’s role in them. 

 

Energy and Chemistry

  Chapter summary:

This chapter presents several energy and chemistry concepts essential to this essay.  Even though scientists do not really know what energy is (they do not know what light or gravity are, either), energy is perhaps best seen as motion, whether it is a photon flying through space, the “orbit” of an electron around an atom’s nucleus or of Earth around the Sun, an object falling to Earth, a river flowing toward the ocean, air moving through Earth’s atmosphere, rising and falling tides, or blood moving through a heart.

In their dance around an atom’s nucleus, electrons exist in “shells.”  The most stable electron configuration exists when the electrons fill the shells and each electron is paired with another, and each electron spins in the opposite direction of its partner.  The classical view of an electron had an electron orbiting the nucleus much in the same way that Earth orbits the Sun, but quantum theory presents a different picture, in which an electron is a wave that only appears to be a particle when it is observed.  Even then, a hydrogen electron’s orbit as presented by quantum theory does not look much different from the classical image, and the classical view largely suffices for this essay in presenting the energetic aspects of the electrons’ properties.

When one electron shell is filled, electrons begin to fill shells farther from the nucleus.  For the simplest atoms, it works that way, but for larger atoms, particularly those of metallic elements, electrons fill shells in more complex fashion, occupying subshells that are not necessarily in the shell closest to the nucleus.  When an electron is unpaired or in an unfilled shell, it can be a valence electron, which can form bonds with other atoms.  In most circumstances, only unpaired electrons form bonds with other atoms.  Electron bonds between atoms provide the basis for chemistry and life on Earth.
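
For the simple cases, the shell capacities follow a well-known pattern: shell n holds up to 2n² electrons.  Below is a minimal sketch of that pattern in Python; it shows only the capacities, since, as noted above, larger atoms fill their subshells in a more complicated order.

    # The shell-capacity pattern: shell n holds up to 2 * n**2 electrons.
    # This shows only the capacities; the actual filling order of subshells
    # in larger atoms is more complicated, as noted above.
    for n in range(1, 5):
        print(f"Shell {n} holds up to {2 * n ** 2} electrons")  # 2, 8, 18, 32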

For that simplest element, hydrogen, its lone electron has an affinity to pair up with another electron, and that smallest shell contains two electrons.  Hydrogen is never found in its monoatomic state in nature, but is bonded to other elements, as that lone electron finds another one to pair with, which also fills that simplest shell.  In its pure state in nature, hydrogen is found paired with itself and forms a diatomic molecule.  In chemistry notation, it is presented as H2.  The most common hydrogen combination with another element on Earth is with oxygen (“O” in chemistry notation), which forms water and is presented as H2O.  Oxygen has two unpaired electrons in its electron shell (its outer shell has eight positions for electrons, with six of them filled), and oxygen’s electrons pair with electrons in other atoms with a “hunger” that is only surpassed by fluorine, which is the most reactive known element.  The “hungriest” atoms can completely strip an electron from nearby atoms and form ions, whereby the resulting atoms have imbalances between their electrons and protons, and thus possess net electric charges.  An atom that loses an electron in a chemical reaction is called “oxidized,” while the atom that gains one is called “reduced.”  When electrons are transferred or shared, those hungriest atoms will cause the greatest amounts of energy to be released in the reactions.  Fluorine is so reactive that if it were sprayed on water, the water would burn.

The element with two protons in its nucleus is helium (the number of protons determines what element the atom is), and its electrons are paired and its shell is filled.  Consequently, helium does not want to share its electrons with anything.  Helium is the most non-reactive element known.  It has never bonded with any other element, even fluorine.  In the periodic table of the elements, helium is in the family known as noble gases (formerly named “inert”), because they resist reacting with other atoms.  Their electron shells are completely filled. 

An electron’s distance from the nucleus can vary.  It is not a smooth variation of distance, but only certain distances are possible.  When an electron changes its distance, it jumps in a process known as quantum leaping.  That quantum leaping reflects how electrons gain or release energy.  When light hits an atom, if it is absorbed by an electron, the photon gives the electron the energy to move to an orbit farther away.  When an electron emits light, that lost photon removes energy and the electron falls to a lower orbit.  The potential energy in the electron as it orbits the nucleus and the potential energy in a rock that I hold above the ground are similar, as the diagram below demonstrates. 

Below is a diagram of a hydrogen atom as its electron orbits farther from the nucleus when it absorbs energy. 

As the diagram depicts, the atom gets larger.  When an electron moves into an orbit farther from the nucleus, the atom will vibrate more, like the way a car’s engine will vibrate more when it runs faster.  The kinetic energy of atoms’ lateral movement (also called translational motion) is what we measure as temperature.  While finding an accurate definition of temperature can be a frustrating experience, temperature is a measure of the kinetic energy (the energy of motion) in matter.  As with the behavior of photons, at the atomic level the concept of temperature can break down, and classical behaviors emerge as groups of atoms lose their quantum properties.[39]  When one atom collides with another, there is a transfer of energy, as there is in any collision.  The transferred energy can be stored by the electrons leaping into higher orbits.  They can in turn release that energy in the form of photons and return to lower orbits.
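
For hydrogen, that ladder of allowed orbits corresponds to a simple ladder of energies, given by the textbook formula of -13.6 electron-volts divided by the square of the orbit number.  Below is a minimal sketch in Python; the 1,240 electron-volt-nanometer conversion shortcut is also a standard textbook value.

    # A sketch of hydrogen's quantized energy ladder: E_n = -13.6 eV / n**2.
    # The photon emitted when the electron falls from level 3 to level 2
    # produces hydrogen's familiar red spectral line.
    def level_energy(n):
        return -13.6 / n ** 2  # electron-volts

    photon_ev = level_energy(3) - level_energy(2)   # about 1.89 eV
    wavelength_nm = 1240.0 / photon_ev              # 1240 eV*nm shortcut
    print(f"{photon_ev:.2f} eV, {wavelength_nm:.0f} nm")  # 1.89 eV, 656 nm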

The increased movement of heated atoms is why substances expand in volume.  The more motion, the higher the temperature, and just as an engine will fly apart when the RPMs go too high, when an atom vibrates too fast, an electron can leave the atom entirely, and the atom becomes an ion.  As substances become hotter, the electrons will be in higher orbits and will fall farther when giving off photonic energy, so the photons have more energy (shorter wavelengths).  Get a substance hot enough and it will emit photons that we can see (visible light).  Those first visible photons will be on the lower end of the spectrum of light that we can see with our eyes, and will be red.  Get the substance hotter and the light can turn white, which means that we are seeing the full visible spectrum of light.  Nearly half of the Sun’s energy output is in the form of visible light.  Get matter hot enough and it becomes plasma, as electrons float in a soup with nuclei.  Those electrons are too energetic to be captured by nuclei and placed into shells. 
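
That red-to-white progression can be quantified with Wien’s displacement law, which states that the wavelength of peak emission is inversely proportional to temperature.  Below is a minimal sketch using standard textbook values; note that a merely red-hot object still radiates mostly in the invisible infrared, and only the short-wavelength tail of its glow reaches the visible red.

    # Wien's displacement law: peak wavelength = 2.898e6 nm*K / temperature.
    # The constant and the temperatures are standard textbook values.
    WIEN_NM_K = 2.898e6  # nanometer-kelvins

    for label, kelvin in [("red-hot iron", 1000),
                          ("incandescent filament", 2800),
                          ("the Sun's surface", 5800)]:
        print(f"{label}: peak emission near {WIEN_NM_K / kelvin:.0f} nm")
    # Visible light spans roughly 400-700 nm, so only the hottest of the
    # three peaks within the visible band.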

When two atoms come close to each other, if the potential energy of their combined state is less than their potential energy when they are separate, the atoms will tend to react.  But the reaction only happens when the electron shells meet in the proper alignment and at sufficient velocity.  If the shells do not meet in the proper alignment and velocity, the reaction will not happen, and the atoms will bounce away from each other.  The faster and more often the atoms collide, the likelier they are to react and reach that lower energy state.  Chemical (electron shell) reactions need to reach their activation energy to occur, and that energy is measured as temperature.  The activation energy for hydrogen and oxygen to react and form water is about 560 degrees Celsius (560° C).  Nuclear reactions work in similar fashion, but fusion requires conditions vastly more extreme: in the Sun’s core, at 16 million degrees Celsius and at a pressure 340 billion times greater than Earth’s atmosphere at sea level, a proton colliding one trillion times per second has a 50% chance of fusing with another proton in 10 billion years.[40]  Nuclear fusion is thus far rarer than electron bonding, and far less energy is released when atoms bond via electrons.  The fusion of a helium nucleus releases more than a million times the energy that it takes to ionize a hydrogen atom.  As will be discussed later, some reactions have a cumulative result of absorbing energy, while others release it.  The first can be seen as an investment of energy, while the second can be seen as consuming it.  Organisms and civilizations have always faced the investment/consumption decision. 
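
Working through the essay’s own numbers shows just how improbable each fusion collision is.  Below is a minimal sketch of the arithmetic; the collision rate and the 50%-in-ten-billion-years figure come from the text above, and the seconds-per-year constant is standard.

    # If a proton colliding a trillion times per second has a 50% chance
    # of fusing within ten billion years, the chance per collision is tiny.
    import math

    SECONDS_PER_YEAR = 3.156e7
    collisions = 1e12 * SECONDS_PER_YEAR * 1e10  # about 3.2e29 collisions

    # Solve (1 - p) ** collisions = 0.5 for p; for tiny p, this is ln(2)/N.
    p = math.log(2) / collisions
    print(f"{p:.1e}")  # roughly 2.2e-30 per collision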

Below is a diagram of two hydrogen atoms before and after reaction, as they bond to form H2. 

Elements with their electron shells mostly, but not completely, filled are, in order of electronegativity: fluorine, oxygen, chlorine, and nitrogen.  In that upper right corner of the periodic table, of largely filled electron shells, phosphorus and sulfur also reside.  Carbon and hydrogen have their valence shells half filled.  With the exception of fluorine, those elements listed above provide virtually all of the human body’s atoms.  The body also contains metals, particularly sodium, magnesium, calcium, and iron, which “donate” electrons and make key chemical reactions possible.  Fluorine forms the smallest negatively charged ions known to science and wrecks organic molecules for reasons discussed later in this essay.  Organisms do not use fluorine, except for some plants that use it as a poison. 

When atoms combine through shared electrons (called “covalent” bonds), the electrons are not always shared equally.  The classic example of this is the water molecule.  Oxygen “hogs” the electrons that the hydrogen atoms share with it.  Because those electrons spend more time in the oxygen atom’s electron shell than they do in the hydrogen atoms’ electron shells, the oxygen atom in a water molecule will get a negative charge to it, and the hydrogen atoms will get positive charges.  The charges will not be as strong as if they were ionized atoms, but those charges “polarize” the molecule.  In a body of water, oxygen atoms will attract hydrogen atoms of neighboring molecules, and a relatively weak attraction known as a hydrogen bond forms.  Below is a picture of hydrogen bonds in water.  (Source: Wikimedia Commons)

Those hydrogen bonds make water the miraculous substance that it is.  The unusual surface tension of water is due to hydrogen bonding.  Water has a very high boiling point for its molecular weight (compare the boiling points of water and carbon dioxide, for instance) because of that hydrogen bonding.  Water’s unique properties made it the essential medium for biochemical reactions; the human body is mostly made of water. 

Those energy and chemistry concepts should make this essay easier to digest. 

 

Timelines of Energy, Geology, and Early Life

Timeline of Significant Energy Events in Earth's and Life's History

Abbreviated Geologic Time Scale

Early Earth Timeline before the Eon of Complex Life

Significant Energy Events in Earth's and Life's History as of 2014

Energy Event

Timeframe

Significance

Nuclear fusion begins in the Sun

c. 4.6 billion years ago (“bya”)

Provides the power for all of Earth's geophysical, geochemical, and ecological systems, with the only exception being radioactivity within Earth.

Life on Earth begins

c. 3.8 – 3.5 bya

Organisms begin to capture chemical energy.

Enzymes appear

c. 3.8 – 3.5 bya

Enzymes accelerate chemical reactions by millions of times, making all but the simplest life (pre-LUCA) possible.

Photosynthetic organisms first appear

c. 3.5 – 3.4 bya

Organisms begin to directly capture photonic solar energy.

Oxygenic photosynthesis first appears

c. 3.5 – 2.8 bya

Oxygen is generated, which complex life will later use, which makes non-aquatic life possible and also preserves the global ocean.

Aerobic respiration first appears

c. 2.4 – 1.8 bya

Allows for more energetic respiration than anaerobic respiration.

Complex cells first appear (eukaryotic)

c. 2.1 – 1.6 bya

Allows for larger cells and far greater energy generation capacity – pound for pound, a complex cell uses energy 100,000 times as fast as the Sun creates it.

First chloroplast created

c. 1.6 – 0.6 bya

Allows for direct energy capture of complex life, and led to plants.

Dramatic climb in atmospheric oxygen, to eventually achieve modern levels, begins

c. 850-420 million years ago ("mya") 

Creates conditions for complex life to appear and dominate Earth's ecosystems.

First animal appears

c. 760 to 665 mya

First large-scale energy users.

Deep oceans oxygenated

c. 580 - 560 mya

Creates conditions for complex life to appear, first in the global ocean.

Cambrian Explosion begins

c. 541 mya

First complex ecosystems appear.

Teeth appear

c. 540-530 mya

Concentrated application of muscle energy.

Reef ecosystems appear

c. 513 mya

The most complex aquatic ecosystem appears.

Land plants appear

c. 470 mya

Energetic basis for land-based ecosystems appears.

Land animals appear

c. 430-420 mya

Ability to create non-aquatic ecosystems.

Jaws appear

c. 420 mya

Greatest energy manipulation enhancement among vertebrates until the rise of humans.

Vascular plants appear

c. 410 mya

Ability to create vertical ecosystems.

Trees appear

c. 385 mya

Largest organisms ever, and greatest energy storage and delivery to any biome, and they become the basis for coal.

Fish migrate to land

c. 375 mya

Precursor to dominant land animals.

Seed-reproducing plants appear

c. 375 mya

Ability to colonize dry lands.

Amniotes appear

c. 320-310 mya

Ability to survive in dry lands.

Lignin-digesting organism appears

c. 290 mya

Ability to make tree-stored energy available to ecosystems.

Dinosaurs appear

c. 243 mya

Among the first terrestrial animals with upright posture, enabling great aerobic capacity and domination of terrestrial environments.

Tools first used

c. 400-200 mya?

Confers energy advantage to tool user.

Flowering plants appear

c. 160 mya

Great energy innovation to reduce reproductive costs, and animals are the beneficiaries, as they act as reproductive enzymes in the greatest symbiosis of plant and animal life, which allows flowering plants to dominate terrestrial ecosystems.

The control of fire

c. 2.0-1.0 mya

Allows protohumans to leave the trees, become Earth's dominant predator, and alter ecosystems, while cooked food helps spur dramatic biological changes, including encephalization in the human line.

Projectile weapons invented

c. 400 thousand years ago ("kya")

Changes the terms of engagement with prey, reduces the risk of hunting large animals, and increases hunting effectiveness.

Boat invented

c. 60 kya

Allows for first low-energy transportation, and ability to travel to unpopulated continents.

Widespread domestication of plants and animals

c. 10 kya

Provides the local and stable energy supply that allowed for sedentary human populations and civilization.

First metal smelted

c. 7 kya

Allows for tools highly improved over stone, for greater energy effectiveness of human activities.

Plow invented

c. 7 kya

Allows for greatly increased energy yields from agriculture.

First sailboat invented

c. 6 kya

First technology to take advantage of non-biological energy.

Wheel invented

c. 5.5 kya

Reduces energy use for ground-based transportation.

Coal first burned

c. 5-4 kya

First use of non-biomass for chemical energy.

Iron first smelted

c. 4.5 kya

Allows for vastly improved tools.

Coal used to smelt metal

c. 3.0 kya

First use of non-biomass to smelt metal.

Watermill invented

c. 2.2 kya

First time the energy of the hydrological cycle is harnessed for use on land.

Windmill invented

c. 2.0 kya

First time wind is harnessed for use on land.

Steam engine invented

c. 2.0 kya

First time the motive power of fire is harnessed.

Europe learns to sail across the world's oceans

The years 1420 – 1522, common era

Turns the global ocean into a low-energy transportation lane and allows Europe to conquer the world.

First use of coal for smelting metal in England

1709

First act of Industrial Revolution.

First commercial steam engine built

1710

First time the motive power of fire is harnessed to perform work.

First practical use of electricity

c. 1805

New way to use energy would revolutionize civilization.

First commercial oil well drilled

1859

The most coveted fuel of the Industrial Revolution is first used.

Incandescent lighting first commercialized

c. 1880

First commercial use of electricity. 

Alternating current technology prevails over direct current

1891

The major technical hurdle to electrifying civilization is overcome.

First attempt to create "free energy" technology is abandoned due to lack of funding

1903

This event inaugurates the era of organized suppression of free energy technologies.

First manned, powered airplane flight, and establishment of the first company to mass-produce automobiles

1903

Major transportation developments begin to be powered by petroleum.

Albert Einstein publishes his special theory of relativity and his equation for converting mass to energy

1905

Forms the framework for 20th century physics, including the energy that can be liberated from an atom's nucleus.

British Navy converts from coal to oil

1911

Provides incentive for oil-poor United Kingdom to dominate the oil-rich Middle East.

Oil-rich Ottoman Empire dismembered by industrial powers, establishing imperial and neocolonial rule in Middle East

1918

The West invades the Middle East and has yet to leave, lured by the oil.

USA harnesses the atom's power, and first use is vaporizing two cities, and the greatest period of economic prosperity in history begins

1945

The nuclear age is born, as well as the Golden Age of American capitalism.

The USA's national security state is born, Roswell incident

1947

By this time, free energy technology has probably been either developed or acquired.

Electrogravitic research goes black

1950s

This is the final technology, along with free energy technology, to make humanity a universally prosperous and space-faring species.

The USA reaches Peak Oil

1970

The decline in the American standard of living begins.

Former astronaut nearly dies immediately after rejecting the American military's UFO research "offer"

1990s

The incident is one of many that demonstrate that the UFO issue is very real; this one happened to somebody close to me.

A close personal friend is shown free energy and antigravity technologies, among others, and another close friend had free energy technology demonstrated

1980-1990s

Those incidents are two of many that demonstrate that the free energy suppression issue is very real; these were witnessed by people close to me.

The world reaches Peak Oil

2006

The beginning of the end of industrial civilization.

The Deepwater Horizon oil spill is history's largest

2010

More evidence of how dangerous humanity's current energy production methods are.

The Fukushima nuclear event is probably history's greatest

2011

More evidence of how dangerous humanity's current energy production methods are.

The table below presents an abbreviated geologic time scale, with times and events germane to this essay.  Please refer to a complete geologic time scale when this one seems inadequate.   

Abbreviated Geologic Time Scale

Eon

Period

Epoch

Timeframe

Global Map Reconstruction

Geophysical events

Life events

Hadean

   

c. 4.56 to 4.0 bya

No land masses yet.

Earth, Moon, and oceans form.  Earth is bombarded with planetesimals.  Everything is hot.  Atmosphere is primarily comprised of carbon dioxide.

None yet.

Archaean

   

4.0 to 2.5 bya

Too much uncertainty and too little evidence to confidently draw maps, but landmasses existed.

By the Archaean's end, the Sun is 80% as bright as today.  Earth cools to habitable temperature.  Continents begin forming and growing.  Atmosphere is mostly nitrogen, but oxygen begins to increase.  First known glaciation.

First life appears.  Photosynthesis begins.  All life is bacterial.  Oxygenic photosynthesis first appears.

Proterozoic

   

c. 2.5 bya to 541 mya

Maps begin to be made with confidence at about 750 mya.

Earth’s two Snowball Earth events (1, 2) bookend the “boring billion years.”  Banded iron formations coincide with ice ages. 

Complex cell (eukaryote) first appears.  Aerobic respiration first appears.  First chloroplast appears.  Sexual reproduction first appears.  Grazing of photosynthetic organisms first appears.

Cryogenian

 

c. 850 to 635 mya

Late Cryogenian Map

Supercontinent Rodinia breaks up.  Second Snowball Earth event.  Atmosphere oxygenated to near modern levels.  Final banded iron formations appear.

First animals appear.  First land plants may have appeared.

Ediacaran

 

c. 635 to 541 mya

Mid-Ediacaran Map

Deep ocean is oxygenated.  Proto-Tethys Ocean appears.

Mass extinction of microscopic eukaryotes.  First large animals appear.

Phanerozoic

Cambrian

 

c. 541 to 485 mya

Late Cambrian Map

Continents primarily in Southern Hemisphere.  Oceans are hot.

First mass diversification of complex life.  Most modern phyla appear.  First eyes develop.  Arthropods dominate biomes.

Ordovician

 

c. 485 to 443 mya

Late Ordovician Map

Paleo-Tethys Ocean begins forming.  Ice age begins and causes the mass extinction that ends the period.

Complex life continues diversifying.  First large reefs appear. Mollusks proliferate and diversify.  Nautiloids are apex predators.  First fossils of land plants recovered from Ordovician sediments.  Period ends with first great mass extinction of complex life.

Silurian

 

c. 443 to 419 mya

Mid-Silurian Map

Hot, shallow seas dominate biomes.  Climate and sea level changes cause minor extinctions. 

Reefs recover and expand.  Fish begin to develop jaws.  First invasions of land by animals.  First vascular plants appear. 

Devonian

 

c. 419 to 359 mya

Late Devonian Map

Continents close to form Pangaea; ice age begins at end of Devonian and causes mass extinction, possibly initiated by first forests sequestering carbon.

Fishes thrive.  First forests appear.  First vertebrates invade land.

Carboniferous

 

c. 359 to 299 mya

Early Carboniferous Map

End-Carboniferous Map

Atmospheric oxygen levels highest ever, likely due to carbon sequestration by coal swamps.  Ice age increases in extent, causing collapse of rainforest. 

Sharks thrive.  Gigantic land arthropods.  First permanent land colonization by vertebrates.  Amphibians thrive.  Reptiles appear.  Rainforests and swamps proliferate, forming most of Earth’s coal deposits.  Fungus appears that digests lignin. 

Permian

 

c. 299 to 252 mya

Late Permian Map

Tethys Ocean forms.  Oxygen levels drop.  Great mountain-building and volcanism as Pangaea forms, and its formation initiates the greatest mass extinction in the eon of complex life.  Ice age ends.

Synapsid reptiles dominate land.  Conifer forests first appear.

Triassic

 

c. 252 to 201 mya

Mid-Triassic Map

Pangaea begins to break up.  Greenhouse Earth begins and lasts the entire Mesozoic Era. 

Dinosaurs and mammals appear, and by the Triassic’s end, diapsid reptiles dominate land, sea, and air.  Stony corals appear as reefs slowly recover.

Jurassic

 

c. 201 to 145 mya

Early Jurassic Map

Mid-Jurassic Map

Late Jurassic Map

Northern continents split from southern continents.  Atlantic Ocean begins to form.

Dinosaurs become gigantic.  First birds appear.

Cretaceous

 

c. 145 to 66 mya

Mid-Cretaceous Map

End-Cretaceous Map

Sea levels dramatically rise.  Continents continue to separate.  Asteroid impact drives non-bird dinosaurs extinct and ends the Mesozoic Era.

Flowers first appear.  Chewing dinosaurs become prominent.  Forests near the poles.  Rudist bivalves displace coral reefs, but go extinct before the end-Cretaceous extinction. 

Paleogene

Paleocene

c. 66 to 56 mya

Paleocene Climate Map

Greenhouse Earth conditions still prevail, and an anomalous warming event ends the epoch. 

Mammals grow and diversify to fill empty niches left behind by reptiles. 

Eocene

c. 56 to 34 mya

Mid-Eocene Map

Late Eocene Map

Warmest epoch in hundreds of millions of years, but cooling begins midway into the epoch, beginning Icehouse Earth conditions.  Europe collides with Asia, and Asian mammals displace European mammals.

A Golden Age of Life on Earth, when life thrived all the way to the poles.  Whales appear.  Cooling in Late Eocene drives warm-climate species to extinction. 

Oligocene

c. 34 to 23 mya

Oligocene Climate Map

Cool epoch, as Antarctic ice sheets form, with warming at epoch’s end. 

Early whales die out, replaced by whales adapted to new ocean biomes. 

Neogene

Miocene

c. 23 to 5.3 mya

Mid-Miocene Map

First half of epoch is warm, then cools down. 

First half of epoch is warm, and called The Golden Age of Mammals.  Apes appear and spread throughout Africa and Eurasia.  Apes migrate back to Africa in cooling, while some remain in Southeast Asia.

Pliocene

c. 5.3 to 2.6 mya

Would appear nearly identical to today’s global map.

Earth continues to cool, and the land bridge between North and South America initiates a mass extinction of South American mammals and helps initiate the current ice age. 

Bipedal apes appear.  First stone tools made at end of epoch. 

Quaternary

Pleistocene

c. 2.6 mya to 12 kya

Early Pleistocene Map

Late-Pleistocene Map

Current ice age begins.

Mammals already cold-adapted, and relatively few extinctions, until the rise of humans. 

Holocene

12 kya to present

Today’s Map

Interglacial period in current ice age; recent and probably human-caused warming may extend the interglacial period. 

Mass extinctions of large animals happen wherever humans begin to appear. By the 21st century, the Sixth Mass Extinction in the eon of complex life appears to be underway, entirely caused by humans. 

 

 

The Formation and Early Development of the Sun and Earth

Chapter summary:

In the tables above, some dates have ranges, as such old dates often have relatively thin evidence supporting them, which can be interpreted in different ways.  Those dates will be adjusted as the scientific evidence and theories develop.  As I was writing this essay, a study was published that may have pushed back the beginning of the Great Oxygenation Event by several hundred million years.[41]  Moving dates can change some theories of causation, but few scientists will dispute the idea that Earth’s atmosphere was primarily oxygenated by oxygenic photosynthesis.  It is the only plausible mechanism for that oxygenation event and Earth’s continuing high atmospheric oxygen content.[42] 

After the Big Bang, when matter began to coalesce, virtually all mass in the universe was contained in hydrogen atoms, with traces of the next two lightest elements: helium and lithium.  According to the Standard Model, elementary particles have no mass by themselves, but the field that gives rise to the Higgs boson provides their mass.  Gravity attracted hydrogen atoms to each other and, where “clumps” of hydrogen became large enough, the pressure in the clump’s center (a star’s core) became great enough that the mutual repulsion of the protons in hydrogen nuclei was overcome (like charges repel each other, while opposite charges attract), and the protons fused together.  That fusion released a great deal of primordial Big Bang energy, and fusion powers stars.
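
The bookkeeping behind that energy release can be sketched with Einstein’s mass-energy equation.  Below is a minimal illustration using standard particle masses; the real proton-proton chain involves intermediate steps, positrons, and neutrinos, so this shows only the net mass budget.

    # Fusing four protons into one helium-4 nucleus converts about 0.7%
    # of the original mass into energy (E = m * c**2).  Standard values.
    PROTON_KG = 1.6726e-27
    HELIUM4_KG = 6.6447e-27
    C = 2.998e8  # speed of light, meters per second

    mass_lost = 4 * PROTON_KG - HELIUM4_KG
    print(f"fraction converted: {mass_lost / (4 * PROTON_KG):.2%}")  # ~0.68%
    print(f"energy per helium nucleus: {mass_lost * C ** 2:.2e} J")  # ~4.1e-12 J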

Depending on the star’s size and the resulting temperatures and pressures, various larger elemental nuclei can be produced.  Iron is the heaviest element created during a large star’s primary fusion process.  Nuclei larger than the simplest hydrogen nucleus contain neutrons as well as protons.  As the name implies, neutrons have no net electric charge, but they have about the same mass as a proton (an electron has less than a thousandth the mass of a proton, so virtually all of an atom’s mass is provided by its protons and neutrons).  Radioactive decay into daughter isotopes is mediated by the weak nuclear force.

In the smaller stars that eventually become white dwarfs, the primary fusion process creates oxygen as its heaviest element.  Even though the Sun is larger than about 95% of the Milky Way Galaxy’s stars, it is destined to become a white dwarf in about six or seven billion years.

Several different processes of element-building have been identified.  Stars from about half the size of the Sun to about nine times larger can undergo a process known as the s-process (slow neutron capture) late in their lives, and that process has created about half of the elements heavier than iron; bismuth is the heaviest element created by the process.  Those heavier elements are eventually blown from the star by its stellar wind as it becomes a white dwarf.  Stars with more than nine times the mass of the Sun undergo a different process at the end of their lives.  When the hydrogen and helium fuel is used up and the fusion processes in those stars’ cores wind down far enough, gravity will cause those stars to collapse in on themselves.  That collapse creates the conditions needed to forge the elements heavier than iron, including the heaviest ones; uranium is the heaviest naturally produced element.  In an instant, the r-process (rapid neutron capture) occurs.  Depending on a collapsing star’s composition, it can collapse into a black hole or neutron star, or explode into a supernova.

When a star becomes a supernova, those heavy elements are sprayed into the galactic neighborhood by a stupendous release of fusion energy.  Over the subsequent eons, gravity will cause the remnants of stars, and hydrogen that had not yet become a star or did not fuse within a star, to coalesce into an accretion disk, and a new star with its attendant planets will form.  The Sun will take more than ten billion years to live its life cycle before becoming a white dwarf.  Large stars burn much more quickly and can become supernovas after as little as ten million years of main-sequence burning.  The rule is: the larger the star, the shorter its life.
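
A rough rule of thumb captures that relationship: a main-sequence star’s luminosity rises as roughly the 3.5 power of its mass, so its fuel lasts for a time that falls as roughly the 2.5 power of its mass.  Below is a minimal sketch of that scaling; the exponent and the Sun’s ten-billion-year baseline are common approximations, not precise figures.

    # Rule-of-thumb stellar lifetimes: lifetime ~ 10 Gyr * mass ** -2.5,
    # with mass in solar masses.  A rough scaling, not a precise model.
    SUN_LIFETIME_GYR = 10.0

    for solar_masses in [0.5, 1.0, 9.0, 25.0]:
        lifetime_gyr = SUN_LIFETIME_GYR * solar_masses ** -2.5
        print(f"{solar_masses} solar masses: ~{lifetime_gyr:.4f} billion years")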

The accretion disk from which the Sun and its planets formed appeared in a relatively short time, and the disk was originally a molecular cloud that may have been disturbed by an exploding star.  A "local" exploding star likely provided the bulk of our solar system's matter, and the entire mess gravitationally collapsed into the disk.  Earth’s age is estimated to be about 4.6 billion years, and it formed fewer than 100 million years after the Sun did.  Within a mere 50 million years of its formation, the Sun became compressed enough to initiate the sustained fusion that still powers it and will for several billion more years.

Our solar system’s planets initially formed from clumps of heavier atoms, and the rocky planets formed in a region too hot for lighter elements and compounds to condense.  Oxygen and iron, those two largest products of main-sequence burning, comprise nearly two-thirds of Earth’s mass.

Just past our solar system’s “frost line,” the largest planet and first gas giant, Jupiter, formed.  In our solar system’s early days, smaller agglomerations of mass, called planetesimals, swarmed.  Those that began their lives inside the frost line were rocky, and those outside the frost line were generally comprised of lighter elements.  Those planetesimals bombarded the forming planets and increased their mass.  Other planetesimals were ejected from the solar system as the gravity of the Sun and planets whipped them around.  Today’s solar system provides mute evidence of that bombardment, as all rocky planets and moons are heavily cratered.  Earth’s geological processes have removed most evidence of that bombardment, but other rocky bodies have preserved the evidence.  It is thought that the bombardment of Earth by the planetesimals comprised of lighter elements provided the materials for Earth’s oceans and atmosphere.  Venus and Mars were also bombarded with the lighter elements and may have had plentiful water long ago, but only Earth retained its water.  The biggest collision between Earth and its neighbors may well have created the Moon, and although the currently prevailing hypothesis has plenty of problems, the other hypotheses have more.  Moon rocks obtained by NASA’s Apollo missions show that the oldest parts of the Moon’s surface are about the same age as Earth. 

Today’s prevailing scientific theories consider stars to be the observable universe’s energy centers.  According to today’s theories, 95% of the universe is not observable: about 70% is dark energy and 25% is dark matter.  To date, neither dark energy nor dark matter has ever been directly observed.  Any theory that relies on unobserved phenomena is going to be highly provisional, and I consider it unlikely that the prevailing cosmological theories a century from now will much resemble those of today.  The scale of the universe, from its largest to smallest objects, is truly difficult to imagine, and this animation can help provide some perspective.

The chemistry of Earth’s land, oceans, and atmosphere provides the raw material for life, but if the Sun disappeared tomorrow, Earth’s surface would quickly become a block of ice with an insignificant atmosphere.  Partly because humanity has not explored beyond our home star system, our planet is the universe’s only place officially acknowledged to host life as we know it. 

What is called geologic time is the calendar of Earth’s life cycle so far.  The scale of geologic time strains human brains with its immensity.  Writing about a geological period that “only” lasted 24 million years is part of the sometimes surreal experience of writing in terms of geologic time.  European geologists developed most of the calendar’s names in the 19th century, generally naming the timeframes after the locations where the first fossils of that time were discovered in their particular sedimentary layers.  Earth’s calendar has been divided into eons, eras, periods, epochs, and ages, and those categories are defined by the layers’ geological particulars, usually the discovered fossils.

The journey of life on Earth has been greatly affected by geophysical and geochemical processes as well as influences from beyond Earth, such as:

 

  • Continental formation, moving tectonic plates, and volcanism;

  • Land-based dynamics, including erosion, weathering, uplift, and subsidence;

  • The chemistry of the oceans and atmosphere;

  • The currents in the oceans and atmosphere, including oceanic tides;

  • The physics of Earth’s atmosphere and magnetic field;

  • The climate, including precipitation and changing temperatures;

  • Comet and asteroid impacts;

  • Earth’s relationship with the Moon;

  • Variations in Earth’s orientation to the Sun;

  • Slowly increasing solar output as the Sun grows older, and minor variations in solar output.

 

Those processes and events can interact with each other, and a few examples can provide an idea of the dynamics’ complexity.  What follows are today’s orthodox views, to the best of my knowledge, and they can certainly change in the future, perhaps even radically, just as cosmological and subatomic theories may change radically.  It seems to me, however, that geophysical and geochemical processes are understood better and have more robust data than many other areas of science, so geophysics and geochemistry are areas where I expect fewer radical changes than in others.  Maybe that is because their phenomena are neither too big nor too small, and are closer to our daily reality than distant stars or what is happening inside atoms.

Volcanism can not only temporarily alter the atmosphere’s chemistry, but the ash from volcanism can also block sunlight from reaching Earth’s surface and lead to atmospheric cooling.[43]  Carbon dioxide vented by volcanism in the Mesozoic era is what made it so warm.  Tectonic plate movements can alter the circulation of the atmosphere and ocean.  When continental plates come together into a supercontinent, oceanic currents can fail and the oceans can become anoxic, as atmospheric oxygen is no longer drawn into the global ocean’s depths, which may have triggered numerous mass extinction events.[44]  When continents are near the poles, ice ages can appear, but in our current ice age the tipping point is variations in Earth’s orientation to the Sun, which is affected by, among other influences, the Moon. 

Tectonic plates can collide, such as the collision of India into Asia, which formed the Himalayan Mountains and raised the Tibetan Plateau.  That continuing event not only changed Earth’s weather patterns and influenced the monsoons’ formation, it also exposed a great deal of raw rock to the atmosphere and consequently removed atmospheric carbon dioxide through weathering, which in turn cooled the atmosphere.  That may have contributed to the ice age that we currently experience, although other studies indicate that the carbon removal may have been more due to the burial of organic matter.  The debate continues as the complex dynamics are subjected to scientific investigation.[45]  For all of the controversy over the dynamics, few scientists argue against the idea that atmospheric carbon dioxide has been falling fairly consistently since about 150-to-100 mya, from more than a thousand parts per million to the roughly 200-300 parts per million (“PPM”) of the past million years.  Nearly 35 million years ago (also written as “35 mya”), carbon dioxide levels fell below 600 PPM, and the Antarctic ice sheet began to form.[46]  During the current fossil fuel era, Earth’s atmosphere may reach 600 PPM again, or higher, in this century; it is already nearly 400 PPM and rising fast.  Carbon dioxide levels are considered to be a primary variable affecting the temperature of Earth’s surface over the eons. 

Earth’s development has also been greatly impacted by life processes.  For instance, if hydrogen floats free in the atmosphere, Earth’s gravity is not strong enough to prevent it from escaping to space.  Ultraviolet light breaks water vapor into hydrogen and oxygen.[47]  If not for the high oxygen content of Earth’s atmosphere, Earth would have lost its oceans as all the hydrogen from split water molecules eventually drifted into space.  Scientists believe that that is what happened to Venus and Mars: their water vapor was split in the atmosphere, and the hydrogen then escaped to space (although Venus may never have cooled enough to form liquid water).[48]  Without the ocean, there would not be life on Earth as we know it.  On Earth, hydrogen liberated by ultraviolet light reacts with atmospheric oxygen and turns back into water before it can escape into space.[49]  The reason for free oxygen in the atmosphere is photosynthesis.  Comparing Earth’s tectonics to Venus’s suggests that the formation of granite and continents, and the setting of tectonic plates in motion, were due to Earth’s ocean.[50]  Plate tectonics recycle elements through Earth’s crust and mantle, and the carbon cycle in particular has great import.  Photosynthesis led to atmospheric oxygen, which led to the ozone layer that helped prevent the splitting of water, and atmospheric oxygen recaptured hydrogen that would have otherwise escaped to space, which prevented the oceans from disappearing, which probably led to plate tectonics, which led to the formation of granitic continents, which led to land-based life.  In short, life made Earth more conducive to life.  That is the most important impact of life on geophysical and geochemical processes, but far from the only one; others will be explored in this essay. 

Geology in the West is considered to have begun during the Classical Greek period, and Persian and Chinese scholars furthered the discipline during the medieval period.  While volcanoes and geysers have always provided humanity with abundant evidence that Earth’s interior is hot, the collection of data about Earth’s subterranean temperature began when humans started mining hydrocarbons and metals in abundance during the early days of industrialization.  It was not until my lifetime that some of Earth’s geological processes were understood well enough to begin mapping its energy flows.  Today’s most widely accepted hypothesis is that the energy provided by the radioactive decay of elements such as potassium, uranium, and thorium is the primary heat source for Earth’s geological processes, and it propels mass flows within Earth.  There is a constant upwelling of mass from the mantle, riding those energy currents.  When those flows reach Earth’s crust, the lighter portions float to Earth’s surface.[51]  Those portions eventually cool, become denser, and sink back into the mantle.  That process is thought to have begun about three billion years ago (also written as “three bya”), about the time that the continents began forming in earnest.  Three bya, the continents may have had only about a quarter of the mass that they do today.[52]  There are even recent ideas that life processes led to the formation of the continents.[53] 

The lightest portions of Earth’s crust, a relative wisp of Earth’s mass, make up the continents today, which are primarily made of lighter rocks such as granite; the remainder of the crust is composed of denser rock such as basalt.  The granites formed when basalt was exposed to water, and the process partly replaced heavier iron with lighter sodium and potassium.  Earth is our solar system’s only known home of granite.  Water also became incorporated into the rocks, generally where the heavier oceanic crust was subducted below the lighter continental crust.[54]  It is thought today that the original global ocean had about twice the volume of today’s ocean.[55]  The “missing” ocean was incorporated into the crust and mantle, and it helps make the granitic continents lighter so that they float on the heavier basaltic crust.  Granite is composed almost entirely of oxides, and hydrated minerals abound in Earth’s continents.  Those continental masses have been floating across Earth’s surface for billions of years as they have collided with each other, rebounded, lifted, subducted below the crust, and been recycled into the mantle.  Those tectonic plates have been likened to the surface of a pot of boiling oatmeal.  Plates can collide and form mountains, and they can pull apart and expose the hot interior, which spews out in volcanism (at the edges of tectonic plates, including ridges in the oceans).  Currently, it seems that there is a 500-million-year cycle whereby the continents crash together to form a supercontinent, then break apart and scatter across Earth’s surface before coming back together.[56]  Today, the continents are about 100 million years from their furthest projected spread across Earth’s surface, when they will begin to come back together to form a supercontinent about 250 million years from now.

Earth’s volume is about one trillion cubic kilometers.  Its core is believed to be about 90% iron, with the remainder largely nickel.  The mantle is thought to be mostly oxygen and silicon, and the remainder is largely composed of the lighter alkali and alkaline earth metals, such as sodium, potassium, and calcium.  Those mantle metals are primarily bound in oxides.  The mantle makes up more than 80% of Earth’s volume.  The crust is also composed almost entirely of oxides.  Silicon dioxide (sand and glass are made from it) is the most prevalent compound, and the crust is, by mass, nearly 75% oxygen and silicon (granite’s primary constituent elements); nearly all of the remainder is aluminum, iron, and those lighter alkali and alkaline earth metals.  All other elements combined amount to less than 2% of Earth’s crust.  An accompanying table presents the current estimates of the relative concentrations of Earth’s mass and atoms that are relevant to this essay.[57] 

The oceans and atmosphere amount to a tiny portion of Earth’s mass and are made of light elements and compounds with low boiling points as compared to crustal compounds.  The oceans are primarily water, and that water contains most of Earth’s hydrogen.  On Earth, about 1-in-5,000 atoms are hydrogen, but 63% of the human body’s atoms are hydrogen.  Carbon and nitrogen are also scarce Earth elements, but they total more than 10% of the human body’s atoms; life is made of elements that are rare on Earth.  What geochemists call the biosphere (comprising all living organisms; biologists call it biomass) amounts to less than one billionth of Earth’s mass.  Land-based biomass is about 500 times greater than ocean-based biomass.  Life as we know it seems to be rare and delicate, found nowhere else in our solar system so far, and few places seem promising for it to exist.  Below is a graphical representation of the relationship of Earth’s mass to the masses of the ocean, atmosphere, and biosphere. 

Earth receives less than one-billionth of the energy that the Sun produces.  The above image of the biosphere’s proportion of Earth’s mass is close to the proportion of the Sun’s energy that Earth receives (the largest sphere indicates the Sun’s output, and that small green dot indicates the proportion of the Sun’s output that Earth receives).  About 0.02% of the Sun’s energy that reaches Earth is captured by photosynthesis (that tiny dot would be invisible in that diagram).  That infinitesimal proportion captured by photosynthesis is the basis for nearly all life on Earth. 
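
That one-billionth proportion is easy to check with nothing more than geometry: Earth intercepts the Sun’s output over its cross-sectional disk, out of a full sphere with a radius of the Earth-Sun distance.  Below is a minimal sketch in Python, using standard values for Earth’s radius and the astronomical unit; the 0.02% photosynthesis figure is the one given above.

    import math

    R_EARTH = 6.371e6   # Earth's mean radius, in meters
    AU = 1.496e11       # Earth-Sun distance, in meters

    # Fraction of the Sun's output that Earth intercepts: the area of
    # Earth's cross-sectional disk over the area of the sphere at one AU.
    fraction = (math.pi * R_EARTH**2) / (4 * math.pi * AU**2)
    print(f"Reaching Earth: {fraction:.1e}")                      # ~4.5e-10

    # About 0.02% of the sunlight reaching Earth is captured by photosynthesis.
    print(f"Captured by photosynthesis: {fraction * 2e-4:.1e}")   # ~9e-14

The result, roughly 4.5 × 10^-10, is indeed less than one-billionth.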

Earth’s iron core gives rise to its pronounced magnetic field, which helps protect Earth’s surface from the solar wind.  Planets with weak magnetic fields, such as Mars, are believed to be vulnerable to the solar wind stripping away their atmospheres.  If Earth did not have a magnetic field, its ozone layer may have been stripped away, which may have led to the extinction of complex life on Earth, if it would have ever appeared at all.

The fact that complex life exists on Earth seems to be a miracle of circumstance.  From the life of the Sun, to the part of our galaxy where our solar system resides, to the dynamics that led to Earth retaining her global ocean and having an ozone layer, to the molten core and magnetic field that protects Earth’s surface, life on Earth may be far rarer in the universe than it seems from the perspective of a species that has yet to visit other stars.[58] 

For the first 500 million years of Earth’s life, called the Hadean Eon, it was hot and bombarded by planetesimals.  A naked human would not have survived for a minute on the Hadean Earth.  The atmosphere held no oxygen, the ocean’s temperature was higher than today’s boiling point of water, and there was little if any land to stand on.  Earth’s surface was regularly bombarded by comets and asteroids, and the larger collisions vaporized the ocean, which would then condense and settle back in the greatest rains in Earth’s history.  The Moon was probably created during the Hadean Eon, when a planet-sized mass collided with Earth.  The oldest known “native” rocks on Earth date from the Hadean Eon’s end, four bya.  The Hadean atmosphere may have been like Venus’s today – almost all carbon dioxide and at an immensely higher pressure than today’s atmosphere – although this is controversial, and recent evidence favors far lower carbon dioxide levels, at least in the Archaean, which was the next eon.[59]  The continents probably began forming during the Archaean Eon (although, as with many such ancient events, there are competing hypotheses with various levels of acceptance, one of which is that the continents were fully formed by four bya), which is also likely when life as we know it first appeared on Earth.  At the Archaean Eon’s beginning, the chemistry of the oceans and atmosphere would have been unfamiliar to us, and would not have supported today’s animal life, because there was no free oxygen in the atmosphere or oceans.  The global ocean may have been full of dissolved iron and other minerals not prevalent in today’s ocean.  The environment that life first appeared in would have been highly hostile to today’s multicellular life forms, and those early life forms were tough.

 

Early Life on Earth

Chapter summary:

  • Appearance of life on Earth, and its energetic basis

  • Role of DNA, enzymes, ATP, and membranes

  • Basic aspects of life

  • Biochemistry, geochemical cycles, and entropy

  • Respiration and photosynthesis

  • Split of bacteria and archaea

  • Oxygenic photosynthesis

  • Formation of the continents, plate tectonics

  • Great Oxygenation Event, and formation of the iron deposits, the first ice age, and formation of the ozone layer

  • Development of the complex cell and its energy centers - the mitochondria - and mitochondrial DNA

  • Development of aerobic respiration

  • Free radicals and cell death

  • Formation of supercontinents

  • Evolutionary struggles, the appearance of plants, sexual reproduction, grazing, and predation

  • One-way path of evolution

Above all else, life is an energy acquisition process.  All life exploits the potential energy in various atomic and molecular arrangements, or captures energy directly, as in photosynthesis.  Early life exploited the potential energy of chemicals.  The chemosynthetic ideal is to capture chemicals newly introduced to an environment, before they have reacted with other chemicals.  The currently most-accepted hypothesis has life first appearing on Earth about 3.5-3.8 bya, probably in volcanic vents on the ocean floor.[60]  The earliest life forms took advantage of fresh chemicals introduced to the oceans.  Life had to be opportunistic and quick in order to capture that energy before other molecules did.

Today’s mainstream science has nothing to say about any intent behind the appearance of life on Earth; it pursues only the physical mechanisms.  When life first appeared on Earth, the evolutionary process that led to humanity began.  The USA’s population has more doubt about evolution than that of any other Western nation, primarily because Biblical literalism is still strong here.  In all other Western nations, there is virtually no controversy over evolution being a fact of existence, and those nations view the controversy over evolution in the USA with befuddlement.  Enlightened scientists will state that science’s story of evolution is one of process and history, not intent, and really has nothing to say about a creator.[61]

There is no scientific consensus regarding how life first appeared, but it is currently thought that all life on Earth today descended from one organism, a creature known today as the Last Universal Common Ancestor (“LUCA”).[62]  The reasoning is partly that all life has a preference for using certain types of molecules.  Many molecules with the same atomic structure can form mirror images of themselves; that mirror-image phenomenon is called chirality.  In nature, such mirror images occur randomly, but life prefers one mirror image over the other.  In all life on Earth, proteins are virtually without exception left-handed, while sugars are right-handed.  If there were more than one line of descent, life with different “handedness” would be expected, but it has never been found, which has led scientists to think that LUCA is the only survivor that spawned all life on Earth today.  Either all other lineages died out (the likely answer, as there were probably hundreds of millions of years of evolution on Earth before LUCA lived), or all life descended from the same original organism.  As we will see, this is far from the only instance when such seminal events are considered to have probably happened only once.  Also, the unique structures of DNA and many enzymes are common to all life, and they did not have to form the way that they did; that they came through different ancestral lines is extremely unlikely.

The critical feature of earliest life had to be a way to reproduce itself, and DNA is common to all cellular life today.  The DNA that exists today was almost certainly not a feature of the first life; the most accepted hypothesis is that RNA is DNA’s ancestor.  The mechanism today is that DNA makes RNA, and RNA makes proteins.  DNA, RNA, proteins, sugars, and fats are the most important molecules in life forms, and very early on, protein “learned” the most important trick of all, which was an energy innovation: facilitating biological reactions.  At the molecular level, activation energy is the energy that crashes molecules into each other, and if they are crashed into each other fast enough and hard enough, the reaction becomes more likely.  But that is an incredibly inefficient way to do it.  It is like putting a key into a room with a locked door and shaking the room in the hope that the key will insert itself into the lock during one of its collisions with the room’s walls.  Proteins make the process far easier, and those proteins are called enzymes.

Enzymes speed up chemical reactions; in the above analogy, an enzyme acts as if a person entered that room, picked up the key, and inserted it into the lock, which takes far less effort than shaking the room a million times.  Enzymes are like hands that grab two molecules and bring them into alignment so that the key inserts into the lock.  The lock-and-key analogy is the standard way to explain enzymes to non-scientists.  Enzymes make chemical reactions happen millions and even billions of times faster than they would occur in the enzymes’ absence.  Life would never have grown beyond some microscopic curiosities without the assistance that enzymes provide.  Almost all enzymes are proteins, which are generally huge molecules with intricate folds.  The animation of human glyoxalase below depicts a standard enzyme (the author is WillowW at Wikipedia, and the zinc ions that make it work are the purple balls).

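The million-fold and billion-fold speedups just described can be made plausible with the Arrhenius equation, which relates a reaction’s rate to its activation energy.  Below is a minimal sketch in Python; the 34 kJ/mol barrier reduction is an illustrative number, not a measurement of any particular enzyme.

    import math

    R = 8.314   # gas constant, in J/(mol*K)
    T = 298.0   # room temperature, in kelvins

    # Arrhenius equation: the rate is proportional to exp(-Ea / (R*T)), so
    # lowering the activation energy Ea by delta_Ea multiplies the rate
    # by exp(delta_Ea / (R*T)).
    delta_Ea = 34_000.0   # J/mol, an illustrative barrier reduction
    speedup = math.exp(delta_Ea / (R * T))
    print(f"Rate enhancement: {speedup:.1e}")   # ~9e5, roughly a million-fold

Somewhat larger barrier reductions yield the billion-fold enhancements that some enzymes achieve.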

Enzymes look like Rube Goldberg-ish contraptions when their function is considered: huge molecules are used to make small ones interact.  Proteins have four levels of structure, and the second level is held in place by hydrogen bonds.  The enzyme’s pair of “hands” is like that of a robot on an assembly line, putting two parts together and passing the assembly to the next stage.  An enzyme can catalyze millions of reactions per second.  All of today’s life on Earth would cease to exist in the absence of enzymes.  Other than the ability to reproduce itself and produce proteins, speeding up reactions by millions of times is life’s most important “trick” and its greatest energy innovation.  Adenosine triphosphate (“ATP”) is a coenzyme used to fuel all known biological processes.  The human body produces its own weight in ATP each day.  Poisons and drugs generally disable enzymes by plugging or wrecking the “lock” so that the intended “key” will not fit.  Cyanide kills by disabling a key enzyme that produces ATP, which induces an energy shortage at the cellular level.
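
The body-weight claim sounds impossible, but the body holds only a small stock of ATP at any moment and recycles each molecule over and over.  A rough sketch of the arithmetic in Python, assuming a 70-kilogram person (ATP’s molar mass is about 507 grams per mole):

    AVOGADRO = 6.022e23      # molecules per mole
    ATP_MOLAR_MASS = 507.2   # grams per mole
    BODY_MASS_G = 70_000.0   # an assumed 70-kilogram person

    moles_per_day = BODY_MASS_G / ATP_MOLAR_MASS        # ~138 moles of ATP
    molecules_per_day = moles_per_day * AVOGADRO        # ~8.3e25 molecules
    molecules_per_second = molecules_per_day / 86_400   # ~1e21 per second

    print(f"{molecules_per_day:.1e} ATP molecules produced per day")
    print(f"{molecules_per_second:.1e} ATP molecules produced per second")

About a thousand billion billion ATP molecules are produced and consumed every second, which gives a sense of the scale at which enzymes operate.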

Another vital invention of life is creating the “room” in which those reactions can take place.  The “rooms” of the first life forms were created by membranes, which are comprised of proteins and fats.  As with the first RNA, DNA, and proteins, the first membranes probably did not resemble today’s very much.  Membranes define life, keeping it separate from other molecules in Earth’s brew.

There are two primary aspects of life, and what can be observed in human civilization is often only a more complex iteration of those aspects, which are:

 

1.      Life harnessed energy so that it could manipulate matter to create itself;

2.      Life created information so that it could reproduce itself.

 

One aspect manipulated matter and energy, and the other was the “program” for manipulating it.  Matter and energy could be manipulated to either build a living structure or operate it (or disassemble it), and the organism always made the “decision.”

Entropy is another important concept for this essay.  Entropy is, in its essence, the tendency of hot things to cool off.  The concept is now introduced to students as energy dispersal.  Even though science really does not know what energy is, it can measure energy’s effects.  At the molecular level, entropy is the tendency of mass to become disordered over time, as the random motion of molecules spreads in collisions with other molecules, until the interacting molecules have the same temperature.  Life had to overcome entropy in order to exist, as it brought order out of disorder and maintained it while alive, and it takes energy to do that.  The prevailing theory is that net entropy can only increase, so life has to create more entropy in its surroundings in order to reduce entropy internally and produce and maintain the order that sustains itself.  Life is called a negentropic phenomenon: it uses energy to reverse entropy and create its ordered structures, and it continually uses energy to reverse the natural increase of entropy that we call decay.[63]
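
To put a standard formula behind that concept (this is textbook physics, not anything unique to this essay): in Boltzmann’s statistical formulation, entropy is S = k ln W, where W is the number of microscopic arrangements consistent with a system’s macroscopic state and k is Boltzmann’s constant.  An organism keeps its internal W low (that is its order), which the second law of thermodynamics permits only because the organism’s energy use raises W in its surroundings by even more, so that total entropy still increases.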

Of those key elements necessary for life as we know it, the most versatile is carbon, with its half-filled outer electron shell.  Carbon provides the “backbone” for life’s chemistry, and is the foundational element of DNA, RNA, sugars, proteins, fats, and virtually all other components of life.  A carbon atom can bond with one, two, three, or four other carbon atoms, which gives carbon the most diverse self-bonding of all elements, and an entire branch of chemistry, called organic chemistry, is devoted to carbon.  Organic molecules are by far the largest known to science.  During my first day of organic chemistry class, the professor observed that because the primary use of hydrocarbons was burning them to fuel the industrial age, we were living in “the age of waste,” as hydrocarbons are a treasure trove of raw materials.  In the eyes of an organic chemist, burning fossil hydrocarbons to fuel our industrial world is like making Einstein dig ditches or making Pavarotti wash dishes for a living.

Nitrogen and phosphorus are the most vital elements for life after carbon, hydrogen, and oxygen.  In its pure state in nature, nitrogen, like hydrogen and oxygen, is a diatomic molecule.  Hydrogen in nature is single-bonded to itself, oxygen is double-bonded, and nitrogen is triple-bonded.  Because of that triple bond, nitrogen is quite unreactive and prefers to stay bonded to itself.  In nature, nitrogen will not significantly react with other substances unless the temperature (activation energy) is very high.  Most nitrogen compounds in nature are created when the nitrogen and oxygen that comprise more than 99% of Earth’s atmosphere react under lightning’s influence to create nitric oxide, which then reacts with oxygen to form nitrogen dioxide, and atmospheric water combines with that to make nitrous and nitric acids, which then fall to Earth’s surface in precipitation.  Certain kinds of bacteria “fix” the nitrogen from the acidic rain into biological systems.  Also, some bacteria can fix nitrogen directly from atmospheric nitrogen, but it is an energy-intensive operation that uses the energy in eight ATP molecules to fix each atom of nitrogen.  For the earliest life on Earth, nitrogen would have been essential, and some nitrogen is fixed at volcanic vents, where life may have first appeared. 
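
The chemistry just described can be sketched with standard textbook equations (simplified here; the nitrous acid pathway is omitted):

    N2 + O2 → 2 NO                    (lightning supplies the activation energy)
    2 NO + O2 → 2 NO2
    3 NO2 + H2O → 2 HNO3 + NO         (nitric acid, which falls in precipitation)

    N2 + 8 H+ + 8 e- + 16 ATP → 2 NH3 + H2 + 16 ADP + 16 Pi   (bacterial nitrogenase)

The last equation shows where the eight-ATP-per-atom cost comes from: 16 ATP molecules are spent to fix the two nitrogen atoms in each nitrogen molecule.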

The nitrogen cycle is one of life’s most important, in which some bacteria fix nitrogen for biological use and others release nitrogen back to the atmosphere.  Nitrogen’s relatively inert nature and preference for being bonded to itself is why it is the dominant atmospheric gas, at 78% of the atmosphere’s volume.  It has held that dominant status for billions of years. 

Carbon dioxide, on the other hand, has been generally decreasing as an atmospheric gas for billions of years, and has consistently declined for the past 100-150 million years.  The geochemical process is like nitrogen's in that atmospheric water combines with carbon dioxide to form a weak acid, which then falls to Earth in precipitation.  But carbon is in the same elemental family as an abundant crustal element: silicon.  Carbon replaces the silicon in crustal compounds and turns silicates into carbonates in a process called silicate weathering.[64]  Most of Earth’s primordial carbon dioxide was probably removed by this process, although the exact mechanisms are in dispute.  In all paleoclimate studies, carbon dioxide is a prominent variable, if not the prominent variable, for determining Earth’s surface temperature.  But perhaps as early as three bya, life became a significant source of carbon removal from the atmosphere, as life forms died and sank to the ocean floor, were subsequently buried by sedimentation, and tectonic plate movements further buried them into Earth’s crust and mantle.
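
Silicate weathering can be sketched with calcium silicate standing in for crustal silicates generally (a textbook simplification):

    CO2 + H2O → H2CO3                                (carbonic acid forms in the atmosphere)
    CaSiO3 + 2 H2CO3 → Ca2+ + 2 HCO3- + SiO2 + H2O   (the acid weathers silicate rock)
    Ca2+ + 2 HCO3- → CaCO3 + CO2 + H2O               (carbonate forms in the ocean)

The net effect is CaSiO3 + CO2 → CaCO3 + SiO2: each round trip locks one molecule of atmospheric carbon dioxide into carbonate rock.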

More carbon dioxide was removed from the atmosphere by those processes than was reintroduced to it by volcanism and other processes.  That removal and reintroduction of carbon to Earth’s surface is called the carbon cycle.  As carbon dioxide continues to be removed from the atmosphere, life will have a harder time surviving and will eventually go extinct, as first plants and then animals decline and disappear, and it will be back to microbes ruling the Earth until the Sun’s expansion into a red giant destroys the planet.[65]  The earthly end of complex life’s reign may be a billion years away, but it might come much sooner.

When life first appeared, it was single-celled and simple, and such organisms are called prokaryotes today.  Below is a diagram of a typical prokaryotic cell.  (Source: Wikimedia Commons)

The diagrams used in this chapter are only intended to provide a glimpse of the incredible complexity of structure and chemistry that takes place at the microscopic level in organisms, and people can be forgiven for doubting that it is all a miraculous accident.  I doubt it, too, as did Einstein.  Prokaryotes do not have organelles such as mitochondria, chloroplasts, and nuclei, but even the simplest cell is a marvel of complexity.  If we could shrink ourselves so that we could stand inside an average bacterium, we would be astounded at its complexity, as molecules moved here and there, were brought inside the bacterium’s membrane, and were used to generate energy and build structures, while waste products were ejected from the organism.  Cellular division would be an amazing sight.

The most significant branch of evolution’s tree of life may have been the first, when early life split into two branches, today called Bacteria and Archaea.  Darwin’s notion of slowly accumulating differences through descending organisms gradually leading to new species is confounded at the single-celled level in particular, as microbes swap DNA with abandon.  The so-called tree of life at the microbe level better resembles a web.[66]  The classifications in the evolutionary tree of life are by no means settled, with constant disputes and changes, but scientists still generally think that it is a tree, with perhaps some webby roots.

In its earliest days, life on Earth had to solve the problems of how to reproduce, how to separate itself from its environment, how to acquire raw materials, and how to perform the chemical reactions that it needed.  But it was confined to those areas where it could take advantage of briefly available potential energy as Earth’s interior was disgorged into the oceans.  That earliest process of skimming energy from energy gradients to power life is called respiration, and it is today called anaerobic respiration because there was virtually no free oxygen in the atmosphere or ocean in those early days.  Respiration was life’s first energy cycle.[67]  A biological energy cycle begins by harvesting an energy gradient (usually by a proton crossing a membrane or, in photosynthesis, by directly capturing photon energy), and the acquired energy powers chemical reactions.  The cycle then proceeds in steps, and the reaction products of each step sequentially use a little more of the energy from the initial capture, until that energy has been depleted and the cycle’s molecules return to their starting point, ready for a fresh influx of energy to repeat the cycle.

Back in life’s early days, some creatures discovered another source of energy and nutrients besides the chemical brew of volcanic vents: other life forms.  Predation was then born.[68]  Evolution has plenty to answer for, and opportunistically robbing creatures of their lives to eat them is perhaps evolution’s primary “negative” outcome.

The evidence is that “only” 100 million years or so after LUCA lived, life learned its next most important trick after learning how to exist and speed up reactions: it tapped a new energy source.  Photosynthesis may have begun 3.4 bya.  Only bacteria are true photosynthesizers, fixing carbon with captured sunlight.  Archaeans cannot fix carbon via sunlight capture, so they are not photosynthesizers, even those species that capture photons. 

As with other early life processes, the first photosynthetic process was different from today’s, but the important result – capturing sunlight to power biological processes – was the same.  The scientific consensus today is that a respiration cycle was modified, and a cytochrome in a respiration system was used for capturing sunlight.  Intermediate stages have been hypothesized, including the cytochrome using a pigment to create a shield to absorb ultraviolet light, or that the pigment was part of an infrared sensor (for locating volcanic vents).  But whatever the case was, the conversion of a respiration system into a photosynthetic system is considered to have only happened once, and all photosynthesizers descended from that original innovation.[69]

Metals used by biological processes can donate electrons, unlike those other elements that primarily seek them to complete their shells.  Those metals used by life are isolated in molecular cages called porphyrins. 

As with enzymes, the molecules used in biological processes are often huge and complex, but ATP energy drives all processes, and that energy comes from either the potential chemical energy of Earth’s interior or sunlight; even chemosynthetic organisms rely on sunlight to provide their energy.[70]  The Sun thus powers all life on Earth.  The cycles that capture energy (photosynthesis or chemosynthesis) or produce it (fermentation or respiration) generally have many steps in them, and some cycles can run backwards, such as the Krebs cycle.[71]  Below is a diagram of the citric acid (Krebs) cycle.  (Source: Wikimedia Commons)

The respiration and photosynthesis cycles in complex organisms have been the focus of a great deal of scientific effort, and cyclic diagrams (1, 2) can provide helpful portrayals of how cycles work.  Photosynthesis has several cycles in it, and Nobel Prizes were awarded to the scientists who helped describe the cycles.[72]  Chlorophyll molecules look like antennae, with magnesium in their porphyrin cages, and long tails.  Below is a diagram of a chlorophyll molecule.  (Source: Wikimedia Commons)

Those molecules initiate photosynthesis by trapping photons.  Chlorophyll is called a pigment and, as it sits in its “antennae complex,” it only absorbs wavelengths of light that boost its electrons into higher orbits.  The wavelengths that plant chlorophyll does not absorb well are in the green range, which is why plants are green.  Some photosynthetic bacteria absorb green light, so the bacteria appear purple, and there are many similar variations among bacteria.  Those initial higher electron orbits from photon capture are not stable and would soon collapse back to their lower levels and emit light again, defeating the process, but in less than a trillionth of a second the electron is stripped from the capturing molecule and put into another molecule with a more stable orbit.  That pathway of carrying the electron that got “excited” by the captured photon is called an electron transport chain.  Separating protons from electrons via chemical reactions, and then using their resultant electrical potential to drive mechanical processes, is how life works. 
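
For a sense of the energies involved (an illustrative calculation from Planck’s relation, not a figure from this essay’s sources): a photon’s energy is E = hc/λ, so for red light with a 680-nanometer wavelength, which chlorophyll absorbs well, E works out to about 2.9 × 10^-19 joules, or roughly 1.8 electron-volts per photon, enough to boost an electron into a higher orbit.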

Early photosynthetic organisms used the energy of captured photons to strip electrons from various chemicals.  Hydrogen sulfide was an early electron donor.  In the early days of photosynthetic life, there was no atmospheric oxygen.  Oxygen, as reactive as it is, was deadly to those early bacteria and archaea, damaging their molecules through oxidization.  Oxidative stress, or the stripping of electrons from life’s molecules, has been a problem since the early days of life on Earth.[73]  Oxidative stress is partly responsible for how organisms age, but it can also be beneficial, as organisms use oxidative stress in various ways.

The dates are controversial, but it appears that after hundreds of millions of years of using various molecules as electron donors for photosynthesis, cyanobacteria began to split water to get the donor electron, and oxygen was the waste byproduct.  Cyanobacterial colonies are dated to as early as 2.8 bya, and it is speculated that oxygenic photosynthesis may have appeared as early as 3.5 bya and then spread throughout the oceans.  Those cyanobacterial colonies formed the first fossils in the geologic record, called stromatolites.  At Shark Bay in Australia and some other places the water is too saline to support animals that can eat cyanobacteria, so stromatolites still exist and give us a glimpse into early life on Earth.

Oxygenic photosynthesis uses two systems for capturing photons.  The first one (called Photosystem II) uses captured photon energy to make ATP.  The second one (called Photosystem I because it was discovered before Photosystem II) uses captured photon energy to energize electrons that are eventually added to captured carbon dioxide, to help transform it into a sugar.  That “carbon fixation” is accomplished by the Calvin cycle, and an enzyme called Rubisco, Earth’s most abundant protein, catalyzes that fixation.  Below is a diagram of the Calvin cycle.  (Source: Wikimedia Commons)

Some bacteria use Photosystem I and some use Photosystem II.  More than two bya, and maybe more than three bya, cyanobacteria used both, and a miraculous instance of innovation tied them together.  Some manganese atoms were then used to strip electrons from water.  Although the issue is still controversial regarding when it happened and how, that instance of cyanobacteria’s using manganese to strip electrons from water is responsible for oxygenic photosynthesis.  It seems that some enzymes that use manganese may have been “drafted” into forming the manganese cluster responsible for splitting water in oxygenic photosynthesis.[74]  Water is not an easy molecule to strip an electron from; a single cyanobacterium seems to have “stumbled” into it, and it probably happened only once.[75]  Once an electron was stripped away from water in Photosystem II, stripping away a proton (a hydrogen nucleus) essentially removed one hydrogen atom from the water molecule.  That proton was then used to drive a “turbine” that manufactures ATP, and wonderful animations on the Internet show how those protons drive that enzyme turbine (ATP synthase).  Oxygen is a waste product of that innovative ATP factory.
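
The water-splitting step itself can be written compactly (standard biochemistry):

    2 H2O → 4 H+ + 4 e- + O2

The four electrons replace those that captured photons knocked out of the chlorophyll reaction centers, the four protons help drive the ATP synthase “turbine,” and the oxygen molecule is discarded as waste.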

Below is a diagram of the photosynthetic process in grass.  (Source: Wikimedia Commons)


About the time that the continents began to grow and plate tectonics began, Earth produced its first known glaciers, between 3.0 and 2.9 bya, although the full extent is unknown.  It might have been an ice age or merely some mountain glaciation.[76]  The dynamics of ice ages are complex and controversial, and numerous competing hypotheses try to explain what produced them.  Because the evidence is relatively thin, there is also controversy about the extent of Earth's ice ages.  About 2.5 bya, the Sun was probably a little smaller and only about 80% as bright as it is today, and Earth would have been a block of ice if not for the atmosphere’s carbon dioxide and methane that absorbed electromagnetic radiation, particularly in the infrared portion of the spectrum.  But life may well have been involved, particularly oxygenic photosynthesis, and it was almost certainly involved in Earth's first great ice age, which may have been a Snowball Earth episode, and some pertinent dynamics follow. 

As oxygenic photosynthesis spread through the oceans, everything that could be oxidized by oxygen was, during what is called the Great Oxygenation Event (“GOE”), although there may have been multiple dramatic events.  The event began as long ago as three bya and is responsible for most of Earth’s minerals.  The ancient carbon cycle included volcanoes spewing a number of gases into the atmosphere, including hydrogen sulfide, sulfur dioxide, and hydrogen, but carbon dioxide was particularly important.  When the continents began forming, atmospheric water captured carbon dioxide and fell onto the land masses as carbonic acid, the carbon became bound into calcium carbonate, plate tectonics subducted the calcium carbonate in the ocean sediments into the crust, and the carbon was again released as carbon dioxide by volcanoes.[77]

When cyanobacteria began using water in photosynthesis, carbon was captured and oxygen released, which began the oxygenation of Earth's atmosphere.  But the process may have not always been a story of continually increasing atmospheric oxygen.  There may have been wild swings.  Although the process is indirect, oxygen levels are influenced by the balance of carbon and other elements being buried in ocean sediments.  If carbon is buried in sediments faster than it is introduced to the atmosphere, oxygen levels will increase.  Pyrite is comprised of iron and sulfur, but in the presence of oxygen, pyrite's iron combines with oxygen (and becomes iron oxide, also known as rust) and the sulfur forms sulfuric acid.  Pyrite burial may have acted as the dominant oxygen source before carbon burial did.[78]  There is sulfur isotope evidence that Earth had almost no atmospheric oxygen before 2.5 bya.[79]

About 2.7 bya, dissolved iron in anoxic oceans seems to have begun reacting with oxygen at the surface, generated by cyanobacteria.  The dissolved iron was oxidized from a soluble form to an insoluble one, which then precipitated out of the oceans in those vivid red (the color of rust) layers that we see today, called banded iron formations (“BIFs”), and which became an oxygen sink that kept atmospheric oxygen low.[80]  The GOE is widely accepted to have created almost all of the BIFs; it is not the only BIF-formation hypothesis and there is a great deal of controversy, but life processes are generally considered to be primarily responsible for forming the BIFs.[81]  Most iron in the crust is bound in silicates and carbonates, and it takes a great deal of energy to extract the iron from those minerals; the oxides that comprise BIFs are much less energy-intensive to refine, as the iron is so concentrated.  Far less ore needs to be melted to get an equivalent amount of iron.  BIFs are the source of virtually all iron ore that humans have mined.  Life processes almost certainly performed the initial work of refining iron, and humans easily finished the job billions of years later.  Copper was not refined by life processes, and copper ore takes twice as much energy to refine as iron ore does. 

When BIF deposition ended about 2.4 bya (maybe because all of the available iron had been removed), oxygen levels skyrocketed; they may have approached modern levels or reached only a few percent of Earth’s atmosphere, but either way they were substantially higher than they had ever been.[82]  Not coincidentally, Earth experienced its first definite ice age, beginning 2.4 bya.

Earth’s Venus-level carbon dioxide likely began declining during the Hadean Eon, and the GOE also removed methane from the atmosphere (a methane molecule is more than 20 times as effective as a carbon dioxide molecule at absorbing radiation in Earth’s atmosphere); that methane may have been created by methanogens (methane-producing archaea).  Earth’s first ice age lasted for 300 million years.[83]  There is no scientific consensus regarding the exact dynamics that caused that first ice age (although I consider the above dynamics persuasive and likely relevant), but there is general agreement that it was ultimately due to reduced greenhouse gases.  That first ice age might have been a “Snowball Earth” event, in which Earth’s surface was almost completely covered in ice.

The high oxygen levels may have turned pyrite on the continents into acid, which increased erosion and flooded essential nutrients, particularly phosphorus, into the oceans, which would have facilitated a huge bloom in the oceans.[84]  But this also happened in the midst of Earth’s first ice age, so increased glacial erosion may have been primarily responsible, as we will see with a Snowball Earth that happened more than a billion years later.  The two largest carbon-isotope excursions (carbon-13/carbon-12) in Earth’s history are related to ice ages.  The first was a positive excursion (more carbon-13 than expected), and the second was negative.  Scientists are still trying to determine what caused them.  The first, called the Lomagundi excursion, began a little less than 2.3 bya, lasted for more than 200 million years, and reflected great carbon burial.[85]  When the Lomagundi excursion finished, oxygen levels seem to have crashed back down to almost nothing, and they may have stayed that way for 200 million years before rebounding to, at most, a few percent of Earth’s atmosphere, where they stayed for more than a billion years.[86] 

Atmospheric oxygen prevented Earth from losing its water as Venus and Mars did, which saved all life on Earth.  An atmosphere of as little as two percent oxygen may have been adequate to form the ozone layer, and that level was likely first attained during the first GOE.[87]  The ozone layer absorbs most of the Sun’s ultraviolet light that reaches Earth.  Ultraviolet light carries more energy than visible light; it breaks covalent and other bonds and wreaks biological havoc, particularly to DNA and RNA.  Before the ozone layer formed, life would have had a challenging time surviving near the ocean’s surface.  Ultraviolet light damage presented a formidable evolutionary hurdle, and the proteins and enzymes that assist cellular division resemble those that arose to repair damaged DNA.  Life has adapted to many hostile conditions in Earth’s past, but if conditions change too rapidly, life cannot adapt in time to survive.  Many mass extinctions that dot Earth’s past were probably the result of conditions changing too rapidly for most organisms to adapt, if they could have adapted at all.  During the Permian-Triassic extinction event, which was the greatest extinction event yet known, there is evidence that the ozone layer was depleted and ultraviolet light damaged the photosynthesizing organisms that formed the base of the food chains.  From the formation of stromatolites to mass extinction events, ultraviolet light has played a role.[88]

Around the end of that first ice age, another unique event transpired with enormous portent for life’s journey on Earth: one microbe enveloped another, and both lived.  Today's prevailing hypothesis is that an archaean enveloped a bacterium, either by predation or colonization, and they entered into a symbiotic relationship.  Today’s leading hypothesis, called the hydrogen hypothesis, is that the archaean consumed hydrogen and the bacterium produced hydrogen, which formed the basis for their symbiosis.[89]  That unique event transpired around two bya and led to complex life on Earth.[90]  That enveloped bacterium was the parent of all mitochondria on Earth today, which are the primary energy-generation centers in all animals.  About 10% of the human body’s weight is mitochondria.[91]  If not for the red of hemoglobin and the melanin in skin, humans would look purple, which is the mitochondria’s color.  That purple color is probably because the original enveloped bacterium that led to the first mitochondrion was purple.[92] 

The mitochondrion’s creation had impact far beyond “only” creating “power plants” in cells; it allowed cells to grow to immense size.  That first mitochondrion became, according to the most restricted definition, the first organelle.  Cells with organelles are called eukaryotes, and today they are generally thought to have descended from that instance when a hydrogen-eating archaean enveloped a hydrogen-producing bacterium.  That animation of ATP Synthase in action depicts a typical event in life forms – the generation of energy as protons cross a membrane – which in that instance makes the turbine that manufactures ATP rotate.  For prokaryotes, the cellular membrane is their only one and the site of the process that fuels their lives.  Cells are three-dimensional entities, and if spherical, a cell’s volume increases with the cube of its diameter, while its membrane only grows with the square of its diameter.  If the diameter of a spherical bacterium is doubled, its surface area increases four times, but its volume increases eight times, and the disparity between surface area and volume increases as the diameter does.[93]  For a prokaryote, it means that the membrane-to-cytoplasm ratio quickly shrinks as the cell grows, so that less ATP serves more cytoplasm.  That means that with increasing size comes slower metabolism, and the cell becomes sluggish.  Imagine a grown man trying to live on the calories that he ingested when he was an infant.  He would quickly starve to death or have to hibernate each day.
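
The square-versus-cube disparity is simple to verify; below is a minimal sketch in Python:

    import math

    def sphere_stats(diameter):
        radius = diameter / 2
        surface = 4 * math.pi * radius**2        # membrane grows with the square
        volume = (4 / 3) * math.pi * radius**3   # cytoplasm grows with the cube
        return surface, volume

    s1, v1 = sphere_stats(1.0)   # a spherical cell of unit diameter
    s2, v2 = sphere_stats(2.0)   # the same cell with its diameter doubled
    print(f"Surface grew {s2 / s1:.0f}x, volume grew {v2 / v1:.0f}x")        # 4x, 8x
    print(f"Membrane per unit of cytoplasm: {(s2 / v2) / (s1 / v1):.2f}x")   # 0.50x

Each doubling of a prokaryote’s diameter halves the membrane available to serve each unit of cytoplasm, and that is the squeeze that mitochondria later relieved.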

Prokaryotic cells are limited in size because their energy production only takes place at their cellular membranes.  In ecosystems, the race usually goes to the quick, and that is especially true of bacteria, as the smallest bacteria are faster and “win” the race of survival.[94]  Mitochondria increase the membrane surface area for ATP reactions to take place, which allowed cells to grow in size.  The average eukaryotic cell has more than 10 thousand times the mass of the average prokaryotic cell, and the largest eukaryotic cells have hundreds of thousands of times the mass (or around a trillion times for ostrich eggs, for instance, which exist as single cells when formed).  Where an organism has the greatest energy needs, such as in muscle and nerve cells, the greatest numbers of mitochondria are found.  In a typical animal cell, dotted with hundreds of mitochondria, a single mitochondrion is the size of the prokaryote that became the mitochondrion, and is representative of prokaryote size in general.  That increased surface area available to generate ATP allowed eukaryotic cells to grow large and complex.  There are quintillions (a million trillion) of those ATP Synthase motors in a human body, spinning at up to hundreds of revolutions per second, generating ATP molecules.[95] 

It can help to think of mitochondria as “distributed” energy generation centers in eukaryotes, versus the “perimeter” energy generation in prokaryotes.  The new mode of energy production presented various challenges, but it allowed life to become large and complex.  Size is important, at the cellular level as well as the organism level.  Below is a diagram of a typical plant cell.  (Source: Wikimedia Commons)

Mitochondria provided more than increased surface area for energy-generating reactions: unlike other organelles that began as bacteria (such as hydrogenosomes), mitochondria retained some of their DNA.[96]  That DNA was probably retained so that mitochondria could make key proteins vital to their functioning on the spot, instead of waiting for the nucleus to send DNA “instructions.”  Essentially, mitochondria provided flexible power generation, like a field commander empowered to make decisions far from headquarters and quickly respond to conditions on the ground.  Mitochondria move around inside cells and provide energy where it is needed.  That flexibility of decentralized power generation may be the mitochondrion’s chief contribution to making complex life possible, and that in turn led to many changes that are characteristic of complex life, some of which follow. 

Perhaps a few hundred million years after the first mitochondrion appeared, as the oceanic oxygen content, at least at the surface, increased as a result of oxygenic photosynthesis, those complex cells learned to use oxygen instead of hydrogen.  It is difficult to overstate the importance of learning to use oxygen in respiration, which is called aerobic respiration.  Before the appearance of aerobic respiration, life generated energy via anaerobic respiration and fermentation.  Because oxygen is in second place for creating the most energetic reactions, aerobic respiration generates, on average, about 15 times as many ATP molecules per cycle as fermentation and anaerobic respiration do (although some types of anaerobic respiration can get four times the typical ATP yield).[97]  The suite of complex life on Earth today would not have been possible without the energy provided by aerobic respiration.  At minimum, nothing could have flown, and any animal life that might have evolved would have never left the oceans, because the atmosphere would not have been breathable.  With the advent of aerobic respiration, food chains became possible, as it is several times as efficient as anaerobic respiration and fermentation (about 40% as compared to less than 10%).  Today’s food chains of several levels would be constrained to about two in the absence of oxygen.[98]  Some scientists have questioned oxygen’s role in the rise of complex life and in eukaryote evolution, and whether the first animals needed oxygen at all is controversial.[99]
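
The food-chain arithmetic can be sketched simply.  In the Python sketch below, the 40% and 10% efficiencies are the approximate figures given above, and the cutoff of 0.2% of the producers’ energy is an illustrative assumption for the least energy that can still support a trophic level:

    def viable_levels(efficiency, cutoff=0.002):
        """Count the trophic levels above the producers that still receive
        at least `cutoff` of the energy that the producers captured."""
        energy, levels = 1.0, 0
        while energy * efficiency >= cutoff:
            energy *= efficiency
            levels += 1
        return levels

    print("Aerobic (40% efficient):", viable_levels(0.40))     # 6 levels
    print("Anaerobic (10% efficient):", viable_levels(0.10))   # 2 levels

With aerobic efficiency, the sketch supports several levels above the producers; with fermentation-level efficiency, it supports about two.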

Complex life, by definition, has many parts, and those parts move.  Complex life needs energy to run its many moving parts.  Complexity’s dependence on greater levels of energy use not only applies to all organisms and ecosystems, but it has also applied to all human civilizations, as will be explored later in this essay.  When cells became “complex” with organelles, a tiny observer inside such a cell would have witnessed a bewildering display of activity: mitochondria sailing through the cell via cytoskeleton “scaffolding” on their energy-generating missions, the ingestion of molecules for fuel and for creating structures, the miracle of cellular division, the constant building, repair, and dismantling of cellular structures, and the ejection of waste through the cellular membrane.[100]  The movement of molecules and organelles in eukaryotic cells is accomplished by using the same protein that became muscle: actin.[101]  Prokaryotes used an ancestor of actin to move, and their flagella provide their main mode of travel, usually moving toward food and safety or away from danger, including predators.

For various reasons that are far from settled among scientists, eukaryotes did not immediately rise to dominance on Earth but were on a fairly even footing with prokaryotes for more than a billion years.  That situation was at least partially related to continental configurations and oceanic currents.

The Moon seems to have stabilized Earth’s axial tilt in relation to the Sun and made Earth’s seasons vary within a relatively narrow range.  Without the Moon, Earth could have up to 90° changes in its axis of rotation, instead of the 22°-to-24.5° variation of the past several million years.[102]  If that had happened, although life may have survived, Earth’s climate would have been extremely chaotic, with part of the planet going into perpetual day while another went into perpetual night, among other wild variations.  Those portions would have suffered mass-extinction effects, and the rest of the biosphere would have been extremely challenged to survive.  Complex life on Earth would little resemble today’s (if it had appeared and survived at all) if Earth’s axis tilted chaotically and severely.  The primary effect of Earth’s stable tilt is that the planet’s entire surface receives relatively uniform and predictable energy levels.

The primary heat dynamic on Earth’s surface is that the oceans near the equator are heated by sunlight, and entropy spreads the heat toward the poles via oceanic currents.  Today’s continental configuration, with three major oceans besides the polar ones, has seen a global current develop that takes water about 1,600 years to complete a circuit.  Where the Atlantic Ocean meets the polar oceans, the warm surface currents cool and sink to the ocean’s bottom, which is how the oceans are oxygenated.  Without that oxygenation, there would be little life on the ocean floor or much below the surface; almost the entire global ocean would be lifeless.  Before the GOE, that was certainly the case, but relatively recent hypotheses make the case that the oceans were anoxic for more than a billion years after the GOE began, largely because of the continental configurations and geophysical and geochemical processes. 

Many people are familiar with the term Pangaea, which was all of today’s continents merged into a supercontinent.  Pangaea formed about 300 mya, but it was not the only supercontinent; it was just the only one existing during the eon of complex life.  One called Rodinia may have existed one bya and did not break up until 750 mya (and reformed into another supercontinent, Pannotia, 600 mya, which did not break up until 550 mya), and there is a hypothesized earlier one called Columbia that existed two bya.  There is also a hypothesis that all continental mass was contained in one supercontinent that lasted from 2.7 bya to 600 mya.  The continental land masses of two bya may have been only about 60% the size of today’s.[103]  Supercontinents are generally associated with ice ages. 

When the total continental land mass was small or combined into a supercontinent, there was no land to divert that poleward flow of warm water into circulating currents.  During those times, the global ocean became one big, calm lake, with no currents of significance.  Such oceans are called Canfield Oceans today, and they would have been anoxic; the oxygenated surface waters would not have been drawn by currents to the ocean floor, and the oceans were certainly anoxic before the GOE.  The interplay of those many interacting dynamics can be incredibly complex and has led to the multitude of hypotheses posited to explain those ancient events, but a leading hypothesis today is that a combination of factors, including supercontinents, variations in volcanic output, Canfield Oceans, and ice ages, prevented eukaryotic life from gaining ecosystem dominance until the waning of the second Snowball Earth event, which was the greatest series of glaciations that Earth has yet experienced.  It is known today as the Cryogenian Period, which ended about 635 mya.  The study of the Cryogenian Period, which is the subject of this essay’s next chapter, resulted in the term “Snowball Earth.”

All animals, except for some tiny ones in anoxic environments, use aerobic respiration today, and early animals (multicellular heterotrophs, which are called metazoans today) may have also used aerobic respiration.  Before the rise of eukaryotes, the dominant life forms, bacteria and archaea, had many chemical pathways to generate energy as they farmed that potential electron energy from a myriad of substances, such as hydrogen sulfide, sulfur, iron, hydrogen, ammonia, and manganese, and photosynthesizers got their donor electrons from hydrogen sulfide, hydrogen, arsenate, nitrite, and other chemicals.  If there is potential energy in electron bonds, bacteria and archaea will often find ways to harvest it.  Many archaean and bacterial species thrive in harsh environments that would quickly kill any complex life, and those hardy organisms are called extremophiles.  In harsh environments, those organisms can go dormant for millennia and perhaps longer, waiting for appropriate conditions (usually related to available energy).  In some environments, it can take a hundred years for a cell to divide.

But once the GOE reached the level where eukaryotes could reliably power their respiration aerobically, virtually all complex life went “all in” with aerobic respiration, and all plants engage in oxygenic photosynthesis.  The conventional view has long been that the GOE was a microbe holocaust, as most anaerobic microbes died from oxygen damage.  However, there is little evidence for a holocaust.  Today, it looks more like the anaerobes were driven to the margins where oxygen is scarce (underground, and in some anoxic waters such as today’s Black Sea) while aerobes quickly came to dominate the planet.[104]  Once the oxygenic photosynthesis and aerobic respiration regime was achieved around two bya, the cycle of photosynthesizers creating oxygen and aerobes eating it began.  Atmospheric carbon dioxide and oxygen levels have seesawed ever since the beginning of the eon of complex life, and probably earlier.  For instance, the coal beds that humanity is mining and burning with such abandon today were created because trees produced lignin that allowed them to grow tall, and it took about 100 million years for a fungus to learn how to break lignin down; like the other big events, that trick was probably only learned once.  Consequently, carbon got buried with those trees in immense amounts and eventually formed most of Earth’s coal beds.  That time is known as the Carboniferous Period, and all of that carbon sequestered in Earth led to skyrocketing oxygen levels, the highest that Earth has yet seen.  Over the billions of years since aerobic respiration began, aerobes have consumed 99.99% of all the oxygen created by oxygenic photosynthesis.  The remaining 0.01% corresponds to organic carbon that was buried in Earth’s crust before aerobes could consume it, and that burial is responsible for the generally declining atmospheric carbon dioxide levels.  It has been estimated that there is 26,000 times more organic carbon buried in Earth’s crust than exists in today’s biosphere.[105]

The times between 1.8 bya and 800 mya are called “the boring billion years” in scientific circles, because no dramatic evolutionary events left a fossil record, and that quiescence was likely because the oceans were largely anoxic and rich in hydrogen sulfide, which prevented eukaryotes from attaining dominance.[106]  It is also speculated that a shortage of molybdenum, which bacteria use to fix nitrogen, may have contributed.

During that “boring” time before complex life appeared, key biological events happened that were critical for the later appearance of complex life, and some of them follow.  About 1.5 bya, eukaryotic organisms are clearly seen in the fossil strata, but they are simple spheroids and tubes.[107]

About 1 bya, stromatolites began to decline and microbial photosynthesizers began to evolve spines, probably due to predation pressure from protists, which are eukaryotes.  Eating stromatolites may reflect the first instance of grazing, although grazing is really just a form of predation.  The difference between grazing and predation is the prey.  If the prey is an autotroph (it fixes its own carbon, by using energy from either sunlight capture or harvesting the energy potential of inorganic chemicals), eating it is called grazing, and if the prey gets its carbon from eating autotrophs (such creatures are called heterotrophs), then eating it is called predation.  There are other categories of life-form consumption, such as parasitism and detritivory (eating dead organisms), and there are many instances of symbiosis.  For complex life, the symbiosis between the mitochondrion and its cellular host was the most important one ever.

Just as mitochondria were “invented,” somewhere between 1.6 bya and 600 mya a eukaryote ate a cyanobacterium and both survived, and that cyanobacterium became the ancestor of all chloroplasts, which are the photosynthetic organelles in all plants.[108]  As with similar previous events, it appears that it happened only once, and all plants are descended from that unique event.[109]  The invention of the chloroplast quickly led to the first multicellular eukaryotes, algae, which were the first plants.  The first algae fossils are from about 1.2 bya.[110]  Most algae species, however, are not called plants, as they are not directly descended from that instance when a eukaryote ate a cyanobacterium.  The non-plant algae, such as kelp, also have chloroplasts, acquired in various “envelopment” events when algae chloroplasts were eaten and both the grazers and the chloroplasts survived.  Below is the general outline of the tree of life today, in which bacteria and archaea combined to make eukaryotic cells, a cyanobacterium was enveloped by a protist to make plants, and all complex life developed from protists.  (Source: Wikimedia Commons)

Since mitochondria are the energy generation centers in eukaryotic cells (some eukaryotes lost their mitochondria, usually because the mitochondria evolved into other organelles such as mitosomes and hydrogenosomes), they present issues similar to those of industrialized humanity’s energy generation today.  Power plants have pollution issues, and they can explode and create environmental catastrophes such as those at Chernobyl and Fukushima.

A free radical is an atom, molecule, or ion with an unpaired valence electron or an unfilled shell, and it thus seeks to capture an electron.  The electron transport chain used to create ATP in a mitochondrion leaks electrons, which creates free radicals, and they will take an electron from wherever they can get it.  Aerobic respiration creates some of the most dangerous free radicals, particularly the hydroxyl radical.  The more hydroxyl radicals created, the more damage inflicted on neighboring molecules.  Another free radical created by that electron leakage is superoxide, which can be neutralized by antioxidants, but there is no avoiding the damage produced by the hydroxyl radical.[111]  Those kinds of free radicals are called reactive oxygen species (“ROS”).  ROS are not universally deleterious to life processes, but if their production spins out of control, the oxidative stress inflicted by the ROS can cripple biological structures.  ROS damage can trigger programmed cell death, called apoptosis, which is a maintenance process for complex life.  Antioxidants are one way that organisms defend against oxidative stress, and vitamin C is a standard antioxidant.  Antioxidants usually serve multiple purposes in cellular chemistry, and antioxidant supplements generally do not work as advertised.  Not only do they fail to target the reactions that would be beneficial to prevent, but they can also interfere with reactions that are necessary for life processes.  Antioxidant supplements are blunt instruments that can cause more harm than good.[112]

There is plenty of uncertainty and controversy regarding just how connected the issues may be, but it appears that keeping some DNA at the mitochondria, in order to have more efficient and flexible energy generation, helped lead to the genetic phenomenon known as sexual reproduction.  Bacteria have swapped DNA in reproduction since life’s early days, but the process of meiosis, in which two parent life forms split and recombine their DNA to produce an offspring, is unique to eukaryotes, and that form of reproduction appeared between 1.2 and 1.0 bya.  As with other seminal events, it seems that sexual reproduction using meiosis happened once, and all eukaryotes that reproduce sexually are descended from that one instance.  Protists were the first organisms to reproduce sexually.

Again, the dates for these events are rather rough, but if the creation of the chloroplast happened once and the creation of sexual reproduction happened once, then sexual reproduction would have needed to come before the chloroplast, as many plants reproduce sexually.  If it turns out that the chloroplast really is 1.6 billion years old, then either the current date for sexual reproduction would need to be pushed back or the “sex was invented once” idea would have to be discarded, and biologists would probably decide to push back the date of sex’s appearance, even without fossil evidence of it.

Many principles of evolutionary theory have not changed much since Darwin, and one of them is that when one species gains the “upper hand” in the struggle of life on Earth, as there is only so much sunlight and nutrients to go around, the losers become marginalized or go extinct.  Ultimately, the species with the highest carrying capacity, or ability to extract energy from its environment, wins.[113]  There are many ways, however, to attain that winning carrying capacity.  Another Darwinian concept is that species adapt to their environments (which include other species) to benefit that species, not any other (and Darwin used the concept at the organism level, not the species level).  Darwin’s idea that all life on Earth descended from a common ancestor is a central feature of evolutionary theory.  But Darwin’s idea of gradual changes leading to speciation is confounded by the appearance of mitochondria, which led to complex life.  There was nothing gradual about an archaean swallowing a bacterium and both surviving, and the bacterium eventually became the power plant for all animals.  It was a radical change and a chasm between simple and complex life.[114]

Another evolutionary concept is that all changes had mechanical reasons for happening (again, today’s science has nothing to say about any intent), and each mechanical change had to improve an organism’s chances of surviving to reproduce, or at least not unduly impair them.  As evolution progressed, each species went down a road of development, and the farther down that road a species went, the more the “lifestyle” opportunities created by its biological operation precluded other ways of living.  For instance, trees will never become Ents.  Trees went down the path of roots, lignin, growing taller than their neighbors, and the like.  A plant cannot choose locomotion as a way of life.  It does not generate enough energy for it, for one thing.  Animals went down a very different evolutionary path than plants did, and muscles, brains, livers, and the like have no analogy in plants; by themselves, plants will not grow muscles or brains anytime soon, although humans have been making radical changes in animals over brief periods of time, such as the many breeds of dog.[115]

The nutrient cycling that life contributes to, and the generation of the oxygen that maintains the ozone layer, were initially performed by prokaryotes, and prokaryotes will continue to perform them long after complex life goes extinct.  Complex life is largely unnecessary for making Earth inhabitable; microbes do not need it.[116]  Earth’s biomass today is about half prokaryote and half eukaryote.

During that “boring billion years,” sexual reproduction was invented, plants became possible, and the rise of grazing and predation had eonic significance.  While many critical events in life’s history were unique, one that is not is multicellularity, which independently evolved dozens of times; some prokaryotes have multicellular structures, and some even have specialized cells forming colonies.[117]  There are various hypotheses to explain why life went multicellular, but the primary advantage was size, which would become important in the coming eon of complex life.  The rise of complex life might have happened faster than the billion years or so that it took after the basic foundation (the complex cell and oxygenic photosynthesis) was set, but geophysical and geochemical processes had their impacts.  Perhaps most importantly, the oceans probably did not get oxygenated until just before complex life appeared, as they were sulfidic Canfield Oceans from 1.8 bya to 700 mya.  Atmospheric oxygen is currently thought to have remained at only a few percent at most until about 850 mya, although there are recent arguments that it remained low until only about 420 mya, when large animals began to appear and animals began to colonize land.[118]  Just as the atmospheric oxygen content began to rise, there came the biggest ice age in Earth’s history, which probably played a major role in the rise of complex life.

 

The Cryogenian Ice Age and the Rise of Complex Life

Reconstruction of supercontinent Rodinia at 1.1 bya (Source: Wikimedia Commons)

Chapter summary:

This chapter will provide a somewhat detailed review of the Cryogenian Ice Age and its aftermath, including some of the hypotheses regarding it, evidence for it, and its outcomes, as the eon of complex life arose after it.  The Cryogenian Period ran from about 850 mya to 635 mya.  This review will sketch the complex interactions of life and geophysical processes, and the increasingly multidisciplinary methods being used to investigate such events, which are yielding new and important insights.

The idea of an ice age is only a few hundred years old; Louis Agassiz, who got his first ideas from Karl Schimper and others, first publicly proposed it as a scientific hypothesis in 1837.[119]  There had also been proposals for ice ages in the preceding decades.  By the 1860s, most geologists accepted the idea that there had been a cold period in Earth’s recent past, attended by advancing and retreating ice sheets, but nobody really knew why.[120]  Hypotheses began to proliferate, and in the 1870s, James Croll proposed the idea that variations in Earth’s orientation to the Sun caused the continental ice sheets.  Because of problems in matching his hypothesis with dates adduced for ice age events, it fell out of favor and was considered dead by 1900.[121]  Croll’s work regained its relevance with the publication of a paper by Milutin Milanković (usually spelled Milankovitch in the West) in 1913, and by 1924, Milankovitch was widely known for explaining the timing of advancing and retreating ice sheets during the current ice age.[122]

The book that made Milankovitch famous (Croll’s work is still obscure, even though Milankovitch gave full credit to Croll in his work) was co-authored by Alfred Wegener, who a decade earlier had first published his hypothesis that the continents had moved over the eons.  As is often the case with radical new hypotheses, aspects of it previously existed in various stages of development, but Wegener was the first to propose a comprehensive hypothesis to explain an array of detailed evidence.  Wegener was a meteorologist working outside of his specialty when he proposed his “continental drift” hypothesis.  His hypothesis was harshly received and dismissed by the day’s orthodoxy, and Wegener died in 1930 while setting up a research station on Greenland’s ice sheet.  His continental drift hypothesis quickly sank into obscurity.  It was not until my lifetime, when paleomagnetic studies confirmed his views, that Wegener’s work returned from exile and plate tectonics became a cornerstone of geological theory.  Ice age data and theory do not pose an immediate threat to the global rackets or “national security,” so the history of developing the data and theories has been publicly available.

Wegener concluded, based on his gathered evidence, that there was a global ice age in the Carboniferous and Permian periods.  He was right.[123]  Nearly 50 years later, in 1964, the same year that the first symposium of the plate tectonic era was held, Brian Harland proposed, based on paleomagnetic evidence, that a global ice age immediately preceded the Cambrian Period, when even the tropics were buried under ice.  That was the first time that a truly global glaciation was proposed, and Harland’s idea developed into what is today called the Snowball Earth hypothesis.

Ice ages are an important realm of scientific investigation.  Humanity’s colossal burning of Earth’s hydrocarbon deposits may well be delaying the ice sheets’ return; they have been advancing and retreating in rhythmic fashion for the past million years.[124]  The accepted tipping point for the current pattern is Earth’s orientation toward the Sun, particularly the eccentricity of Earth’s orbit, which has a roughly 100,000-year cycle.  Although Earth’s orientation is universally considered to be the tipping-point variable, it is not the only influence.  The ultimate cause has been steadily declining atmospheric carbon dioxide levels.  Antarctica began developing its ice sheets about 35 mya due to its position near the South Pole and declining carbon dioxide levels.  The current ice age began 2.5 mya and was likely initiated by the formation of Panama’s isthmus three mya, which separated the Atlantic and Pacific oceans and radically altered oceanic currents.  Also, the Arctic Ocean is virtually landlocked.  Those factors all contributed to the current ice age.

When investigating how ice ages begin and end, positive and negative feedbacks are considered.  A positive feedback will accentuate a dynamic and a negative feedback will mute it.  In the 1970s, James Lovelock and the author of today’s endosymbiotic theory, Lynn Margulis, developed the Gaia hypothesis, which posits that Earth has provided feedbacks that maintain environmental homeostasis.  Under that hypothesis, environmental variables such as atmospheric oxygen and carbon dioxide levels, ocean salinity levels, and Earth’s surface temperature have been kept relatively constant by a combination of geophysical, geochemical, and life processes, which have maintained Earth’s inhabitability.  The homeostatic dynamics were mainly negative feedbacks.  If positive feedbacks dominate, then “runaway” conditions happen.  In astrophysics, runaway conditions are responsible for a wide range of phenomena.  A runaway greenhouse effect may be responsible for the high temperature of Venus’s surface.  Climate scientists today are concerned that burning the hydrocarbons that fuel the industrial age may result in runaway climatic effects.  Mass extinctions are the result of Earth's becoming largely uninhabitable by the organisms existing during the extinction event.  The ecosystems then collapse as portions of the food chains go extinct.  Mass extinction specialist Peter Ward recently proposed his Medea hypothesis as a direct challenge to the Gaia hypothesis. 

Gaian and Medean dynamics have both played roles in the development of Earth and its biosphere, and positive and negative feedbacks have had impacts.  Life saved Earth’s oceans with its negative feedback on hydrogen's loss to space, without which life as we know it on Earth probably would not exist.  But there is also evidence that life contributed to mass extinction events.

Investigating the Cryogenian Ice Age led to finding evidence of runaway effects causing dramatic environmental changes, and the Cryogenian Ice Age’s dynamics will be investigated and debated for many years.  The position of Antarctica at the South Pole and the landlocked Arctic Ocean have been key variables in initiating the current ice age, and another continental configuration that could contribute to initiating an ice age is when a supercontinent is near the equator, which was the case during the Cryogenian Ice Age and the one in the Carboniferous and Permian periods.  A hypothesis is that Canfield Oceans can accompany supercontinents, so warm water is not pushed to the poles as vigorously.[125]  A supercontinent near the equator would not normally have ice sheets, which means that silicate weathering would be enhanced and remove more carbon dioxide than usual.  Those conditions could initiate an ice age, beginning at the poles.  It would start out as sea ice, floating atop the oceans. 

Around when Harland first proposed a global ice age, a climate model developed by Russian climatologist Mikhail Budyko concluded that if a Snowball Earth really happened, the runaway positive feedbacks would ensure that the planet would never thaw, becoming a permanent block of ice.[126]  For the next generation, that climate model made a Snowball Earth scenario seem impossible.  In 1992, a Caltech professor, Joseph Kirschvink, published a short paper that coined the term Snowball Earth.  Kirschvink sketched a scenario in which a supercontinent near the equator reflected sunlight, as compared to tropical oceans that absorb it.  Once the cooling from that reflected sunlight began to grow polar ice, the ice would reflect even more sunlight and Earth’s surface would become even cooler.  This could produce a runaway effect in which the ice sheets grew into the tropics and buried the supercontinent in ice.  Kirschvink also proposed that the situation could become unstable.  As the sea ice crept toward the equator, it would kill off all photosynthetic life, and a buried supercontinent would no longer engage in silicate weathering.  Those were the two key ways that carbon was removed from the atmosphere in that day’s carbon cycle, especially before the rise of land plants.  Volcanism would have been the main way that carbon dioxide was introduced to the atmosphere (animal respiration also releases carbon dioxide, but this was before the eon of animals), and with the two key dynamics for removing it suppressed by the ice, carbon dioxide would have accumulated in the atmosphere.  The resultant greenhouse effect would have eventually melted the ice, and runaway effects would have quickly turned Earth from an icehouse into a greenhouse.  Kirschvink proposed the idea that Earth could vacillate between icehouse and greenhouse states.
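To see how such runaway feedbacks can fall out of a few lines of arithmetic, below is a minimal sketch of a zero-dimensional energy balance model with an ice-albedo feedback.  This is not Budyko’s actual model; the albedo thresholds, the effective emissivity standing in for the greenhouse effect, and the relaxation scheme are invented simplifications.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # modern solar constant, W m^-2
EPS = 0.61        # crude effective emissivity standing in for a greenhouse effect

def albedo(temp_k):
    """Toy ice-albedo feedback: an icy planet reflects far more sunlight."""
    if temp_k < 260.0:
        return 0.62                   # ice-covered
    if temp_k > 280.0:
        return 0.30                   # largely ice-free
    return 0.62 + (0.30 - 0.62) * (temp_k - 260.0) / 20.0   # partial ice cover

def equilibrium(temp_k, steps=20000):
    """Relax the planet's temperature to a steady state."""
    for _ in range(steps):
        absorbed = S0 * (1.0 - albedo(temp_k)) / 4.0   # sphere-averaged sunlight
        emitted = EPS * SIGMA * temp_k ** 4            # outgoing longwave radiation
        temp_k += 0.01 * (absorbed - emitted)          # small pseudo-timestep
    return temp_k

print(round(equilibrium(295.0)))   # ~288 K: a warm start stays warm
print(round(equilibrium(230.0)))   # ~247 K: a frozen start stays frozen
```

The same sunlight supports two stable states, warm and frozen, and once the planet falls into the frozen one, restoring the original conditions does not thaw it; that hysteresis is why Budyko’s model made a Snowball Earth look permanent, until the slow volcanic buildup of carbon dioxide was considered.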

Kirschvink noted that BIFs reappeared in the geological record during the possible Snowball Earth times, after vanishing about a billion years earlier.  Kirschvink noted that iron cannot build up in solution to levels that would create BIFs if the global ocean is oxygenated.  Kirschvink proposed that the sea ice not only killed the photosynthesizers, but that it also separated the ocean from the atmosphere, so that the global ocean became anoxic.  Iron from volcanoes on the ocean floor would build up in solution during the icehouse phase, and during the greenhouse phase the oceans would become oxygenated and the iron would fall out in BIFs.  Other geological evidence for the vacillating icehouse and greenhouse conditions was the formation of cap carbonates over the glacial till.  It was a global phenomenon; wherever the Snowball Earth till was, cap carbonates were atop it.  In geological circles, carbonate layers deposited during the past 100 million years are considered to be of tropical origin, so scientists think that the cap carbonates reflected a tropical environment.  The fact of cap carbonates atop glacial till is one of the strongest pieces of evidence for the Snowball Earth hypothesis.  Kirschvink finished his paper by noting that the eon of complex life came on the heels of the Snowball Earth.  Scouring the oceans of life would have presented virgin oceans for the rapid spread of life in the greenhouse periods, and this could have initiated the evolutionary novelty that led to complex life.

Kirschvink is a polymath; he was soon pursuing other interests and left his Snowball Earth musings behind.[127]  Canadian geologist Paul Hoffman had been an ardent Arctic researcher, but a dispute with a bureaucrat saw him exiled from the Arctic.[128]  He landed at Harvard and soon picked Precambrian rocks in Namibia to study, as it was largely unexplored geological territory.  The Namibian strata were 600-700 million years old, instead of the two-billion-year-old strata that Hoffman was familiar with.  In the Namibian desert, he soon found evidence of glacial till among strata that would have been tropical when they formed.

Glacial till is composed of “foreign” stones that had been transported there by ice.  When ice ages were first conceived, a key piece of evidence was “erratics,” which were large stones found far from their place of origin.  Erratics found in ocean sediments are called dropstones.  Eventually, after plenty of controversy, scientists decided that erratics had usually been deposited by glaciers.[129]  Oceanic dropstones were deposited by melting icebergs and the land-based erratics by retreating glaciers.

Hoffman’s team tested the carbon-13/12 ratios of the cap carbonates and found them to be lifeless, lacking the isotopic signature of biological activity.  That was key evidence presented in their 1998 paper that supported Kirschvink’s Snowball Earth hypothesis.[130]  As Kirschvink did, Hoffman and his colleagues argued that BIFs were evidence of Snowball Earth conditions, and they concluded their paper as Kirschvink did, by stating that the alternating icehouse and greenhouse periods would have put extreme environmental stress on ecosystems and may well have led to the explosion of complex life in their aftermath.  A few months after the publication of the Hoffman team’s paper came another seminal paper, by Donald Canfield.[131]  Those papers resulted in a flurry of scientific investigations and controversy.  Hoffman engaged in feuds as Snowball Earth’s front man.  The Snowball Earth hypothesis has won out, so far.  There is a “Slushball Earth” hypothesis that states that the Cryogenian Ice Age was not as severe as Hoffman and his colleagues suggest, and there are other disputes over the Snowball Earth hypothesis, but the idea of a global glaciation is probably here to stay, with a great deal of ongoing investigation.  The record during the Cryogenian Ice Age shows immense swings in organic carbon burial, coinciding with the formation of late-Proterozoic BIFs.[132]  The Proterozoic Eon is the last one before complex life appeared on Earth.
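The carbon-isotope tests in that paper, and the excursions discussed below, use the standard “delta” notation, which a short sketch can make concrete.  The reference ratio below is the real VPDB standard; the sample ratios are invented for illustration.

```python
# Delta notation for carbon isotopes; sample ratios here are invented.
VPDB_R = 0.011237   # 13C/12C ratio of the Vienna Pee Dee Belemnite standard

def delta_13c(sample_ratio):
    """Return the sample's delta-13C in per mil (parts per thousand)."""
    return (sample_ratio / VPDB_R - 1.0) * 1000.0

# Photosynthesis prefers carbon-12, so organic carbon reads strongly negative,
# while marine carbonates usually hover near zero; the Shuram excursion drove
# carbonate values down to roughly -12 per mil.
print(round(delta_13c(0.011237), 1))   # 0.0: the standard itself
print(round(delta_13c(0.011102), 1))   # about -12.0: a Shuram-like carbonate
```

Sulfur isotopes, discussed shortly, use the same delta convention with a different reference standard.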

Canfield’s original hypothesis, which seems largely valid today, is that the deep oceans were not oxygenated until the Ediacaran Period, which followed the Cryogenian; the process did not begin until about 580 mya and was first complete by about 560 mya.[133]  The wildest carbon-13/12 ratio swing in Earth’s entire geological record begins about 575 mya and ends about 550 mya, and is called the Shuram excursion.[134]  Explaining the Shuram excursion is one of the most controversial areas of geology today, with numerous proposed hypotheses.  When the controversies are finally resolved, if they ever are, I suspect that the Shuram and Lomagundi excursions, even though they go in opposite directions, will both prove to be related to the dynamics of ice ages and the rise of oxygen levels.  Ediacaran fauna, the first large, complex organisms to ever appear on Earth, also first appeared about 575 mya, when the Shuram excursion began.[135]  I strongly doubt that the first appearance of large complex life at the same geological moment as the wildest carbon-isotope swing in Earth’s history will prove to be a coincidence.  The numerous competing hypotheses regarding the Shuram excursion include:

 

  • The oxidation of a vast pool of organic carbon in the oceans, aided by the carbon-removal effect of animal feces and dead animals dropping to the ocean floor;[136]

  • The excursion does not mark a genuine event relating to life processes, but is an artifact of geological processes (called diagenesis); this has a high hurdle to overcome, as the excursion has been measured globally and diagenesis is usually a local phenomenon, and no global mechanism has yet been proposed for it;[137]

  • The excursion is the result of an asteroid impact that changed Earth’s tilt;[138]

  • The vaporization of methane hydrates on the ocean floor;[139]

  • It was related to a global glaciation, like previous Snowball Earth glaciations;[140]

  • The excursion was real, but there were others, and none of them significantly impacted Precambrian evolution.[141]

 

Deep-ocean currents, which take atmospheric gases deep into the oceans as they do today, do not seem to have existed during supercontinental times, and atmospheric oxygen was likely only a few percent at most when the Cryogenian Period began.  Canfield’s ocean-oxygenation evidence partly came from testing sulfur isotopes.  As with carbon, nitrogen, and other elements, life prefers the lighter isotope of sulfur, and sulfur-32 and sulfur-34 are two stable isotopes that can be easily tested in sediments.  Canfield proposed that in pre-Cryogenian oceanic depths, sulfate-reducing bacteria, which are among Earth’s earliest life forms and produce hydrogen sulfide as their waste product, abounded.  Hydrogen sulfide gives rotten eggs their distinctive aroma and is highly toxic to plants and animals, as it disables the enzymes used in mitochondrial respiration.  Hydrogen sulfide would react with dissolved iron to form iron pyrite and settle out on the ocean floor, just as the iron oxide did that formed the BIFs.  Sulfate-reducing bacteria will enrich the sulfur-32/34 ratio by 3%, and they did so before the Cryogenian, but the Ediacaran iron pyrite sediments showed a 5% enrichment.  A persuasive explanation is the recycling of sulfur in the oceanic ecosystem, which can only happen in the presence of oxygen.[142]

Part of the hypothesis for skyrocketing oxygen levels during the late Proterozoic was that high carbon dioxide levels, combined with a continent that had been ground down by glaciers and the resumption of the hydrological cycle (which would have vanished during the Snowball Earth events), would have created conditions of dramatically increased erosion, which would have buried carbon (the cap carbonates are part of that evidence) and thus helped oxygenate the atmosphere.  Evidence for that increased erosion also came in the form of strontium isotope analysis.  Two of strontium’s stable isotopes are strontium-86 and strontium-87.  Earth’s mantle is enriched in strontium-86 while the crust is enriched in strontium-87, so basalts exposed to the ocean at the oceanic volcanic ridges are enriched in strontium-86 while continental rocks are enriched in strontium-87.  If erosion is higher than normal, then ocean sediments will be enriched in strontium-87, which analysis of Ediacaran sediments confirmed.  That evidence, combined with carbon isotope ratios, provides a strong indication of high erosion and high carbon burial, which would have increased atmospheric oxygen levels.[143]  There is other evidence of increasing atmospheric oxygen content during the late Proterozoic, such as an increase in rare earth elements in Ediacaran sediments.  Although there is still plenty of controversy, today’s consensus is that the Cryogenian is when atmospheric oxygen levels began rising toward modern levels, although, as this essay will later discuss, oxygen levels have varied widely since the late Proterozoic (from perhaps only a few percent to 35%).
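The strontium argument can be made concrete with a two-endmember mixing sketch.  The endmember ratios below are approximate textbook values; the mixing fractions are invented, and the linear mix ignores differences in strontium concentration between the two sources.

```python
# Two-endmember mixing sketch for seawater 87Sr/86Sr; fractions are invented.
MANTLE_87_86 = 0.703      # mid-ocean-ridge basalt endmember (approximate)
CONTINENT_87_86 = 0.712   # continental runoff endmember (approximate)

def ocean_ratio(continental_fraction):
    """87Sr/86Sr of seawater as a simple linear mix of the two endmembers."""
    return (continental_fraction * CONTINENT_87_86
            + (1.0 - continental_fraction) * MANTLE_87_86)

print(round(ocean_ratio(0.5), 4))   # 0.7075: balanced inputs
print(round(ocean_ratio(0.8), 4))   # 0.7102: erosion-dominated, Ediacaran-style signal
```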

An increase in atmospheric oxygen usually meant a decline in carbon dioxide, which would have cooled the planet.  Recent data and models suggest that during the Cryogenian Period, global surface temperatures declined from around 40° C to around 20° C, and it has been below 30° C ever since, generally fluctuating between 25° C and 10° C.  Today’s global surface temperature of around 15° C is several degrees warmer than during the glacial periods of the current ice age but is still among the lowest that Earth has ever experienced, and is generally attributed to atmospheric carbon dioxide’s consistent decline during the past 100-150 million years.

Paleontologists were lonely fossil hunters for more than a century, but in my lifetime they found allies in geologists, and with the rise of DNA sequencing and genomics, molecular biologists have provided invaluable assistance.  In 1996, a paper was published that created a huge splash in paleontological circles.[144]  Molecular biologists used the concept of the “molecular clock” of genetic divergence among various species.  Their work concluded that the stage was set for animal emergence hundreds of millions of years before animals appeared in the fossil record, particularly during the Cambrian Explosion.  That paper initiated its own explosion of genetic research, and the current range of estimates has the genetic origins of animals somewhere between 1.2 bya and 700 mya, but this field is in its infancy and more results are surely coming.[145]  From an early optimism that molecular clocks could finely calibrate the timing of events, scientists have come to admit that “molecular clocks” do not reliably keep time.  Today, molecular evidence is used more to tell what happened than when.  The geological and archeological record is considered more accurate for dating, and that evidence is used for calibrating molecular evidence.  Even though “molecular clocks” keep far from perfect time, they are used for some timekeeping when they can be bounded by other timing evidence, in a kind of interpolation between data points.
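At its crudest, a molecular clock is just a division.  The sketch below uses invented numbers; in practice, the substitution rate itself must be calibrated against fossil-dated divergences, which is the bounding and interpolation described above.

```python
# Minimal molecular-clock arithmetic; all numbers are invented for illustration.
def divergence_time_mya(genetic_distance, rate_per_site_per_myr):
    """Estimate when two lineages split, in millions of years ago.

    The factor of 2 is there because both lineages accumulate
    substitutions independently after the split.
    """
    return genetic_distance / (2.0 * rate_per_site_per_myr)

# 0.9 substitutions per site observed between two taxa, at an assumed
# clock rate of 0.0006 substitutions per site per million years:
print(divergence_time_mya(0.9, 0.0006))   # 750.0 mya
```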

In particular, the synergies of molecular biology and paleontology have identified the importance of Hox genes in early animals.  In bilaterally symmetric animals, Hox genes dictate body development and are effectively identical in a fly and a chicken, which diverged from their common ancestor nearly 700 mya.  Hox genes became an anchor in animal development and the basics are still unchanged after more than 600 million years.

In summary, today’s orthodox late-Proterozoic hypothesis is that the complex dynamics of a supercontinent breakup somehow triggered the runaway effects that led to a global glaciation.  The global glaciation was reversed by runaway effects primarily related to an immense increase in atmospheric carbon dioxide.  During the Greenhouse Earth events, oceanic life would have been delivered vast amounts of continental nutrients scoured from the rocks by glaciers, and those nutrients and the hot conditions would have combined to create a global explosion of photosynthetic life.  A billion years of relative equilibrium between prokaryotes and eukaryotes was ultimately shattered, and oxygen levels began rising during the Cryogenian and Ediacaran periods toward modern levels.  Largely sterilized oceans, which began to be oxygenated at depth for the first time, are now thought to have prepared the way for what came next: the rise of complex life.

Fossils are created by undisturbed organism remains that become saturated with various chemicals, which gradually replace the organic material with rock by several different processes of mineralization.[146]  Few life forms ever become fossils; most are instead consumed by other life.  Fossil formation requires rare dynamics, usually anoxic conditions that lead to undisturbed sediments, which protect the evidence and fossilize it.  Scientists estimate that only about 1%-2% of all species that ever existed have left behind fossils that have been recovered.  Geological processes are continually creating new land, both on the continents and under the ocean.  Seafloor strata do not provide much insight into life’s ancient past, particularly fossils, because tectonic processes recycle the oceanic crust in “mere” hundreds of millions of years.  The basic process is that, in the Atlantic and Pacific sea floors in particular, oceanic volcanic ridges spew out basalt and the plates flow toward the surrounding continents.  When oceanic plates reach continental plates, the heavier mafic (basaltic) oceanic plates are subducted below the lighter felsic (granitic) continental plates.  Parts of an oceanic plate were entirely subducted into the mantle more than 100 mya and left behind plate fragments.  On the continents, however, which have floated on the heavier rocks, tectonic and erosional processes have not obliterated all ancient rocks and fossils.

The oldest “indigenous” rocks yet found on Earth are more than four billion years old.  Stromatolites have been dated to 3.5 bya, and fossils of individual cyanobacteria have been dated to 1.5 bya.[147]  There are recent claims of finding fossils of individual organisms dated to 3.4 bya.  The oldest eukaryote fossils found so far are of algae dated to 1.2 bya.  The first amoeba-like vase-shaped fossils date from about 750 mya, and there are recent claims of finding the first animal fossils in Namibia, of sponge-like creatures which are up to 760 million years old.[148]  Fossils from 665 mya in Australia might be the first animal fossils, and some scientists think that animals may have first appeared about one bya.  The first animals, or metazoans, probably descended from choanoflagellates.  The flagellum is a tail-like appendage that protists primarily use to move, and it can also be used to create a current to capture food.  Flagella were used to draw food into the first animals, which would have been sponge-like.  When the first colonies developed in which unicellular organisms began to specialize and act in concert, animals were born, and it is currently thought that the evolution of animals probably only happened once.[149]  In interpreting the fossil record, there are four general levels of confidence: inevitable conclusions (such as that ichthyosaurs were marine reptiles), likely interpretations (ichthyosaurs appear to have given live birth instead of laying eggs), speculations (were ichthyosaurs warm-blooded?), and guesses (what color was an ichthyosaur?).[150]

During the eon of complex life, the geologic time scale is divided by the distinctive fossils found in the sedimentary layers attributed to each time.  Before the eon of complex life (that ancient time before complex life first appeared, which represents about 90% of Earth’s existence so far, is called the Precambrian supereon today), fossils were microscopic and rare.  Over time, geophysical forces eradicate sedimentary layers, and the fossils of the earliest animals are found in only a few places on Earth.  The first animal fossils of significance formed about 600 mya and are strange creatures to modern eyes.  They were first noticed in 1868 in Newfoundland, but the fledgling paleontological profession dismissed them, not recognizing them as fossils.[151]  In Namibia in 1933, those Precambrian fossils were again noted but given a Cambrian chronology, because the day’s prevailing hypothesis placed the beginning of animal life during the Cambrian Explosion.  In 1946, in the Ediacara Hills in Australia, more such strange fossils were found in what were thought to be Precambrian rocks, but it was not until 1957, when those fossils were found in England, in rocks positively identified as Precambrian, that the first period of animal life, the Ediacaran, was on its way to recognition (it was not officially named the Ediacaran until 2004, the first new period recognized since the 19th century).  In China, the Doushantuo Formation has provided fossils from about 635 mya to 550 mya, which covers most of the Ediacaran Period (c. 635 to 541 mya), and Ediacaran fossils have been found in a few other places.  Microscopic algae spores and animal embryos abound in Doushantuo cherts, and the spores look like little suns and other fanciful shapes.  Almost all of them went extinct within a few million years of appearing in the fossil record, in an “invisible” mass extinction.[152]  That mass extinction directly preceded the appearance of the first large organisms that Earth ever saw: Ediacaran fauna (also called “Ediacaran biota” in certain scientific circles, as there is debate whether those Ediacaran fossils were animal remains[153]).

Early Ediacaran fossil finds were often dismissed as pseudofossils because they did not fit the prevailing idea of an animal or plant, and Dickinsonia left the most famous Ediacaran fossils.  Today, the most likely interpretation seems to be that Dickinsonias flopped themselves down on bacterial mats and fed on them.  When one finished eating a mat, it flopped its way to another.  It was a bilateral-like creature and is today classified into an extinct phylum with other Ediacaran fauna.  It has reasonably been speculated that Dickinsonia got its oxygen through diffusion across its surface, and that oxygen levels had to be at least 10% of today’s to achieve that.[154]  Charnia looked like a plant but almost certainly was not, and it is classified into another extinct phylum.  Phyla are body plans, and Ediacaran fauna are indeed strange looking.  There is debate whether the Ediacaran fauna were plants, animals, or neither, and that debate will not end soon.  Spriggina resembled a trilobite and may have been its ancestor.  Paths in the sediments, called feeding traces, have been found, but there was no deep burrowing in the Ediacaran Period.  In the last few million years of the Ediacaran, the first skeletons appeared, particularly of Cloudinids.[155]  The characteristic Ediacaran fauna suddenly appeared in the fossil record about 575 mya and abruptly disappeared about 542 mya.  Below are images of those Ediacaran forms, which can appear so bizarre to people today.  (Source for all images: Wikimedia Commons)

There has been controversy regarding why Ediacaran fauna quickly disappeared, and even whether their disappearance qualifies as a mass extinction.[156]  One idea is that their disappearance was due to predation by what became Cambrian fauna, and another is that they ate their food sources to extinction, but it appears more likely that it was an extinction brought on by anoxic oceans.  Cambrian fauna filled the vacant niches, and then some, when the ocean became oxygenated again.  Although Ediacaran fauna did not move much, their existence was probably owed to some oxygenation of the oceans, and although their metabolisms would have been slow compared to those of the animals that followed them, they may not have been able to survive in anoxic oceans.  The Ediacaran anoxic events are also when the first Middle East oil deposits were formed.  The Proto-Tethys Ocean appeared in the Ediacaran, followed by the Paleo-Tethys and the Tethys, and those oceanic basins eventually all disappeared as their seafloors were subducted by colliding continents.  Those subducted basins became the primary source of Middle East oil, which is extracted from Earth’s most gigantic hydrocarbon deposits.

As with all “big idea” hypotheses such as those that gird the foregoing narrative of a global glaciation and rise of complex life, there are challenges aplenty coming from various corners, and some are:

 

  • There was not really a Snowball Earth, but several regional plateau glaciations have been misinterpreted as a global glaciation and the reappearing BIFs were only local in nature;[157]

  • There was not really a Snowball Earth, and a naturally wandering axis of rotation has created the illusion of tropical glaciation;[158] another version is that the magnetic poles wandered more than currently believed and made the paleomagnetic evidence invalid, which created an illusion of tropical glaciation;

  • The trigger for the Snowball Earth episodes was the drawdown in atmospheric carbon dioxide caused by life processes; one hypothesis is that land plants did it, as they colonized the continents hundreds of millions of years before popularly supposed, and another is that early animal life did it;[159]

  • Reconstructions of the oxygen record are subject to a wide range of error, so the levels used to make life-related arguments may be invalid;[160]

  • Animal activities may have been responsible for ventilating the oceans, especially near shore, so animals were a cause, not a consequence, of oxygenating the oceans;[161]

  • Even if rising oxygen levels in the atmosphere and oceans coincided with the rise of complex life, it was not necessarily a causal relationship; some animals can respire anaerobically (at up to four times the usual rate for anaerobic respiration and fermentation); perhaps the rise of complex life happened in an anaerobic environment and animals only switched to aerobic respiration when oxygen became available;[162]

  • Canfield’s sulfur evidence may not be evidence of an oxygen increase but of an increase in burrowing animals in the ocean sediments; Canfield himself has broken with the results of his mentor's model (GEOCARBSULF), which had near-modern levels of oxygenation reached by the Cambrian Period, and believes as of 2014 that oxygen levels were no more than 5% by the Cambrian Explosion and did not reach modern levels until around 420 mya, after complex life began to colonize land;[163]

  • Oceanic salinity may have prevented complex life from forming in the ocean; maybe complex life first evolved on land and only entered the ocean when it was safe to do so, but the fossil record is too sparse to currently prove it; maybe even life itself first evolved in fresh water, not in oceanic volcanic vents;[164]

  • Atmospheric oxygen levels really did not change around the ventilation episode; oxygen may have been no more important to the appearance of complex life than water or photosynthesis were;[165]

  • The coming eon of complex life had no single underlying cause but was the result of fortuitous circumstances and dynamics that happened when they did.[166]

 

Some hypotheses are stronger, others weaker, and some have already come and gone (and might be resurrected one day, as Birkeland’s hypothesis was?).  The coming generation of research may resolve most of these issues, but new ones will undoubtedly arise and there is obviously a long way to go before significant consensus will be reached on those ancient events.

Again, the purpose of this chapter’s presentation is to cover, in some depth, the scientific process and the kinds of controversies and numerous competing hypotheses that can appear, and to show how intersecting lines of evidence, brought from diverse disciplines and using increasingly sophisticated tools, are providing new and important insights, not only into the distant past, but also into issues with modern-day relevance.

Readers for the collective task that I have in mind need to become familiar with the scientific process, partly so that they can develop a critical eye for the kinds of arguments and evidence that attend the pursuit of FE and other fringe science/technology efforts.  For the remainder of this essay, I will attempt to refrain from referring to too many scientific papers and getting into too many details of the controversies.  Following my references will help readers who want to go deeply into the issues, and many of them are as deep and controversial as the Snowball Earth hypothesis and its aftermath, or the attempts to explain the Shuram excursion, have proven to be.  These are relatively new areas of scientific investigation, partly due to an improved scientific toolset and ingenious ways of using it.  It is very possible that the controversies in those areas will diminish within the next generation as new hypotheses account for increasingly sophisticated data, and paradigmatic changes in the near future are nearly certain.  But science is always subject to becoming dogmatic, and hypotheses can prevail for reasons of wealth, power, rhetorical skill, and the like, not because they are valid.  The history of science is plagued with that phenomenon, and probably will be as long as humanity lives in the era of scarcity.

As will become a familiar theme in this essay, the rise and fall of species and ecosystems is always primarily an energy issue.  The Ediacaran extinction is a good example: Ediacaran fauna either became an energy source for early Cambrian predators, ran out of food energy, ran out of the oxygen necessary to power their metabolisms, or lacked some other energy-delivered nutrient.  After the extinction events, biomes were often cleared for new species to dominate, which were often descended from species that were marginal ecosystem members before the extinction event.  They then enjoyed a golden age of relative energy abundance as their competitors were removed via the extinction event. 

For this essay’s purposes, the most important ecological understanding is that the Sun provides all of earthly life’s energy, either directly or indirectly (all except nuclear-powered electric lights driving photosynthesis in greenhouses, as that energy came from dead stars).  Today’s hydrocarbon energy that powers our industrial world comes from captured sunlight.  Exciting electrons with photon energy, then stripping off electrons and protons and using their electric potential to power biochemical reactions, is what makes Earth’s ecosystems possible.  With too little energy, reactions will not happen (as with ice ages, enzyme poisoning, the darkness of night, food shortages, and shortages of the key nutrients that support biological reactions), and with too much (such as ultraviolet light, ionizing radiation, and temperatures too high for enzyme activity), life is damaged or destroyed.  The journey of life on Earth has primarily been about adapting to varying energy conditions and finding levels where life can survive.  For the many hypotheses about those ancient events and what really happened, the answers are always primarily in energy terms, such as how energy was obtained, how it was preserved, and how it was used.  For life scientists, that is always the framework, and they devote themselves to discovering how the energy game was played.

 

Speciation, Extinction, and Mass Extinctions

Earth’s Largest Mass Extinction Events

Extinction Event | Major or Minor | Date | Percent of Species or Genera that Went Extinct | Suspected Primary Cause(s) | Aftermath Dynamics
Microscopic organisms | Minor | May have happened numerous times before the eon of complex life | Unknown | Changing sea temperatures and chemistry | The last microscopic mass extinction directly preceded the rise of the first animals that could be seen with the naked eye.
Ediacaran | Minor | c. 542 mya | Unknown, but almost all Ediacaran forms disappeared | Anoxia | Cambrian Explosion.
Mid-Cambrian | Minor | c. 517 mya | Unknown, but small shelly fauna largely disappear | Anoxia and changing sea levels | Trilobite radiation.
Dresbachian | Minor | c. 502 mya | 40% of marine genera | Anoxia | End of Golden Age of Trilobites, and brachiopods diminished.
End-Cambrian | Minor | c. 485 mya | Unknown, but half of trilobite species went extinct; might be regional, but could be a major mass extinction | Rising sea levels and anoxia | Ordovician radiation.
Ordovician–Silurian | Major | c. 443 mya | c. 85% of all species | Temperature and sea level changes and anoxia | Ecosystem functioning not fundamentally altered.
Ireviken | Minor | c. 433 mya | 50% of trilobite and 80% of conodont species, in a seafloor event | Climate and sea level changes, in a late ice age event; chemistry and/or current changes, or anoxia | Disaster taxa appear afterward, followed by recovery.
Mulde | Minor | c. 427 mya | Seafloor communities devastated | Climate change, sea level changes, and anoxia |
Lau | Minor | c. 424 mya | Seafloor communities devastated | Climate change, sea level changes, and anoxia |
Late Devonian | Major | c. 375 to 360 mya | c. 70% of all species | Series of extinctions; sea level changes and anoxia; mountain-building and volcanism could have triggered an ice age that caused it | Arthropod and vertebrate colonization of land halted for 14 million years.
Mid-Carboniferous | Minor | c. 325 mya | Marine extinction | Sea level changes related to an ice age, and continental uplift related to continents colliding to form Pangaea | End of Mississippian and beginning of Pennsylvanian epochs of the Carboniferous Period.
Carboniferous | Minor | c. 307 mya | Rainforest collapse | Ice age | The rise of reptiles.
Permian–Triassic | Major | c. 270 to 250 mya | c. 90-96% of all species | Series of extinctions; volcanism, warming, sea level changes, and anoxia; formation of Pangaea probably the ultimate cause | Beginning of a new era.
Carnian-Pluvial | Minor | c. 230 mya | Ammonoid and conodont mass extinction; near-extinction of therapsids, and extinction of a synapsid that would have been a dinosaurian competitor | Volcanism, mountain-building | Dinosaurs begin to dominate, and mammals first appear several million years later; stony corals first appear; some argue that this extinction is more significant than the end-Triassic extinction.
Triassic–Jurassic | Major | c. 200 mya | c. 70-75% of all species | Volcanism, warming, sea level changes, and anoxia | The dominance of dinosaurs.
Toarcian | Minor | c. 183 mya | Reefs and ammonites devastated | Volcanism, anoxia | Carbonate hardgrounds become common in calcite seas.
End-Jurassic | Minor | c. 145 mya | Reef collapse; bivalves had about a 20% extinction | Falling sea levels | Cretaceous Period rise of ornithischians.
Aptian | Minor | c. 116 mya | Marine event; rudist bivalve domination temporarily halted | Volcanism | Rudists subsequently dominated, displacing coral reefs.
Cenomanian | Minor | c. 93 mya | Marine event, which may have marked the final extinction of ichthyosaurs; about 25% of marine invertebrate species went extinct; rudist reefs declined | Undersea volcanism, anoxia | Biomes recovered largely unchanged, although the world continued cooling for nearly the next 40 million years.
Cretaceous–Paleogene | Major | c. 66 mya | c. 75% of all species | Bolide impact, and perhaps also volcanism and sea level changes | The end of dinosaurs and the rise of mammals.
Paleocene-Eocene | Minor | c. 56 mya | Seafloor communities devastated; up to 50% of seafloor foraminifera species went extinct | Volcanism, release of methane hydrates from the ocean floor, possibly related to a change in ocean currents | Warmest epoch in hundreds of millions of years, and great radiations of mammals; a Golden Age of Life on Earth.
Mid-Eocene | Minor | c. 50-49 to 38-37 mya | Warm-climate species migrated or went extinct; greatest mass extinction of the Cenozoic Era so far | Cooling related to the transition from Greenhouse Earth to Icehouse Earth | Cold-adapted species dominate biomes.
End-Eocene | Minor | c. 34 mya | Half of European mammal genera, and all early whales | Migration of Asian mammals to Europe, and Icehouse Earth conditions in the oceans | Relatively cold Oligocene Epoch begins.
Mid-Miocene | Minor | c. 14.8-14.5 mya | Warm-climate species migrated or went extinct | Mountain-building, and carbon sequestration due to silicate weathering | Earth has not been as warm since then; Miocene apes migrate back to Africa, which might include humanity’s ancestor.
Late Pliocene - Atlantic | Minor | c. 3.5 to 2.5 mya | 65% of North American bivalve species, and Florida’s reefs | Closure of the gap between the Atlantic and Pacific oceans between the Americas, and resultant Gulf Stream dynamics, which may have initiated the current ice age | Current ice age in the Northern Hemisphere.
Late Pliocene | Minor | c. 3.0 to 2.7 mya | The majority of (South American) mammalian species | Land bridge to North America forms | Mammals that migrated from North America dominate South American biomes.
Quaternary/Holocene | Major | c. 50 kya to present | May reach 50% or higher by 2100, and maybe far sooner, and far higher later; an eventual Permian-level extinction is possible | Humanity, warming | Future extinctions are still preventable by humans, and humans can create a radically different aftermath.

Chapter summary:

In his Origin of Species, Charles Darwin sketched the processes by which species appear and disappear, today called speciation and extinction.  Origin of Species is a landmark in scientific history and is still immensely influential.  But it was also afflicted by false notions that are still with us.  Europe’s emergence from dogma and superstition has been a long, fitful, and only partially successful process.  In the 1500s, Spanish mercenaries read to the unfortunate Indians whom they conquered and annihilated a legal document stating that Creation was about five thousand years old, as scholars of the time simply added up the Book of Genesis’s “begats.”  The Old Testament is filled with tales of genocide, miracles, and disasters, with a global flood that the faithful Noah survived.  As geology gradually became a science and processes such as erosion and sedimentation were studied, the Judeo-Christian belief in Earth’s being five thousand years old was discarded, and the concept of geologic time arose in Europe.

In the early 19th century, a dispute was personified by Charles Lyell, a British lawyer and geologist, and Georges Cuvier, a French paleontologist.  Their respective positions came to be known as uniformitarianism and catastrophism.  Just as the British prevailed in their global imperial competition with the French, so did uniformitarianism prevail in scientific circles.  Under the comforting uniformitarian worldview, there was no such thing as a global catastrophe.  Changes had only been gradual, and only the present geophysical, geochemical, and biological processes had ever existed.  The British Charles Darwin explicitly made Lyell’s uniformitarianism part of his evolutionary theory, and he proposed that extinction was only a gradual process.  Cuvier was the first scientist to suggest that organisms had gone extinct, which contradicted the still-dominant Biblical teachings, even in the Age of Enlightenment.[167]  Although Cuvier did not subscribe to the evolutionary hypotheses that predated Darwin, his catastrophic extinction hypothesis was informed by his fossil studies.  But Lyell and Darwin prevailed.  Suggesting that there might have been catastrophic mass extinctions in Earth’s past was an invitation to be branded a pseudoscientific crackpot.  That state of affairs largely prevailed in orthodoxy until the 1980s, after the asteroid impact hypothesis was posited for the dinosaurs’ demise.[168]  An effort led by a scientist publishing outside of his field of expertise (a Nobel laureate in this instance) removed gradualism from its primacy.  Only since the 1980s have English-speaking scientists studied mass extinctions without facing ridicule from their peers, which has never been an auspicious career situation.  Since then, many minor and major mass extinction events have been studied, but the investigations are still in their early stages, partly due to a dogma that prevailed for more than a century and a half, and Lyell’s uniformitarianism is still influential.  Even the ranking of major mass extinctions is in dispute, including how they should be ranked at all, and a mid-Carboniferous Period extinction was recently argued to be greater than the Ordovician–Silurian extinction.[169]

Speciation has probably been more controversial than extinction.  To be fair to Darwin, genetics was not yet a science when Origin of Species was published in 1859.  It was not until the 1866 publication of an obscure paper by Silesian friar Gregor Mendel that the science of genetics began, but Mendel’s work was dismissed and ignored by mainstream science until the 20th century.  Darwin went to his grave unaware of Mendel’s work.  Today, speciation is primarily considered to be a genetic event.[170]  But just as proteins have several dimensions of structure that dictate their function, with emergent properties appearing at higher levels of complexity, the DNA code by itself does not explain life, although the popular Selfish Gene Hypothesis frames life and evolution as a competition between genes.[171]

Speciation, at least before humans began to alter “natural” evolution with selective breeding, genetic engineering, and the like, has largely been thought to result from populations becoming genetically isolated, primarily through geographic isolation; isolated populations continue to evolve and adapt to their environments, and eventually the separated populations become separate species.  Even the definition of a species is still controversial, but the general concept is that if two sexually reproducing organisms can breed and produce fertile offspring, they are of the same species.  In light of evolutionary theory, human races are simply genetically isolated populations that have evolved as they adapted to their environments, but all races can interbreed, so humanity is a single species.  Recent DNA studies suggest that white skin is an evolutionary adaptation to northern climates, that white skin may be only six thousand years old, and that blue eyes and light hair are similarly new and developed in the same vicinity.[172]  As Europe’s conquest of the world and the subsequent Industrial Revolution have ended a great deal of genetic isolation, the adaptive differences seen in the races have been gradually disappearing as multiracial offspring have increased.  If humanity attains the FE epoch, “race” will disappear along with geographic isolation.

Liebig’s Law states that life can only grow as fast as its scarcest nutrient, and nutrient availability is clearly the limiting factor in many ecological situations.  In the oceans today, most marine life lives near land (99% of the global fish catch is caught near land), as nutrient runoffs from land feed oceanic ecosystems.  The runoff is seasonal and so is the fish catch, the deposition of marine sediments, and the like.  Nitrogen and phosphorus are two particularly critical nutrients; blooms and die-offs are based on those elements’ availability.  In the industrial age, with phosphorus and nitrogen artificially added in agriculture, the runoff has created great algal blooms (which create hypoxic “dead zones”) and other events, and even artificially introduced carbon is a suspected variable. 
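
Liebig’s Law is simple enough to state in a few lines of code.  The following is a minimal sketch, not drawn from this essay’s sources; the nutrients, supplies, and demands are invented for illustration.  The point is that growth is capped by the scarcest nutrient relative to need, not by the total nutrient pool:

```python
# Liebig's Law of the Minimum: growth tracks the scarcest nutrient,
# measured relative to the organism's needs, not in absolute amounts.
# All numbers below are illustrative, not empirical.

supply = {"nitrogen": 50.0, "phosphorus": 3.0, "iron": 0.8}  # available units
demand = {"nitrogen": 10.0, "phosphorus": 1.0, "iron": 0.1}  # units needed per unit of growth

# How many units of growth each nutrient alone could support.
limits = {n: supply[n] / demand[n] for n in supply}

limiting = min(limits, key=limits.get)
print(limits)                       # {'nitrogen': 5.0, 'phosphorus': 3.0, 'iron': 8.0}
print(limiting, limits[limiting])   # phosphorus 3.0
```

Even though nitrogen is the most abundant nutrient in this toy example, phosphorus sets the ceiling, which is the essence of Liebig’s insight.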

Since the most dramatic instances of speciation seem to have happened in the aftermath of mass extinctions, this essay will survey extinction first.  A corollary to Liebig’s Law is that if any critical nutrient falls low enough, the deficiency will not only limit growth, but the organism will be stressed.  If the nutrient level falls far enough, the organism will die.  A human can generally survive between one and two months without food, ten days without water, and about three minutes without oxygen.  For nearly all animals, all the food and water in the world are meaningless without oxygen.  Some microbes can switch between aerobic respiration and fermentation, depending on the environment (which might be a very old talent[173]), but complex life generally does not have that ability; nearly all complex life is oxygen-dependent.  The only exceptions are marine life forms that have adapted to varying oxygen levels.  Due to their superior respiration system, birds can go where mammals cannot, flying over the Himalayas, for instance, or being sucked into a jet engine several kilometers above sea level.  If oxygen levels rise or fall very fast, many organisms will not be able to adapt, and will die. 

Biologists consider extinctions to be due to failure to adapt to environmental changes, and the “environment” includes other organisms.[174]  Exactly how species go extinct is still poorly understood, but the idea that organisms that capture the most energy win the battle for survival is a common understanding among biologists, and they see ecosystems organized along the position in the food chain that each organism occupies.  A popular model used for analyzing predator-prey relationships makes the relationship explicit.  There are many interacting variables, including those environmental nutrients, both inorganic and those provided by life forms.  The ability of an organism or species to adapt is partly dependent on how specialized it is and how unique its habitat is.  Absolute numbers, geographic distribution, position in the food chain (higher in the food chain is riskier), mobility, and reproductive rates all impact extinction risk.  During the Cambrian Period, about 80% of all animals were immobile.  Today, 80% of all animals are mobile.[175]  The immobile animals were at higher extinction risk, for obvious reasons.
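
The essay does not name the predator-prey model; the classic formalism, and presumably the one meant, is the Lotka-Volterra system, which makes the energy relationship between eater and eaten explicit.  As a minimal statement:

$$\frac{dx}{dt} = \alpha x - \beta xy \qquad \frac{dy}{dt} = \delta xy - \gamma y$$

Here $x$ is the prey population, $y$ the predator population, $\alpha$ the prey’s reproduction rate, and $\gamma$ the predators’ death rate.  The coupled $xy$ terms carry the energy transfer: predator growth ($\delta xy$) is fueled directly by captured prey ($\beta xy$), and the two populations cycle endlessly around each other.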

The evolutionary game for a species is for enough of its members to survive long enough to produce viable offspring.  Organisms have adopted myriad survival and reproduction strategies, with astonishing diversity.  There are many ways to win or lose that game, but every species eventually loses.  More than 99.9% of all species that have ever lived on Earth became extinct.  A mammalian species has a life expectancy of around a million years, while a marine invertebrate species has one of about five-to-ten million years.  Today’s global extinction rate is more than 100 times the “normal” rate (the “background rate”), and perhaps far greater, such as 10,000 times, due to human domination of the ecosphere.  The current rates could come to rival the rates during the greatest mass extinction of all: the Permian extinction.[176]
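
A rough back-of-envelope calculation (mine, not from the essay’s sources) connects those lifespans to the “background rate.”  If the mean species lifespan is about a million years, then on average:

$$\text{background rate} \approx \frac{1}{\text{mean species lifespan}} = \frac{1}{10^{6}\ \text{years}} = 1\ \text{extinction per million species-years}$$

A rate 100 times background would then be about 100 extinctions per million species-years, and 10,000 times background about 10,000, which conveys just how abnormal the current situation is.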

There are “normal” extinction scenarios, and the “happy ending” extinction is when a species lives and evolves until there comes a time when it could no longer produce viable offspring by breeding with its distant ancestors.  There obviously would not be a “bright-line” demarcation for such an event, nor is there currently any way to test for it, but it has happened countless times.  Another normal extinction begins when a species splits into isolated populations, such as by tectonic plates moving away from each other.  Old World and New World monkeys became separated when monkeys from Africa migrated to South America, before the Atlantic Ocean grew to its present size.  Isolated populations of a species would continue to evolve and eventually could no longer interbreed, which would make them different species by definition, and perhaps neither population could still breed with its ancestors as they were when the populations became separated.  Both populations might continue to thrive, but one might find itself in unfavorable conditions and go extinct while the other continued living.  If those isolated populations were still the same species, the population that went extinct would be called locally extinct, but if they were separate species, then the disappearing population would be a species extinction. 

Often, a species will not become extinct, but its population will be reduced to a small number, called a “bottleneck,” usually in refugia where it can ride out the storm and expand again when conditions improve.  That isolation can cause speciation but can also cause extinction.  When a bottleneck happens, most of the population’s genetic diversity vanishes, which can make it more vulnerable.  A bottlenecked population can go extinct, can speciate, can undergo an adaptive radiation when conditions improve, or can remain in its refugia and become a living fossil.  The coelacanth is a living fossil that found and remained in such refugia, as it outlived all of its cousins by hundreds of millions of years.  Coelacanths have a strategy similar to the nautilus’s, which spends most of its time in its deep-water refugia and rises at night to feed on the reefs; all of its cousins long ago went extinct.  Humans seem to have gone through a bottleneck, as have many other animals alive today, most of which are now in threatened status.  In the Devonian extinctions, armored fish species were reduced by half during the first extinction event, and the remaining population became bottlenecked.  From that bottlenecked situation, the second Devonian extinction event annihilated the remaining armored fish.

Scientists often measure extinction rates at the family and genus levels of the taxonomy; families and genera are far harder to kill off than species.  Some genera/families beat the odds and survived for hundreds of millions of years.  They are called living fossils, and usually all of their close relatives went extinct long ago.  The ubiquitous and lowly horsetail is a living fossil that first appeared nearly 400 mya.  There have been recent calls to retire the "living fossil" designation, as the survivors of their lines have evolved somewhat over the years.  However, it was not all that much, as they are very recognizable descendants of nearly identical-looking ancestors, and if those "living fossils" were graphically represented on the tree of life, they might instead be called the last leaves on their branch.  Perhaps "sole survivor" conveys the meaning better.  However scientists choose to term it, the fact is that those "living fossils" have an ancient lineage, have not appreciably changed in millions of years, and the large "family" that they descended from all went extinct; their branch is bare except for them.  The survivors evolved since their close relatives died out, but there is nothing close to them on their branch of the tree of life.

Some kinds of organisms found great success with their strategies and they marginalized other kinds and even drove them to extinction, only to die off themselves in a mass extinction event, after which the previously marginalized life forms flourished in the post-catastrophic biome.  The rise of mammals might never have happened without the dinosaurs’ demise.  Mass extinction events account for less than five percent of all species extinctions during the eon of complex life, but they had a profound impact on complex life’s history; the rise of mammals is only one of many radical changes.  Not only would a class of animals such as mammals thrive when their dinosaur overlords were gone, but the direction of mammal evolution was also influenced.  It took millions of years, even tens of millions of years, for ecosystems to approach their former level of abundance and diversity after a mass extinction event, and the new biomes could appear radically different from the pre-extinction biomes.  The geologic periods in the eon of complex life usually have mass extinctions marking their boundaries.

Many assemblages of organisms had their “golden ages” in fresh biomes, then cycled through a plateau and decline, and finally suffered marginalization or extinction.  Sometimes the decline was relatively slow, with its ups and downs, and at other times it was over in a flash, such as the dinosaurs’ exit. 

The extinction of Ediacaran fauna was the first mass extinction of organisms that could be seen with the naked human eye.  There was an extinction of microscopic eukaryotes soon before the eon of complex life began, and there may have been mass extinctions of microbes before then, but the evidence is so thin for anything that early that scientists may never know just how many mass extinctions there were.  However, bacteria and archaea, those biochemical wizards, can exist in environments far too harsh for complex life, and those communities do not have the apparent instability of complex life’s food chains, so there may have been few mass extinctions in Precambrian times.  Cyanobacteria have not fundamentally changed in billions of years, which means that their mode of living has always worked well enough to ensure their survival.  No animals have anything close to such a lengthy pedigree.

Mass extinctions always have critical geophysical aspects to them, and often geochemical ones.  Continental shelves under shallow seas, which are home to most marine life, are vulnerable to sea level and oceanic current changes.  Stagnant waters, or waters that have too many nutrients dumped into them, can lose their oxygen, which triggers anoxic events that kill complex life.  A continental shelf exposed to the atmosphere by a falling sea level would obviously lose its marine life, and that marine life might have had nowhere else to go.  Sea levels can rise or fall for different reasons.  The most obvious reason has been advancing and retreating ice sheets, as water is removed from or added to the oceans, but the aggregate continental landmass has always grown (possibly sporadically), continents can rise and fall during the journeys of their tectonic plates, and the ocean’s collective basin has fluctuated in size, usually falling as water was hydrated into rocks, and also falling when tectonic plates collided to form supercontinents and rising again as they fragmented.  Generally, when sea levels fell, the continental shelves lost their marine life, and when they rose, anoxic conditions often accompanied them.  There is evidence that the ozone layer has been periodically damaged, which stressed all plants and animals that the Sun directly shone on.[177]  The positions of the continents, both in relation to each other and in their proximity to the equator or poles, can have dramatic effects, including impacts on global climate.  Global climate changes and moving continents can turn rainforests into deserts and vice versa.

There is also evidence that life itself can contribute to mass extinctions.  When the GOE eventually oxygenated the oceans, organisms that could not survive or thrive around oxygen (called obligate anaerobes) retreated to the anoxic margins of the global ocean and land.  When anoxic conditions appeared, particularly when a Canfield Ocean existed, the anaerobes could abound once again, and when sulfate-reducing bacteria thrived, usually arising from ocean sediments, they produced hydrogen sulfide as a waste product.  Since the ocean floor had already become anoxic, the seafloor was already a dead zone, so little harm was done there.  The hydrogen sulfide became lethal when it rose in the water column and killed off surface life, and then wafted into the air and asphyxiated life near shore.  But the greatest harm to life may have been inflicted when hydrogen sulfide eventually rose to the ozone layer and damaged it, which could have been the final blow to an already stressed ecosphere.  That may seem a fanciful scenario, but there is evidence for it.  There is fossil evidence of ultraviolet-light-damaged photosynthesizers during the Permian extinction, as well as of photosynthesizing anaerobic bacteria (green and purple), which could only have thrived in sulfide-rich anoxic surface waters.  Peter Ward made this a key piece of evidence for his Medea hypothesis, and he has implicated hydrogen sulfide events in most major mass extinctions.[178]  An important aspect of Ward’s Medea hypothesis work is that about 1,000 PPM of carbon dioxide in the atmosphere, which might be reached in this century if we keep burning fossil fuels, may artificially induce Canfield Oceans and result in hydrogen sulfide events.[179]  Those are not wild-eyed doomsday speculations, but logical outcomes of current trends and a growing understanding of previous catastrophes, proposed by leading scientists.  Hundreds of hypoxic dead zones already exist on Earth, and they are primarily manmade.  Even if such events are “only” 10% likely to happen in the next century, that we are flirting with them at all should make us shudder, for a few reasons: one is the awesome damage that they would inflict on the biosphere, including humanity, and another is that they are entirely preventable with technologies that already exist on the planet. 

Mass extinction events can seem quite capricious as to which species live or die.  Ammonoids generally outcompeted their ancestral nautiloids for hundreds of millions of years.  Ammonoids were lightweight versions of nautiloids, and they often thrived in shallow waters while nautiloids were banished to deep waters.  Both dwindled over time, as they were outcompeted by new kinds of marine denizens.  In the Permian and Triassic mass extinctions, deep-water animals generally suffered more than surface dwellers did, but the nautiloids’ superior respiration system still saw them survive.  Also, nautiloids laid relatively few eggs that took about a year to hatch, while ammonoids laid more eggs that hatched faster.  However, the asteroid-induced Cretaceous mass extinction annihilated nearly all surface life while the deep-water animals fared better, and nautiloid embryos that rode out the storm in their eggs were survivors.  The Cretaceous extinction wiped out the remaining ammonoids, while nautiloids are still with us and comprise another group of living fossils, although that status is disputed as of 2014.[180]  Lystrosaurus was about the only land animal of significance that survived the Permian extinction, and it dominated the early Triassic landmass as no animal ever has; it comprised about 95% of all land animals.  Why Lystrosaurus, which was like a reptilian sheep?  Nobody knows for sure, but it may have been the luck of the draw.[181]  Perhaps relatively few bedraggled individuals existed in some survival enclave until the catastrophe was finished, and then they quickly bred unimpeded until the supercontinent was full, for the most spectacular species radiation of all time, at least until humans arrived on the evolutionary scene. 

Many causes for mass extinctions have been suggested.  Cuvier speculated that extinctions might have regular periodicity, and other scientists have proposed that hypothesis.  Around 30 million years is the average time between mass extinctions, which set scientists speculating whether galactic dynamics could be responsible.  Gamma ray bursts from supernovas have been proposed as one possible agent, as have bolide events, but the periodicity hypothesis has fallen out of favor.[182]  The periodic nature of mass extinctions could be because it takes millions of years for complex ecosystems to recover from the previous extinction events and build themselves into unstable states again, when new events cause the ecosystems to collapse.[183]

Before the era of mass extinction investigation that began in the 1980s, a hundred hypotheses were presented in the scientific literature for the dinosaur extinction, but it was a kind of scientific parlor game.  Scientists from all manner of specialties concocted their hypotheses.[184]  But even during the current era of scientific study of mass extinctions, much is unknown or controversial, and even the data is in dispute, let alone its interpretation.  Dynamics may have combined to produce catastrophic effects, such as an increasing atmospheric carbon dioxide concentration warming the land and oceans to the extent that otherwise stable methane in hydrates on the ocean floor and in permafrost was liberated and escaped into the atmosphere.  That situation is currently suspected to have contributed to the Permian, Triassic, and Paleocene-Eocene extinctions, as well as to helping end the Cryogenian Ice Age.  Today, there is genuine fear among climate scientists that those dynamics might return in the near future, as global warming continues and hydrocarbons are burned with abandon, which could contribute to catastrophic runaway conditions.  Wise scientists admit that humanity is currently conducting a huge chemistry experiment with Earth, and while the outcomes are far from certain, the risk of catastrophic outcomes is very real and growing.

Recent environmental studies show that disturbed ecosystems can suffer cascading failures, as the removal of one part of a food chain can collapse the entire chain, and entire ecosystems can go extinct.  Cascades in today's world usually begin when the apex predator is removed (by humans, in what is called a trophic cascade[185]), but not always.  Those cascading events can happen in aquatic and terrestrial environments.  Food chains are essentially energy chains made possible by aerobic respiration, and the more complex they are, the more energy is required to sustain them.  The leading hypothesis for why complex civilizations collapse is also an energy-scarcity dynamic.  Also, the most compelling findings that I have encountered regarding degenerative disease in humans show that if individual cells no longer have their nutritional needs met by the organism, they stop acting out their role as specialized cells and “go rogue.”  It may be difficult-to-impossible for scientists to reconstruct and test cascading failure hypotheses in ancient mass extinction events, but such failures may have played a major role in them, if not the dominant role.
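
The bottom-up version of a cascading failure is easy to sketch.  Below is a toy model with an invented five-species chain; a consumer is assumed to die out when all of its food sources are gone.  Note that this captures only the bottom-up direction; a top-down trophic cascade, in which removing the apex predator lets its prey overpopulate and strip the levels below, needs a richer model than this:

```python
# Toy food web: each consumer dies out if all of its food sources vanish.
# Species and links are invented for illustration; real webs are far more
# redundant and interconnected than a single chain.

food_sources = {
    "algae": set(),                 # primary producer (needs only sunlight)
    "zooplankton": {"algae"},
    "small_fish": {"zooplankton"},
    "big_fish": {"small_fish"},
    "shark": {"big_fish"},          # apex predator
}

def cascade(web, initially_lost):
    extinct = set(initially_lost)
    changed = True
    while changed:                  # propagate losses until nothing changes
        changed = False
        for species, prey in web.items():
            if species in extinct or not prey:
                continue            # already gone, or a producer
            if prey <= extinct:     # every food source has vanished
                extinct.add(species)
                changed = True
    return extinct

# Knock out the base of the chain and everything above it collapses.
print(sorted(cascade(food_sources, {"algae"})))
# ['algae', 'big_fish', 'shark', 'small_fish', 'zooplankton']
```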

Mass extinction events may be the result of multiple ecosystem stresses that reach the level where the ecosystem unravels.  Other than the meteor impact that destroyed the dinosaurs, the rest of the mass extinctions seem to have multiple contributing causes, and each one ultimately had an energy impact on life processes.  The processes can be complex and scientists are only beginning to understand them.  This essay will survey mass extinction events and their aftermaths in some detail, as they were critical junctures in the journey of life on Earth.

In 1972, Niles Eldredge and Stephen Jay Gould published their theory of punctuated equilibrium, which has generated plenty of controversy.  The basic idea is that species usually evolve slowly and even remain in a kind of stasis, except in exceptional times, when they can evolve relatively quickly.  Those exceptional times are often when new ecological niches become available, such as when a new biological feature allows exploitation of previously unavailable niches, or after an ecosystem is wiped clean by a mass extinction.  If a creature finds a way of life that works, and it can keep exploiting/defending its unique niche, and the niche does not disappear, it can keep doing it for hundreds of millions of years without any significant changes, as the horsetail, nautilus, and coelacanth have done.[186] 

Gene duplication is an important kind of genetic innovation that leads to speciation, which begins when a gene is duplicated, seemingly in error, and gets a “free ride,” like a spare part that never gets used.  The spare can then “experiment,” which can lead to a new and useful gene that perhaps codes for a new biological feature that enhances an organism’s ability to survive or reproduce.  About 15% of humanity‘s genes arose through gene duplication events, and in eukaryotes, the duplication rate is around 1% per gene per million years.[187]  In the wake of mass extinctions, new species appear at high rates in what is called an adaptive radiation.  A leading hypothesis is that those post-extinction times allow for a golden age when life is easy, without the resource competition typical of more crowded biomes.  In such environments, organisms with duplicate genes and other genetic “defects” survive, and after long enough, those mutations become useful and lead to new species.  The most famous such adaptive radiation was the Cambrian Explosion, although its character was different from other radiations, as new body plans were invented as never before.[188]
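
Those duplication figures imply a steady supply of evolutionary raw material.  As a rough illustration (the genome size is my assumption, not the essay’s), a genome of about 20,000 genes duplicating at 1% per gene per million years yields:

$$2\times10^{4}\ \text{genes} \times \frac{0.01\ \text{duplications}}{\text{gene} \cdot \text{Myr}} \approx 200\ \text{duplications per genome per million years}$$

Most of those spares are lost or silenced, but over evolutionary timescales even a small fraction surviving as new functional genes adds up.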

Oxygen levels have fluctuated far more than temperature, ocean salinity, and pH have during the eon of complex life.  Peter Ward proposed that fluctuating atmospheric oxygen levels have not only contributed to mass extinction scenarios, but that adapting to low oxygen levels has been a key stimulus for biological innovation.[189]  In summary, speciation is a reaction of organisms to challenge and opportunity, which is eventually reflected in their DNA. 

 

The Cambrian Explosion

Global temperatures during the eon of complex life (Source: Wikimedia Commons)

Global carbon dioxide concentrations during the eon of complex life (Source: Wikimedia Commons)

World map when Cambrian Period began (c. 540 mya) (Source: Wikimedia Commons) (map with names is here)

 

Chapter summary:

Until Ediacaran fossils were recognized for what they were, the Cambrian Period (c. 541 to 485 mya) was considered to have produced the earliest known fossils, and that situation vexed scientists from Darwin onward.[190]  If animals just came into existence from nothing, the Creationist arguments of Darwin’s time may have had some validity.  Darwin attributed the lack of Precambrian fossils to the geological record’s imperfection.  As this essay’s previous sections have shown, scientists have filled many gaps and Darwin’s theory has held up well. 

The Cambrian Period, however, is of eonic significance and still a source of great controversy.  The Cambrian Explosion was unique and marked the development of the first complex, modern-looking ecosystems.[191]  Although the Cambrian Explosion is the most spectacular event in the fossil record, it is questioned whether it was really an explosion at all, and many contenders for the “cause” of the explosion have been offered.  Various hypotheses fell by the wayside over the years, but the hunt for one “cause” may be futile.  One factor may have triggered its more dramatic manifestations, but several dynamics played their roles.  There are going to be proximate and ultimate causes for events such as that.  First and foremost, the Cambrian Explosion was about size, which was aided by the oxygenation of the seafloor and interacted with developmental changes (from egg to adult) and new ecological relationships.[192]  The currently predominant hypotheses feature geophysical and geochemical processes interacting with biological ones.[193]  The increase in organism size that marked the rise of complex life is today thought to be a response to predation, which led to life’s “arms race.”[194]  The competition between organisms, locked in predator/prey, parasite/host, and grazer/grazed dynamics, is thought to be behind a great deal of the evolutionary innovation called coevolution, as organisms adapted to each other.  The Red Queen hypothesis posits that the constant battle between those competing life forms led to sexual reproduction and other innovations.[195]

During the Cambrian Explosion, an ecosystem developed in which life on the sea floor, surface, and water column all interacted for the first time.  All but one of the environmental factors currently and prominently considered were energy dynamics, as the environment provided either too much or too little energy, and the nutrient hypothesis (calcium in this case) will be revisited numerous times in this essay.  A lack of nutrients, mineral and otherwise, always meant that the energy-driven dynamics that delivered the nutrients were curtailed.  If enough energy is properly applied, all nutrients can be abundant.

Before the rise of humanity and industrial agriculture, the interplay of life, climate, and land masses created the seasonal runoffs that fed oceanic ecosystems.  However, during the Cambrian Explosion the land was largely barren, as life had yet to significantly invade land.  Also, continental shelves have always been key hosts for oceanic ecosystems, as sunlight could reach the seafloor and nutrients were closer to the surface.  When supercontinents broke apart or formed as the tectonic plates danced across Earth’s crust, shallow seas were often created, which were usually quite life-friendly.  Those ancient shallow seas and swampy continental margins have great importance to today’s humanity, as our fossil fuels were usually created there.  Earth’s coal beds were created in swampy floodplain conditions, usually near coasts, and the oil deposits were created by black shale and marlstone that formed in shallow anoxic waters.  The Tethys Ocean and its predecessors (1, 2) had a half-billion-year history that began in the Ediacaran, and the Tethys finally disappeared less than 20 mya.  The shallow margins of those tropical oceans, and the anoxic events that dotted the eon of complex life, formed most of today’s oil deposits, and particularly Middle East oil.  Numerous shallow tropical seas characterized the Cambrian Period.

The first skeletons appeared in the Ediacaran, and in the Cambrian Period skeletons became a key aspect of the coming arms race between predator and prey.  Food chains appeared in which about ten percent of an organism’s energy was transferred to the animal that ate it, as the sketch following this paragraph illustrates.  Unlike the internal skeletons that characterized fish, amphibians, reptiles, birds, and mammals, the first skeletons were external.  Hard shells protected from predation, and the bigger the animal, the more likely it would survive (but a bigger animal also meant a bigger energy windfall if it could be eaten).  But size presented immense challenges.  Similar to how complex cells needed to solve the energy generation and distribution problem before they could grow, increasing size presented numerous problems to early complex life.  How could a large organism supply energy and other nutrients to its cells?  Remove waste?  Move?  Life solved the problems by making structures and organs from specialized cells.  By the Cambrian Period’s end, animals had developed skeletons, gills, muscles, brains, circulatory systems, digestive and eliminative systems, nervous systems, respiratory systems, and organs that included eyes, livers, kidneys, etc.
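
That ten-percent figure has steep consequences, which a few lines of arithmetic make plain.  This sketch is illustrative; the starting energy and the chain itself are invented:

```python
# The "ten percent rule": roughly 10% of the energy at one trophic
# level reaches the next. Starting amount and chain are illustrative.

TRANSFER_EFFICIENCY = 0.10
levels = ["primary producers", "herbivores", "carnivores", "apex predators"]

energy = 100_000.0   # arbitrary units fixed by photosynthesizers
for level in levels:
    print(f"{level:18s} {energy:10.0f}")
    energy *= TRANSFER_EFFICIENCY
# primary producers      100000
# herbivores              10000
# carnivores               1000
# apex predators            100
```

Three links up the chain, 99.9% of the original energy is gone, which is why food chains are short and why a position atop one is so precarious.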

Just as the aftermath of the appearance of complex life was uninteresting from a biochemical perspective, as the amazingly diverse energy-generation strategies of archaea and bacteria were almost totally abandoned in favor of aerobic respiration, biological solutions to the problems that complex life presented were greatest during the Cambrian Explosion, and everything transpiring since then has been relatively insignificant.  Animals would never see that level of innovation again.  While investigating those eonic changes, many scientists have realized that the dynamics of those times might have been quite different from today’s, as once again Lyell’s uniformitarianism may be of limited use for explaining what happened.[196]  Also, scientists generally use a rule-of-thumb called Occam’s Razor, or parsimony, which states that with all else being equal, simpler theories are preferred.  Karl Popper, a seminal theorist regarding the scientific method, preferred simpler theories because they were easier to falsify.  However, this issue presents many problems, and in recent times, theories of mass extinction or speciation have invoked numerous interacting dynamics.  Einstein noted that the more elegant and impressive the math used to support a theory, the less likely the theory depicted reality.  Occam’s Razor has also become an unfortunate dogma in various circles, particularly organized skepticism, in which the assumptions of materialism and establishment science are defended, often quite irrationally.  Simplicity and complexity have been seesawing over the course of scientific history as fundamental principles.  The recent trend toward multidisciplinary syntheses has generally been making hypotheses more complex and difficult to test, although scientists’ improving toolset and ever-increasing and more precise data make the task more feasible than ever, at least in situations where vested interests are not interfering.

Phyla consist of body plans, which scientists have used to classify all life forms, and all significant animal phyla had appeared by the Cambrian Period’s end.[197]  The Cambrian Explosion has been difficult to explain, and there is still great controversy and many unanswered questions; it has also been difficult to explain why significant change stopped after the explosion.  Once the basic body plans appeared and biomes were filled, new plans never appeared again.  Why did all fundamental change stop?  The emerging view is the same as for why complex life went all in with aerobic respiration and never changed since then.  Not only could innovation confer great benefits, but once that path was embarked on, further travel along the developmental path made it continually less feasible to backtrack, start over, and take another path, or choose a fundamentally different path.  The history of life’s choices was reflected in organisms in several ways, and the source of that inertia began to be understood when biology and chemistry at the cellular and subcellular levels were investigated, particularly after DNA was sequenced and studied.  The fact that Hox genes have not significantly changed in several hundred million years points to the issue.  Hox genes have not changed because they control key developmental steps in embryonic development.  Not only do Hox genes work, but there are also no practical ways to significantly change them, as they lay the animal’s structural foundation.  Hox genes are called regulatory genes, and the nature of gene regulatory networks seems to be why animals have not fundamentally changed since the Cambrian Explosion.[198] 

Imagine a family having a custom home built and, after it was built, they decided that they wanted a basement, four extra stories, central gas heating rather than baseboard electric heating, and a swimming pool on the third floor.  It would not be feasible to renovate the home to give it those new features, especially if the family was already living in it.  They would need to build a new house from scratch, with a new foundation, and they would have to find a temporary home during the construction period.  But an animal has to live in its body all the time.  There is no way to redesign and rebuild an animal’s foundation while it lives in its body, and the biological superstructure built on the foundation was designed for that foundation.  A new superstructure would also have to be designed and built on the new foundation.  A six-chambered heart, for instance, could not just be invented and put into a human chest and work, or a second brain, or a third arm.  The kinds of changes that are feasible have to adhere to the basic structural and biochemical foundations that the phyla represent.

Once animals arrived on the evolutionary scene and filled most possible niches, new biological foundations, with superstructures built atop them, could not be developed with any hope of competing for resources that were already being consumed by existing food chains.  Developing the original animal body plans took millions of years.  There were many other possible body plans that could have been developed in the early days of animals, which might have worked wonderfully, but the chosen ones worked well enough for survival and reproduction, and once chosen, there was no going back.  There really could not be, unless all animal life was wiped out and protists could start over, as protists gave rise to animals (and eliminating all animal life would lead to great plant extinctions for starters, such as among flowering plants).  The biological commitments to those basic modes of existence had their own inertia, and it starts at the root, with the DNA. 

The primary unit of taxonomic organization is a clade, which consists of a single ancestor and all of its descendants.  The study of body features has been augmented by recent findings in molecular biology.  Many organisms have had their cladistic classification changed, and many more will in the future.[199]  Many common features among diverse organisms are due to convergent evolution and not ancestry, as organisms independently developed similar solutions to life’s challenges. 

Ediacaran traces show that some animals were mobile before the Cambrian Explosion.  Sponges were probably the first animals, but they were immobile except for their flagella drawing water through them, which carried food and oxygen in and waste out.  The first creatures that we would recognize as animals were probably worms crawling atop ocean sediments.  As lowly as the worm might seem, it would have needed muscles, bilateral symmetry, a circulatory and digestive/excretory system, and a nervous system run by a brain; that distant ancestor probably possessed Earth’s first brain.[200]  Some early worms may have even had rudimentary eyes.  And of possibly eonic importance, worms probably made the first poop.  The evolution of feces-producing animals may have been a seminal event in the organic carbon burial process.  Sponges may have also been largely responsible for initially removing oceanic carbon, which helped increase atmospheric oxygen and helped ventilate the oceans.[201]  Until then, organic carbon from dead life forms would not have settled to the ocean floor, but would have floated in the water column and been recycled by other life forms.  Although the hypothesis is considered marginally valid today, feces sinking to the ocean floor may have been how life’s burial of carbon began, as well as robbing sulfate-reducing bacteria in the water column of their nutrients and thus enabling oceanic waters to remain oxygenated.[202]  Ediacaran fauna did not burrow into ocean sediments, but deep burrowing was characteristic of Cambrian sediments.  There is debate today whether Cambrian burrowing was a consequence or cause of oxygenating the ocean floor.

As with those small worms that crawled along and burrowed into the newly oxygenated seafloor (or helped oxygenate it), many small animals with shells and mineralized parts appeared in the late Ediacaran, and a misnomer, “small shelly fauna,” was coined to account for them.  Those small animals also thrived in the Cambrian, and many of them were ancestors of larger Cambrian animals, which revealed more intermediate steps in the “explosion.”[203]

The Cambrian Explosion’s iconic animal was the trilobite.  As a child, I read every paleontology text in my elementary school’s library, and I have fond memories of imagining trilobite lives.  Was there love among the trilobites?  Among the protists?  The bacteria?  To a scientist, those questions might be unanswerable and even meaningless, but a mystic might pursue them.  I will not wax too mystically in this essay (I do it elsewhere), but that may well be the big question of life on Earth and an enduring mystery to humanity.  The nature of consciousness and love in the Cambrian, or the lack thereof, as much as it may always be a mystery, does not invalidate life’s arc through the evolutionary process; it only challenges materialism.

Creationist critiques of the evolutionary corpus, which all too often attempt to portray the Book of Genesis as literally true, often use the eye as evidence of their Creationist notions.  The eye is too complex and function-specific to be some kind of evolutionary development, so goes Creationist reasoning.  Even Darwin confessed to the problems that eyes posed for his theory of natural selection, stating that the notion of the eye’s being the product of natural selection seemed “absurd.”[204]  However, the evolutionary path to the fully developed eye appears pretty clear to today’s scientists.[205]  Below is the current conception of the evolutionary path of eyes.  (Source: Wikimedia Commons)

Eyes began with pigments such as chlorophyll that captured photons that initiated electrical impulses through chemical cycles in a new kind of specialized cell: the nerve cell.  Neurons are energy hogs and “high-tension electric lines” in animals.  Human brain tissue uses ten times the energy that non-organ tissues elsewhere in the body do.  The first eyes probably only detected light, and perhaps even infrared light, so that organisms could remain the proper distance from life-giving/destroying volcanic vents, for instance.  Hydrothermal vent shrimp today have such infrared sensors, which can be likened to naked retinas.[206]  The development of an eye with a lens was not a great evolutionary leap from rudimentary eyes, and a recent calculation shows how eyes with lenses could have developed from scratch in about a half-million years of evolution.[207]  Protozoa may have had the first precursors to eyes.  Once the eye evolved, its benefit was overwhelmingly obvious, and virtually all animals that live where vision would help them have eyes.  Animals that adopted subterranean existences have lost their vision and even their eyes.  It is thought today that the development of eyes was a key innovation in the arms race that would soon characterize the eon of animals, and might have even triggered it.  The Pax6 gene is common to all animals with eyes.  As with those other early life events, that gene supports the widely accepted idea that vision evolved only once.[208]  The purpose of all senses is to detect environmental information, which is in turn processed by the brain.  Even brainless plants can detect light and modify their behavior, such as plants turning and growing toward sunlight. 

The first brains are considered to have appeared with early mobile animals, which were probably worms, but precursors to nervous systems exist in unicellular eukaryotes.  Experiments performed long ago showed that flatworms can learn.  Animal behavior began with protists, and protozoans have numerous behaviors, from predation and parasitism to defensive activities.  Even materialist philosophers have argued that atoms possess consciousness.[209]  If a worm has a brain and can learn, it would seem to have consciousness of a sort; perhaps not as complex as mine or yours, but it surely seems to be consciousness.

The Cambrian Explosion marked the rise of arthropods, perhaps the most successful animal body plan ever, which accounts for more than 80% of all animal species today.  Arthropods such as the trilobite left spectacular fossils and were once thought to dominate the Cambrian Period, but in 1909 the Burgess Shale, one of the world’s most famous fossil beds, was discovered.  The Burgess Shale preserved the soft parts of Cambrian organisms, which is very rare, and interest in it was renewed in the 1960s, as the unique fossils coming from it began to be appreciated.  Mining the Burgess Shale for fossils will continue for the foreseeable future, and new and important findings are expected.  Recent finds in China and elsewhere have greatly improved scientific understanding of the Cambrian Explosion.

Grazing and predation far predated the Cambrian Explosion, but they took on new forms as animals became large.  Trilobites, for instance, rolled up like pill bugs to protect themselves from predators, and trilobites could be predators themselves.  The Burgess Shale produced the first complete fossil of Anomalocaris, which is a cousin of the bizarre-looking Opabinia.  Anomalocaris probably was the Cambrian Period’s apex predator, and Chinese specimens reached up to two meters in length; it was the leviathan of its time.  It is controversial whether Anomalocaris could have preyed upon armored arthropods or shellfish, as its mouth may have been unsuited for it.  But it might have grabbed trilobites and torn them apart, which may have led to their pill-bug defensive strategy.

An important evolutionary principle is organisms’ developing a new feature for one purpose and then using that feature for other purposes as the opportunity arises.  As complex life evolved on the newly oxygenated seafloors, several immediate survival needs had to be addressed.  To revisit the hierarchy of nutrients that a human needs, if an oxygen-dependent animal does not have access to oxygen, it means immediate death.  Obtaining oxygen would have been the salient requirement for early complex life that adopted aerobic respiration as its primary respiration process, which is how nearly all animals today respire.  While animals in low-oxygen environments have adapted to other ways of respiring (or perhaps never relinquished them in the first place), they are all sluggish creatures and would have quickly lost in the coming arms race.  Collagen, a critical connective tissue in animals, requires oxygen for its synthesis, and that was one of numerous oxygen dependencies that animals quickly adopted during the Cambrian Explosion.[210] 

Diffusion works for animals that are no more than a couple of millimeters thick, but for larger animals a respiration system was necessary.  The rise of the arthropods has been an enduring problem for paleobiologists.  Why was the arthropod so successful, particularly in the beginning?  Segmented animals dominated Cambrian seas, and segmentation provides for repeated features.  Segments obviously became important for locomotion but, for arthropods, segmentation appears to have conferred the more important advantage of distributed oxygen absorption.  Each trilobite leg had an attached gill, and leg motion constantly drew fresh oxygenated water over each gill.  Arthropods never developed the kinds of lungs that vertebrates have, or the pump gills of fish and other aquatic animals.  Early arthropods breathed by moving their legs.  Peter Ward’s recent hypothesis is that segments were first used for respiration, to provide a large gill surface area, and using the segments for locomotion came later.  For trilobites, the same functionality that pushed water over gills was also coopted for food intake.[211]  Also, the leg-mounted gill was necessary because of an arthropod’s body armor; oxygen could not be absorbed through tough exoskeletons. 

Every aerobic aquatic animal had to solve the problem of extracting oxygen from the water, and there was diversity in that accomplishment.  Key Cambrian animals such as sponges and corals had very high-surface-area-to-body-volume ratios, which allowed diffusion to provide their oxygen.  Immobile animals such as sponges and coral had to position themselves where oxygenated water flowed past or through them.  Sponges work like chimneys, designed to passively draw water through them.  The position and structure of reefs facilitated those oxygen-providing dynamics, so corals helped create the conditions that sustained them; the calcified exoskeletons of corals dissuaded predation and built the reefs. 

The Cambrian’s global ocean contained far less oxygen than today’s.  Being newly and probably inconsistently oxygenated by oceanic currents was only part of the problem.  The Cambrian oceans were warmer than today’s oceans, perhaps far warmer, such as 40° C and higher for the tropical ones.  Water’s ability to absorb oxygen declines as it gets warmer.  Water heated from 10° C to 40° C will lose 40% of its ability to absorb oxygen.  The phenomenon of warmer water absorbing less oxygen contributed to many instances of anoxic waters during the eon of complex life, and particularly in the warmer, earlier periods.
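
The decline is easy to see in standard solubility figures.  The values below are rounded handbook numbers for fresh water at sea-level pressure (seawater holds somewhat less), and they land close to the essay’s 40% figure:

```python
# Approximate dissolved-oxygen saturation for fresh water at 1 atm,
# in mg/L, at various temperatures (rounded handbook values).
o2_saturation = {0: 14.6, 10: 11.3, 20: 9.1, 30: 7.6, 40: 6.4}

loss = 1 - o2_saturation[40] / o2_saturation[10]
print(f"Warming water from 10 C to 40 C cuts its oxygen capacity by ~{loss:.0%}")
# -> ~43%
```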

Members of another phylum, Brachiopoda, which superficially resemble clams, were successful in the Cambrian, but if their shells are opened, they look very different inside.  Inside the shell is mostly empty space, with ciliated tentacles that perform a dual function of filtering food and absorbing oxygen.[212]  The cilia pump water through the shell and over the tentacles, which allows such animals to be immobile.

Another winner in the Cambrian Period was the mollusk phylum, which today comprises nearly a quarter of all marine animals.  As with arthropods and corals, mollusks developed predation-defending armor, and their variation was shells.  Mollusks include the cephalopod, bivalve, and gastropod classes.  Like brachiopods, mollusks developed “power gills,” whereby they actively pumped water across their gills using cilia, and bivalves usually also use their gills to catch food.  One early class of mollusks, which may be the first mollusks, had the repeated gill structure of the trilobites, but their gills lined the inside of their shells, which supports the idea that shells may have been developed for improving respiration first and predation-protection second.[213]  There is even evidence that a gastropod-like animal might have lived on the seashore about 510 mya and might have been the first animal to visit land.

But the most impressive dual-use innovation in mollusks is what cephalopods invented.  Their gill pumps are quite muscular and jet water over their gills.  That jet also propels the animal.  Jet propulsion is not an energy-efficient means of transportation, but the cephalopod’s ability to pass oxygen-bearing water over its gills is unmatched.  Cephalopods can live in waters too hypoxic for fish to survive.[214]  In the coming Ordovician Period, cephalopods would be apex predators of marine biomes, and they would hold that distinction for a long time.  Cephalopods are today’s most intelligent invertebrates; the octopus performs surprising feats of intelligence and has the largest brain-to-body-size ratio of all invertebrates.  It is thought that the skills needed for predation stimulated cephalopodan intelligence.  Today, the nautilus is the only survivor of that lineage of Ordovician apex predators.

But the branch of the tree of life that readers might find most interesting led to humans.  Humans are in the chordate phylum, and the last common ancestor that founded the Chordata phylum is still a mystery and understandably a source of controversy.  Was our ancestor a fish?  A sea squirt?[215]  Peter Ward made the case, as have others for a long time, that it was the sea squirt, also called a tunicate, which in its larval stage resembles a fish.  The nerve cord in most bilaterally symmetric animals runs below the belly, not above it, and a sea squirt that never grew up may have been our direct ancestor.  Adult tunicates are also highly adapted to extracting oxygen from water, even too much so, with only about 10% of today’s available oxygen extracted in tunicate respiration.  It may mean that tunicates adapted to low oxygen conditions early on.  Ward’s respiration hypothesis, which makes the case that adapting to low oxygen conditions was an evolutionary spur for animals, will repeatedly reappear in this essay, as will challenges to it.  Ward’s hypothesis may be proven wrong or may not have had the key influence that he attributes to it, but it also has plenty going for it.  The idea that fluctuating oxygen levels impacted animal evolution has been gaining support in recent years, particularly in light of recent reconstructions of oxygen levels during the eon of complex life, called GEOCARBSULF and COPSE, which have yielded broadly similar results, but their variances mean that much more work needs to be performed before oxygen levels can be confidently placed on the geologic timescale, if they ever can be.[216]  Ward’s basic hypothesis is that when oxygen levels are high, ecosystems are diverse and life is an easy proposition; when oxygen levels are low, animals adapted to high oxygen levels go extinct, the survivors are adapted to low oxygen with body plan changes, and their adaptations helped them dominate after the extinctions.[217]  The chart used to support his hypothesis has a pretty wide range of potential error, particularly in the early years, and it also tracked atmospheric carbon dioxide levels.  The challenges to the validity of a model based on data with such a wide range of error are understandable.  But some broad trends are unmistakable, as they are with other models, including generally declining carbon dioxide levels, some huge oxygen spikes, and a generally seesawing relationship between oxygen and carbon dioxide levels, which a geochemist would expect.  The high carbon dioxide level during the Cambrian, of at least 4,000 PPM (the "RCO2" in the below graphic is a ratio of the calculated CO2 levels to today's levels), is what scientists think made the times so hot.[218]  (Permission: Peter Ward, June 2014)

As will be explored in this essay, all of the first four major mass extinctions of complex marine life have anoxia as a suspected contributing cause, so oxygen is a major area of interest among extinction specialists.  Whether oxygen levels were also significant contributing causes of evolutionary innovation is another area of interest today.  Again, the energy-generating superiority of aerobic respiration led to food chains.  Even if the first animals did not respire aerobically, they adapted to aerobic respiration early on and then became dependent on it.  There would be no going back for animals; all except those few adapted to hypoxic and anoxic environments went “all in” with aerobic respiration.

An irony of fossilization is that conditions hostile to life usually left the best-preserved fossils, because nothing disturbed the sediments, which were anoxic and often sulfidic.  In the sea sediments that mark the geologic periods, white limestone and black shale are typical layers.  Limestone means oxygenated oceans, and black shales and mudstones mean anoxic conditions.  The black color means reduced carbon, as the ecosystems could not recycle the carbon and it was instead preserved into the sediments which have been the primary source of the oil and gas burned in today’s industrialized world.

Supercontinents tend to result in Canfield Oceans, and land near the poles can help initiate an ice age.  For the coming geologic periods, the configurations of the continents were critical variables for determining the ecosystems that existed, whether there were anoxic oceans, greenhouse conditions, ice ages, extinction events, or adaptive radiations.  Helpful animations exist to make the configurations easier to visualize. 

The Cambrian Explosion had several phases to it, with explosions of life and mass extinctions, and a general atmospheric oxygen rise accompanied it.  Anoxic conditions coincided with extinctions.  Prokaryotes would not be that affected by what complex life was doing (although anaerobes were generally driven underground and into the seafloor), but the rise of complex life led to new ecosystems.  Before the rise of animals, the seafloor was smooth and “stiff,” but burrowing animals had profound impact on seafloor ecosystems and may have played a prominent role in creating the ecosystems themselves.  Corals created new ecosystems, as life terraformed Earth.

A recent study shows a more dramatic rise and fall during the Cambrian than the GEOCARBSULF model does, with oxygen levels seesawing and doubling to around 30% in the Late Cambrian.[219]  Those varying levels coincide with evolutionary radiations and extinctions, and questions are raised whether they were triggering causes or not.  They may have been related, and many of today’s specialists suspect that they played key causative roles.

Around 530 mya, the first brachiopods, reef-building animals, and fish appear in the fossil record, and trilobites first appear in the fossil record about 521 mya, only a few million years before a mass extinction about 517 mya, which wiped out those early reef-building organisms and nearly all of the small shelly fauna.  As happened with Ediacaran fauna, those early extinctions extinguished major portions of the ecosystems.  With the rise of DNA studies, scientists are trying to recover the tree of life’s lost portions, looking for “ghost ancestors,” which did not leave fossils that have been discovered.[220]  This is a new area of study, with current findings quite speculative, but we can be confident that many clades were born and went extinct, all the way up to the phylum level and maybe even higher, particularly in the Ediacaran and Cambrian periods, without leaving a trace in today’s known fossil record.  Specialists in these areas are always calling for more fossil-hunting, analysis with new tools, and the like.  At about 502 mya, another extinction event wiped out about 40% of marine genera, probably triggered by anoxia.

The Middle Cambrian years were the Golden Age of Trilobites, when they reached peak dominance.  It is thought that they filled vacant niches in the wake of those early mass extinctions.[221]  The early corals went extinct, and the rise of the demosponges followed (those early corals are currently classified as sponges, although the issue is controversial[222]).  Sponge reefs dominated in later times, and sponges have perhaps been the most successful early animals; they still thrive today.  Below is an artist's conception of the Cambrian seafloor.  (Source: Wikimedia Commons)

There is evidence that rising and falling sea levels, probably the result of a periodically growing and shrinking ice cap at the South Pole (as the continent Gondwana was there), contributed to the radiations and extinctions that marked the Cambrian.  Trilobites went through several boom-and-bust phases in the Cambrian.  Many extinctions were more local than global, but at the end of the Cambrian, most trilobites went extinct and would never dominate again.  They survived until the greatest mass extinction of all, the Permian extinction, and then disappeared from Earth, at least until the rise of paleontology and reconstructions to fascinate children and adults.  The leading hypothesis is that rising seas caused anoxia and led to the end-Cambrian extinctions at about 485 mya.[223]  That this may have coincided with a rise in atmospheric oxygen is not necessarily contradictory; all the oxygen in the world will be useless to deep-ocean and seafloor life if there are not mechanisms, primarily currents, to introduce atmospheric gases into the oceans.  Surface life can thrive in high-oxygen conditions while the seafloor dies from lack of oxygen, especially when the surface rises farther above the seafloor.  Oxygenation and anoxia during the Cambrian may well have been sporadic and regional, and research to unravel the dynamics is ongoing.[224]  If the evidence was better, the Cambrian extinction could rank among the Big Five, but we may never know.[225]  The older the fossils, the less likely they will survive subsequent geological processes.  Cambrian fossil beds discovered so far are uncharacteristically rich, and the next period, the Ordovician, is relatively impoverished.  It is suspected that unique geological and fossil-preservation processes led to the Cambrian’s gold mine of fossils.

In summary, the deadly waltz of predator and prey characterized the Cambrian, and complex ecosystems were born.  Again, from a biochemical and morphological perspective, all events since the Cambrian have been relatively insignificant, but are still fascinating and led to the bipedal ape writing these words.

It can be helpful at this juncture to grasp the cumulative impact of life’s innovations: harnessing energy gradients, inventing enzymes, inventing photosynthesis, inventing the distributed energy generation centers that made complex cells possible, and inventing aerobic respiration.  Pound-for-pound, the complex organisms that began to dominate Earth’s ecosphere during the Cambrian Period consumed energy about 100,000 times as fast as the Sun produced it.[226]  Life on Earth is an incredibly energy-intensive phenomenon, powered by sunlight.  In the end, only so much sunlight reaches Earth, and it has always been life’s primary limiting variable.  Photosynthesis became more efficient, aerobic respiration was an order-of-magnitude leap in energy efficiency, the oxygenation of the atmosphere and oceans allowed animals to colonize land and ocean sediments and even fly, and life’s colonization of land allowed for a great leap in biomass.  Life could exploit new niches and even help create them, but the key innovations and pioneering were achieved long ago.  If humanity attains the FE epoch, new niches will arise, even of the artificial off-planet variety, but all other creatures living on Earth have constraints, primarily energy constraints, which produce very real limits.  Life on Earth has largely been a zero-sum game for several hundred million years, but the Cambrian Explosion was one of those halcyon times when animal life had its greatest expansion, not built on the bones of a mass extinction so much as blazing new trails.
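
To see roughly where that 100,000-fold figure comes from, below is a back-of-envelope sketch.  The solar luminosity and mass are standard values; the 20 W/kg figure for a small, active animal is an illustrative assumption of mine, not a number from the cited study.

```python
# Back-of-envelope sketch: the Sun's power output per kilogram of its mass
# versus the metabolism of a small, active animal.  Solar figures are
# standard; the animal figure (~20 W/kg) is an illustrative assumption.
SUN_LUMINOSITY_W = 3.8e26  # total solar output, in watts
SUN_MASS_KG = 2.0e30       # solar mass, in kilograms

sun_w_per_kg = SUN_LUMINOSITY_W / SUN_MASS_KG  # about 2e-4 W/kg
animal_w_per_kg = 20.0                         # assumed small, active animal

print(f"Sun: {sun_w_per_kg:.1e} W/kg")
print(f"Animal-to-Sun ratio: {animal_w_per_kg / sun_w_per_kg:,.0f}")  # ~100,000
```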

The twin ideas of efficiency and resilience are important.  Efficiency is about getting more for less, particularly energy.  Although aerobic respiration’s energy efficiency allowed food chains to develop, food chains create interactions and dependencies, and the entire structure can be less resilient than simpler systems.  Remove one part of the food chain and the entire ecosystem can collapse, and it can be any part of the chain, from top to bottom.  Making systems more efficient, as the last bits of energy are wrung from the system, reduces their resilience to the real world’s surprises.  That dynamic is probably a key contributing factor in mass extinctions during the eon of complex life.  Modern ecosystem studies are making the connections clear and are being applied to the dynamics of human civilizations; C. S. Holling’s work has been seminal in this regard.[227]  Complex ecosystems pass through adaptive cycles of exploitation, conservation, release, and reorganization, and three dimensions of interaction are involved: potential, connectedness, and resilience.[228]  In general, simple systems are more stable than complex ones, which is another reason why any mass extinctions of prokaryotes, if there were any, would have been far less cataclysmic than those of complex life.

All species live within their niches, which are always primarily energy niches, in which an organism can obtain enough energy and preserve it for long enough to produce viable offspring.  There are usually energy tradeoffs; efficiency can be sacrificed for rate of ingestion, as with hindgut fermenters: each meal yields less, but input increases enough to make the added cost of obtaining it worthwhile.  The primary measure of an organism’s success is its energy surplus, which is related to resilience.  As an example, today a trout can live in a fast-moving current where food quickly arrives, which is efficient from an input perspective, but the energy spent swimming to maintain a presence in the current reduces the net energy surplus.  A slower stream will provide less food per unit of time, but it also takes less energy to live there.  In trout studies, the dominant trout will live where the optimal energy tradeoff exists, which leads to the greatest energy surplus.  Less dominant trout will be pushed into the faster water, and the least competitive trout will be pushed into calm water and slowly starve.  No species will last for long if it does not have a high enough energy surplus to survive the vagaries of existence.  The energy surplus issue has received little emphasis in biology during the past century, which has stressed a species’s “fitness,” but surplus is the key variable for understanding that fitness.[229]
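
A toy model can make that trout tradeoff concrete.  The functional forms and constants below are illustrative assumptions of mine (food delivery rising linearly with current speed, swimming cost rising with the cube of speed), not data from the trout studies:

```python
# Toy model of a trout's energy surplus across current speeds: faster water
# delivers more food, but swimming against it costs disproportionately more.
# The functional forms and constants are illustrative assumptions only.
def energy_surplus(speed, food_per_speed=10.0, cost_coefficient=0.5):
    intake = food_per_speed * speed            # food delivery rises with current
    swim_cost = cost_coefficient * speed ** 3  # cost rises steeply with speed
    return intake - swim_cost

speeds = [0.5 * i for i in range(1, 11)]  # candidate current speeds
best = max(speeds, key=energy_surplus)
print(f"Best speed: {best}, surplus: {energy_surplus(best):.1f}")
# The dominant trout claims the spot near this optimum; subordinates are
# pushed into faster (costlier) or calmer (food-poorer) water.
```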

Also, just as no fundamentally new body plans appeared after the Cambrian Explosion, modern ecosystems seem constrained by body size.  Body sizes have similar “slots,” and body sizes outside of those slots are relatively rare.  However, successful innovation usually happens at the fringes.[230]  The fringes are where survival is marginal and innovations carry a high risk/reward ratio.  Most innovations fail, but a successful one can become universally dominant, such as those biological innovations that are considered to have happened only once.  There have been countless failed biological innovations during life’s history on Earth, many of which might have seemed brilliant but did not survive the rigors of living.

The rise of life was based on energy, information, and the ability to manipulate them.  Just as the foundation of complex life remained basically unchanged since the Cambrian Explosion, energy systems form the foundations for all ecosystems and civilizations.  While the superstructure can change and can seem radical at times, the foundation dictates what kind of superstructure can exist.  A huge superstructure built on a small foundation, if it can be built at all, will not be very resilient (the first earthquake or storm levels it), and will not last long.  Today, industrialized civilization is burning through its foundational energy sources a million times as fast as they were created and will largely deplete all of them in this century at the current trajectory.  On the geologic timescale, the rise and fall of humanity may happen in the blink of an eye and create more ecosystem devastation than the asteroid that wiped out the dinosaurs; it would happen faster than all previous mass extinctions other than that asteroid’s effect.  Arthropods may then come to rule the world once again.

 

Complex Life Colonizes Land

World map in late Devonian (c. 370 mya) (Source: Wikimedia Commons) (map with names is here)

 

Chapter summary:

With the extinction that ended the Cambrian Period, animal life’s greatest period of innovation was finished, but the next geological period, the Ordovician (c. 485 to 443 mya), still had dramatic changes.  The Ordovician would not see any new phyla of note, but it was a time of great diversification, as new niches were created and inhabited, and marine life reached modern levels of abundance and diversity.  Food chains became complex and could be called food webs.  More so than the Cambrian Explosion, the Ordovician “explosion” was an adaptive radiation.[231]

The continental configuration when the Ordovician began was like the Cambrian’s, with shallow hot tropical seas.  The Paleo-Tethys Ocean began forming in the Ordovician.  The first reefs that would impress modern observers were formed in the Ordovician.  The animals that built those corals (1, 2, 3) differed from the Cambrian reef builders, but there were no schools of fish swimming around the reefs, as the Ordovician predated the rise of fish.  Fish existed (1, 2, 3), but they were armored, without jaws, and lived on the seafloor.  The first sharks may have appeared in the Ordovician, but because they had cartilaginous skeletons, the fossil record is equivocal.  Some fish had scales, and an eel-like fish might have even had the first teeth.  Teeth and claws were early energy technologies; energy applied by muscles could be concentrated in hard points or plates that could crush or penetrate other organisms or manipulate the environment.

Planktonic animals became prevalent and were critical aspects of the growing food chains.  Trilobites and brachiopods flourished, but the Ordovician’s most spectacular development might have been the rise of the mollusk.  Bivalves exploded in number and variety, and nautiloid cephalopods became the apex predators of Ordovician seas; some were gigantic, with one species reaching more than three meters long and another six meters or more.  The largest trilobite yet found lived in the late Ordovician.  Below is an artist's conception of the Ordovician seafloor.  (Source: Wikimedia Commons)

Gigantism is a controversial subject.  Islands often produce giant and dwarf species, which results from energy dynamics; in general, on islands, large species tend to get smaller and small species tend to get larger.  A landmark study of polar gigantism among modern seafloor crustaceans concluded that the oxygen level was the key variable.[232]  Recall that colder water can absorb more oxygen.  Size is a key “weapon” in evolution’s arms race.  The bigger the prey, the better it could survive predation, and the bigger the predator, the more likely it would kill a meal.  Since the 1930s, there have been continual controversies over size and metabolism, energy efficiency, complexity, structural issues such as skeleton size and strength, and so on.[233]  In its final cost/benefit analysis, complex life decided that bigger was better, and much larger animals lived in the Ordovician than in the Cambrian.  Bigger meant more complex, and more complexity meant more parts, usually more moving parts, and those required energy to run.  Whether increasing size was due to more oxygen availability, more food availability, greater metabolic efficiency, reduced risk of predation, or increased predatory success, it was always a cost/benefit analysis, and the primary parameter was energy: how to get it, how to preserve it, and how to use it.[234]  The “analysis” was probably never a conscious one, but the result of the “analysis” was what survived and what did not.

Peter Ward suggested that the superior breathing system of nautiloids led to their dominance.[235]  Nautiloids do not appear in the fossil record until the Cambrian’s end.  Only one family of nautiloids survived the end-Cambrian extinction and they quickly diversified in the Ordovician to become dominant predators.  They replaced arthropods atop the food chain.  During the Ordovician, nautiloids developed a sturdy build and began spending time in deep waters, where their superior respiration system enabled them to inhabit environments that would-be competitors could not exploit. 

Although the Ordovician’s shallow seas were fascinating abodes of biological innovation, of perhaps more interest to humans was the first colonization of our future home: land.  Land plants probably evolved from green algae, and although molecular clock studies suggest that plants first appeared on land more than 600 mya, the first fossil evidence of land plants dates to about 470 mya, in the mid-Ordovician; those pioneers would have been moss-like plants, and they seem to have preceded land animals by about 40 million years.[236]

The Ordovician was characterized by diversification into new niches, even creating them, but those halcyon times came to a harsh end in the first of the Big Five mass extinctions: the Ordovician-Silurian mass extinction.  The event transpired about 443 mya, and was really two extinction events that combined to comprise the second greatest extinction event ever for marine animals.  About 85% of all species, nearly 60% of all genera, and around 25% of all families went extinct.[237]  The ultimate cause probably was the drifting of Gondwana over the South Pole, which triggered a short, severe ice age.  As our current ice age demonstrates, ice sheets can advance and retreat in cycles, and they appeared to do so during the Ordovician-Silurian mass extinction.  There is evidence that the ice age was triggered by the volcanic event that created the Appalachian Mountains.  Newly exposed rock from volcanic mountain-building is a carbon sink: the chemical weathering of fresh basaltic rock (volcanoes spew basalt) consumes atmospheric carbon dioxide.  The combination of Appalachian volcanism ending and the subsequent sequestering of atmospheric carbon dioxide may have triggered an ice age.  The ice age waxed and waned for about 40 million years, but some events were calamitous.

Two primary events drove the first phase of the Ordovician-Silurian mass extinction: the ice age caused the sea level to drop drastically, and the oceans became colder.  When sea levels fell at least 50 meters, the cooling shallow seas receded from continental shelves and eliminated entire biomes.[238]  Many millions of years of “easy living” in warm, shallow seas were abruptly halted.  Several groups were ravaged, beginning with the plankton that formed the food chain’s base.  About 50% of brachiopod and trilobite genera went extinct in the first phase, and cool-water species filled the newly vacant niches.  Bivalves were largely found in seashore communities, were scourged when the seas retreated, and lost more than half of their genera.  Nautiloids were also hit hard, and about 70% of reef and coral genera went extinct.  How the retreating seas triggered the extinctions (simple exposure to the air, changing and cooling currents, shifting nutrient dispersal patterns, altered ocean chemistry, or other dynamics) is still debated, and those extinction events are being subjected to intensive research in the early 21st century.

After as little as a half-million years of bedraggled survivors adapting to ice age seas, the ice sheets retreated and the oceans rose.  The thermohaline circulation of the time may have also changed, and upwelling, anoxia, and other dramatic chemistry and nutrient changes happened.  Those dynamics are suspected to be responsible for the second wave of extinctions.  There also seem to have been hydrogen sulfide events.[239]  Atmospheric oxygen levels may have fallen from around 20% to 15% during the Ordovician, which would have contributed to the mass death.  Seafloor anoxia seems to have been particularly lethal to continental-shelf biomes, possibly all the way to shore.  It took the ecosystems millions of years to recover from the Ordovician-Silurian mass extinction, but basic ecosystem functioning was not significantly altered in the aftermath, which is why a mass extinction during the Carboniferous has been proposed as a more significant extinction event.  The first major oil deposits of the Middle East were laid down by the anoxic events that ended the Ordovician.  Most oil deposits were formed in the era of dinosaurs, and the processes of oil deposit formation were similar; they were related to oceanic currents.  When currents arrived onshore along the bottom and the prevailing winds blew the surface waters offshore, a nutrient trap formed and anoxic sediments could accumulate.  When the winds blew onshore and water exited along the bottom, the waters became clear and are known as nutrient deserts.  The oscillation between nutrient traps and nutrient deserts can be seen in oil deposit sediments.[240]  In the mid-20th century, Soviet scientists revived an old hypothesis that oil was not formed from organic marine sediments, a variation of which was also championed by Thomas Gold, but improving tools and investigation invalidated those hypotheses.  No petroleum geologists today seriously consider the abiogenic origin of hydrocarbons.  Oil sediment formation events seem related to mantle and crust processes that created high sea levels and anoxic events, and the last great one was in the Oligocene, which formed more than 10% of the world's oil deposits.[241]

The Silurian Period, which began 443 mya, is short for the geologic time scale, lasting “only” 24 million years and ending about 419 mya.  The Silurian was another relatively hot period with shallow tropical seas, but Gondwana still covered the South Pole.  But the ice caps eventually shrank, which played havoc with the sea level and caused minor extinction events (1, 2, 3), the last of which ended the Silurian and also created more Middle East oil deposits.  Reefs made a big comeback, extending as far as 50 degrees north latitude (farther north than where I live in Seattle).  According to the GEOCARBSULF model, oxygen levels rose greatly during the Silurian, rebounding from a low in the mid-Ordovician; they may have reached 25% by the early Devonian, which followed the Silurian.  Coincident with rising oxygen levels, more giants appeared.  Scorpion-like eurypterids were the largest arthropods ever, and the largest specimen reached nearly three meters near the Devonian’s oxygen highpoint.  The first land-dwelling animals - spiders, centipedes, and scorpions - came ashore during the Silurian between 430 mya and 420 mya.  The first insects appeared about that time, and all of the first insects flew.[242]  As of 2014, Donald Canfield believed that the gigantism among arthropods and other oxygen effects were due to Earth's atmospheric oxygen reaching modern levels for the first time in the eon of complex life, not exceeding them.[243]  I expect the oxygen controversy to outlive me.

Beetles first appeared in the fossil record in the late Carboniferous.  Arthropods became dominant predators once again, although cephalopods patrolled the reefs as apex predators.  Brachiopods reached their greatest size ever at that time, although the succeeding Devonian Period has been called the Golden Age of Brachiopods.[244]  As oxygen levels rose, trilobites lost segments and, hence, gill surface area, which may have been a gamble that ultimately ended in extinction.  When the Devonian extinction happened during anoxic events, trilobites steeply declined and thereafter only eked out an existence until the Permian extinction finally eliminated them from the fossil record.  Fish began developing jaws in the Silurian, which was a great evolutionary leap and arguably the most important innovation in vertebrate history.  Jaws, tentacles, claws… prehensile features were advantageous, as animals could more effectively manipulate their environments and acquire energy.  On land, the colonization began as mossy “forests” abounded, and the first vascular plants made their appearance, although they were generally less than a hand-width tall when the Silurian ended, and nothing reached even waist-high.

Oxygen levels appeared to keep rising into the early Devonian (c. 419 mya to 359 mya) and then declined over most of the period.  The Devonian marked the dramatic rise of land plants and fish in what is called the Golden Age of Fishes, and the period saw the first vertebrates that enjoyed a terrestrial existence.  Armored fish supplanted arthropods and cephalopods during the Devonian as the new apex predators and weighed up to several tons.  Sharks also began their rise.  The Devonian has been called the Golden Age of Armored Fish.[245]  Rising oxygen levels have been proposed as causing the spread of plants and large predatory fish, although a school of thought challenges high-oxygen explanations for many evolutionary events; Nick Butterfield is a prominent challenger.[246]

Bony fish (both ray-finned and lobe-finned) first appeared in the late Silurian and abounded in the Devonian.  All bony fish could breathe air in the Devonian, which provided more oxygenated blood to their hearts.[247]  Ray-finned fish largely lost that ability and their lungs became swim bladders, which aided buoyancy, like gas-filled nautiloid shells.  Ray-finned fish can respire while stationary (unlike cartilaginous fish, and sharks most famously) and are the high-performance swimmers of aquatic environments; they comprise about 99% of all fish species today, although they were not dominant during the Devonian.  All fish devote a significant portion of their metabolism to maintaining their water concentrations.  In salt water, fish have to push out salt, and in fresh water, they have to pull in water, using, on average, about 5% of their resting metabolism to do so.  Brine shrimp use about a third of their metabolic energy to manage their water concentration. 

Today’s lungfish are living fossils that first appeared at the Devonian’s beginning, which demonstrates that the ability to breathe air never went completely out of fashion.  That was fortuitous, as one class of lobe-finned fish developed limbs and became our ancestor about 395 mya.  The first amphibians appeared about 365 mya.  In the late Devonian, lobe-finned and armored fish were in their heyday.  The first internally fertilizing fish appeared in the Devonian, producing the first mother that gave birth.[248]  A lightweight descendant of the nautiloids, the ammonoids, appeared in the Devonian and subsequently enjoyed more than 300 million years of existence.  They often played a prominent role, until they were finally rendered extinct in the Cretaceous extinction.  Nautiloids retreated to deep-water ecosystem margins and still exploit that niche today.

Land colonization was perhaps the Devonian’s most interesting event.  The adaptations invented by aquatic life to survive in terrestrial environments were many and varied.  Most importantly, the organism would no longer be surrounded by water and had to manage desiccation.  Nutrient acquisition and reproductive practices would have to change, and the protection that water provided from ultraviolet light was gone; plants and animals devised methods to protect themselves from the Sun’s radiation.  Also, moving on land and in the air became major bioengineering projects for animals.  Breathing air instead of water presented challenges.  The pioneers who left water led both aquatic and terrestrial existences.  Amphibians had both lungs and gills, and arthropods, whose exoskeletons readily solved the desiccation and structural support problems, evolved book lungs to replace their gills, which were probably book gills.

All such developments had to happen in water first, for a successful move to land.[249]  The evidence seems to support the idea that life first began to colonize land via freshwater ecosystems, which provided a friendlier environment than seashores do.  The first arthropods ashore were largely detritivores, eating dead plant matter, and those that followed added live plants and early detritivores to their diets.[250]  The land-based ecosystems that plants and arthropods created became nutrient sources that benefited shoreline and surface communities, but the vertebrate move to land was not initiated by the winners of aquatic life.  To successful aquatic animals, the shore was not a new opportunity to exploit but a hazardous boundary of existence best avoided.  Tetrapodomorphs probably made the vertebrate transition to land as marginal animals eking out a frontier existence.[251]  The fins that became limbs originally developed for better swimming, and further muscular-skeletal changes enabled them to exploit opportunities on land.  Two key reasons for the migration onto land may have been basking (absorbing energy) and enhanced survival of young from predation (preserving energy).[252]  The five digits common to limbed vertebrates were set around this time; early tetrapodomorphs had six, seven, and eight digits, and the digit losses were probably related to using feet on land.[253]

But plants had to migrate before animals did, as they formed the terrestrial food chain’s base.  Along with desiccation issues, plants needed structures to raise them above the ground, roots, a circulatory system, and new means of reproduction.  Large temperature swings between day and night also accompanied life on land.  Plants developed cuticles to conserve moisture and a circulatory system that piped water from the roots up into the plant and transported nutrients where they were needed; photosynthesis itself needed water to function.  Vascular plants pumped water through their tissues in tubes by evaporating water from their surface tissues and pulling up more new water behind the evaporating water via the “chain” of water’s hydrogen bonds.  The last common ancestor of plants and animals reproduced sexually, and sexual reproduction is how nearly all eukaryotes reproduce today, although many ways exist to reproduce asexually.  The first vascular plants are considered to have attained their height in order to spread their spores.[254]  The Rhynie chert in Scotland is the most famous fossil bed that records complex life’s early colonization of land.

The early Devonian was a time of ground-hugging mosses and a strange, lichen-like plant that towered up to eight meters tall.  The oldest vascular plant division (“division” in plants is equivalent to “phylum” in animals) still existing first appeared about 410 mya, and today’s representatives are mostly clubmosses.  In the late Devonian, horsetails and ferns appeared and still exist.  Seed plants also developed in the late Devonian, which enabled plants to quickly spread to higher and drier elevations and cover the landmasses, as seed plants did not need a water medium to reproduce as spore-based systems did.  In spore systems, which are partly asexual but have a sexual stage, a water film was required for the sperm to swim to the ovum.  The first trees appeared about 385 mya (1, 2), could be ten meters tall, and formed vast forests, but they reproduced with spores and so needed moist environments.  The first rainforests appeared in the Devonian and reached their apogee in the Carboniferous.  Those rainforests produced Earth’s first thick coal beds.  The Devonian was the Cambrian Explosion for plants and enabled animals to colonize land.  The plants that best succeeded in the Devonian were those with the highest energy efficiencies, which involved size, stability, photosynthesis, internal transport, and reproduction.[255]  Plants had different dynamics of extinction than animals did, as plants are more vulnerable to climate change and extinction via competition, but are less vulnerable to mass extinction events than animals.[256]

One of the most important plant innovations was lignin, a polymer whose original purpose appears to have been creating tubes for water transport, and which also helped provide the structural support that let trees grow tall and strong.  Without lignin, there would not have been any true forests and probably not much in the way of complex terrestrial ecosystems.  Lignin was also responsible for forming the coal beds that powered the early Industrial Revolution, but that coal-bed formation would not happen in earnest until the next geologic period, the aptly named Carboniferous.  It took more than a hundred million years for organisms to appear that could digest lignin.  A class of fungi gained the ability to digest lignin about 290 mya, and by that time, most of what became Earth’s coal deposits had already been buried in sediments.[257]  As with other seminal developments in life’s history, the ability to digest lignin seems to have evolved only once.  The enzyme that fungi use to digest lignin has also been found in some bacteria, but fungi are the primary lignin-digesters on Earth.

From a biomass perspective, the Devonian’s primary change was the proliferation of land plants.  Below is an artist's conception of a Devonian forest.  (Source: Wikimedia Commons)

Land plants comprise about half of Earth’s biomass today and prokaryotes provide the other half.  Terrestrial biomass is 500 times greater than marine biomass, and terrestrial plants have about a thousand times the biomass of terrestrial animals, so animals constitute less than 0.1% of Earth’s biomass.  The ecologies of marine and terrestrial environments are radically different.  Virtually all primary producers in marine environments are completely eaten and comprise the food chain’s foundation, while less than 20% of land plant biomass is eaten.
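
Those proportions can be cross-checked with simple arithmetic.  The sketch below just chains the round ratios given above; the figures are the essay’s estimates, not independent measurements:

```python
# Chaining the rough ratios above: if land plants are about half of Earth's
# biomass, and terrestrial plants outweigh terrestrial animals roughly
# 1,000-fold, then animals are well under 0.1% of Earth's biomass.
land_plant_share = 0.50          # land plants: about half of all biomass
plant_to_animal_ratio = 1000.0   # terrestrial plants vs. terrestrial animals

animal_share = land_plant_share / plant_to_animal_ratio
print(f"Terrestrial animals: ~{animal_share:.2%} of Earth's biomass")  # ~0.05%
```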

Creating the huge biomass of land-based ecosystems meant that carbon was removed from the atmosphere.  Also, root systems were a new phenomenon, with dramatic environmental impact.  Before the rise of vascular plants, rain on the continents ran to the global ocean in sheets and braided rivers.  Every rainfall ran toward the oceans in a flash flood, as happens in deserts today.  Plant roots stabilized riverbanks and formed the rivers that we are familiar with today.  Also, roots broke up rock, accelerated weathering, and created soils.  Plants break down rock five times as fast as other geophysical processes do.[258]  The forests and soils created a huge “sponge” that absorbed precipitation, which the resultant ecosystems depended on.  Vast nutrient runoffs from land into the ocean were stimulated by plants’ colonization of land, which in turn stimulated ocean life.  The reefs of the Devonian were the greatest in Earth’s history, reaching about ten times the area of today’s reefs: about five million square kilometers (two million square miles), a total area roughly equal to half of Europe.[259]

Plants and trees created a “boundary layer” of relatively calm air near the ground that became the primary abode of most land animals.  Also, forests created a positive feedback in which moisture was recycled in the forests and kept them moister than purely ocean-sourced precipitation would.  Today, somewhere between 35% and 50% or more of the rain that falls in the Amazon rainforest is recycled water via transpiration.[260]  Transpiration also cools the plants via the latent heat of vaporization, as well as via the resultant cloud cover.[261]  Transpiration, by sucking water from the soils, maintains a negative pressure in them and keeps them aerated.  Waterlogged soils cannot support the vast ecosystems of forest soils, so trees are needed to maintain the soil dynamics that support the base of the forest ecosystem.  Rainforest processes thus create positive feedbacks that maintain the rainforest.  Conversely, the rampant deforestation of Earth’s rainforests in the past century has created self-reinforcing feedbacks that further destroy the rainforests.

Forests were a radical innovation, the likes of which had not been seen before and have not been seen since.  Trees were Earth’s first and last truly gigantic organisms, and the largest trees dwarfed the largest animals.  Why did trees grow so large?  It seems to be because they could.  Land life gave plants opportunities that aquatic life could not provide, and plants “leapt” at the chance.  Lignin, first developed for vascular transport, became the equivalent of steel girders in skyscrapers.  In the final analysis, trees grew tall to give their foliage the most sunlight and to use wind and height to spread their seeds, and in the future that height would help protect the foliage from ground-based animal browsers.  The height limit of Earth’s trees is an energy issue: the ability to pump water to the treetops.[262]  Arid climates prevent trees from growing tall or even growing at all.  Energy availability limits leaf size, too.[263]  From an ecosystem’s perspective, the great biomass of forests was primarily a huge store of energy; trees allowed for prodigious energy storage per square meter of land.  That stored energy ultimately became a vast resource for the forest ecosystem, as it eventually became food for other life forms and the basis for soils, which in turn became sponges to soak up precipitation and recycle it via transpiration.  Trees created the entire ecosystem that depended on them.

Energy enters ecosystems primarily via the capture of photon energy by photosynthesis.  Only so much sunlight reaches Earth, and photosynthesis can only capture so much.  The energy “budget” available for plants has constraints, and the question is always what to do with it.  An organism can break bonds between atoms and release energy, or bind atoms together to build biological structures, which uses energy (exothermic reactions release energy, while endothermic reactions absorb energy).  Photosynthesis is endothermic, and in biological systems, endothermic reactions are also called anabolic, as they invest energy to build molecules, which is how organisms grow.  Catabolic reactions break down molecules in exothermic reactions that release energy for use.  Plants faced the same decisions that societies face today: consumption or investment?  Only with an energy surplus can there be investments, such as for infrastructure.  Plants invested in trunk-and-branch infrastructure to place their energy-collecting and seed-spreading equipment in the best possible position.  Plants race for the sky, and trees represent the biggest energy investment of any type of organism.  On average, today’s plants spend a little more than half of the energy that they capture via photosynthesis (called gross primary production) on respiration; what remains is net primary production.  Growing forests use most of their net primary production to grow, and when the structural limits have been reached, most captured energy is consumed via respiration to run life processes within the infrastructure.[264]  Animal development is similar.  When humans began building cities and urban infrastructures, the basic process was the same.
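
As a minimal sketch of that budget, with illustrative round numbers (only the “a little more than half” respiration share comes from the text above):

```python
# Minimal sketch of a plant's energy budget: gross primary production (GPP)
# is the energy captured by photosynthesis; respiration takes a bit more
# than half of it; net primary production (NPP) is what remains for growth.
gpp = 100.0               # energy captured via photosynthesis (arbitrary units)
respiration_share = 0.55  # "a little more than half", per the text

npp = gpp * (1.0 - respiration_share)
print(f"GPP: {gpp:.0f}, respiration: {gpp * respiration_share:.0f}, NPP: {npp:.0f}")
# A young, growing forest puts most of its NPP into new structure; a mature
# one consumes nearly all captured energy just running its infrastructure.
```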

Most marine phyla were unable to manage the transition to land and remain aquatic to this day.  Arthropods found a way, and scorpions, spiders, and millipedes were early pioneers.  The insect and fish clades comprise the most successful terrestrial animals today, as fish led to all terrestrial vertebrates.  Gastropods made it to land, mainly as snails and slugs, as did several worm phyla, but the rest of aquatic life generally remained water-bound.  Also, many animal clades have moved back-and-forth between water and land, usually hugging the shoreline, sometimes in a single organism’s life cycle, which blurred the terrestrial/aquatic divide at times.  The first fish to venture past shore seem to have accomplished it in the mid-Devonian, and colonizing land via freshwater environments was a prominent developmental path.

Although the first insects appeared in today’s fossil record about 400 mya, they were fairly developed, which means that they had an older lineage, probably beginning in the Silurian.  The first land animals would have been vegetarians, as something had to start the food chain from plants, and early insects were adapted for plant-eating.  Plants would have then begun to co-evolve with animals as they tried to avoid being eaten.

When life colonized land, global weather systems began dramatically impacting life, as land plants and animals would be at the mercy of the elements as never before, and forests and deserts formed.  The continents also began coming together, eventually forming Pangaea in the Permian, and converging plates meant subduction and mountain-building.  Mountains in the British Isles and Scandinavia were formed in the Devonian, the Appalachians became larger, and the mountains of the USA’s Great Basin also began developing.  The mountain ranges that colliding tectonic plates built greatly impacted weather systems for the rest of terrestrial life’s history, which also profoundly influenced oceanic ecosystems.

As with previous critical events, such as saving the oceans and life on Earth itself, life helped terraform Earth.  But the late Devonian is an instance when the rise of land plants may have also had Medean effects.  Carbon dioxide sequestering, which reduced the atmosphere’s carbon dioxide concentration by up to 80%, may have cooled Earth’s surface enough that an ice age began, along with another of Earth’s mass extinctions.  As with the Ordovician extinction, the ultimate cause for the Devonian extinctions seems to have been rising and falling sea levels, associated with growing and receding ice caps, as Gondwana still covered the South Pole.  Devonian extinction events began happening more than 380 mya, and a major one happened about 375 mya, called the Kellwasser event.  The Kellwasser event is today generally attributed to the water becoming cold and anoxic.[265]  A bolide impact has been invoked in some scientific circles, but the evidence is weak.[266]  Mountain-building and volcanic events also happened as continents began colliding to eventually form Pangaea (and the resultant silicate and basaltic weathering removed carbon dioxide from the atmosphere), and those dynamics may have been like what precipitated the previous major mass extinction.[267]  Black shales abounded during and after the Kellwasser event; they are always evidence of anoxic conditions, and they are how oil deposits initially formed.  However, the Kellwasser event anoxia may not only have been due to low atmospheric oxygen, but also to erosion of the newly exposed land and the detritus of the new forest biomes, which together created a vast nutrient runoff into the oceans that may have initiated huge algal blooms that caused anoxic events near shore.[268]

Unlike the short, severe Ordovician events, the Devonian extinctions may have stretched for up to 25 million years, with periodic pulses of extinction.  The Kellwasser event seems to have comprised several extinction events, and by the time they ended, at least 70% of all marine species had gone extinct and the greatest reefs in Earth’s history were 99.98% eradicated.  It took 100 million years before major reef systems again appeared.[269]  Armored fish and jawless fish lost half of their species, and armored fish were rendered entirely extinct in the event that ended the Devonian.

What was most relevant to humans, however, was the almost-complete extinction during the Kellwasser event of the tetrapods that had come ashore.  Tetrapods did not reappear in the fossil record until several million years after the Kellwasser event, and that absence has even been referred to as the Famennian Gap (the Famennian Age is the Devonian’s last age).[270]  The Kellwasser event also appeared to be a period of low atmospheric oxygen content, and some evidence is the lack of charcoal in fossil deposits.  Recent research has demonstrated that getting wood to burn at oxygen levels of less than 13-15% may be impossible.[271]  Because all periods of complex land life show evidence of forest fires, it is today thought that oxygen levels have not dropped below 13-15% since the Devonian, but during the “charcoal gap” of the late Devonian, when the first landlubbing tetrapods went extinct, oxygen levels reached their lowest levels since the GOE, which must have impacted the first animals trying to breathe air instead of water.  During the Kellwasser event, there is no charcoal evidence at all, which leads to the notion that oxygen levels may have even dropped below 13%.[272]  This drop may reflect severe climatic stresses on the new mono-species forests, stresses probably connected to the ice age that the forests helped bring about via their carbon sequestering.  That is an attractively explanatory scenario, but the controversy and research continue.  The first seed plants probably appeared before the Kellwasser event, but it was not until after the Famennian Gap that seed plants began to proliferate.[273]

The Kellwasser event ended the first invasion of land by vertebrates and created an evolutionary bottleneck.  Some stragglers survived the Kellwasser event, but the fossil record for the next seven million years is devoid of tetrapod fossils, with the exception of one species.[274]  After the Famennian Gap ended about 368 mya, tetrapods renewed their invasion of land, and tetrapods with many toes appeared in the fossil record during the second invasion.  Ichthyostega was Earth’s largest land animal in those days.  The tetrapods of the time may not yet have been true amphibians, but they were making the adjustments needed to become true land animals, such as losing their gills and improving their locomotion.  No new arthropods appeared on land during that time.

After several million years of adaptation, tetrapods seemed ready to become the dominant land animals, but then came the second major Devonian extinction event, today called the Hangenberg event.  While the ice age conditions around the Kellwasser event are debated, there is no uncertainty about the Hangenberg event; there were massive, continental ice sheets, accompanied by falling sea levels and anoxic events, as evidenced by huge black shales.[275]  The event’s frigidity was probably a key extinction factor, and anoxia was the other killing mechanism.  The Hangenberg event had devastating consequences: it meant the end of armored fish and oceanic eurypterids, and the near-extinction of the new ammonoids (perhaps only one genus survived); trilobites began to make their exit as seafloor communities were devastated; lobe-finned fish reached their peak influence; and Archaeopteris forests collapsed.[276]

Trees first appeared during a plant diversity crisis, and the arrival of seed plants and ferns ended the dominance of the first trees, so the plant crises may have been more about evolutionary experiments than environmental conditions, although a carbon dioxide crash and ice age conditions would have impacted photosynthesizers.  The earliest woody plants that gave rise to trees and seed plants largely went extinct at the Devonian’s end.  But what might have been the most dramatic extinction, as far as humans are concerned, was the impact on land vertebrates.  During the Devonian extinction, about 20% of all families, 50% of all genera, and 70% of all species disappeared forever.

There seems to have been convergent evolution among early tetrapods, but they were beaten back twice during the late- and end-Devonian extinction events, and what emerged the third time was different from what preceded it.[277]  As with many mass extinction events, evolution’s course was significantly altered in the extinction’s aftermath.  As with studies of human history, events are always contingent and not foreordained in Whiggish fashion.  Although the increase in “intelligence” may well be an inherent purpose of being in physical reality, the evolutionary path to the man writing these words had false starts, “detours,” singular events, expansions, bottlenecks, catastrophes, and the like.  Evolutionary experiments on other planets probably had radically different outcomes.  A mystical source that I respect once stated that there are one million sentient species in our galaxy, with a diversity that is staggering, and from what I have been exposed to (and here), I will not challenge it.

 

Making Coal, the Rise of Reptiles, and the Greatest Extinction Ever

World map in early Carboniferous Period (c. 340 mya) (Source: Wikimedia Commons) (map with names is here)

World map at end of Carboniferous Period (c. 300 mya) (Source: Wikimedia Commons) (map with names is here)

World map in late Permian Period (c. 260 mya) (Source: Wikimedia Commons) (map with names is here)

Chapter summary:

The period succeeding the Devonian is called the Carboniferous (c. 359 to 299 mya), for reasons that will become evident.  The Hangenberg event cut short the second attempt of vertebrates to invade land, and there was another 14-million-year gap in the fossil record, called the Tournaisian Gap, which is part of the roughly 30-million-year Romer’s Gap.[278]  After all mass extinctions, it took millions of years for ecosystems to recover, even tens of millions of years, and markedly different ecosystems and plant/animal assemblages often replaced what existed before the extinction.  The Devonian spore-forests were destroyed, and outside of the peat swamps, the tallest trees in the Tournaisian Gap were about as tall as I am, and even in the swamps, the tallest trees were about ten meters tall, as they were before the Hangenberg event.[279]

Peter Ward led an effort to catalog the fossil record before and after Romer’s Gap, which found a dramatic halt in tetrapod and arthropod colonization that did not resume until about 340-330 mya.  Romer’s Gap seems to have coincided with the low oxygen levels of the late Devonian and early Carboniferous.[280]  If low oxygen coincided with a halt in colonization, just as the adaptation to breathing air was beginning, the obvious implication is that low oxygen levels hampered early land animals.  Not just the lung had to evolve for the up-and-coming amphibians; the entire chest cavity had to evolve to expand and contract while also allowing for a new mode of locomotion.  When amphibians and splay-footed reptiles run, they cannot breathe, as their mechanics of locomotion prevent running and breathing at the same time.  Even breathing while walking is generally difficult.  This means that they cannot perform any endurance locomotion but have to move in short spurts.  This is why today’s predatory amphibians and reptiles are ambush predators.  They can only move in short bursts, and then have to stop, breathe, and recover from their oxygen deficit.  In short, they have no stamina.  This limitation is called Carrier’s Constraint.  The image below shows the evolutionary adaptations that led to overcoming Carrier's Constraint.  Dinosaurs overcame it first, and that probably was related to their dominance and the extinction or marginalization of their competitors.  (Source: Wikimedia Commons)

The heart became steadily more complex during complex life’s evolutionary journeys.  Fish hearts have one pump and two chambers.  Amphibians developed three-chambered hearts, wherein oxygenated and deoxygenated blood are not structurally separated but mix.  That arrangement is obviously not as energy-efficient as separating oxygenated and deoxygenated blood.  Some later reptiles evolved four-chambered hearts, which their surviving descendants, crocodilians and birds, possess, and somewhere along the line, mammals also evolved four-chambered hearts, perhaps before they became mammals.

While the oxygen level changes of the GEOCARBSULF model show early fluctuations that the COPSE model does not, both models agree on a huge rise in oxygen levels in the late Devonian and Carboniferous, in tandem with collapsing carbon dioxide levels.  There is also virtually universal agreement that that situation is due to rainforest development.  Rainforests dominated the Carboniferous Period.  If the Devonian could be considered terrestrial life’s Cambrian Explosion, then the Carboniferous was its Ordovician.  In the Devonian, plants developed vascular systems, photosynthetic foliage, seeds, roots, and bark, and true forests first appeared.  Those basics remain unchanged to this day, but in the Carboniferous there was great diversification within those body plans, and Carboniferous plants formed the foundation for the first complex land-based ecosystems.  Ever since the Snowball Earth episodes, there has almost always been a continent at or near the South Pole, and the ice ages that have prominently shaped Earth’s eon of complex life probably always began with ice sheets at the South Pole.  The current ice age is arguably the only partial exception, but today’s cold period really began about 35 mya, when Antarctic ice sheets began developing.

The first tree forests formed in the late Devonian, and bark is the great innovation that led to the Carboniferous’s vast coal deposits.  Compared to modern trees, Carboniferous trees seemed to go overboard on bark, at least partly to discourage arthropods.  Today’s trees generally contain at least four times as much wood as bark.  Those early trees had about ten times as much bark as wood, and the bark was about half lignin.  Lepidodendron trees dominated the Carboniferous rainforest and could grow 30 meters tall.  Because it took more than a hundred million years for life to learn to break down lignin, that early lignin did not degrade via biological processes.  The early Carboniferous was warm, even with a small ice cap at the South Pole, and Earth’s first rainforests, which appeared in the late Devonian, proliferated again in the Carboniferous.  The Carboniferous was the Golden Age of Amphibians, as the rainforest was largely global in extent and swamps abounded.  Amphibians were the Carboniferous’s apex predators on land, and some reached crocodile size and acted like crocodiles.

Artists have been depicting Carboniferous swamps for more than a century, and the cliché image includes a giant dragonfly.  That giant dragonfly represents a key Carboniferous issue and perhaps why the period ended.  That giant, and others like it, appeared in the fossil record about 300 mya, when oxygen levels were Earth’s highest ever, at somewhere between 25% and 35%.  The almost universally accepted reason for that high oxygen level is that burying all of that lignin for the entire Carboniferous Period removed carbon dioxide from the atmosphere in vast amounts.  Today, the estimate is that carbon dioxide fell from about 1,500 PPM at the beginning of the Carboniferous to 350 PPM by the end, which is lower than today’s value.  That tandem effect of sequestering carbon and freeing oxygen not only may have led to huge arthropods and amphibians, but also intensified the ice age that ended the Carboniferous.  The idea that high oxygen levels led to those giants was first proposed more than a century ago and dismissed, but it has recently come back into favor.  Flying insects have the highest metabolisms of all animals, but they do not have diaphragmatic lungs as mammals have, or air sac lungs as birds have, and although they may have some way of actively breathing by contracting their tracheas, it is not the bellows action of vertebrate lungs.  There are two primary hypotheses for early insect gigantism: one is that high oxygen, along with a denser atmosphere (the nitrogen mass would not have fallen, so increased oxygen would have added to the atmosphere’s mass), enabled such leviathans to fly; the other is that flying insects got a head start in the arms race and could grow large until predators evolved that could catch them.  The late Permian had an even larger dragonfly, when oxygen levels had crashed back down.  The evolution of flight is another area of great controversy, and insects accomplished it long before vertebrates did.  The general idea is that flight structures evolved from those used for other purposes.  For insects, wings appear to have evolved from aquatic “oars,” and gills became lungs.[281]  Reptiles did not develop flight until the Triassic, and only glided in the Permian.[282]
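
The “denser atmosphere” point is simple arithmetic: if the absolute amount of nitrogen stays fixed while oxygen’s share rises, the whole atmosphere must grow.  The sketch below ignores trace gases and treats the shares as molar fractions; the oxygen percentages are the ones discussed above:

```python
# If nitrogen's absolute amount is held fixed at today's value (~79% of the
# modern atmosphere) while oxygen's share rises, the total atmosphere grows.
# Trace gases are ignored, and shares are treated as molar fractions.
N2_TODAY = 0.79  # nitrogen, as a fraction of today's atmosphere

for o2_share in (0.21, 0.30, 0.35):
    total = N2_TODAY / (1.0 - o2_share)  # total atmosphere, relative to today
    print(f"O2 at {o2_share:.0%}: atmosphere ~{total:.2f}x today's")
```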

But it was not only flying insects that became huge: giant millipedes, scorpions, and other arthropods also lived in the Carboniferous, such as mayflies with half-meter wingspans.  The giant millipede (more than two meters long) has been featured in popular culture as a nightmare creature, although it was vegetarian.  The largest freshwater fish ever lived in the Carboniferous and reached seven meters long.[283]  The high-oxygen hypothesis is challenged for giant insects and giant animals in general, and the controversy will probably continue for many more years.[284]

The Carboniferous also marked the rise of reptiles, which first appeared between 320 and 310 mya.  The very term reptile has become rather informal with the rise of cladistics, as birds and mammals descended from “reptiles” but are not called that.  The term paraphyletic refers to groupings such as reptiles, in which part of the clade is not classified in the named group; monophyletic clades (beginning with the last common ancestor and including all descendants) are tidier and scientists often prefer them.[285]  Although the issue, as usual, is controversial today, it seems that amphibian and reptilian ancestors may have descended from different groups of tetrapods, and some seemingly transitional animals added to the controversy.[286]  But the idea that reptiles are descended from amphibians is still prominent.  Most importantly, reptiles were the first amniotes, a clade that includes birds and mammals, which do not need to lay their eggs in water and allowed reptiles to become independent of rainforests and swamps.  Reptiles then colonized niches previously unavailable to amphibians.  The first reptiles were small and ate insects, and laying eggs in trees may have been a solution to arboreal life.[287]  Seed plants and amniotes could reproduce on dry land, and their success greatly expanded terrestrial ecosystems.

Amniotes are primarily classified by the number of holes in their skulls.  The earliest reptiles may have had skulls like amphibians, with only holes for eyes and nostrils.  In some early reptiles, a hole developed behind the eye, probably for attaching jaw muscles, and animals with such skulls are called synapsids; mammals evolved from that line and are essentially its only survivors.  Near the Carboniferous’s end, at about 300 mya, skulls with two holes behind the eye developed, probably for anchoring more powerful jaw muscles.  Animals with those skulls are called diapsids, and one line of diapsid descendants eventually ruled Earth as dinosaurs.  Dinosaurs had the greatest terrestrial jaws of all time, and jaws are the primary energy acquisition equipment of vertebrates.  Complex life’s arms race reached its ultimate expression in dinosaurs, with the fearsome teeth and jaws of the late-Cretaceous’s Tyrannosaurus rex matched against the spear-and-shield arrangement of Triceratops.  Jurassic dinosaurs such as Stegosaurus, with its thagomizer, would not have been easy meals for predators such as Allosaurus.  Turtles are today generally considered to be diapsids that lost their skull holes; they would otherwise seem to be anapsids.

In the oceans, the Carboniferous is called the Golden Age of Sharks, and ray-finned fish arose to a ubiquity that they have yet to fully relinquish.  Ray-finned fish probably prevailed because of their high energy efficiency.  Their skeletons and scales were lighter than those of armored and lobe-finned fish; their increasingly sophisticated and lightweight fins, their efficient tailfin method of propulsion, changes in their skulls and jaws, and new ways of using that lightweight, versatile equipment accompanied and probably drove the rise and subsequent success of ray-finned fish in the Carboniferous and afterward.[288]  Foraminifera, which are amoebic protists, rose to prominence for the first time in the Carboniferous.  Reefs began to recover, although they did not recover to pre-Devonian conditions; those vast Devonian reefs have not been seen again.  Today’s stony corals did not appear until the Mesozoic Era.  Trilobites steadily declined, and nautiloids developed the curled shells familiar today, as straight shells became rare.  The first soft-bodied cephalopods, which were ancestral to squids and octopuses, first appeared in the early Carboniferous, but some Devonian specimens might qualify.  Ammonoids flourished once again, after barely surviving the Devonian extinction.  This essay is only focusing on certain prominent clades; there are many animal phyla and plant divisions.  The early Carboniferous, for example, is called the Golden Age of Crinoids, a kind of echinoderm (the phylum that includes starfish).[289]  The crinoids had their golden age when the fish that fed on them disappeared in the end-Devonian extinction.  Earth’s ecosystems are vastly richer entities than this essay, or any essay, can depict.

In the early Carboniferous, the continents were still somewhat dispersed but began merging into the supercontinent called Pangaea.  The period from the Late Devonian extinction event to the late Permian, about 260 mya, is also called the Karoo Ice Age, which had various stages of ice sheet development.  It was the last ice age before the current one.  About 325 mya, there was a marine extinction that some have argued should be a Big Five mass extinction, but others are doubtful, and the authors of the argument re-ranked that extinction to sixth in significance.[290]  It was caused by fluctuating sea levels due to the ice sheet advances and retreats, and by the continental uplift that resulted from the continents colliding to form Pangaea.  The Mississippian Epoch ended with that extinction and the Pennsylvanian Epoch began.  The growing ice cap eventually destroyed the Carboniferous rainforests.  Cooler oceans have less evaporation and therefore produce drier climates; that dynamic began reducing the Carboniferous rainforests, breaking them up into “patches” that kept shrinking, until the rainforests collapsed.  Only a few rainforest pockets survived into the Permian Period.  As usual, scientists have proposed several contributory causes of the rainforest collapse, but climate change is probably the ultimate cause.  The collapse of the rainforest ended the dominance of amphibians and of flora and fauna adapted to warm, wet environments.  The cooler, drier conditions that ended the Carboniferous led to the dominance of seed plants and amniotes.

When the Carboniferous rainforest collapsed, beginning about 307 mya, Earth’s oxygen levels were at their highest ever.  About 75% of Earth’s coal deposits were formed in the Carboniferous, with most of that coal laid down in the 25-million-year Pennsylvanian Epoch.  There will never be a coal-forming period like that again on Earth, as organisms developed the ability to decay lignin about 290 mya.  Even if humans burned all fossil fuel deposits, carbon dioxide levels would never again reach the levels that preceded the Carboniferous, which were many times today’s concentrations.

The Permian Period (c. 299 to 252 mya) ended with the greatest mass extinction in the eon of complex life.  The Carboniferous rainforests not only collapsed, but great deserts formed in the interior of the newly formed supercontinent of Pangaea.  Pangaea was still somewhat scattered when it formed, with huge ice sheets at the South Pole, but by the end of the Permian the ice age was finished, and another ice age would not appear for more than 200 million years.  The continent that became North America and Europe collided with Gondwana, and a gigantic mountain range formed as a result, called the Central Pangaean Mountains.  Those mountains created climatic effects, and great deserts formed on each side of that range.  Remnants of that range include the Appalachians and part of the Atlas Mountains.  The Ural mountain range began forming during the creation of Pangaea, and the Tethys Ocean formed during the late Permian.

Conifer forests, which I have spent my life happily hiking through, first appeared in the Permian.  Devonian forests were 10 meters tall, Carboniferous rainforests were 30 meters tall, and Mesozoic conifers reached 60 meters; even sequoias appeared.  Conifers were among the early seed plants and used pollen to fertilize their seeds, a method that did not need the water that spores did.  As conifers appeared during an ice age, they are well adapted to cold climates, which is why conifer forests are so prevalent today.  As discussed later in this essay, conifers were later displaced by flowering plants, which engaged in an unprecedented symbiosis with animals, and conifers were pushed to Earth’s cold margins.[291]  Tree ferns declined after the Carboniferous, but still exist today.

In water environments, there are no diurnal temperature changes like those on land, so regulating body temperature was not a significant issue for aquatic animals.  The rise of reptiles created a new kind of animal, and regulating body temperature became a major challenge, particularly in an ice age climate.  The early Permian was the Golden Age of Synapsids, as they dominated the land masses (and became the largest non-amphibious land animals to that time).  Thermoregulation was a prominent trait, with huge “sails” on the backs of large synapsids.  Dimetrodon was a popular subject of children’s models of ancient animals (I had one in my childhood collection, along with mammoths and stegosaurs).  Animals made many adaptations to land’s temperature swings.  Today’s mammals and birds are warm-blooded, and controversy has raged over whether dinosaurs were.  Keeping a body’s temperature within certain ranges allows for optimal enzyme functioning.  Humans, for instance, can only survive within a narrow range of body temperature.  High temperatures kill humans because key enzymes begin falling apart and vital reactions cease.  If temperatures are too low, activation energies for vital reactions are not reached.  But maintaining an ideal body temperature is costly; mammals and birds consume about 10-to-15 times the energy of today’s reptiles.[292]  A snake can live for a month on a good meal, while a mammal must constantly eat or hibernate.  As with other life features, those synapsid sails may have had a dual function, and the most popular hypothesis today is that they were used for “display” to attract mates.  Sexual selection has been a major source of evolutionary change (it is almost certainly why men are larger and stronger than women), and those tremendous sails may have been an early example of enhancing a feature to attract a mate.  Dimetrodon also had different-sized teeth, which were probably distant precursors of mammalian teeth.
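
That snake-versus-mammal contrast follows directly from the 10-to-15-fold figure above; the division below is the only thing added here, and the month-long snake meal is taken from the text:

$$\frac{30\ \text{days per meal (snake)}}{10\text{–}15} \approx 2\text{–}3\ \text{days per meal for a same-sized endotherm}$$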

During the Permian, synapsids had great radiations, typical of golden ages.  Synapsids developed many evolutionary novelties, and one line led to therapsids, which first appeared about 275 mya in the mid-Permian, just as oxygen levels began crashing, according to GEOCARBSULF.  Synapsids began to overcome Carrier’s Constraint by developing stiffer backbones, so they no longer had the serpentine gait of lizards.[293]  Therapsids were the direct ancestors of mammals and further overcame Carrier’s Constraint by evolving a more erect posture; their legs were more under them rather than splayed to their sides.  This improved their breathing ability, and that it happened during Earth’s most spectacular oxygen crash is probably no coincidence.  However, they inherited a posture that put most body weight on their front legs, so they had a “wheelbarrow” gait that still hampered their ability to breathe and run, although it was better than that of their synapsid ancestors.[294]  From a high of 25%-35% at the end of the Carboniferous, oxygen crashed to around 15% by the Permian’s end.  Animals that could adapt to lower oxygen levels could dominate, and therapsids did just that, completely displacing pelycosaur synapsids, which included Dimetrodon; huge dinocephalians dominated the mid-Permian.  The largest amphibian ever also lived in the high-oxygen times of the mid-Permian.  As oxygen levels crashed in the late Permian, land animals became smaller.[295]  In the mid-Permian, synapsids began to develop a secondary palate that allowed them to breathe and chew at the same time.  Therapsid jaws became more powerful and their teeth became more diverse than synapsid teeth.  Such innovations typically improved an animal’s energy efficiency, and thus were favored.  Dimetrodon disappeared about 272 mya, and at 270 mya there was a mass extinction, today called Olson’s Extinction, which hit land and sea animals hard, as well as land plants.  The cause is still a mystery, although climate change has recently been presented as a candidate.  Therapsids then dominated land animals until the Permian extinction, and Olson’s Extinction was arguably that calamity’s first event.

One of Peter Ward’s recent hypotheses is that the animals that adapted to the changing conditions, particularly when oxygen levels crashed, survived the catastrophes to dominate the post-catastrophic environment.  In the late Permian, several therapsid lines developed turbinal bones, which may have been used for respiratory water retention in a world where oxygen levels were crashing.[296]  This is a controversial issue, and it is related to the controversy over when reptiles developed endothermy.  The therapsid ancestors of mammals, cynodonts, first appeared about 260 mya and had many mammalian features.

The earliest diapsid appeared in the late Carboniferous and looked like a modern lizard.  It also had some canine-type teeth.  Diapsids, however, were marginal animals in the Permian, as that was the time of synapsid and therapsid dominance.  Diapsids would not rise to prominence until the Triassic.

In the oceans, reefs finally began to make a comeback in the late Permian, and the remnants of those reefs can be seen in Texas today.  Tabulate and rugose corals were abundant, as were ammonoids and echinoderms.  Articulate brachiopods (with two shells that can open and close, like a clam’s) were also doing fine.  Fish (ray-finned fish and sharks), however, were the dominant sea animals.  Trilobites were a mere shadow of their former selves, eking out an existence on the seafloor, much as nautiloids did in deep waters while ammonoids dominated the surface.  And then came the Great Dying.

The Permian extinction, like the prior major extinctions, was more than one event and had more than one cause.  The Cretaceous extinction is what most people think of when mass extinctions are mentioned (it was Hollywood-spectacular, it ended one fascinating line of animals, it paved the way for mammals to dominate, and it ultimately led to the existence of humans), but the Permian extinction was the Big One.  Before the taboo against investigating mass extinctions began lifting in the 1970s and 1980s, specialists generally thought that the Permian extinction only impacted the oceans and left terrestrial ecosystems unaffected.  The picture has radically changed since the 1980s, and the terrestrial extinctions are now acknowledged as similarly catastrophic.[297]  The Permian extinction is Earth’s only mass extinction of insects, and although plants are not normally vulnerable to mass extinctions, land plants also barely survived it.  But the extinction came in phases, and each may have had different causes.  There is great ongoing controversy and research regarding these issues.

The ultimate cause of the Permian extinction was probably the formation of a supercontinent.  When Pangaea finally formed, new dynamics appeared.  One was that there was now only one major ocean, the Panthalassic, and the Paleo-Tethys and nascent Tethys oceans were largely landlocked.  Those landlocked smaller oceans would have become like lakes, with little current in them (the Black Sea is the favored analogy today), and the Panthalassic Ocean (from which the Pacific Ocean eventually formed) had no continents to divert its currents during their journey from the equator to the poles, so today’s circuitous thermohaline circulation, shown below, would not have existed.  (Source: Wikimedia Commons)

The Panthalassic’s currents were slow and lazy, and the deep-water oxygenation of today’s oceans would have been quite different, and perhaps largely nonexistent.  Also, when supercontinents form, the sea level falls as the oceanic basin expands, and the late Permian’s sea levels are thought to be among the lowest in the eon of complex life.  The many shallow seas of complex life’s earlier periods, which were the abode of most marine life, also disappeared with the formation of Pangaea (nearly 90% of the continental shelves became exposed).[298]  The newly exposed land included the swamps and deltas formed in the Carboniferous, and the oxidation of those carbonaceous deposits drew down atmospheric oxygen and increased carbon dioxide.  The merging of continents also results in mountain-building and volcanism.

Also, the formation of Pangaea (the processes behind which are still controversial) may have set in motion the dynamics that broke it apart.  The Hawaiian Islands are part of a volcanic island chain that began forming more than 80 mya, due to a hotspot bubbling up from Earth’s mantle.  Although the issue is far from settled, a prominent hypothesis is that the formation of Pangaea plugged hotspots and prevented heat from venting from Earth’s core, which swelled and fractured Pangaea.[299]  Part of the evidence for that hypothesis was the relatively sudden and widespread volcanism that sprouted up around Pangaea, which followed a known fracture pattern around such crustal upwellings.  The volcanism and resultant fracture lines formed today’s continents.[300]  As can be seen in the map of Earth’s landmasses during the late Permian, what became China and Siberia were on the northeast margins of Pangaea, bordering the Paleo-Tethys Ocean, and two volcanic events arising from China and Siberia are currently favored as key proximate causes of the Permian extinctions.

The ecosystems may not have recovered from Olson’s Extinction of 270 mya when, at 260 mya, another mass extinction arrived, called the mid-Permian or Capitanian extinction, or the end-Guadalupian event, although a recent study found only one extinction event, in the mid-Capitanian.[301]  In the 1990s, the extinction was thought to result from falling sea levels.[302]  But the first of the two huge volcanic events, in China, coincided with that extinction.  There can be several deadly outcomes of major volcanic events.  As with an eruption in the early 1800s, massive volcanic events can block sunlight with their ash and create wintry conditions in the middle of summer.  That alone can cause catastrophic conditions for life, but it is only one potential outcome of volcanism.  What probably had far greater impact were the gases belched into the air.  As oxygen levels crashed in the late Permian, there was also a huge carbon dioxide spike, as shown by GEOCARBSULF, and the late-Permian volcanism is the near-unanimous choice as the primary reason.  That would have helped create super-greenhouse conditions that perhaps came right on the heels of the volcanic winter.  Not only would carbon dioxide vent from the mantle, as with all volcanism, but the late-Permian volcanism occurred beneath Ediacaran and Cambrian hydrocarbon deposits, which it burned, spewing even more carbon dioxide into the atmosphere.  Not only that, great salt deposits from the Cambrian Period were also burned by the volcanism, which created hydrochloric acid clouds.  Volcanoes also spew sulfur, which reacts with oxygen and water to form sulfurous acid.  The oceans around the volcanoes would have become acidic, and that fire-and-brimstone brew would have also showered the land.  Not only that, but the warming initiated by the initial carbon dioxide spike could then have warmed the oceans enough to liberate methane hydrates, creating even more global warming.  Such global warming apparently warmed the poles, which not only melted away the last ice caps and ended an ice age that had waxed and waned for 100 million years, but also allowed deciduous forests to grow at high latitudes, as fossils attest.  A 100-million-year Icehouse Earth period ended and a 200-million-year Greenhouse Earth period began, but the transition appears to have been chaotic, with wild swings in greenhouse gas levels and global temperatures.  Warming the poles would have lessened the heat differential between the equator and the poles and further diminished the lazy Panthalassic currents.  The landlocked Paleo-Tethys and Tethys oceans, and perhaps even the Panthalassic Ocean, may have all become superheated and anoxic Canfield Oceans as the currents died.  Huge hydrogen sulfide events also happened, which may have damaged the ozone layer and led to ultraviolet-light damage to land plants and animals.  That was all on top of the oxygen crash.  With the current state of research, all of the above events may have happened, in the greatest confluence of life-hostile conditions during the eon of complex life.  A recent study suggests that the extinction event that ended the Permian may have lasted only 60,000 years or so.[303]  In 2001, a bolide event was proposed for the Permian extinction with great fanfare, but it does not appear to have been related; the other dynamics would have been quite sufficient.[304]  The Permian extinction was the greatest catastrophe that Earth’s life experienced since the previous supercontinent existed in the Cryogenian.[305]

Siberian volcanism (which formed the Siberian Traps) is considered to have been the main event.  The Chinese volcanism of ten million years earlier was a prelude, with other minor events between them, in a series of blows that left virtually all complex life devastated when it finally finished.  To give some perspective on the volcanism’s magnitude: when Mount Tambora erupted in 1815 and caused the Year Without a Summer, the eruption is estimated to have totaled 160 cubic kilometers of ejecta.  The Siberian Traps episode lasted a million years and, although it was more of a lava event than an explosion (although there were also plenty of explosions), the total ejected lava is estimated at one-to-four million cubic kilometers.
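
Setting those two estimates side by side (both figures come from the paragraph above; only the division and averaging are added here):

$$\frac{1\text{–}4\times 10^{6}\ \text{km}^{3}}{160\ \text{km}^{3}} \approx 6{,}000\text{–}25{,}000\ \text{Tambora-sized eruptions}$$

Spread over the episode’s million years, that still averages roughly one Tambora-sized eruption every 40-to-160 years, sustained for the entire span.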

The Chinese eruption was the preview, and it devastated marine environments; a brief review of the casualties makes that clear.  Tabulate and rugose corals were brought to the brink of extinction, and ammonoids, echinoderms, articulate brachiopods, gastropods, and complex foraminiferans suffered similarly, while fish, bivalves, and small foraminiferans did relatively well.[306]

After the mid-Permian extinction, marine life recovered and there were many radiations to fill empty niches, but coral reefs did not recover.  Between the two big extinction events, extinction levels remained highly elevated, which suggests that some of the aforementioned dynamics were still wreaking havoc, with possible cascade effects.  Critics of extinction hypotheses often say: “Correlation is not necessarily causation.”  While there can be great merit to that position, it seems to be overused by various critics.  When the guns are smoking as conspicuously as those volcanic events, which so often “correlate” with mass extinctions, it becomes increasingly hard to deny that they were at least proximately causative.[307]

The end-Permian extinction correlated rather precisely with the eruption of the Siberian Traps, which continued for a million years and spewed millions of cubic kilometers of basalt.  The end-Permian extinction was the final blow for many ancient organisms.  My beloved trilobites made their final exit from Earth during the end-Permian extinction, as did tabulate and rugose corals, spiny sharks, and the last freshwater eurypterids.  Articulate brachiopods completely vanished from the fossil record but reappeared in the Triassic via ghost ancestors; brachiopods never recovered their former abundance, however, and have lived a marginal existence ever since.  Glass sponges and bryozoans disappeared along with the reefs, while complex foraminiferans and radiolarians also vanished, and all of them staged comebacks in the Triassic via ghost ancestors.  Bivalves suffered relatively modestly (“only” about 60% of bivalve genera went extinct) and quickly recovered, fish were barely affected, and gastropods were devastated but quickly recovered.  Ammonoids went through their typical boom-and-bust pattern during the Permian extinctions, while nautiloids kept dwindling but scraped by in their deep-water exile.  In the final tally, more than 95% of all marine species went extinct.  Not only was the death toll tremendous, but the post-Permian oceans were so different from before that the Permian extinction marks the end of an era that began with the Cambrian Explosion.  The Paleozoic Era ended with the Permian extinction and the Mesozoic Era began.

On land, the devastation was similar.  Again, insects suffered their only mass extinction, and several orders of insects vanished from the fossil record after the Permian; those gigantic flying insects of Paleozoic times also vanished forever.  Permian conifer forests gave way to deciduous forests in the wake of global warming, and early gymnosperms and seed ferns were largely replaced as lycophytes made a comeback in the early Triassic.  The lycophyte radiation in the wake of the Permian extinction is typical of what are called disaster taxa, which are the first organisms to colonize disturbed environments.  Reptiles and amphibians lost nearly two-thirds of their families, which translates to more than 90% of all species.  All large herbivores and predators went extinct, along with gliding reptiles.  In total, the Permian extinctions wiped out about 90-96% of all species, more than 80% of all genera, and nearly 60% of all families.  Nothing else in the history of complex life comes close, which puts the Permian extinction in a category all its own.

Although the overwhelming devastation of the Permian extinction seemed to play no favorites, as if whatever survived was the luck of the draw, recent research has demonstrated that even in such a catastrophe, certain life forms were more resilient than others, owing to biological “buffers” in their life processes.  In marine environments, the warming, anoxia, and acidification would have wiped out species vulnerable to them, and corals were and still are particularly susceptible to those changes.  Those conditions wiped out the corals in the Permian extinction, and coral reefs are the first ecosystems being devastated today, under similar conditions of warming, anoxia, and acidification.[308]  Whether it was the ability to move to safer environs or the ability to buffer chemical changes, the more resilient organisms had a better survival rate than others.

 

The Reign of Dinosaurs

World map in mid-Jurassic (c. 170 mya) (Source: Wikimedia Commons) (map with names is here)

World map in mid-Cretaceous (c. 105 mya) (Source: Wikimedia Commons) (map with names is here)

Chapter summary:

The period following the greatest extinction event ever is called the Triassic (c. 252 to 201 mya).  The Triassic was also the Mesozoic Era’s first period (the other two were the Jurassic and Cretaceous).  The Mesozoic is also known as the Golden Age of Reptiles, but most people think of it as the reign of dinosaurs.  However, dinosaurs did not yet exist when the Triassic began.

There was a “coal gap” in the early Triassic, and depending on the framework and which scientist is asked, it took Earth’s ecosystems 10 million years (when the environment recovered enough to sustain normal ecosystems), 30 million years (when terrestrial ecosystem diversity recovered), or 100 million years (when marine ecosystem diversity recovered) to recover from the Permian extinction.  On land, the forests slowly recovered, and disaster-taxa lycophytes dominated the early Triassic.  Seed ferns dominated the Southern Hemisphere, and cycads, which resemble palm trees, and ginkgo trees (which first appeared in the late Permian and of which the living fossil Ginkgo biloba is the only surviving member) also prospered.  In the Triassic’s Northern Hemisphere, on what became North America, Europe, and Siberia, conifer forests recovered and blanketed the land.

From the Permian extinction’s devastation arose a reptilian sheep called Lystrosaurus.  Fossil hunters of early Triassic sediments have been frustrated for many years, as nearly 95% of preserved early Triassic land animal remains are Lystrosaurus, because it was nearly the Permian extinction’s only land animal survivor.  There has been debate for many years about why it survived when almost nothing else did.  No single animal ever dominated Earth’s land masses as thoroughly as Lystrosaurus did during the early Triassic.  Lystrosaurus was probably a burrower (many have likened Lystrosaurus to a pig because of that burrowing), which may have provided the shelter needed to survive the Permian holocaust.  It may also have been a generalist herbivore that could eat most surviving plants.[309]  But some think that its survival, when almost every other species died, was due to luck.  Luck is a surprisingly common proposed explanation for evolutionary events and outcomes; some creatures seemed to be in the right place at the right time while others were in the wrong place at the wrong time.  The spread of Lystrosaurus was also aided by two other facts: the land masses formed one continent, so Lystrosaurus could simply walk to dominance of Earth; and few predators capable of eating a Lystrosaurus survived.  One swamp denizen ate Lystrosaurus (being semi-aquatic may have also helped species survive the Permian extinction), as did another carnivore, but not much else did.  Lystrosaurus was a therapsid, as were the dominant land animals before the Permian extinction.

The Golden Age of Lystrosaurus lasted only about a million years before it was displaced by much larger herbivorous reptiles, and diapsids, particularly archosaurs, began displacing therapsids early in the Triassic.  A cynodont descendant, Thrinaxodon, burrowed and was possibly a direct ancestor of mammals.[310]  If it was not our direct ancestor, it was a close cousin of it.  Proto-mammals were displaced and largely driven underground during the Triassic, and many of them resembled rats and other rodents.  About 225 mya, roughly halfway through the Triassic, early mammals first appeared, although there is plenty of fierce controversy over exactly which animal could be called a mammal.[311]  But reptiles starred in the Mesozoic’s tale, dinosaurs in particular.  Mammals were small, marginal creatures, and until the late Mesozoic, they only emerged from their burrows at night to feed.

In Triassic seas, ammonoids recovered from the brink of extinction at the Permian’s end to live in their golden age, while still periodically booming and busting.  It took ten million years after the Permian’s end for reefs to begin to recover, and when they did, they were formed by stony corals, which evolved from their tabulate and rugose ghost ancestors.  Stony corals also build today’s reefs.  Bivalves dominated biomes in which brachiopods once flourished, and they have yet to relinquish that dominance.  Before the Permian extinction, about two-thirds of marine animals were immobile.  That number dropped to half during the Triassic, ecosystems became far more diverse, and a marine “arms race” began in the late Triassic.  Predators invented new shell-cracking and shell-piercing strategies, and prey had to adapt or go extinct.  The few surviving brachiopods and crinoids were driven to ecosystem margins, and the Jurassic and Cretaceous would see the appearance of shell-cracking crabs and lobsters.

The Tethys Ocean grew during the Triassic, and by the Jurassic there were no more island barriers on the Tethys’s east end.  The Paleo-Tethys was finally squeezed out of existence by islands that became part of Eurasia.  The shallow margins of the Tethys became the greatest oil source in Earth’s history.  The Proto-Tethys and Paleo-Tethys oceans also formed oil deposits, but about 70% of the world’s oil deposits initially formed during the Mesozoic’s anoxic events, primarily along the Tethys’s margins.  In the Middle East, the Caspian Sea region, western Russia, North Africa, the Gulf of Mexico, and Venezuela, virtually all of the oil deposits were laid down by organisms that died and were preserved along Tethyan shores.  In the early Triassic, along the west end of what became North America, the subduction of oceanic plates under continental plates initiated a series of volcanic and mountain-building events that continue to this day.  The foundations of the Sierra Nevada mountain range were formed then.  I have spent my fair share of time hiking through them.

Low-oxygen Mesozoic oceans saw the rise of unusual biomes.  In methane seeps on the Mesozoic’s global ocean floor, bivalves and brachiopods formed symbiotic relationships with chemosynthetic organisms that digested methane.[312]  All over the world, scientists have been amazed to find rock layers almost entirely composed of the shells of those innovative, low-oxygen-surviving shelled animals.[313]

As with cliché images of Carboniferous rainforests that depict giant dragonflies, the cliché dinosaur image has volcanoes in the background (1, 2).  The Mesozoic began and ended with tremendous volcanic eruptions, and major eruptions dotted the era.  Those eruptions vented vast amounts of carbon dioxide into the atmosphere and were responsible for the high carbon dioxide levels that dominated the Mesozoic, according to GEOCARBSULF and its subsequent corrections, and that made it such a hot era.  Hot seas also do not hold as much oxygen as cold seas do, which contributed to the anoxic events that continually visited Mesozoic oceans, particularly the Tethys.  Hot, low-oxygen air is hostile to animal life, and during the Triassic, many reptiles beat the heat by migrating back to the oceans from which their ancestors hailed.[314]  Those seagoing reptiles soon dominated Earth’s oceans, in complex life’s greatest migration from land to sea.  Ichthyosaurs, which looked like reptilian dolphins, first appeared about 245 mya and survived for about 150 million years.  The ancestors of plesiosaurs also appeared when ichthyosaurs did.  By 215 mya, some ichthyosaurs became gigantic; one species reached more than 20 meters in length and had Earth’s largest eyes ever, about the size of dinner plates.[315]  Ichthyosaurs hunted the ancestors of squids (which could grow fairly large and were Earth’s other big-eyed animals), but they feasted on a wide variety of prey as the apex predators of late-Triassic oceans.  Also, a shellfish-eating cousin of plesiosaurs lived in the Triassic.  Aquatic reptiles overcame Carrier’s Constraint, and many aquatic reptiles of the Mesozoic seem to have become warm-blooded and also gave live birth.

So far, this essay has dealt lightly with regional differences and largely confined the discussion to polar, temperate, and tropical conditions in the seas, and to rainforest versus drier conditions on land.  While Pangaea existed, barriers to species diffusion on land were relatively modest, hence Lystrosaurus’s dominance.  But Pangaea began to break up at the Triassic’s end, and continental differences in plants and animals often became significant in later times.  Although the formation of Pangaea had profound impacts, because land life was relatively young, the differences and resultant changes due to the removal of oceanic barriers were less spectacular than they would be in the distant future, such as when South America connected to North America.

For an example of how geography impacted early animal evolution, therapsids are thought to have evolved in non-tropical Permian climates.  That non-tropical beginning influenced therapsid evolution and particularly strategies for regulating body temperature.  Therapsids were rather stocky and had short limbs and tails, which is a cold-weather adaptation seen in mammals today.  There is plenty of speculation and research on the issue of therapsid thermoregulation because mammals are the therapsid line’s last survivors.  Diapsids, on the other hand, evolved in warmer climates, were relatively gracile, and had particularly long tails.[316]  That long tail was critical for the appearance of bipedal reptiles, as it shifted their center of gravity over their hips. 

Until my lifetime, scientists thought of dinosaurs as slow and stupid, but that view has changed.  In the 1970s, scientists realized that prior depictions of bipedal dinosaurs such as Tyrannosaurus rex erroneously depicted them with upright postures.  Their actual posture had the tail, spine, and head all on a line largely parallel with the ground.[317]  Not until the release of Jurassic Park did the public begin to see more realistic portrayals of bipedal dinosaur posture.  That posture may have been critical for the success of dinosaurs, as becoming bipedal, with their legs in an upright position under their bodies, allowed them to overcome Carrier’s Constraint.  Also, the notion of overcoming Carrier’s Constraint transformed the view of dinosaurs from lumbering, slow creatures to nimble runners.  The dinosaur line is considered monophyletic, and the first dinosaurs were bipeds.  All quadrupedal dinosaurs re-evolved their four-legged stances from the original bipedal posture, which is obvious in that nearly all quadrupedal dinosaurs had rear legs longer than their front ones.[318] 

The view of dinosaurian intelligence has also changed radically in the past generation, as evidence has been discovered that some dinosaurs were significantly encephalized (particularly the line that led to birds), along with evidence for parenting and herd behaviors, and pack hunting.[319]  Dinosaurs had the first hands, even with opposable thumbs.[320]  Recent work on encephalization suggests that animals were well on their way toward human-level encephalization hundreds of millions of years ago, and might have attained it far earlier, perhaps by 70 mya, had the Permian extinction not intervened.  The world might be populated with sentient, civilized, and even space-faring reptiles today if events had played out slightly differently, such as that asteroid missing Earth 66 mya (or technologically advanced dinosaurs preventing its impact).

The direct ancestors of dinosaurs, archosauromorphs, first appeared in the late Permian, and some beleaguered specimens survived into the Triassic as ghost ancestors.  Until recently, the first true dinosaur was widely considered to be Eoraptor, which appeared about 231 mya.  Eoraptor looks like a miniature Tyrannosaurus rex and is in fact in the terrestrial dinosaur line, called theropods, that culminated with the Lizard King.  A study published in 2013, however, made the case that Nyasasaurus, dated to 243 mya, is either the first dinosaur yet discovered or a close cousin of it.[321]  Birds are also probably part of the theropod clade, as the only survivors of that line and the only surviving dinosaurs.  Eoraptor was about a meter long and weighed ten kilograms.  The time from the first diapsids to the first dinosaurs spanned nearly 100 million years, and there was nothing spectacular about diapsids for most of that span, as their early years were dominated by amphibians, then synapsids, and then therapsids.  Why dinosaurs rose to prominence has been a source of controversy and debate, but the contending answers are energy-based.

The relationship between Carrier’s Constraint and the first dinosaurs’ bipedal posture is currently an issue of great interest, as it may explain why dinosaurs prevailed over therapsids.  According to GEOCARBSULF and COPSE, the early Triassic was a period of low oxygen following the Permian crash, down to 15% or so from the early Permian’s 25-35%.  Peter Ward’s hypothesis is that dinosaur ancestors evolved their bipedal posture and overcame Carrier’s Constraint in the Triassic’s low-oxygen environment.[322]  With running no longer interfering with breathing, quick dinosaurs displaced lethargic therapsids in the Triassic.  Even quadrupedal dinosaurs had postures with their legs directly under them, which overcame Carrier’s Constraint.  The standard hypothesis is that speed and stamina allowed dinosaurs to prevail (and their ability to breed in large numbers and quickly grow was a great advantage over mammals[323]), but they also first appeared and increased their spread after another mass extinction event about 230 mya, which may have resulted from volcanism and/or mountain-building in Alaska and along the west coast of Canada, with their attendant climatic effects.[324]  Today, a few competing hypotheses explain the rise of dinosaurs: their superior respiration and speed, their ability to rapidly breed and grow, or their opportunism when the mass extinction at 230 mya eliminated therapsid herbivores and left the biomes open for herbivorous dinosaurs called sauropodomorphs to appear and dominate by the Triassic’s end.[325]  Their probable descendants, the sauropods, were Earth’s largest land animals ever.  The question of why dinosaurs became so large is a central issue today and may well be related to another hot topic: the development of endothermy in dinosaurs.
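
A hedged way to grasp what a 15% oxygen atmosphere means is to convert it into an equivalent altitude.  Assuming today’s total atmospheric pressure, today’s 21% oxygen fraction as the baseline, and a standard atmospheric scale height of about 8 kilometers (all modeling assumptions added here, not figures from this essay), the barometric formula gives:

$$h \approx H \ln\!\left(\frac{21}{15}\right) \approx 8\ \text{km} \times 0.34 \approx 2.7\ \text{km}$$

That is, a sea-level animal breathing early-Triassic air would have taken in roughly the oxygen that a modern animal gets at about 2,700 meters of elevation, which suggests why adaptations to oxygen-poor air would have been so strongly favored.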

Birds are warm-blooded and today’s reptiles are cold-blooded.  Thermoregulation is a vast, complex issue, and warm-bloodedness or cold-bloodedness appears to be a result of evolutionary cost-benefit outcomes.  The first vertebrates that left Earth’s waters often basked, the first dominant reptiles had energy-regulating sails, and therapsids may have at least dabbled in chemical means of internal temperature regulation, although the evidence is thin.[326]  But the evidence for dinosaurian internal temperature regulation is strong, and the surviving therapsid line, the mammals, also developed internal temperature regulation.

The Triassic began hot and ended hot, and the Jurassic and Cretaceous were also hot, so staying warm was not a significant issue for dinosaurs.  Marine reptiles stayed cool by becoming aquatic, and for land-based dinosaurs, features such as Stegosaurus plates apparently replaced the sails of synapsids for both heating and cooling; like the synapsid sail, those Stegosaurus plates may have also been used for display.[327]  Also, true to the cliché, many large herbivorous dinosaurs lived near cooling swamps, although the issue has been controversial.  Cooling swamps and protective water holes like those we see in the tropics today were a major aspect of Mesozoic landscapes.  But the thermoregulatory aspect that most work is directed toward today is how dinosaurs kept warm.  There is compelling evidence that dinosaurs regulated their body temperature in myriad ways, including internal chemistry.  All bipedal animals today are endotherms, and they all have four-chambered hearts, as dinosaurs did.  Feathers, dinosaurs living near the poles (1, 2), and oxygen-isotope studies of dinosaur bones all support the idea that dinosaurs engaged in internal temperature regulation, but one of the more intriguing areas of evidence is dinosaur growth.  Like tree rings, bones have seasonal growth rings, which have been read for many dinosaur fossils and used to determine dinosaurian life expectancies.  Tyrannosaurus rex could live to be about 30, giant sauropods could live to be 50, and smaller dinosaurs, as with smaller mammals, lived shorter lives.  The tiny ones lived only three-to-four years, and the mid-sized ones lived seven-to-fifteen years.[328]  Growth rates also provide thermoregulation evidence.  Tyrannosaurs had juvenile growth spurts and largely stopped growing as adults, and sauropods had growth rates equivalent to those of today’s whales, which are Earth’s fastest-growing animals.[329]  But there is also evidence of ectothermic dynamics.  The great size of dinosaurs would have made staying warm relatively easy, as large animals have a greater mass-to-surface-area ratio, much as complex cells overcame the energy generation issue.  Also, in the generally hot Mesozoic times, staying warm would have been fairly easy, particularly for huge dinosaurs.
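
The mass-to-surface-area point is simple geometry, and a worked version may help.  For a body of linear dimension $L$, mass (at constant density) grows with volume as $L^3$, while heat escapes through surface area, which grows as $L^2$:

$$\frac{\text{mass}}{\text{surface area}} \propto \frac{L^{3}}{L^{2}} = L$$

So every doubling of linear size roughly halves heat loss relative to heat-generating bulk, which is why sheer size alone (an effect sometimes called gigantothermy) could help keep the largest dinosaurs warm in an already hot climate.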

As scientists know with mammals, although optimal performance can be attained with endothermy, it comes with a great energetic cost.  As with plants, an animal can spend its energy budget on consumption (metabolism) or investment (growth).  An intriguing hypothesis is that growing large was part of an energy strategy, as the benefits of size (reduced risk of predation, ease of conserving body heat and consequently less need for a high metabolism, ability to access new food sources, such as foliage high above the ground) outweighed their costs (energy devoted to growth instead of metabolism, the need to constantly feed).  Their size and the warm climate meant that large dinosaurs did not need as intense internal energy generation as mammals do, for instance, and dinosaurs may have been mesotherms, with internal energy regulation greater than ectotherms, but not as great as endotherms (mammals and birds).[330] 

In light of GEOCARBSULF’s depiction of low Mesozoic oxygen levels, Peter Ward addressed a controversial issue regarding how dinosaurs breathed.[331]  Birds have an air sac breathing system with an inflexible septate lung, which is highly superior to the mammalian alveolar bellows lung.  At 1,600 meters of elevation, today’s birds are about twice as efficient at extracting atmospheric oxygen as mammals are.  Flying is the most aerobically demanding activity on Earth, and a bird’s air sac breathing system is a primary reason why birds can fly; flying over the Himalayas is an energetic feat far beyond what any mammal can accomplish.  The high-performance respiration that birds possess, along with their efficient mitochondria, is also why they live far longer than similarly sized mammals.  When a mammal breathes, it inhales oxygenated air and exhales carbon dioxide, but it is not a very efficient system, as fresh and depleted air mix in the lungs.  The air sac system, on the other hand, passes fresh oxygenated air along the lungs with each breath.  One might say that birds constantly inhale.  Animations of the air sac system can help us understand it.  Since birds evolved from dinosaurs, and indeed are dinosaurs, just when this innovation developed is of great interest to paleobiologists.  If the early Mesozoic was the low-oxygen time that GEOCARBSULF depicts, then the air sac system would have been a logical adaptation to oxygen-poor air.

The issue of avian and dinosaurian air sacs and when they evolved has been the focus of a rancorous dispute that was only recently resolved, and it hinged on the hollow parts of bones, a phenomenon called skeletal pneumaticity.  The controversy involved dinosaur bone pneumaticity and how it may have been related to birds.  A landmark paper published in 2005 showed that birds have their most important air sacs where nobody thought they were: near a bird’s tail, not its head.  Not only that, pneumatic bones are all related to the air sac system, and birds have the same pneumatic bones that saurischian dinosaurs did.[332]  The obvious implication is that the air sac system evolved in theropods and sauropods, when dinosaurs first appeared.  If the air sac system appeared with the first dinosaurs, it is one more big reason why dinosaurs prevailed over the less respiratorily gifted therapsids.  Such a highly effective respiration system evolving in a low-oxygen environment is a tantalizing hypothesis.

Ornithischians, a great clade of herbivorous dinosaurs, appeared soon after theropods did, but they were initially marginal dinosaurs and did not begin to become abundant until the late Jurassic.  If all dinosaurs have the same common ancestor, then ornithischians, with their different hips, diverged quickly.  So far, there is no good evidence that ornithischians breathed with the air sac system, yet they became the dominant herbivores in the relatively high-oxygen Cretaceous.[333]  The ornithischian advantage was a superior eating system: they were the only dinosaurs that chewed their food.[334]  Chewing squeezes more calories from plant matter and may be why ornithischians surpassed sauropods in the Cretaceous.  Sauropods did not chew their food but had rock-filled gizzards, as birds and reptiles do today.  Sauropods began becoming gigantic in the late Triassic.  Only rare ornithischians without chewing teeth had gizzards.  Sauropods also had the smallest proportional brains of any dinosaur.[335]  Theropods were the most encephalized dinosaurs, an early example of predators having larger brains in order to outsmart their prey; the most encephalized among them were dromaeosaurs, some of which were featured as clever killers in Jurassic Park.  Ornithopods were second only to theropods in encephalization and were among the most successful Cretaceous herbivores.  A fascinating aspect of some ornithopods was their seeming ability to communicate by bugling with a horn in their head’s crest.[336]  That kind of evidence strongly supports the idea of herd behavior in herbivorous dinosaurs.  There is also evidence of a dinosaur stampede, which has been keenly contested (1, 2) in recent years.[337]

Below are examples of the only three kinds of dinosaurs known.  (Source: Wikimedia Commons)

Long before birds learned to fly, non-dinosaurian reptiles did, and the first pterosaurs flew about 220 mya.  They also had an air sac respiration system.  Although they obviously flew, just how they flew has been controversial.  They were probably warm-blooded, and by the late Cretaceous, pterosaurs became Earth’s largest flying animals ever, with ten-meter wingspans.  Pterosaurs may have been the dinosaurs’ closest relatives.[338] 

The mass extinction at 230 mya coincided with a volcanic event and the initial building of mountains in what became Central Asia.  Ammonoids, bivalves, and other marine denizens were hit hard, and on land it was nearly the final exit for therapsids (cynodonts and dicynodonts); rhynchosaurs, which would have been the chief diapsid competitors of early sauropods, suddenly went extinct, possibly by losing their food source.  Extinction specialist Michael Benton has argued that the mass extinction at 230 mya was in some ways greater than the end-Triassic extinction, which is considered one of the Big Five extinctions.[339]  The rise of dinosaurs to dominance coincided with the mid-Triassic mass extinction, and mammals first appeared a few million years later.  Although a mass extinction’s “clearing of the slate” may well have given dinosaurs their opportunity, they also left many contemporaries far behind.  Mammals would be rat-like, largely nocturnal fringe dwellers for 160 million years after they first appeared, while dinosaurs ruled Earth.  Stony corals also first appeared after the mid-Triassic extinction, and turtles first appeared about 220 mya.

Although the Triassic was a period of great evolutionary novelty (such as a reptile that was mostly neck), and is even called an “explosion” in some corners, as air sac lungs, dinosaurs, mammals, modern corals, and flying and marine reptiles all appeared then, it was not nearly the boom that followed when mammals rose after the Cretaceous extinction.  GEOCARBSULF shows that oxygen levels were low during the Triassic, rebounding a little from the Permian extinction and then collapsing to perhaps their lowest level of the entire eon of complex life.  Peter Ward proposed that the low oxygen levels during the Triassic and Jurassic kept dinosaurs from “exploding” as mammals did after the Cretaceous extinction.[340]  GEOCARBSULF’s crash of oxygen levels coincides with the end-Triassic extinction at about 201 mya.  The cause of the end-Triassic mass extinction, as with all other extinction events, is debated today, and climate change and volcanic eruptions are among the primary suspects (the volcanic eruptions spewed “only” hundreds of thousands of cubic kilometers of lava, compared to the Permian’s millions), along with rising and falling sea levels.  GEOCARBSULF’s carbon dioxide values show a spike, which would have caused global warming, as happened during the Permian extinction, and could have triggered methane hydrate vaporization and hydrogen sulfide events.  A recent study makes the similarity between the end-Permian and end-Triassic extinction events explicit, with ominous parallels to current events.[341]  Carbon dioxide vented by volcanic events also made the near-shore oceans acidic.  Extensive anoxic events visited the oceans in the late Triassic, particularly along the Tethys’s periphery, and Triassic anoxia formed southern Iraq’s oldest oil deposits.

The breakup of Pangaea at the Triassic’s end not only initiated volcanic events right in the heart of Pangaea, but the weather systems would have been altered.  In general, the Triassic was a dry period on Pangaea (with some mid-Triassic extinctions possibly related to its becoming wetter on land), and the Jurassic was wetter and had the ubiquitous Mesozoic jungles depicted by Hollywood.

The end-Triassic extinction once again nearly drove ammonoids to extinction and perhaps only one genus survived.  The reefs that began to recover in the late Triassic were again eradicated and did not reappear until more than 10 million years later.  Bivalves, brachiopods, and gastropods lost about half of their genera.  The marine reptile placodonts, which specialized in eating mollusks, went extinct, and plesiosaurs and ichthyosaurs were the marine apex predators to begin the Jurassic.  On land, it was nearly the end for therapsids; afterward, until their final extinction in the early Cretaceous, they were marginal fringe dwellers.  All large terrestrial non-dinosaur archosaurs went extinct and left dinosaurs unchallenged for terrestrial dominance during the Jurassic.

Similar to how reptiles found refuge in the oceans, the crocodile’s ancestors, originally terrestrial archosaurs, found their cooling niche in swampy margins, where they remain today, even though their cousins (1, 2) went extinct in the end-Triassic event.  Crocodiles have four-chambered hearts like dinosaurs, which suggests that they may have been endotherms/mesotherms that re-evolved ectothermy to better adapt to swamp life.[342]  Only one superfamily of primitive amphibians long survived the end-Triassic event, and its last surviving member lasted into the Cretaceous in survival enclaves.  It was a giant, at five meters long and 500 kilograms.  Primitive amphibians could not abide the reign of crocodiles, and since the end-Triassic event, amphibians have been almost exclusively modern varieties.  The first salamanders appeared in the late Jurassic, and frogs may have first appeared 100 million years earlier, in the late Permian.  Probably spurred by an arms race with dinosaurs, crocodiles became huge, and a Cretaceous species reached twelve meters and eight metric tons; ambushing drinking sauropods and holding their heads under until they drowned was a likely specialty.

Although great mass death resulted from the end-Triassic extinction, dinosaurs emerged virtually unscathed.  Why?  It may have been due to their superior air sac breathing system, which could cope with the hot times and record-low oxygen levels of the end-Triassic.[343]  The mammalian lung is pretty good, too, but not nearly as efficient as the saurischian dinosaurs’ air sac system.  Crocodiles have a piston lung, as mammals do, so they also have relatively efficient respiration.  Mammals rode out the storm in their burrows while crocodile ancestors cooled in the swamps and marine reptiles cooled in the oceans.  Living in burrows, swamps, and other refugia is probably how mammals, crocodiles, and birds survived the end-Cretaceous extinction when non-avian dinosaurs did not.

The end-Triassic event’s final tally was more than 20% of all families, nearly half of all genera, and between 70% and 75% of all species.  Afterward, marine reptiles dominated the oceans, flying reptiles filled the air, crocodile ancestors were the freshwater environment’s apex predators, and dinosaurs reigned in terrestrial environments.

The Jurassic (c. 201 to 145 mya) and Cretaceous (c. 145 to 66 mya) periods spanned the Golden Age of Dinosaurs.  The human fascination with dinosaurs is primarily due to their great size.  They were Earth’s largest land animals ever, by far.  Huge predators hunted even larger herbivores.  Prosauropods, or plateosaurs, were largely bipedal and were the early Jurassic’s dominant herbivorous dinosaurs, but their four-legged descendants, the sauropods, supplanted them by the mid-Jurassic and became the largest land animals ever.  Some species may have weighed more than 100 metric tons, rivaling the blue whale, which is generally considered to be the largest animal that ever lived.  The blue whale achieved weight primacy, but the sauropods’ vast dimensions are still awe-inspiring.  Some were up to 60 meters in length and could reach 17 meters in height.  Some of the largest sauropods ever lived in the late Jurassic, when they were most numerous, but huge sauropods were plentiful until the Cretaceous extinction.[344]  A prominent hypothesis is that their tremendous size was a strategy for digesting lower-quality food sources; food could be digested for a longer period as it wound its way through their digestive systems.  Their size also discouraged predation and conserved heat.  But their highly efficient air sac breathing system may have been the main reason why they could get so large, particularly in the record-low-oxygen Jurassic Period, at least according to GEOCARBSULF.

Jurassic sauropods probably subsisted on ferns and the foliage of cycads and conifers, a diet that almost no vertebrates, and indeed few animals of any kind, eat today.  Sauropods had huge guts to ferment those plants.[345]  It would not have been an energy-rich diet.  There has been controversy over whether sauropods could rear up on their hind legs and over how they held their heads on their long necks, and the idea that they were primarily swamp-dwellers has undergone significant revision.  Today, scientists think that sauropods sought moist environments but probably did not spend their lives immersed in water.  They were walking grazers and browsers, and their long necks were probably used for browsing trees.[346]

Sauropods seem to have lived in herds and tended their young.  Until relatively recently, the idea of animals as agents of ecosystem change and maintenance was a marginal one.  But today, sediment burrowing is thought to have been a seminal geophysical event in the Cambrian, and those huge sauropods probably had an ecosystem impact like that of elephants in Africa today.[347]  Elephants break up woods as they feed, knocking over trees and uprooting them.  That damage transforms the biome and provides opportunities for other kinds of herbivores and their predators.  Elephants also create and enlarge water holes and are considered keystone species, which have an outsized impact on their environment.  Today, there is a “loyal opposition” to the overkill hypothesis regarding megafauna extinctions soon after humans appeared; such people minimize the impact of humans (their position has an inherent conflict of interest, as those scholars and scientists are all humans) and attribute the extinction of all elephants of the Western Hemisphere (north, south) to climate change and resulting changes in vegetation.  If the current situation with African elephants is relevant, it is likelier that those vegetation changes were a result of elephant extinction, not a cause.[348]  Elephant extinctions would have affected many other kinds of plants and animals, and could have precipitated cascade effects.  Similarly, those huge sauropods would not merely have nibbled at vegetation as relatively harmless browsers; their vast bulk would have been ideal for pushing over trees to get at their foliage, and for other devastations of trees in particular, which would have dramatically impacted biomes.  Giant dinosaurs probably had keystone-species impacts on their environments, particularly on the vegetation.  Dinosaurs were not the only huge organisms in those days.  The first sequoias appeared in the Jurassic, and they would have been immune to dinosaur browsing once they grew large enough.  Below is an artist’s conception of a typical Jurassic landscape (just as an allosaur and a stegosaur are about to cordially interact).  (Source: Wikimedia Commons)

Ornithischians started slowly and began to become common in the late Jurassic, just when the greatest biological innovation of the past 300 million years began: the appearance of flowering plants, which first bloomed about 160 mya.  Until that time, plant survival strategies centered on avoiding being eaten by animals, whether via bark, height, poisonous foliage, or other defenses.  Flowering plants adopted a different strategy, laying out a banquet for animals.  The primary benefit for plants was spending less energy to reproduce, as well as attracting animals that did not seek to eat the plants and even ended up protecting them.  The advantage for animals was an easily acquired and tasty meal.  It was the greatest direct symbiosis between plants and animals ever, other than plants providing the oxygen that animals breathe, which is inadvertent.  The two primary goals that seed plants achieve for successful reproduction are becoming fertilized via pollination and placing seeds where they can become viable offspring (and fecal fertilizer could only help).  Flowering plants, also called angiosperms, did not invent animal assistance from whole cloth.  Some Jurassic insects have been found in association with gymnosperm (conifer) cones, probably doing the work that the wind previously performed.[349]  Like the enzyme example of a key rattling around in a room, attracting animals to plants, to eat the pollen and nectar, was like a reproductive enzyme: animals carried the key to the lock to initiate reproduction.  Other animals ate the fruit and thereby spread the seeds.  That relationship did not become significant until the mid-Cretaceous.[350]  Angiosperms mature faster and produce more seeds than gymnosperms do.  By the Cretaceous’s end, angiosperms dominated the tropical biomes where ferns and cycads used to thrive, and they pushed conifers to the high latitudes, just as they have today.  That tropical dominance is probably related to the insect population, which prefers warm climates.  Angiosperms became Earth’s dominant plants after the end-Cretaceous extinction and comprise more than 90% of plant species today.

There is speculation that dinosaurs invented flowering plants in a coevolutionary dance, as low-browsing ornithischians put pressure on plants to grow and reproduce quickly, and angiosperms are far more effective at those activities than all plants that preceded them.[351]  The spread of angiosperms in the mid-Cretaceous coincided with the ornithischians’ rising dominance, and by the end-Cretaceous extinction, ornithischians were the most numerous herbivores by far.  Stegosaurs appeared in the late Jurassic and went extinct by the late Cretaceous.

In the late Jurassic, as ornithischians began to become plentiful, a theropod innovation appeared that would lead to the only dinosaurs to survive the end-Cretaceous extinction: birds.  As with synapsid sails, stegosaur plates, and a Triceratops’s horns and frill, feathers had a display function as well as a thermoregulatory one, long before they were used to fly.  Ever since scientists realized that dinosaurs were closely related to birds, they have watched for feathers, and have found more than 20 genera of dinosaurs that sported them.[352]  The famous Archaeopteryx fossil discovered in 1860-1861 began the speculation that birds evolved from dinosaurs, and it was considered one of the first confirmations of Darwin’s theory of evolution.  Today, scientists strongly doubt that Archaeopteryx flew, and it is not considered a direct ancestor of today’s birds.[353]  Feathered dinosaurs existed before Archaeopteryx’s 155 mya appearance, and they are in the clade that led to today’s birds, which first appeared about 160 mya.  Birds probably did not fly much, if at all, until the Cretaceous, and the first beaked birds appeared in the early Cretaceous.

When birds began to fly, their energy requirements skyrocketed.  Today’s bats, for instance, burn several times as many calories as similarly sized non-flying mammals and live several times longer, just as birds live far longer than similarly sized mammals.  Mammalian life expectancy follows a curve in which size, metabolism, and longevity are all closely related.  The general rule is that all mammals have about the same number of heartbeats in a lifetime.  A mouse’s heart beats about 20 times as fast as an elephant’s, and an elephant lives about 20 times as long as a mouse.[354]  Larger bodies mean slower metabolisms, or less energy burned per unit of time per cell.  Birds have the same kind of size/metabolism/life-expectancy curve, but it sits on a higher level than the mammals’ curve.  A pigeon lives for about 35 years, or 10 times as long as a similarly sized rat.[355]  On average, birds live three to four times as long as similarly sized mammals.
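
To make the heartbeat rule of thumb concrete, here is a minimal sketch in Python, using round, order-of-magnitude figures that are assumed only for illustration (roughly 600 beats per minute and a three-year life for a mouse; roughly 30 beats per minute and a 60-year life for an elephant):

```python
# Illustration of the "same lifetime heartbeats" rule of thumb.  The heart
# rates and lifespans below are rough, order-of-magnitude assumptions, not
# measured values; the point is that the two totals land in the same place.

def lifetime_beats(rate_bpm, lifespan_years):
    """Total heartbeats over a lifetime."""
    minutes_per_year = 60 * 24 * 365
    return rate_bpm * minutes_per_year * lifespan_years

mouse = lifetime_beats(rate_bpm=600, lifespan_years=3)     # ~20x the elephant's rate
elephant = lifetime_beats(rate_bpm=30, lifespan_years=60)  # ~20x the mouse's lifespan

print(f"mouse:    {mouse:.2e} beats")     # ~9.5e+08
print(f"elephant: {elephant:.2e} beats")  # ~9.5e+08, about the same total
```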

Because of the stupendous energy demands of flight, birds not only have the superior air sac system for breathing, but their mitochondria, the cell’s energy-generation centers, are far more efficient than mammalian mitochondria.  Parrots in captivity can live to be 80, an albatross in the wild has been observed reproducing at more than 60, and scientists may discover that wild albatrosses live to be 100 or more, when their tagging programs get that old.  The mitochondrial theory of aging may explain bird longevity, as the efficient mitochondria of birds produce fewer free radicals.[356]  The theory is controversial and will be for many years, but I think that an engine analogy can help.  A bird is a piece of high-performance biological technology, and when operating at peak output it puts all land-bound animals to shame.  But a bird’s metabolism is usually in its slack state, only maximized during flight.  Simply put, a bird has a great energy capacity that is rarely used to its fullest.  It is like a high-performance engine that rarely runs near its redline.  Such engines last far longer than those regularly run near redline.  High-performance technology that usually “loafs” in its slack state and is rarely taxed is expensive and long-lasting.  The increased investment in superior technology allows for high performance and long life.  High-quality technology is more economical in the long run, if the initial investment can be afforded.

Recognizably modern birds existed by the end-Cretaceous, and modern birds were the only dinosaurs to survive the end-Cretaceous extinction.  Small pterosaurs called pterodactyls first flew about 150 mya, about the time that birds appeared.  The skies were getting crowded by the late Cretaceous, although birds and pterosaurs seem to have inhabited different niches.  Modern birds survived the end-Cretaceous extinction partly because they found refugia in swampy margins, burrows, and holes in trees, such as those that woodpeckers can create.

Another energy-related activity probably appeared on a large scale during the reign of dinosaurs: territoriality.  Although territoriality can be observed in insects, fish, crustaceans, amphibians, and reptiles today, it is most common among birds and mammals.  Territoriality is primarily about preserving an animal’s energy base from competition, and it is usually a behavior oriented toward others of the same species, which would eat the same food resources and mate with the same potential partners.  Just as what scientists call consciousness seems to have appeared with the earliest animals, territorial behavior may go all the way back to the Cambrian Explosion.  But the social behaviors apparent in dinosaurs probably also meant territorial behavior, and probably on a scale never experienced before on Earth.  Even the suspected display function of synapsid sails implies territorial behavior.  All great apes are territorial, and human political units such as nations are little more than ape territoriality writ large, as peoples protect their energy and mating bases.  In light of the display behaviors common in today’s birds (with their apotheosis in the peacock, although, as usual, there are competing hypotheses), which perhaps go all the way back to synapsids, along with the discovery of dinosaurian mass nesting sites, herd behaviors, and the like, many scientists believe that dinosaurs were territorial.

In the late Jurassic, armored stegosaurs and ankylosaurs first appeared and used an ornithischian defensive strategy that ceratopsians also developed in the early Cretaceous, and which reached its peak with Triceratops in the late Cretaceous.  Today’s rhinoceros is the mammalian equivalent of Triceratops, but today’s rhinos do not have to face anything as fearsome as Tyrannosaurus rex, although the most successful predators in Earth’s history, humans, are driving rhinos to extinction.

The Tethys Ocean was fully formed in the Jurassic and the continents began to break up in earnest, which led to rising sea levels.  The shallow seas that began to reappear in the Triassic became widespread in the Jurassic as continental shelves were submerged.  The Atlantic Ocean began forming in the Jurassic, as North America, Africa, and South America split, and the world-circling Panthalassic Ocean became the Pacific Ocean about the same time, although that renaming is more a convention among geologists than a marker of any dramatic change.  Australia began to split from Antarctica during the Jurassic.  Mountain-building events along the west coast of North America continued unabated, and the Andes Mountains, which began forming in the Triassic, continued their development in the Jurassic.

The largest bony fish ever, Leedsichthys, a filter feeder that reached nearly 20 meters in length, lived in the middle Jurassic.  Scientists have long argued over how other leviathans of Jurassic oceans, such as plesiosaurs, lived, and have proposed several hypotheses to explain the function of their anatomy.

The mid-Jurassic marked the beginning of a 160-million-year period of anoxic events that produced most of Earth’s oil deposits; those events finally ended in the Oligocene.  The anoxia of post-Triassic Mesozoic oceans seems to be at least partly the result of increased runoff from land spurred by volcanic events, combined with warm, stagnant, stratified surface waters.[357]  Low atmospheric oxygen, combined with high nutrient runoff and warm waters that absorb less oxygen than cold water does, provided the conditions for those anoxic events, and atmospheric oxygen levels only increased toward modern levels in the Cretaceous.  Also, changing currents (including upwelling, which usually brings nutrients to the surface) and rising sea levels (which can make the seafloor anoxic) may have contributed to the unprecedented and never reproduced anoxia of those times.  Until the current low-oxygen events that humans are inducing, anoxic events, and hence oil formation, had not occurred much during the past 30 million years.[358]

About 183 mya, an extinction event linked to anoxic and volcanic events hit ammonoids hard, as usual.  The extinction seems to have been confined to the oceans.[359]  Along with the appearance of carbonate hardgrounds, reefs slowly recovered in the Jurassic, and by the Jurassic’s end, coral reefs lined Tethyan shores.  Marine animals that tolerated low oxygen proliferated in the Jurassic.  Ammonoids, with their superior respiratory equipment, developed large, thin-shelled varieties that housed the large gills probably required to navigate the Jurassic’s low-oxygen waters.[360]  Also, a different kind of cephalopod, the ancestor of squids, became plentiful in the Jurassic.  The first crabs appeared in the Jurassic, and they also developed a superior respiration system; they put their gills within their armor and developed a pump gill.[361]  As most seashore visitors know, crabs are quite tolerant of exposure to air, much as nautiloids suffer no ill effects when exposed to air for a short time.  Crabs proliferated with the late Jurassic’s reefs, only to collapse with the end-Jurassic reef collapse (called the Tithonian event, or end-Jurassic extinction), which was caused by a sudden drop in sea levels; that extinction again appeared to be largely restricted to marine biomes.[362]  On land, there were extinctions of sauropods, stegosaurs, and advanced ornithopods.[363]

The sea level drop quickly reversed in the early Cretaceous, and the Cretaceous (c. 145 to 66 mya) saw the most dramatic rise in global ocean levels during the eon of complex life.  At the sea level’s peak, the land’s surface area during the Cretaceous was about two-thirds of today’s (18% versus today’s 29% of surface coverage).  By the early Cretaceous, today’s continents were recognizable, and for the first time ever, marked regional differences appeared among the terrestrial animals that inhabited continental biomes.  Sauropods generally stayed in the southern continents, ornithischians came to dominate the northern continents, and theropods became quite diverse in the late Cretaceous.  The iconic theropod and most famous dinosaur, T-rex, appears to have been solely a North American resident.  Earth’s fossil record for dinosaurs is richest in North America (with China and Mongolia coming in second), so the fossil record may be biased toward northern dinosaurs.[364]  Today, there are only about 100 professional dinosaur paleontologists on Earth, which is not a very large community.  To most six-year-old boys, those scientists won the lottery, as they are paid to study dinosaurs and dig their fossils from the ground.  In T-rex’s northern range, Triceratops was the dominant herbivore, and its confrontations with T-rex may have been Earth’s greatest land battles ever, at least until humans appeared.  In T-rex’s southern range lived North America’s largest dinosaur, a gigantic sauropod.[365]

As land’s surface area shrank, the continents became wetter, as all land became relatively close to the oceans.  In the late Jurassic there was a cooling period, the coldest time of the entire Mesozoic, with even some mountainous and polar glaciation, but end-Jurassic volcanism kept carbon dioxide levels high and the climate warmed.  Warm-climate plants lived within 15 degrees of the South Pole during the Cretaceous, and forests grew to within five degrees of the poles, which has fascinated scientists as they try to envision a biome that was dark for nearly half the year.[366]  The Cretaceous was generally a hot, wet time on Earth.

India broke away from Gondwana in the early Cretaceous, and Gondwana's breakup, beginning about 150 mya, is generally considered the birth of the Indian Ocean.  By the Cretaceous’s end, India was alone and swiftly moving toward Southern Asia and a tremendous collision that formed the Himalayan Mountains and Tibetan Plateau.  The Andes were uplifted during the Cretaceous, and mountain-building events (1, 2) continued in western North America.  In the late Cretaceous, the Rocky Mountains began their rise and the volcanic hotspot that created the volcanic mountain chain currently represented by the Hawaiian Islands first appeared.  Also in the late Cretaceous, the Tethys Ocean connected with the Pacific and created a world-circling tropical current, which helped warm Earth and moderate its weather systems, and contributed to anoxic events.  North America’s Great Plains were under a shallow sea in the Cretaceous.

Calcareous plankton appeared in the Mesozoic and required oxygen to form calcium carbonate.  They became so abundant in the high oxygen of the late Cretaceous that the rain of their bodies on ocean floors gave the Cretaceous its name, from creta, the Latin word for chalk.[367]  Calcium carbonate, the primary constituent of limestone, comes in two forms: calcite and aragonite.  The magnesium content of the oceans, as well as the ocean temperature, determines which form of calcium carbonate will dominate.  The Permian extinction also marked the end of a 100-million-year ice age and gave way to about 200 million years of hot times.  During the eon of complex life, Earth has vacillated between icehouse and greenhouse conditions.  That pattern also seems related to supercontinent dynamics.  Hot seas are generally calcite seas and cold seas are usually aragonite seas.  Calcite seas create carbonate hardgrounds, which influence the biome that forms.  The Ordovician and Silurian periods had vast carbonate hardgrounds, which disappeared during the Karoo Ice Age and returned in the Greenhouse Earth age of dinosaurs, becoming common in the Jurassic.  Today’s Icehouse Earth has aragonite seas, so organisms that form calcium carbonate shells use aragonite, which is less stable than calcite, and its formation is sensitive to temperature and acidity.  Coral reefs, key phytoplankton (which help produce Earth’s oxygen), and shellfish use aragonite today to form their shells.  There is already strong evidence that acidification of the oceans, due to humanity’s burning of fossil hydrocarbon deposits to power the industrial age, is interfering with the ability of coral, carbonate-forming phytoplankton, and shellfish to form their shells.  That is only one of the industrial age’s many deleterious ecosystem impacts.  The aragonite-formation problem is not a theoretical construct of fearful environmentalists, but a measurable impact today.

According to GEOCARBSULF, oxygen levels rose in the Cretaceous and reached nearly modern levels by the end.  But anoxic events also dotted the Cretaceous, probably related to rising sea levels.  The largest bivalve ever lived in the Cretaceous and reached three meters in length.  It was a deep-water species that probably formed symbiotic relationships with chemosynthetic organisms, along with those other low-oxygen Mesozoic bivalves, and it went extinct as oxygen levels rose in the atmosphere and probably also in the seas.[368] 

When sea levels rise as dramatically as they did in the Cretaceous, coral reefs are buried under the rising waters; the ideal position for both photosynthesis and oxygenation is lost, and reefs can die, much as a tree can die when its roots are buried.  About 125 mya, reefs made by rudist bivalves, which thrived on carbonate hardgrounds, began to displace reefs made by stony corals.  They may have prevailed because they could tolerate hot and saline waters better than stony corals could.  About 116 mya, an extinction event, probably caused by volcanism, temporarily halted rudist domination.  But rudists flourished until the late Cretaceous, when they went extinct, perhaps due to changing climate, although there is also evidence that the rudists did not go extinct until the end-Cretaceous event.  Carbon dioxide levels steadily fell from the early Cretaceous until today, temperatures fell during the Cretaceous, and hot-climate organisms gradually became extinct as the period progressed.  Around 93 mya, another anoxic event happened, perhaps caused by underwater volcanism, and it again seems to have largely been confined to marine biomes.  It was much more devastating than the previous one, although it was a more regional event, and rudists were hit hard.  That event seems to have nearly spelled the end of ichthyosaurs, and a family of competing plesiosaurs also went extinct.  On land, spinosaurs, some of which seem to have specialized in eating fish, also went extinct.  There had been a decline in sauropod and ornithischian diversity before that 93 mya extinction, but it subsequently rebounded.  In the oceans, biomes beyond 60 degrees latitude were barely impacted, while those closer to the equator were devastated, which suggests that oceanic cooling was involved.[369]  GEOCARBSULF shows rising oxygen and declining carbon dioxide in the late Cretaceous, which reflected a general cooling trend that began in the mid-Cretaceous.  Among the numerous hypotheses posited, late-Cretaceous climate changes have been invoked to explain a slow dinosaur decline, in the “they went out with a whimper, not a bang” scenario.  However, it seems that dinosaurs did go out with a bang.  A big one.  Ammonoids seem to have been brought to the brink by nearly all marine mass extinctions during their tenure on Earth, and it was no different with that late-Cretaceous extinction.  Ammonoids recovered once again, and their largest species ever lived in the late Cretaceous, but the end-Cretaceous extinction marked their final appearance, as they went the way of trilobites and other iconic animals.

Sauropods were high browsers that ate tree ferns, cycads, and conifers as their staples.  The dramatic radiation of ornithischians in the late Cretaceous coincided with the spread of angiosperms, and ornithischian chewing ability continually improved.  Insects also dramatically diversified, as did birds and mammals, in an epochal instance of coevolution between plants and animals.[370]  Hive insects (bees, wasps, termites, and ants) began their rise when flowering plants did.

Shell-cracking lobsters first appeared in the early Cretaceous.  By the late Cretaceous, mosasaurs became the dominant marine predators.  Ichthyosaurs went extinct after 150 million years of existence, and plesiosaurs declined.  Those apex predators preyed on squids as large as today’s, and sharks and ray-finned fish always seemed to do well.  Some substantial sharks appeared in the mid-Cretaceous and even preyed on mosasaurs and plesiosaurs.  The largest sea turtles yet recorded lived in the late Cretaceous, at four meters long and two metric tons.

In the 19th century, the Jurassic was called the Golden Age of Dinosaurs, but that moniker is arguably most applicable to the late Cretaceous, and it was a golden age clear up until a bolide impact brought it all to an end.[371]  One of the uglier disputes in paleontology’s history was a race in the late 19th century between two Americans bent on outcompeting each other in finding and describing dinosaur fossils.[372]  However, the dinosaur extinction is probably the largest and most contentious controversy in the history of paleontology.  Again, the subject of mass extinctions was taboo, due to Lyell’s and Darwin’s prevailing uniformitarianism, until my lifetime.  The bolide hypothesis, first proposed in 1980, was itself a kind of bolide event inflicted on paleontology.  It ignited acrimonious disputes that still burn, but it made studying mass extinctions respectable.  Initially attacked and dismissed, the bolide impact hypothesis is by far today’s leading explanation of the end-Cretaceous extinction.[373]  However, at the same time, India was speeding toward its Asian destiny, and its movement is associated with a huge volcanic event that created the Deccan Traps.  Also, sea levels seesawed during the Cretaceous’s end, so the bolide event has some theoretical competition as a causative agent.

It is probably safe to say that if even the end-Cretaceous extinction had multiple causes, then none of the pre-human mass extinctions can be attributed to just one cause.  However, the sudden disappearance of all non-avian dinosaurs, and the pattern of what survived, casts a heavy vote for the bolide hypothesis.  Also, there may have been multiple impacts, similar to how the Shoemaker-Levy 9 comet fragmented before it plowed into Jupiter.  Dinosaurs were all terrestrial and either were herbivores or ate herbivores.  The largest bolide impact obviously hit North America the hardest, and T-rex would have been among the first casualties.  The impact would have created an artificial “winter” lasting at least a few months, which might have followed the greatest fires in Earth’s history.  All photosynthetic organisms would have been devastated, as well as the food chains that relied on them.  That alone can explain the end of non-avian dinosaurs, but it also helps explain what survived.  Ammonoids were lightweight versions of nautiloids that lived near the ocean’s surface.  Nautiloids had retreated to deep waters hundreds of millions of years earlier; they lay eggs that take a year to hatch, and they lay them in deep water.  All ammonoids went extinct in the end-Cretaceous event, which ended a 300-million-year-plus tenure on Earth, and all marine reptiles disappeared, too.  Rudist bivalves were in decline before the extinction, probably related to the sea level changes, but it looks like they lasted until the bolide event.  They were all dependent on primary-production food chains that would have been interrupted by the “bolide winter,” for those that survived the initial conflagration, and they all went extinct.  However, a year after the disaster, when the smoke and dust were clearing, out hatched nautiloids that had been safe in their eggs the entire time, and nautiloids are still with us.[374]  Sharks would have feasted on dead beasts: both aquatic animals and carcasses washed into the oceans by tsunamis.

Most plants produce seeds, which would have largely survived the catastrophe and begun growing when conditions improved.  Ferns came back first, in what is called a fern spike, as ferns are a disaster taxon.  Crocodiles, modern birds (which included ducks at the time), mammals, and amphibians also survived; all could have found refuge in burrows, swamps, shoreline havens, tree holes, and other crevices that they were small enough to hide in, and all could have eaten the catastrophe’s detritus.  In general, freshwater species fared fairly well, especially those that could eat detritus.  Also, the low energy requirements of ectothermic crocodiles would have seen them survive when the mesothermic/endothermic dinosaurs starved.  The primary determinants of survival seem to have been whether a species could live on detritus or energy reserves, and whether it could find refuge from the initial conflagration.  While there may have been some evidence of dinosaur decline before the end-Cretaceous extinction (it was gradually growing colder), and the Deccan Traps may have caused at least some local devastation, the complete extinction of non-avian dinosaurs, ammonites, marine reptiles, and others that would have been particularly vulnerable to the bolide event’s aftermath has convinced most dinosaur specialists that the bolide impact alone was sufficient to explain the extinction, and no other hypothesis explains the pattern of extinction and survival as the bolide hypothesis does.[375]  In general, the key to surviving the end-Cretaceous extinction was being a marginal species; those on center stage paid the ultimate price.  The end-Cretaceous extinction's toll was nearly 20% of all families, half of all genera, and about 75% of all species, and it marked the end of an era: the Mesozoic ended and made way for the Age of Mammals, also called the Cenozoic, which used to have the Biblically inspired title of the Tertiary.

With the success of the end-Cretaceous bolide hypothesis, there was a movement in some circles to explain all mass extinctions with bolide events, particularly the Permian extinction.  If bolide events were responsible for all mass extinctions, then the periodic, galactic explanation might still have relevance.  Even though an end-Permian bolide event was unveiled with great fanfare and media attention in 2001, it does not appear to be a valid extinction hypothesis today, and invoking bolide impacts to explain every mass extinction seems to have been a passing fad that has seen its best days.[376]  The oxygen hypothesis for explaining extinctions, evolutionary novelty, and radiations is similarly called a current fashion in some circles, and time will tell how the hypothesis fares, although it seems to have impressive explanatory value.

 

The Age of Mammals

World map in early-Eocene (c. 50 mya) (Source: Wikimedia Commons) (map with names is here)

World map in early-Miocene (c. 20 mya) (Source: Wikimedia Commons) (map with names is here)

Chapter summary:

  • Recovery from the Cretaceous extinction

  • Development of mammals

  • Mammalian reproductive practices

  • Ecological guilds

  • Mammalian convergent evolution with dinosaurs

  • Mammals reach maximum size

  • Hindgut and foregut digestion

  • Primate development

  • Non-mammalian apex predators of Cenozoic

  • Rise of grass and C4 carbon fixation

  • Paleocene-Eocene Thermal Maximum

  • Eocene's Golden Age of Life

  • Mammals migrate to oceans and become whales

  • Mammals easily migrate between continents via Arctic region

  • India, Africa, and Arabia begin colliding with Eurasia, forming mountain ranges

  • How geological processes make oil

  • New Zealand's bird-dominated biomes evolve in isolation until humans arrive

  • 200-million-year Greenhouse Earth phase ends, and Earth begins cooling

  • Mid-Eocene extinction

  • End-Eocene extinction largely confined to Europe

  • Antarctic ice sheet begins developing

  • Original whales go extinct, and whales adapted to new biomes appear

  • Africa evolves in isolation; elephants appear

  • Monkeys appear

  • Many modern mammal families appear

  • Oligocene warms into Miocene

  • Global currents dramatically change

  • Asian invasion of North America

  • Africa collides with Eurasia, and mass cross-migration begins

  • South America and Australia evolve in isolation

  • Mid-Miocene cooling; Greenland ice sheets begin to develop

  • Cause of mid-Miocene cooling

  • Mountain-building events

  • Grasslands appear

  • Mammals adapt to eating dry-climate plants

  • Tethys Ocean finally disappears

  • Pliocene Epoch and Great American Interchange

  • Changing ocean currents initiate current ice age

  • Ice Age begins, along with Quaternary Period

As smoke cleared and dust settled, literally, from the cataclysm that ended the dinosaurs’ reign, the few surviving mammals and birds crept from their refuges, seeds and spores grew into plants, and the Cenozoic Era began, which is also called the Age of Mammals, as they have dominated this era.  The Cenozoic’s first period is the Paleogene, which ran from about 66 mya to 23 mya.  As this essay enters the era of most interest to most humans, I will slice the timeline a little finer and use the geological time scale concept of epochs.  The Paleogene’s first epoch is called the Paleocene (c. 66 to 56 mya).

Compared to the recovery from the mass extinctions that ended the Devonian, Permian, and Triassic periods, the recovery from the end-Cretaceous extinction was relatively swift.  The seafloor ecosystem was fully reestablished within two million years.[377]  But the story on land was spectacularly different.  By the Paleocene’s end, ten million years after the end-Cretaceous event, all mammalian orders had appeared in what I will call the “Mammalian Explosion.”  While the fossil record for Paleocene mammals is relatively thin, the Mammalian Explosion is one of the most spectacular evolutionary radiations on record.[378]  Because of its younger age, the Cenozoic Era’s fossil record is generally more complete than those of previous eras.

So far in this essay, mammals have received scant attention, but their development before the Cenozoic is important for understanding their rise to dominance.  The therapsids that led to mammals, called cynodonts, first appeared in the late Permian, about 260 mya, and they had key mammalian characteristics.  Their jaws and teeth were markedly different from those of other reptiles; their teeth were specialized for more thorough chewing, which extracts more energy from food, and that was likely a key aspect of ornithischian success more than 100 million years later.  Cynodonts also developed a secondary palate so that they could chew and breathe at the same time, which was more energy efficient.  Cynodonts eventually ceased the reptilian practice of continually growing and shedding teeth, and their specialized and precisely fitted teeth rarely changed.[379]  Mammals replace their teeth a maximum of once.  Along with tooth changes, jawbones changed roles.  Fewer and stronger bones anchored the jaw, which allowed for stronger jaw musculature and led to the mammalian masseter muscle (clench your teeth and you can feel your masseter muscle).  Bones previously anchoring the jaw were no longer needed and became the bones of the mammalian middle ear.[380]  The jaw’s rearrangement led to the most auspicious proto-mammalian development: it allowed the braincase to expand.  Mammals had relatively large brains from the very beginning, which was probably initially related to developing a keen sense of smell.  Mammals are the only animals with a neocortex, which eventually led to human intelligence.  As dinosaurian dominance drove mammals to the margins, where they lived underground and emerged to feed at night, mammals needed improved senses to survive, and their auditory and olfactory senses heightened, as did the mammalian sense of touch.  Increased processing of stimuli required a larger brain, and brains have high energy requirements.  In humans, only livers use more energy than brains.[381]  Cynodonts also had turbinal bones, which suggest that they were warm-blooded.  Soon after the Permian extinction, a cynodont appeared that may have had a diaphragm; it was another respiratory innovation that served it well in those low-oxygen times, functioning as pump gills did in aquatic environments.

Further along the evolutionary path, here are two animals (1, 2) that may be direct ancestors of mammals: one herbivorous and the other carnivorous/insectivorous.  They both resembled rats and probably lived in that niche as burrowing, nocturnal feeders.  Mammaliaformes included animals that were probably warm-blooded, had fur, and nursed their young, but laid eggs, like today’s platypus.  Nursing one’s offspring is the defining mammalian trait today, but there has been great controversy over just which mammaliaformes are mammals’ direct ancestors and which one can be called the first mammal.[382]  According to the most commonly accepted definition of a mammal, the first ones appeared in the mid-Triassic, about 225 mya, nearly 20 million years after dinosaurs first appeared.  The only therapsids remaining after a mass extinction at 230 mya were small (the largest was dog-sized), including the mammalian clade, and archosaurs dominated all Earthly biomes from that extinction event until the end-Cretaceous extinction.

Dinosaurs fortunately never became as small as typical Mesozoic mammals, or else mammals might have been outcompeted into extinction.  Mammals stayed small in the Mesozoic.  The largest Mesozoic mammal yet known was raccoon-sized, and its diet included baby dinosaurs.  Dinosaurs returned the favor: digging up mammals from their burrows to snack on them is known dinosaurian behavior.[383]

The issue of early mammalian thermoregulation is controversial and unsettled; even today, mammals engage in a wide array of thermoregulatory practices.  Today’s primitive mammals have lower metabolic rates than modern ones do.  Therapsids did not overcome Carrier’s Constraint as dinosaurs did; they were not high-performance animals.  However, early mammals rarely saw the Sun, and their larger brains required more energy.  Early mammals probably were endothermic, but the condition may have included regular torpor, brief “hibernation” phases, and their active body temperature may have been several degrees Celsius lower than that of today’s mammals.  Birds and mammals are often born without endothermy but develop it as they grow.[384]  Mammals solved Carrier’s Constraint when they adopted erect postures in the early Jurassic.[385]

Mammals’ reproductive practices separate them into their primary categories.  Some “primitive” mammals still lay eggs.  The first placental mammal appeared about 160 mya, the marsupial split began about 35 million years later, and the first true marsupial appeared about 65 mya.  The marsupial/placental “decision,” as with many other lines of evolution, seems to have been a cost-benefit one rooted in energy.  Marsupials have far less energy invested in their young at birth than placentals do.  Marsupials and birds readily abandon their offspring when hardship strikes.  Placentals have a great deal more invested in giving birth to offspring and are therefore less likely to “cut their losses” as birds and marsupials do.[386]  In certain environments, marsupials had the advantage over placentals.  The earliest known marsupial-line mammal appeared in China 125 mya, where marsupials and placentals coexisted on the fringes.  From there, marsupials migrated to North America and then to South America.  About the time of the end-Cretaceous holocaust, South America separated from North America, but it was still connected to Antarctica.  About 50 mya, marsupials crossed from Antarctica to Australia, perhaps by crossing a narrow sea, and placental mammals died out in Australia, probably outcompeted by marsupials.  Earth’s only egg-laying mammals today live in New Guinea, Australia, and Tasmania.  An entire order of early mammals, which were like marsupial and monotreme rodents, existed for about 120 million years, longer than any other mammalian lineage, only to go extinct in the Oligocene, probably outcompeted by rodents.  They were probably the first mammals to disperse nuts and were probably responsible for a great deal of coevolution between nut trees and animals.[387]  All living marsupials have ancestors from South America.  In North America and Eurasia, marsupials died out, probably outcompeted by placentals.  Africa was not connected to any of those landmasses during those times and thus never hosted marsupials.  In South America, marsupials and birds were apex predators (1, 2), but a diverse and unique assemblage of placental ungulates also flourished there during about 60 million years of relative isolation from all other landmasses.

As with the origins of animals, the molecular evidence shows that virtually all major orders of mammals existed before the end-Cretaceous extinction.  The Paleocene’s Mammalian Explosion appears not to have been a genetic event, but an ecological one; mammals quickly adapted to the empty niches that non-avian dinosaurs left behind.[388]  The kinds of mammals that appeared in the Paleocene and afterward illustrate the idea that body features and size are conditioned by the environment, which includes other organisms.  With the sauropods’ demise, high browsers of conifers never reappeared, but many mammals developed ornithischian eating habits and many attained similar size.  That phenomenon illustrates the ecological concept of guilds, in which assemblages of vastly different animals can inhabit similar ecological niches.  The guild concept is obvious with the many kinds of animals that formed reefs in the past; the Cambrian, Ordovician, Silurian, Devonian, Permian, Triassic, Jurassic, and Cretaceous reefs all had similarities, particularly in their shape and location, but the organisms comprising them, from reef-forming organisms to reef denizens and the apex predators patrolling them, changed radically during the eon of complex life.  If you squinted and blurred your vision, most of those reefs from different periods would appear strikingly similar, but when you focused, the variation in organisms could be astounding.  The woodpecker guild comprises animals that eat insects living under tree bark.  But in Madagascar, where no woodpeckers live, a lemur fills that niche, with a middle finger that acts as the woodpecker’s bill.  In New Guinea, a marsupial fills that role.  In the Galapagos Islands, a finch uses cactus needles to acquire those insects.  In Australia, cockatoos have filled the niche, but unlike the others, they have not developed a probing body part, nor do they use tools; they just rip off the bark with the brute force of their beaks.[389]

After the dinosaurs, empty niches filled with animals that looked remarkably like dinosaurs, if we squinted.  Most large browsing ornithischians weighed in the five-to-seven metric ton range.  By the late Paleocene, uintatheres appeared in North America and China and attained about rhinoceros size, to be supplanted in the Eocene by larger titanotheres, and in Oligocene Eurasia lived the largest land mammals of all time, including the truly dinosaur-sized Paraceratherium.  The largest yet found weighed 16 metric tons and was about five meters tall at the shoulders and eight meters in length.  Even a T-rex might have thought twice before attacking one of those.  It took about 25 million years for land mammals to reach their maximum size, and for the succeeding 40 million years, the maximum size remained fairly constant.[390]  Scientists hypothesize that mammalian growth to dinosaurian size was dependent on energy parameters, including continent size and climate, and cooler climates encouraged larger bodies. 

Huge mammals persist to this day, although the spread of humans coincided with the immediate extinction of virtually all large animals, with the exception of those in Africa and, to a lesser extent, Asia.  The five-to-seven-metric-ton browser formed a guild common to dinosaurs and mammals, and it is probably related to metabolic limits and the relatively low calorie density that browsing and foraging affords.[391]  Sometimes the similarity between dinosaurs and mammals could be eerie, such as with ankylosaurs and glyptodonts, a startling example of convergent evolution, the process by which distantly related organisms develop similar features to solve similar problems.  They were even about the same size, at least for the most common ankylosaurs, which were about the size of a car.  Ankylosaurs appeared in the early Cretaceous and succeeded all the way to the Cretaceous’s end.  Glyptodonts appeared in the Miocene and prospered for millions of years.

The Cenozoic equivalent of a bolide impact was the arrival of humans, as glyptodonts went extinct with all other large South American megafauna shortly after humans arrived.  The largest endemic South American animals to survive the Great American Interchange of three mya, when North American placentals prevailed over South American marsupials, and the arrival of humans in the Western Hemisphere beginning less than 15 kya, are the capybara and giant anteater, which are tiny compared to their ancient South American brethren.  The giant anteater belongs to the same order as the sloths, which were a particularly South American line.  The largest sloths were bigger than African bush elephants, which are Earth’s largest land animals today.  After car-sized glyptodonts went extinct, dog-sized giant armadillos became the line’s largest remaining representative.

Among herbivores, the mode of digestion was important.  Hindgut fermenters attained the largest sizes among land mammals; elephants, rhinos, and horses have that digestive process.  Cattle, camels, deer, giraffes, and many other herbivorous mammals are foregut fermenters, and many are ruminants, which have four-chambered stomachs, while the others have only three chambers.  While foregut fermenters are more energy efficient, hindgut fermenters can ingest more food.  Hindgut fermenters gain an advantage when forage is abundant but of low quality; what they lack in efficiency, they more than make up for in volume.  The drawback appears when forage is scarce or very poor, such as dead vegetation.  A cow, for instance, digests as much as 75% of the protein that it eats, while a horse digests around 25%.  Live grass contains about four times the protein of dead grass.  Cattle can subsist on the dead grass of droughts or hard winters and horses cannot, which was a tradeoff in pastoral societies.[392]
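
To see how the efficiency-versus-volume tradeoff plays out, here is a back-of-the-envelope sketch in Python.  The 75% and 25% digestion efficiencies and the four-to-one live/dead protein ratio come from the paragraph above; the daily intakes, grass protein content, and maintenance threshold are hypothetical round numbers chosen only for illustration:

```python
# Foregut (cow) vs. hindgut (horse) protein extraction, a toy comparison.
# The digestion efficiencies and the 4:1 live/dead protein ratio are from
# the text; the intakes, protein content, and maintenance threshold are
# invented for illustration.

LIVE_PROTEIN = 0.16              # assumed protein fraction of live grass
DEAD_PROTEIN = LIVE_PROTEIN / 4  # live grass has ~4x the protein of dead grass
MAINTENANCE = 0.25               # assumed protein (kg/day) needed to get by

def protein_per_day(intake_kg, protein_fraction, efficiency):
    """Protein (kg) extracted from forage per day."""
    return intake_kg * protein_fraction * efficiency

for grass, fraction in (("live grass", LIVE_PROTEIN), ("dead grass", DEAD_PROTEIN)):
    cow = protein_per_day(10, fraction, 0.75)    # foregut: efficient digestion
    horse = protein_per_day(15, fraction, 0.25)  # hindgut: more volume, less efficiency
    print(f"{grass}: cow {cow:.2f} kg/day, horse {horse:.2f} kg/day")
```

With these made-up numbers, both animals do fine on live grass, but on dead grass only the cow clears the assumed maintenance threshold, which is the pastoral tradeoff described above.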

Angiosperms began overtaking gymnosperms in the early Cenozoic, but it did not immediately happen.  In Paleocene coal beds laid down in today’s Wyoming, gymnosperms still dominated the swamps, and the undergrowth was mainly comprised of ferns and horsetails.[393]  But angiosperms were on their way to dominance, and mammals, birds, and insects began major adaptations to them.

The present consensus is that primates appeared in the late Cretaceous, between 85 mya and 65 mya, perhaps in China, but the earliest known primate fossils are from the late Paleocene, around 55 mya, and were found in Northern Africa.  The first primates were tree-dwellers that ate insects, nectar, seeds, and fruit.  Their eyes point forward (they rely on sight more than on other senses, and have pronounced binocular vision), and most have opposable digits on their hands and feet, which are ideal for canopy living.  Primates generally have larger brains than other mammals, which may have developed to process the stimuli of binocular vision as primates came to rely more on eyesight and less on the olfactory sense.  That change assisted the increase in intelligence that characterizes primates.  Lemurs diverged early in the primate line and rafted over to the newly isolated Madagascar in the early Eocene.  Lemurs were Madagascar’s only primates until humans arrived about two thousand years ago (and the largest lemurs, which were gorilla-sized, immediately went extinct).  A rodent-like sister group to primates that lived in North America and Europe went extinct in the Paleocene, as did many early mammalian lines.  In general, Paleocene mammals had relatively small brains, and many from that epoch are called “primitive,” although that does not necessarily mean functionally primitive when compared to modern mammals.  However, evolutionary “progress” is a legitimate concept.  The energy efficiency of ray-finned fish is probably responsible for their success, and the change from “primitive” to “modern” was usually related to the energy issue.  Evolutionary progress is an unfashionable concept in some scientific circles, but it is a clear trend over life’s history on Earth, and it can be quite obvious during the eon of complex life.[394]

Paleocene mammals were rarely apex predators.  Crocodilians survived the end-Cretaceous extinction and remained dominant in freshwater environments, although turtles lived in their golden age in the Paleocene Americas and might even have become apex predators for a brief time.  The largest snakes ever recorded (1, 2) lived in the Paleocene and could swallow crocodiles whole.  In addition to the birds that were among South America’s apex predators, a huge flightless bird thrived in North America and Europe and survived to the mid-Eocene, although the evidence today strongly suggests that it was herbivorous.  When the Great American Interchange began three mya, one of those flightless South American birds quickly became a successful North American predator.

People are usually surprised to hear that grass is a relatively recent plant innovation.  Grasses are angiosperms and only became common in the late Cretaceous, along with other flowering plants.  With grass, some dinosaurs learned to graze, and grazers have been plentiful Cenozoic herbivores.  According to GEOCARBSULF, carbon dioxide levels have been falling nearly continuously for the past 100-to-150 million years.  Not only has that decline progressively cooled Earth to the point where we live in an ice age today, but carbon starvation is currently considered the key reason why complex life may become extinct on Earth in several hundred million years.  In the Oligocene, between 32 mya and 25 mya, some plants developed a new form of carbon fixation during photosynthesis, known as C4 carbon fixation, which allowed plants to adapt to reduced atmospheric carbon dioxide levels.  C4 plants became ecologically prevalent about 6-7 mya, in the Miocene; grasses are today’s most common C4 plants and comprise more than 60% of all C4 species.  The rest of Earth’s photosynthesizers use C3 carbon fixation or CAM photosynthesis, a water-conserving process used in arid biomes.

In Paleocene oceans, sharks filled the empty niches left by aquatic reptiles, but it took coral reefs ten million years to begin to recover, as usual.  As Africa and India moved northward, the Tethys Ocean shrank, and in the late Paleocene and early Eocene, one of the last Tethyan anoxic events laid down Middle East oil.  The last Paleocene climate event is called the Paleocene-Eocene Thermal Maximum (“PETM”).  The PETM has been the focus of a great deal of recent research because of its parallels to today’s industrial era, when carbon dioxide and other greenhouse gases are massively vented to the atmosphere, warming the atmosphere and acidifying the oceans.  Seafloor communities suffered a mass extinction.  The PETM’s causes are uncertain, but the release of methane hydrates when the global ocean warmed sufficiently is a prominent hypothesis; scientists also look to the usual suspects of volcanism, changes in oceanic circulation, and a bolide impact.

The PETM, according to carbon isotope excursions, “only” lasted about 120-170 thousand years.  The early Eocene, which opened the Eocene Epoch (c. 56 to 34 mya) and followed the PETM, is also known as one of Earth’s Golden Ages of Life.  It has also been called a Golden Age of Mammals, but all life on Earth thrived then.  In 1912, the doomed Scott Expedition spent a day collecting Antarctic fossils and still had them a month later, when the entire team died in a blizzard.  The fossils were recovered and examined in London.  They surprisingly yielded evidence that tropical forests once existed near the South Pole.  They were Permian plants.  That was not long after Wegener first proposed his continental drift hypothesis, and generations before orthodoxy accepted Wegener’s idea.  Antarctica has rarely strayed far from the South Pole during the past 500 million years, so the fossils really did represent polar forests.  A generation before the Scott Expedition’s Antarctic fossils were discovered, scientists had been finding similar evidence of polar forests in the Arctic, within several hundred kilometers of the North Pole, on Ellesmere Island and Greenland.  Those were Cretaceous plants, much younger than the Permian plants found in Antarctica.[395]

Polar forests reappeared in the Eocene after the PETM, and the Eocene’s first ten million years were the Cenozoic’s warmest time, even warmer than the dinosaurian heyday.[396]  Not only did alligators live near the North Pole, but the continents and oceans hosted an abundance and diversity of life that Earth may not have seen before or since.  That ten-million-year period ended as Earth began cooling off and headed toward the current ice age, and it has been called the original Paradise Lost.[397]  One way that methane has been implicated in those hot times involves leaf stomata, which regulate the air that leaves take in to obtain the carbon dioxide and oxygen needed for photosynthesis and respiration.  Plants also lose water vapor through their stomata, so balancing gas intake against water loss is a key stomata function, and it is thought that in periods of high carbon dioxide concentration, plants will have fewer stomata.  Scientists can count stomata density in fossil leaves, and doing so led some scientists to conclude that carbon dioxide levels were not high enough to produce the PETM, so methane became a candidate greenhouse gas for producing the PETM and the Eocene Optimum; the controversy and research continue.[398]
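
For readers curious how a stomata count becomes a carbon dioxide estimate, here is a toy sketch of the logic: build a calibration curve from leaves grown under known carbon dioxide levels, then read a fossil leaf's stomatal index against it.  Every number below is invented purely for illustration; real calibrations are species-specific and far more carefully constructed:

```python
# Toy version of the stomatal-proxy idea: stomatal density tends to fall as
# atmospheric CO2 rises, so a calibration from leaves grown under known CO2
# can translate a fossil leaf's stomatal index into a rough CO2 estimate.
# The calibration pairs below are invented for illustration only.

import numpy as np

# (stomatal index %, CO2 ppm) -- hypothetical calibration points
calibration = np.array([(14.0, 300), (11.0, 500), (9.0, 800), (8.0, 1200)])

def co2_estimate(stomatal_index):
    """Interpolate CO2 from a fossil leaf's stomatal index (toy calibration)."""
    idx, co2 = calibration[:, 0], calibration[:, 1]
    # np.interp needs ascending x values, and the index falls as CO2 rises,
    # so reverse both arrays before interpolating.
    return np.interp(stomatal_index, idx[::-1], co2[::-1])

print(co2_estimate(10.0))  # ~650 ppm with this made-up curve
```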

However the hot times were created and sustained, Earth’s life reveled in the conditions.  Just as reptiles once beat the heat by migrating into the oceans, some mammals did the same thing about 200 million years later, and cetaceans appeared.  Scientists were surprised when molecular studies found that whales share a common ancestor with even-toed ungulates, and that the hippopotamus is the closest living relative of whales.[399]  Whales evolved in and near India, beginning about 50 mya, when the earliest “whale” surely did not resemble one and lived near water.  By 49 mya, whales could walk or swim.  A few million years later they resembled amphibians, and by 41 mya they had become fully aquatic, for a transition from land to sea that “only” took eight million years.[400]  Whales quickly became dominant marine predators.  However, sharks did not go quietly and began an arms race with whales, which culminated 28 mya in C. megalodon, the most fearsome marine predator ever: a shark reaching nearly 20 meters in length and weighing 50 metric tons.  It could have swallowed a great white shark whole, as seen below (C. megalodon in gray, great white shark in green, and next to that is a man taking a break in C. megalodon's mouth).  (Source: Wikimedia Commons)

C. megalodon preyed on whales and had the greatest bite force in Earth’s history (although some estimates of T-rex bite strength equal it).  C. megalodon went extinct less than two mya, due to the current ice age’s vagaries.

Because of early Eocene Arctic forests, animals moved freely between Asia, Europe, Greenland, and North America, which were all nearly connected around the North Pole, and great mammalian radiations occurred in the early Eocene.  Many familiar mammals first appeared by the mid-Eocene, such as modern rodents, elephants, bats, and horses.  The earliest monkeys may have appeared in Asia and migrated to India, Africa, and the Americas.  Europe was not yet connected with Asia, however, as the Turgai Strait separated them.  Modern observers might be startled to learn where many animals originated.  Camels evolved in North America and lived there for more than 40 million years, until humans arrived.  Their only surviving descendants in the Western Hemisphere are llamas.  Migrating animals sometimes walked, as on the Eocene polar routes, and sometimes involuntarily “sailed” on vegetation mats across relatively short gaps between the continents, as when lemurs reached Madagascar from Africa, marsupials reached Australia via Antarctica, and monkeys reached the Americas from Africa.  Such a migration depended on fortuitous prevailing currents and other factors, but it happened often enough.

Several of the Eocene’s geologic events had long-lasting impacts.  About 50 mya, the plates under India and Southern Asia began their epic collision and started creating the Himalayas, and Australia split from Antarctica.  The collisions of the African, Arabian, and Indian plates with the Eurasian plate created the mountain ranges that stretch from Western Europe to New Guinea.  After the Pacific Ring of Fire, it is the world’s most seismically active region.  Those colliding plates eventually squeezed the Tethys Ocean out of existence, ending more than 500 million years of Tethyan sedimentation that began with the Proto-Tethys Ocean in the Ediacaran and continued with the Paleo-Tethys Ocean in the Ordovician; the Tethys Ocean itself appeared in the late Permian, spanned the entire Mesozoic, and finally vanished less than six mya, at the Miocene’s end.[401]  Most of the world’s oil formed in the sediments of those Tethyan oceans, and very little has formed since the Oligocene.

The process of transforming anoxic sediments into oil requires millions of years.  When organic sediments are buried, most of the oxygen, nitrogen, hydrogen, and sulfur of dead organisms is released, leaving behind carbon and some hydrogen in a substance called kerogen, in a process that is like reversed photosynthesis.  Plate tectonics can subduct sediments, particularly where oceanic plates meet continental plates.  There is an “oil window” roughly between 2,000 and 5,000 meters deep; if kerogen-rich sediments are buried at those depths for long enough (millions of years), geological processes (which produce high temperature and pressure) break down complex organic molecules, and the result is the hydrocarbons that comprise petroleum.  If organic sediments never get that deep, they remain kerogen.  If they are subducted deeper than that for long enough, all carbon-carbon bonds are broken and the result is methane, which is also called natural gas.  Today, the geological processes that make oil can be reproduced in industrial settings that turn organic matter into oil in a matter of hours.  Many hydrocarbon sources touted today as replacements for conventional oil were never in the oil window, so they were not “refined” into oil and remain kerogen.  The so-called oil shales and oil sands are made of kerogen (bitumen is soluble kerogen).  It takes a great deal of energy to refine kerogen into oil, which is why kerogen is an inferior energy resource.  Nearly a century ago in East Texas oil fields, it took less than one barrel of oil energy to produce one hundred barrels, for an energy return on investment ("EROI" or "EROEI") of more than 100, in the Golden Age of Oil.  Global EROI is declining fast and will fall to about 10 by 2020.  The EROIs of those oil shales and oil sands are less than five and as low as two.
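
Because EROI figures prominently in this essay, a minimal sketch of the arithmetic may help.  EROI is energy returned divided by energy invested, so the fraction of gross output left over as net energy for society is 1 - 1/EROI.  The EROI values below are the ones cited above, with the oil shale/sands figure taken as three, inside the stated two-to-five range:

```python
# Net-energy arithmetic for the EROI values cited in the text.  EROI is
# energy returned divided by energy invested; the surplus fraction of
# gross output is therefore 1 - 1/EROI.  The value of 3 for shales/sands
# is an assumed midpoint of the two-to-five range given above.

def net_energy_fraction(eroi):
    """Fraction of gross energy output left after paying the energy cost."""
    return 1 - 1 / eroi

sources = [
    ("East Texas, c. 1930", 100),    # ~1 barrel invested per 100 produced
    ("global average, c. 2020", 10),
    ("oil shales and sands", 3),
]
for name, eroi in sources:
    print(f"{name:>24}: EROI {eroi:>3} -> {net_energy_fraction(eroi):.0%} net energy")
```

Note how little the net fraction moves between an EROI of 100 and 10 (99% versus 90% net energy), and how quickly it collapses below five; that nonlinearity is why a falling EROI bites hardest near the bottom of the range.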

During the early Eocene’s Golden Age of Life, forests blanketed virtually all lands all the way to the poles, the modern orders of most mammals appeared, today’s largest order of sharks appeared, and coral reefs again appeared beyond 50 degrees latitude.  Many of the era’s animals would appear bizarre today.  One crocodile developed hooves, and an order of hooved mammalian predators lived then, including the largest terrestrial mammalian predator/scavenger ever, which looked like a giant wolf with hooves.  The ancestors of modern carnivores began displacing those primitive predatory mammals in the Eocene, after starting out small.  A family of predatory placentals called bear dogs lived from the mid-Eocene until less than two mya.  Rhino-sized uintatheres and their bigger cousins, the brontotheres, were the Eocene’s dominant herbivores in North America and Asia.  Primates flourished in the tropical canopies of Africa, Europe, Asia, and North America.  Deserts are largely an Icehouse Earth phenomenon, and during previous Greenhouse Earths, virtually all lands were warm and moist.  Australia was not a desert in the early Eocene, but was largely covered by rainforests.  It must have been a marsupial paradise, as Antarctica and South America must also have been, but the fossil record is currently thin, as rainforests are poor fossil preservers.

In the late Cretaceous, about 75 mya, New Zealand split from Gondwana, and by the end-Cretaceous event it, Madagascar, and India were alone in the oceans.  Madagascar was close enough to Africa for lemurs to migrate to it, but the only animals that repopulated New Zealand’s lands after the end-Cretaceous holocaust were those that flew.  From the end-Cretaceous event until the Maoris arrived around 1250-1300 CE (CE stands for “Common Era,” formerly designated AD), birds were New Zealand’s dominant animals and had no rivals.  The only mammals were a few species of bat that migrated there in the Oligocene.  A recent finding of a mouse-sized mammal fossil shows that some land mammals lived in New Zealand long ago, possibly Mesozoic survivors unrelated to any living mammals, but they died out many millions of years ago.  A few small reptiles and amphibians also lived there, and even a crocodile, which died out in the Miocene, but New Zealand, unlike any other major landmass in Earth’s history, was the realm of birds.  The Maoris encountered giant birds; ecological niches filled by mammals elsewhere were filled by birds, and gigantic moas were the equivalent of mammalian browsers.  Before humans arrived, moas were preyed upon only by the largest eagle ever.  Of all ecosystems that would have appeared strange to modern eyes, New Zealand’s pre-human ecosystem has been perhaps the most beguiling to me, perhaps because it still existed less than a millennium ago.  It seemed like something that sprang from Dr. Seuss’s imagination.  The Seuss-like kiwi is one of the few surviving specialized birds of that time.  The Maoris drove all moas to extinction in less than a century and quickly destroyed about half of New Zealand’s forests via burning.

For several million years, life in the Eocene was halcyon, and at 50 mya, the Greenhouse Earth state had prevailed ever since the end-Permian extinction 250 mya.  But just as whales began invading the oceans 49 mya, Earth began cooling off.  The ultimate reason was atmospheric carbon dioxide levels that had been steadily declining for tens of millions of years.  The intense volcanism of the previous 200 million years waned and the carbon cycle inexorably sequestered carbon into Earth’s crust and mantle.  While falling carbon dioxide levels were the ultimate cause, the first proximate cause was probably the isolation of Antarctica at the South Pole and changes in global ocean currents.  During the early Eocene, the global ocean floor’s water temperature was about 13°C (55°F), warm enough to swim in, which was a far cry from today’s near-freezing and below-freezing temperatures.  The North Sea was as warm as bathwater.  Radical current changes accompanied the PETM of about 56 mya, warming the ocean floor, and perhaps that boiled off the methane hydrates.  Whatever the causes were, the oceans were warm from top to bottom, from pole to pole.  But between 50 and 45 mya, Australia made its final split from Antarctica and moved northward, India began crashing into Asia and cut off the Tethys Ocean and the global tropical circulation, and South America also moved northward, away from Antarctica.  Although the debate is still fierce over the cooling’s exact causes, the evidence (much of it from oxygen isotope analyses) is that the oceans cooled off over the next 12 million years, very consistently, although a brief small reversal transpired at about 40 mya.[402]  By 37-38 mya, the 200-million-year-plus Greenhouse Earth phase ended and the transition to today’s ice age was underway.  In the late Eocene, as the trend toward Icehouse Earth conditions began, deserts such as the Saharan, southern African, and Australian ones formed.

That cooling caused the greatest mass extinction of the entire Cenozoic Era, at least until today’s incipient Sixth Mass Extinction.  With continents now scattered across Earth’s surface, there was no event that wiped nearly everything out as the end-Permian extinction did, nor were bolide events convincingly implicated.  But mass extinctions punctuated a 12-million-year period when Earth’s global ocean and surface temperatures steadily declined.  When it was finished, there were no more polar forests, no more alligators in Greenland or palm trees in Alaska, and Antarctica was developing its ice sheets.  A few million years later, another mass extinction event in Europe marked the Eocene’s end and the Oligocene’s beginning, but the middle-Eocene extinctions were more significant.[403]  All in all, there was about a 14-million-year period of cooling and extinction, which encompassed the mid-Eocene to early Oligocene, and Icehouse Earth conditions reappeared after a more-than-200-million-year hiatus.[404]

The Oligocene Epoch (c. 34 to 23 mya) was relatively cold.  In the 1960s, a global effort was launched to drill deep-sea cores; the Glomar Challenger recovered nearly 20,000 cores from Earth’s oceans, and scientists had paradigm-shifting learning experiences from studying those cores.  One finding was that Antarctica developed its ice sheets far earlier than previously supposed, and the cores pushed back the initial ice sheet formation by 20 million years, to about 34-35 mya; the first Antarctic glaciers formed as early as 49 mya.  The evidence included dropstones in Southern Ocean sediments, which meant icebergs.[405]  The event that led to Antarctic ice sheets was the formation of the Antarctic Circumpolar Current, which began to form about 40 mya and was firmly established by 34 mya, when the Antarctic ice sheets grew in earnest.  The current’s formation was caused by Antarctica’s increasing isolation from Australia and South America, which gradually allowed an uninterrupted current to form that circled Antarctica and isolated it so that it no longer received tropical currents.  That situation eventually turned Antarctica into the big sheet of ice that it is today.  It also radically changed global oceanic currents.  Antarctic Bottom Water formed, which cooled the oceans as well as oxygenated their depths, and it comprises more than half of the water in today’s oceans.  North Atlantic Deep Water began forming around the same time.[406]

Those oceanic changes profoundly impacted Earth’s ecosystems.  Not only did most warm-climate species go extinct, at least locally, but new species appeared that were adapted to the new environment.  Early whales all died out about 35 mya and were replaced by whales adapted to the new oceanic ecosystems that are still with us today: toothed whales, which include dolphins, orcas, and porpoises; and baleen whales, which adapted to the rich plankton blooms caused by upwellings of the new circulation, particularly in the Southern Ocean.[407]  Sharks adapted to the new whales, which culminated with C. megalodon in the Oligocene.  With the land bridges and small seas between the northern continents unavailable in colder times, the easy travel between those continents that characterized the Eocene’s warm times ended and the continents began developing endemic ecosystems.  Europe became isolated from all other continents by the mid-Eocene and developed its own peculiar fauna.  At the Oligocene’s beginning, the Turgai Strait was no longer a barrier between Europe and Asia.  More cosmopolitan Asian mammals replaced provincial European mammals, although whether that resulted from competition, an extinction event, or other causes is still debated; competition is favored.  About half of European mammalian genera went extinct, replaced by immigrants from Asia, and some from North America via Asia.[408]

Africa was also isolated from other continents during those times and developed its own unique fauna.  The first proboscideans evolved in Africa about 60 mya, Africa remained their evolutionary home, and the line leading to today’s elephants lived in Africa in the mid-Oligocene.  Hyraxes are relatives of elephants; they have never strayed far from their initial home in Africa, and they were Africa’s dominant herbivores for many millions of years, beginning in the Oligocene.  Some reached horse size, and a close relative looked very much like a rhino and reached rhino size.  The rhinoceros line itself seems to have begun in North America in the early Eocene, and rhinos did not reach Africa until the Miocene. 

But the African Oligocene event of most interest to most humans was African primate evolution.  By the Eocene’s end, primates were extinct in Europe and North America, and largely gone in Asia.  Africa became the Oligocene's refuge for primates, as they lived in the remaining rainforest.  The first animals that we would call monkeys evolved in the late Eocene, and what appears to be a direct ancestor of Old World monkeys and apes appeared in Africa at the Oligocene’s beginning, about 35-33 mya.  But ancestral to that creature was one that also led to the monkeys that migrated to South America, probably via vegetation rafts (with perhaps a land bridge helping), around the same time.  Those South American monkeys are known as New World monkeys today, and they evolved in isolation for more than 30 million years.  Among those that stayed behind in Africa, the first apes appeared around the same time as those New World monkeys migrated, diverging from Old World monkeys.  Scientists today think that the splits among those three lineages happened somewhere between about 35 mya and 29 mya.  Old World and New World monkeys have not changed much in the intervening years, but apes sure have.

The size issue is dominant in evolutionary inquiries, and scientists have found that in Greenhouse Earth conditions, animal size is relatively evenly distributed, and all niches are taken.  When Icehouse Earth conditions prevail, the cooling and drying encourage some animal sizes and not others, and mid-sized animals suffer, such as those early primates.  That may be why primates went extinct outside the tropics in the late Eocene.[409]  Tropical canopies are rich in leaves, nectar, flowers, fruit, seeds, and insects, while temperate canopies are not, particularly in winter.  Large herbivores lost a great deal of diversity in late-Eocene cooling, but the survivors were gigantic, and the largest land mammal ever thundered across Eurasia in the Oligocene.  Mid-sized species were rare in that guild.[410]

The earliest bears appeared in North America in the late Eocene and early Oligocene, and raccoons first appeared in Europe in the late Oligocene.  It might be amusing to consider, but cats and dogs are close cousins, and their common ancestor lived about 50 mya.  Canines first appeared in the early Oligocene in North America about 34 mya, and felines first appeared in Eurasia in the late Oligocene about 25 mya.  Beavers appeared in North America and Europe in the late Eocene and early Oligocene, and the first deer appeared in Europe in the Oligocene.  The common ancestor of today’s sloths lived in the late Eocene; South American giant ground sloths appeared in the late Oligocene.  The kangaroo family may have begun in the Oligocene.  The horse was adapting and growing in North America in the Oligocene.  By the late Eocene, the pig and cattle suborders had appeared, and squirrels had appeared in North America.

In summary, numerous mammals appeared by the Oligocene that resemble their modern descendants.  They were all adapted to the colder, dryer Icehouse Earth conditions, the poorer-quality forage, and the food chains that depended on them.  In subsequent epochs, conditions warmed and cooled, ice sheets advanced and retreated, and deserts, grasslands, woodlands, rainforests, and tundra grew and shrank, but with a few notable exceptions, Earth’s basic flora and fauna have not significantly changed during the past 30 million years.

The Oligocene ended with a sudden global warming that continued into the Miocene Epoch (c. 23 to 5.3 mya).  The Miocene was also the first epoch of the Neogene Period (c. 23 to 2.6 mya).  Although the Miocene was nowhere near as warm as the Eocene Optimum, England had palm trees again, Antarctic ice sheets melted, and oceans rose.  The Miocene is also called the Golden Age of Mammals.  Scientists still wrestle with why Earth’s temperature increased in the late Oligocene, but there is no doubt that it did.  As the study of ice ages has demonstrated, many dynamics impact Earth’s climate, and positive and negative feedbacks can produce dramatic changes.  For that several-million-year warm period, carbon dioxide levels do not appear to have been elevated.  That data has been seized on by Global Warming skeptics as evidence that carbon dioxide levels have nothing to do with Earth’s temperature, but climate scientists not funded by the Hydrocarbon Lobby rarely think that way.  Carbon dioxide is only one greenhouse gas, and water vapor is a more important one.  But as clouds demonstrate, water is notoriously ephemeral, constantly evaporating and precipitating, and some land can get a lot (rainforests), and some can get very little (deserts).  Icehouse Earth temperatures are more variable than Greenhouse Earth temperatures, particularly during the transitions between states, and an Icehouse Earth atmosphere contains less water vapor than a Greenhouse Earth atmosphere.

In recent years, Neogene temperatures have been the focus of intensive research.[411]  What appears to be the proximate cause of elevated temperatures was a dramatic change in global ocean currents.  The final closing of the Tethys Ocean, the isolation of Antarctica, the creation of that vast arc of Eurasian mountains, and the opening and closing of land bridges, such as in the Bering Sea and ultimately the land bridge between North and South America, created dramatic changes in ocean currents and global climate.  One result was fluctuating Antarctic Bottom Water.  Its production declined beginning about 24 mya, and its weakness lasted until about 14 mya.  Consequently, Earth’s oceans were not stratified as they are today, and warm water extended far lower into the oceans than it does today.  That weakened circulation also reduced the temperature gradient between the equator and the poles, which drives global currents: the greater the differential, the more vigorous the currents.  It was still an Icehouse Earth, but the “mid-Miocene climatic optimum” was relatively warm.[412]  The past three million years are the coldest that Earth has seen since the Karoo Ice Age that ended 260 mya, but this Icehouse Earth phase began developing in the mid-Eocene.  While the steadily declining carbon dioxide levels of the past 150-100 million years are the ultimate cause of this Icehouse Earth phase, relatively short-term and regional fluctuations have had their proximate causes rooted in other geophysical, geochemical, and celestial dynamics.

Whatever the causes were, the early Miocene was warm, and as with Eocene migrations around the North Pole, migrating in the Arctic became easy again, and North America was invaded by Eurasian animals migrating across Beringia.  The prominent Menoceras descended from Asian migrants, and the strange-looking Moropus, which had claws on its forefeet like a sloth’s, was also an Asian immigrant.[413]  Pronghorns also migrated from Asia, and the first true cat in North America arrived.  Those North American days saw the last of a pig-like omnivore that was rhino-sized.  A giraffe-like camel lived then, and the first true equines appeared in the early Miocene and migrated to Asia from North America.  The general Oligocene cooling gave rise to tough, gritty plants, and deer, antelope, elephants, rodents, horses, camels, rhinos, and others developed hypsodont teeth, which had greatly expanded enamel surfaces for grinding those plants.[414]  Carnivores also migrated from Asia, such as an early bear, an early weasel, and bear dogs.  North America’s rodents and rabbits, which have a common ancestor from what became Eurasia, continued to diversify.  Later in the Miocene’s warm period, the trickle of Asian immigrants became a flood, including a giant bear dog that weighed up to 600 kilograms (1,300 pounds), and two large groups of immigrant rhinos, Teleoceras and several genera of aceratherine rhinos, displaced endemic ones.  In a late-Pliocene count of North American mammalian genera, a third were not native to North America.[415]  But North American fauna was unscathed compared to other continents.  Below is an artist's conception of Miocene North America.  (Source: public domain from Wikipedia)

The invasion of North America from Asia (with a little migration from North America to Asia), while important, was not as dramatic as what happened in Africa a few million years later.  About 24 mya, Africa and the attached Arabian Peninsula began colliding with Eurasia.  The once-vast Tethys Ocean had finally been reduced to a strait between the continents, and one of Earth’s most dramatic mammalian migrations began.  By about 18 mya, proboscidean gomphotheres had migrated from Africa, and they reached North America by 16.5 mya.  An elephant ancestor left Africa but stayed in Asia.  As with the North American interchange with Asia, however, the greater change came the other way.  Rodents, deer, cattle, antelope, pigs, rhinos, giraffes, dogs, hyenas, and cats came over, along with small insectivores and shrews.  Most of the iconic large fauna of today’s African plains originated from elsewhere, particularly Asia.[416]  Asian animals invaded and dominated Europe and Africa, and became abundant in North America.  In general, Asia had more diverse biomes and was the largest continent, so it developed the most competitive animals.  That principle, which Darwin remarked on, became very evident when the British invaded Australia in the 18th century: imports such as rabbits and foxes quickly prevailed, and endemic species were driven to extinction.  The most important Miocene development for humans was African primate development, but that is a subject for a later chapter.

What seems to explain invader and endemic success with those migrations is what kind of continent the invaders came from, what kind of continent they invaded, and the invasion route.  Asia contains large arctic and tropical biomes, unlike any other continent.  North America barely reaches the tropics, and only a finger of South America reaches high latitudes, well short of what would be called arctic latitudes in North America.  Africa’s biomes were all tropical and near-tropical.  The route to Europe from Asia in the late Oligocene was straight across at the same latitude, so the biomes were similar.  About the same is true of the route to Africa from Asia.  Asian immigrants were not migrating to climates much different from what they left.  But the route to North America was via Beringia, which was an Arctic route.  Primates and other tropical animals could not migrate from Asia to North America via Beringia, and even fauna from temperate climates were not going to make that journey, not in Icehouse Earth conditions.  Oligocene North America was geographically protected in ways that Oligocene Europe and Africa were not; it had already had substantial exchanges with Asia, and it was a big continent with diverse biomes in its own right.  It was not nearly as isolated as Africa, South America, and Australia were.

South America’s animals continued to evolve in isolation, and some huge ones appeared.  In the Miocene, the largest flying bird ever known flew in South American skies; it looked like a giant condor, had a seven-meter wingspan, and weighed 70 kilograms.  The largest turtle ever lived in South America in the late Miocene and early Pliocene.  Glyptodonts first appeared, as well as a rhino-sized sloth, and some large browsers and grazers inhabited the large herbivore guild and looked like guild members on other continents, for another instance of convergent evolution.  In Australia, the Miocene fossil record is thin, but recent findings demonstrate that all Miocene mammals were marsupials, except for bats.  Kangaroos diversified into different niches; some were rat-sized and others became carnivorous.  Giant wombats foraged in the Miocene, and marsupial lions first appeared in the Oligocene and kept growing over the epochs; when humans arrived about 50 kya, they were lion-sized.  Giant flightless birds also roamed Australia, as they still did in South America, although just how carnivorous some may have been is debated.

In the oceans, the Miocene warm period meant expanding reefs, and tropical conditions again visited high latitudes, but not to the early Eocene’s extent.  Corals, mollusks, echinoids, and bryozoans all expanded and diversified in the warm period.[417]  Also, the first appearance of the closest thing to marine forests was in the Miocene, when kelp developed about 20 mya.  Kelp forest denizens such as seals and the ancestors of sea otters also appeared in the Miocene.  Seals are closely related to bears and to otters, which are from the family that includes weasels.  Whales radiated in the warm Miocene oceans, and C. megalodon was not far behind.  The first rorquals appeared in the Miocene, and they specialized in eating polar krill.  They were the last whales hunted nearly to extinction by humans, after all other species had been decimated.  Rorquals were fast swimmers, and hunting them was not feasible until whaling became industrialized.

For 10 million years, Earth’s ecosystems readily adapted to the warmer temperatures, but Greenland began to grow its ice sheet about 18 mya, and by 14 mya the party was over and a steady cooling trend began that lasted all the way to the beginning of the current ice age, as the Antarctic ice sheets grew like never before.  Once again, tropical flora and fauna in high latitudes either migrated toward the equator or went extinct.  Reefs cannot migrate, so those outside the shrinking tropics died out.

The cause of the cooling at 14 mya is the subject of a number of hypotheses, one of which is that mountain-building in that great arc created by colliding continents exposed rock that then absorbed carbon dioxide from the atmosphere via silicate weathering.  Around the time of the cooling, the Arabian Peninsula finally crashed into Asia and closed off the Tethys Ocean, which by then was more like the Tethys Strait there.  The last remnants of the Tethys consisted of an inland sea that includes today’s Caspian, Black, and Aral seas, and the Mediterranean Sea and Persian Gulf. 

Eurasian mountain building was not the only such Miocene event.  The Cascade Range, which I have spent my life happily hiking in, began erupting in the Miocene and rose in the Pliocene, and so is one of Earth’s younger and more rugged ranges.  The Sierra Nevada of California also formed in the Miocene, and the Andes grew into a formidable climatic barrier.  The Rocky Mountains also had renewed uplifting in the Miocene, and the Southern Alps of New Zealand were formed.  In the mid-Miocene, the northward movement of Australia toward Asia initiated the plate collision that created the Indonesian archipelago, which blocked tropical flow between the Indian and Pacific Oceans.[418]  Grinding tectonic plates have created the Pacific Ring of Fire, which is Earth’s most seismically active region, and contributed to many Cenozoic mountain-building and volcanic events, but it is only a pale imitation of Mesozoic volcanism.  The radioactivity that drives plate tectonics has steadily declined over the eons, and in about one billion years the plates will cease to move and Earth will become geologically dead, as Mars is today.  Life on Earth will then quickly end, if it has not already expired.  Complex life will likely be long gone by then.[419]

As the cooling event began 14 mya, drying came again, the tropics shrank, rainforests gave way to woodlands, woodlands gave way to grasslands, grasslands gave way to steppes, coniferous forests grew, angiosperm forests shrank, and deserts and tundra grew.  In the Miocene, another major new biome appeared: grasslands.  Grasses originated in the Cretaceous and dinosaurs ate them, but it was not until the mid-Miocene's cooling at 14 mya that grasslands first appeared as a biome’s foundation.  Those grasslands were the first savannas, and North America’s Miocene grasslands would have resembled Africa’s today.  As they are today, North America’s grasslands were on the Great Plains.  Instead of elephants there were mastodonts, instead of hippos there were hippo-like rhinos, in place of giraffes were long-necked camels (some of which indeed reached giraffe size and became even more massive), pronghorns played the antelope role, and horses played the zebra role.  The predators would have looked a little different, and hyena-like dogs, bears, and bear dogs brought down the big game.[420]

Those grasslands, with their attendant grazers and browsers, and their predators, appeared in the pampas of Argentina, the plains of the Ukraine, China, and Pakistan, and, of course, Africa.  Africa’s savanna fauna would have looked very familiar, with elephants, antelope (including impalas, gazelles, etc.), hippos, cats, hyenas, short-necked giraffes, horses, the first modern rhinos, and the like.  In Eurasia and Africa, with the land barriers removed, all the savanna biomes resembled each other.  In the late Miocene, C4 plants began to proliferate, especially in those grasslands.  Those grasslands grew when the ice age began.

Many plant families incorporate silica into their structures.  Diatoms also incorporate silica, and those are among the few life forms that use silicon, although it is one of Earth’s most plentiful crustal elements.  Diatoms seem to gain energy advantages by using silica, and plants seem to have structural advantages, but it is thought that plants also used silica as a defensive measure, as it helps make plants unpalatable.  Eating plants full of silica structures, called phytoliths, is like chewing sand.  This is particularly true with grasses, as phytoliths make chewing them a tooth-wrecking process, especially for ruminants and their thorough chewing.  Grazing herbivores have heavily enameled hypsodont teeth (also called high-crowned teeth) to deal with the silica and generally tough grassland vegetation.  In North America, hypsodont herbivores proliferated while those without that heavy enamel (with what are also called low-crowned teeth), which were browsers instead of grazers, declined.  By about nine mya, North American browsers had largely vanished and grazers dominated the new grasslands.[421]  Earth kept cooling and drying, and less than seven mya, steppe vegetation began replacing savanna-like grasslands, and forests were decimated.  This led to the greatest mass extinction in pre-human North America in the Cenozoic Era, as many species of horses, mastodonts, bears, dogs, and small predators went extinct, as well as mice, beavers, and moles.[422]  Asia and Africa were hit similarly, although not quite as hard as North America seemed to be, but South America and Australia hardly seemed affected at all.[423]  New Zealand’s surrounding seafloor changed from warm-water communities to the Southern Ocean communities that it has today.[424]

The Tethys Ocean finally evaporated, literally, at the Miocene’s end, and it was a spectacular exit.  As part of the collision of Africa and Europe, Morocco and Spain smashed together and separated the Atlantic Ocean from the Mediterranean Sea.  Then the entire Mediterranean dried out, as there was not enough regional precipitation to replenish the evaporation.  Then the crashing Atlantic waves eroded through the rock, and the Atlantic again filled the Mediterranean Sea in floods that may have been Earth history's most spectacular.  The grinding continents then made another rock dam, the Atlantic was cut off again, and the Mediterranean once again dried up.  That pattern happened more than 40 times between about 5.8 and 5.2 mya.  Each drying episode, after the rock dam again separated the Atlantic from the Mediterranean, took about a thousand years and left about 70 meters of salt on the floor of the then Mediterranean Desert.  The repeated episodes created 2,000-to-3,000-meter-thick sediments of gypsum, which forms from evaporating seawater in trapped basins such as the Mediterranean then was.[425]  Creating so much gypsum partially desalinated Earth’s oceans (a 6% lowering), raised their freezing point, and may have contributed to the growth of Antarctica’s ice sheets.[426]  Also, those drying episodes initiated great droughts in Africa and may well have spurred the evolutionary events that led to humans.
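
As a back-of-the-envelope consistency check, the rounded figures just cited fit together, as this minimal sketch shows (the numbers come from the text above, not from measurements of my own):

```python
episodes = 40            # drying episodes between ~5.8 and 5.2 mya
salt_per_episode_m = 70  # meters of evaporites left per episode
span_years = 600_000     # ~5.8 mya to ~5.2 mya

total_deposit_m = episodes * salt_per_episode_m
years_per_cycle = span_years / episodes

# ~2,800 m, consistent with the 2,000-to-3,000-meter-thick gypsum sediments:
print(f"Cumulative deposit: ~{total_deposit_m:,} m")
# ~15,000 years per fill-and-dry cycle, of which ~1,000 years was the drying:
print(f"Average cycle length: ~{years_per_cycle:,.0f} years")
```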

The Pliocene Epoch (c. 5.3 to 2.6 mya) began warmer than today’s climate, but was the prelude to today’s ice age, as temperatures steadily declined.  An epoch of less than three million years reflects human interest in the recent past.  Geologically and climatically, there was little noteworthy about the Pliocene (although the Grand Canyon was created then), although two related events made for one of the most interesting evolutionary events yet studied.  South America kept moving northward, and the currents that once circled Earth at the equator in the Tethyan heyday were finally closed off.  The gap between North America and South America began to close about 3.5 mya, and by 2.7 mya the current land bridge had developed.  Around three mya, the Great American Biotic Interchange began, when fauna from each continent could raft or swim to the other side.  South America had been isolated for 60 million years and had only received the stray migrant, such as rodents and New World monkeys.  North America, however, received repeated invasions from Asia and had exchanges with Europe and Greenland.  North America also had much more diverse biomes than South America, even though it had nothing like the Amazon rainforest.  The ending of South America’s isolation provided the closest thing to a controlled experiment that paleobiologists would ever have.  South America's fauna was devastated, far worse than European and African fauna were when Asia finally connected with them.  More than 80% of all South American mammalian families and genera existing before the Oligocene were extinct by the Pleistocene.[427]  Proboscideans continued their spectacular success after leaving Africa, and Stegomastodon species inhabited the warm, moist Amazonian biome, as well as the Andean mountainous terrain and pampas.  The Cuvieronius also invaded and thrived as a mixed feeder, grazing or browsing as conditions permitted.  In came cats, dogs, camels (which became the llama), horses, pigs, rabbits, raccoons, squirrels, deer, bears, tapirs, and others.  They displaced virtually all species inhabiting the same niches on the South American side.  All large South American predators were driven to extinction, as well as almost all browsers and grazers of the grasslands.  The South American animals that migrated northward and survived in North America were almost always those that inhabited niches that no North American animal did, such as monkeys, ground sloths (which survived because of their claws), glyptodonts and their small armadillo cousins (which survived because of their armor), capybaras, and porcupines (which survived because of their quills).  The opossum was nearly eradicated by North American competition, but it survived and is the only marsupial that made it to North America and exists today.  One large-hoofed herbivore survived: the Toxodon.  The largest rodent ever (it weighed one metric ton!) survived for a million years after the interchange.  Titanis, that large predatory bird from South America, also migrated to North America and lasted about a million years before dying out.[428]  In general, North American mammals were more energy-efficient and brainier, which resulted from evolutionary pressures that South America, in its isolation, had less of.  They were able to outrun and outthink their South American competitors.  Some South American animals established themselves in North America, but none of them drove any northern indigenous species of note to extinction.

The scientific consensus today is that climate change or inhospitable biomes had nothing to do with North American mammals prevailing over South American mammals, many of which were marsupials.  But the event that made the exchange possible, the closing of the gap between the Americas that began about 3.5 mya, seems to have triggered the current ice age (and may have triggered interchange events, but would not have greatly influenced their outcome).  The closure of the gap between North and South America led to today’s thermohaline circulation and created the Gulf Stream.  Although the Gulf Stream brings warm water to the North Atlantic and makes western Europe far warmer than it would otherwise be, the pre-ice-age Caribbean had low-salinity waters that drifted north, and because of that low salinity, the surface water did not sink but continued into the Arctic Ocean, warming it.  Once Pacific access was cut off, the Gulf Stream formed, and its saltier (hence denser) water sank as it cooled in the North Atlantic, reaching the ocean floor before it got to Greenland, as is the case today.  This cessation of warm tropical waters to the Arctic seems to have triggered the growth of Arctic ice, particularly on Greenland, which has the world’s second largest ice sheet after Antarctica.[429]  The change in currents killed off about 65% of mollusk species along the Atlantic coast of North America, and Florida’s reefs largely died out.  Caribbean reefs survived, and much of the eastern North Atlantic’s warm-water sea life migrated south into the tropics and the Mediterranean.  Japanese mollusks also survived the new currents.  The western North Atlantic cooled off, which led not only to Greenland’s ice sheet but also to the North American ice sheets, which have been the largest of the current ice age; their volumes even exceeded Antarctica’s.
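
The mechanism hinges on the fact that colder and saltier water is denser.  A minimal sketch with a linearized equation of state for seawater (the coefficients are typical textbook values of my choosing, not figures from this essay) illustrates why the salty Gulf Stream sinks as it cools in the North Atlantic, while the fresher pre-ice-age surface water stayed afloat all the way to the Arctic:

```python
RHO0 = 1027.0    # kg/m^3, reference seawater density
ALPHA = 2.0e-4   # 1/K, thermal expansion coefficient
BETA = 7.6e-4    # 1/psu, haline contraction coefficient
T0, S0 = 10.0, 35.0  # reference temperature (C) and salinity (psu)

def density(temp_c: float, salinity_psu: float) -> float:
    """Linearized equation of state: density rises as water
    cools and as it gets saltier."""
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0))

# Salty Gulf Stream water cooling in the North Atlantic:
print(density(5.0, 35.5))  # ~1028.4 kg/m^3: dense enough to sink
# Fresher surface water at the same temperature:
print(density(5.0, 33.0))  # ~1026.5 kg/m^3: stays at the surface
```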

At 2.6 mya, today's ice age began.  It ended the Neogene Period and initiated the Quaternary Period, which we still live in.  The term “Quaternary” is one of the last vestiges of Biblical influences on early geology and refers to the time after Noah’s flood.  The Quaternary’s first epoch is the Pleistocene, which ended 12 kya, at the beginning of this ice age’s most recent interglacial period.  The past 12 thousand years are called the Holocene Epoch. 

The current ice age has come in phases, and about a million years ago a steady rhythm of advancing and retreating ice sheets began and has recurred about every 100 thousand years, which is certainly related to Milankovitch cycles.  During this ice age, the land fauna was already adapted to Icehouse Earth conditions, and during 17 or more ice sheet advances and retreats over the past two million years, there were not any large-scale extinctions, except for the most recent one.  Below is an artist's conception of Pleistocene Spain.  (Source: public domain from Wikipedia)

In general, the large-sized fauna guilds that have dominated the past 40 million years were well represented on all continents.  Proboscideans thrived on all habitable continents and in all biomes that they could migrate to.  In North America, mammals whose size would astound (and terrify) modern observers included the short-faced bear (about the largest mammalian land carnivore ever), a bison with horns two meters wide, the largest cat ever, giant mammoths, the largest wolf ever, and the largest beaver ever.  They only seem large because of today’s stunted remnant populations.  With the exception of the bison, they all lived for millions of years, through numerous ice age events, all to go extinct just after humans arrived, along with many other species, such as the American cheetah.  The other continents had similar giants.  Australia had a kangaroo about the size of a gorilla and the largest lizard ever.  Southeast Asia had the largest primate ever, which dwarfed today’s gorillas.  With only Africa and parts of Eurasia as partial exceptions, virtually all large fauna went extinct, worldwide, soon after human arrival, and how humans came to be is the subject of a coming chapter. 

 

Mid-Essay Reflection

This chapter falls at about this essay's midpoint, and humanity's role in this story has yet to be told.  As I conceived this essay, studied for it, wrote it, edited it, and had numerous allies help out, an issue repeatedly arose regarding the half of this essay just completed, and it can be summarized with: "What was the point?"  Not everybody asked it and some understood, but others wondered openly and sometimes subtly what the purpose of this essay's first half was (and some asked if the essay had any point at all and considered my effort a waste of time).  This chapter is my reply, and I think it is important to understand.

My teachers from the first grade onward remarked on my fascination with nature.  Science always came easily to me.  A bizarre set of circumstances saw me trade my science studies for business studies in college, and that voice in my head led me to attempt to fulfill my teenage dreams of changing the energy industry.  I left the pure science path for applied science in the real world, and that experience radicalized me.  In 2002, when I finished my website largely as it stands today, I longed to one day resume my math and science studies.  Soon afterward, one of R. Buckminster Fuller's pupils remarked that my work was like Fuller's, and reading his work helped crystallize the paradigm that I had been groping toward.  When that paradigmatic view became clearer, I began the studies that resulted in this essay, and my efforts since 2007 were specifically directed toward writing it.

Could this essay's first half be considered an indulgence of my childhood fascination with nature?  That argument could have merit, but I have always been a "big picture" kind of thinker, even as a teenager.  I am writing this essay primarily to help manifest FE technology in the public sphere and help remedy the deficiencies in all previous attempts that I was part of, witnessed, heard of, or read about.  The biggest problem, by far, was that those trying to bring FE technology to the public had virtually no support from the very public that they sought to help.  My journey's most important lesson was that personal integrity is the world's scarcest commodity, and an egocentric humanity living in scarcity and fear is almost effortlessly manipulated by the social managers.  John Q. Public is only interested in FE technology to the extent that he can immediately profit from it.  Otherwise, he goes back to watching his favorite TV show.  It took many years of disillusionment for that to finally become clear to me.  While this essay and all of my writings are provided for free to humanity and anybody can read them, I intend to only reach a very tiny fraction of humanity with my writings, but that tiny fraction will be sufficient for my plan to succeed.  The readers that I seek have a formidable task ahead of them, but nothing less is required for my approach to have any hope of bearing fruit.  This essay and my other writings are intended as a course in comprehensive (also called "big picture") thinking.  Studying the details deeply enough to avoid misleading superficial understandings is also a key goal.  I am an accountant by profession, but one of the world's leading paleobiologists surprisingly read an early draft of this essay and informed me that it was one of the best efforts that he ever saw on the journey of life on Earth.  There was nobody on Earth whose opinion I would have respected more than his, so I do not think that I am asking readers of this essay's first half to humor me.  Every sentient being on Earth should know the rudiments of what this essay's first half covers.

Perhaps the most damaging deficiency in FE efforts, after self-serving orientation, was that the participants and their supporters were scientifically illiterate and easily led astray by the latest spectacle.  Scientific literacy can help prevent most such distractions.  While writing this essay, I was not only bombarded with news of the latest FE and alternative energy aspirants' antics, but I had to continually field queries from my allies regarding whether Peak Oil and Global Warming were conspiratorial elite hoaxes (or figments of the hyperactive imaginations of environmentalists and other activists), for two examples that readily come to mind.  Digesting this essay's material should answer those questions as mere side effects.  Far from being a hoax or imaginary, Peak Oil was reached in the USA in 1970 and globally in 2005-2006; it is all downhill from there, and conventional oil will be almost entirely depleted in my lifetime.  Shale oil and tar sands are not solutions at all, although both were heavily promoted in the USA in 2014.  In every paleoclimate study that I have seen, so-called greenhouse gases have always been considered the primary determinant of Earth's surface temperature (after the Sun), and carbon dioxide is chief among them.  The radiation-trapping properties of carbon dioxide are not controversial in the slightest among scientists, and after the Sun's influence (which is exceedingly stable), declining carbon dioxide levels are considered to be the ultimate cause of the Icehouse Earth conditions that have dominated Earth for the past 35 million years.  Humanity's increasing of the atmosphere's carbon dioxide content is influencing the ultimate cause of Icehouse Earth, while oceanic currents, continental configurations, and Earth's orientation to the Sun are merely proximate causes.  Increasing carbon dioxide can turn the global climate from an Icehouse Earth to a Greenhouse Earth, and the last time that happened, Earth had its greatest mass extinction event.  But scientists with conflicts of interest have purposefully confused the issues, and a scientifically illiterate public and compliant media have played along, partly because believing the disinformation seems to relieve us all of any responsibility for our actions.  Although scientific literacy can help people become immune to the disinformation and confusion arising from many corners, and reading this essay's first half can help people develop their own defense from such distractions, my goals for this essay's first half are far greater than that.
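
The radiation-trapping effect of carbon dioxide mentioned above is often summarized with a standard simplified expression from the climate literature (not from this essay): the radiative forcing of carbon dioxide is roughly 5.35 × ln(C/C0) watts per square meter, relative to a baseline concentration C0.  A minimal sketch:

```python
import math

def co2_forcing(c_ppm: float, c0_ppm: float = 280.0) -> float:
    """Simplified radiative forcing from CO2, in W/m^2,
    relative to a pre-industrial baseline of c0_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(co2_forcing(400.0))  # ~1.9 W/m^2 at roughly 2015 levels
print(co2_forcing(560.0))  # ~3.7 W/m^2 for a doubling of CO2
```

The logarithm is why each doubling of concentration adds about the same forcing, and why even seemingly small increases from a low baseline matter.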

This essay presents a table of key energy events in the history of Earth and its ecosystems, and nearly half of the events happened during the timeframe covered by this essay's first half, which includes almost the entirety of Earth's history.  Humanity's tenure amounts to a tiny sliver of Earth's history, and surveying pre-human events was partly intended to help readers develop a sense of perspective.  We are merely Earth's latest tenants.  We have unprecedented dominance, but we are quickly destroying Earth's ability to host complex life.  As my astronaut colleague openly wondered, is that the act of a sentient species?  Is our path of destruction inevitable, as we plunder one energy resource after another to exhaustion?  Will depleting Earth's hydrocarbons be the latest, greatest, and perhaps final instance?

Few people on Earth today have much understanding of the relationship between energy and economic activity.  Most people think that money runs the world, when it is only an accounting fiction.  Money by itself is meaningless, and financial measures of economic activity can be highly misleading.  I noted long ago that scientists had little respect for economists and their theories.  History's greatest energy baron and richest man funded the leading economic institution that obscured the role of energy while exalting money.  What a coincidence.  Understanding this essay's first half will help with comprehending the last half, and the connections between energy, ecosystems, and economics should become clear.

Paleobiologists are fascinated with the history of life on Earth, and I share their sense of wonder.  If I can impart the slightest sense of that to my readers, this essay's first half will be successful for that alone.  However, just as a math curriculum builds on itself, as each class forms the foundation of the next one, this essay's first half is intended to help readers develop a foundational understanding.  With that foundation built, the information in this essay's last half can make a profound impact and help readers achieve personal paradigm shifts.  That is essentially this essay's purpose.  Studying this essay's first half is far from a waste of time for those whom I seek, but is vitally important. 

 

The Path to Humanity

Chapter summary:

From their Cretaceous origins through their radiations and extinctions in the Eocene, primates continued evolving.  About 35 mya, Old World and New World monkeys, called higher primates (also called simians or anthropoids), split.  Simians seem to have split from a group also ancestral to prosimians.  Today’s prosimians include lemurs, lorises, tarsiers, and bush babies.  During the Oligocene, Africa and Southeast Asia became primate refugia.  Tarsiers have lived in Southeast Asia continually for about 45 million years, and the only survivors of their evolutionary line live on islands near Southeast Asia.  Primate history in the late Eocene and Oligocene is controversial today.  The fate of an extinct group from primates’ wide geographical range in the early Eocene is debated, but its members seem to have been at least cousins of the ancestors of non-tarsier prosimians, if not ancestral to them.

This chapter and the next will survey the disputes of evolutionary lineages and geography that continue all the way to Homo sapiens.  The debates and drama have two primary sources: the first is that humans are descendants of those lines, and the second is that there has been a desire to demonstrate that humanity radically differs from its ancestors, possessed of unique traits, not only in degree, but in kind.  The debates seem to get fiercer the closer the primate line gets to modern humans.[430]

Early primate migrations and extinctions led to a disjointed geographical distribution, as they could only live in tropical canopies.  When tropical forests shrank in the cooling conditions that led to the current ice age, primates such as tarsiers found themselves in isolated refugia.  In the late Eocene and late Miocene, when tropical canopies disappeared, the primate lines inhabiting them went extinct unless they used an escape route to a surviving tropical forest.  

Although simians may have first appeared in Eocene Asia, when the late-Eocene cooling began, Africa became the primary primate refuge.  Around the early Oligocene, a splinter group migrated to South America from Africa and evolved in isolation for the next 30 million years.  Just as dinosaurs marginalized early mammals, simians marginalized prosimians, beginning in the Oligocene.  Today’s prosimians either live where simians do not, or where they coexist with simians, they are nocturnal.  Prosimians have simple social organization; most nocturnal prosimians lead solitary existences.  Lemurs living in daylight have societies of up to 20 members.  Monkeys have far more complex social organization than prosimians, and baboon societies number up to 250 individuals, although societies of about 50 are typical.  Capuchins are considered the most intelligent New World monkeys, and their societies have between 10 and 40 members.  Studies of simian societies have shown them engaging in crude versions of human politics, which have even been called Machiavellian, a label that has caused some to leap to Machiavelli’s defense.

From their origins around 40-45 mya, monkeys continued evolving in Africa’s Oligocene forests, and between 35 and 29 mya, according to molecular clock studies, some African monkeys began evolving into apes, and Proconsul, a controversial transitional fruit-eating monkey, appeared about 25 mya.[431]  Mary Leakey’s most famous find was a Proconsul skull in 1948.[432]  The primary differences between apes and monkeys are that apes are larger, lost their tails (not having as much need for balancing on tree limbs), and have stiffer spines and larger brains.[433]  Apes began the descent from canopy to ground.  Simians will eat fruit if they can, but some early monkeys/apes developed thicker tooth enamel.  That change meant that they no longer subsisted on soft fruit and leaves, but were eating coarser vegetation, which was a consequence of living in a cooler, dryer world.[434]  No Miocene apes were as adapted to leaf eating as today’s apes and leaf-eating monkeys are.  As with the first tetrapods to leave water, a prominent speculation today is that those monkeys/apes changed their diets and left the trees as they lost the competitive game with other canopy-dwellers.[435]  Gibbons split from the line that became great apes about 22 mya and became masters of tree-living, with their swinging mode of locomotion.[436]

By 20-17 mya, apes became common in East Africa; some became large, up to 90 kilograms, and some resembled gorillas.[437]  Nearly all apes eventually abandoned tropical canopies, and although monkeys were scarce in the Miocene, they stayed in the canopies and dominate them today.  The number of monkey species has increased, and the number of ape species has decreased, rather steadily over the past 20 million years.[438]  With that late-Oligocene warming that continued into the Miocene, tropical forests began expanding again.  When Africa and Arabia finally crashed into Eurasia and began that great invasion from Asia, apes escaped Africa beginning about 16.5 mya.  They had thickly enameled teeth suited to the non-fruit foods available outside rainforests.[439]  Their migrations resulted in new homes that spanned Eurasia, from Europe to Siberia to China to Southeast Asia.[440]  It was a spectacular adaptive radiation that has tallied more than 20 discovered ape species so far, and it has been called the Golden Age of Apes.[441]  That is how gibbons and orangutans arrived in Asia.  About 14 mya in Africa, the ancestors of today’s great apes may have appeared, and about 12.5 mya the likely ancestors of orangutans appeared in India.  By that time, tropical forests were shrinking once again, and orangutans continued down their evolutionary path, isolated from their African cousins.  One possible ancestor lived in Southeast Asia about 9-7 mya.  A descendant from the orangutan line became the largest primate ever, at three meters tall and more than 500 kilograms.  Below is a comparison of that primate to humans.  (Source: Wikimedia Commons)

It lived for nine million years, only to go extinct about when humans arrived, and might have something to do with Yeti legends.  Today’s orangutans are confined to two Indonesian islands, Borneo and Sumatra, and are particularly endangered on Sumatra.  All apes besides humans are endangered today due to human activities.

In the mid-Miocene cooling’s early stages, beginning about 14 mya, apes were richly spread across Eurasia and were adapted to the hardier diets that less-tropical biomes could provide, and one from Spain 13 mya may well be ancestral to modern humans and other great apes.[442]  It largely lived on the ground and had a relatively upright posture.  Its discovery threw previously accepted ideas of ape evolution into disarray.  The idea of apes ancestral to humanity living beyond Africa is a recent one, but is gaining acceptance.[443]  Important new fossils are found with regularity, as with all areas of paleontology, but the most plentiful funding is for investigating human ancestry.  A 1996 discovery of a Miocene ape in Turkey, with features common to both orangutans and African apes, led to questioning whether some key ape features are ancestral or convergent.[444]  One early fossil ape finding is still highly controversial as to where it fits into the evolutionary tree, as it had ape and monkey features but lived 10 million years after the hypothesized ape/monkey split.[445]  The great ape lineages are the subject of considerable controversy today, and the human ancestral tree is regularly shaken up with new findings.

Around 10.5 mya, after Eurasian forests began thinning out, African rainforests began losing their continuity, broke up into isolated patches, and woodlands and grasslands appeared along rainforest edges.[446]  Whether the direct ancestor of humans moved “home” to Africa from Eurasia around 9-10 mya as the Miocene cooling progressed, or indigenous African lines led to humans, is currently controversial.  However, by seven mya the evolutionary line to humans was firmly established in Africa, as the forests that could support apes in near-African Eurasia disappeared, and the last of those lines went extinct about eight mya.  The gorilla line may have split from the human line about seven mya, but recent findings may push that back to ten mya.  Whatever the timing really was, there is little scientific debate that humans and gorillas descended from the same line and that that ancestor lived in Africa.  The genome sequencing projects show that great ape DNA and human DNA are very similar.  Chimpanzees and bonobos, our closest surviving cousins, share more than 98% of their genes with humans; by another measure, about 94% of human DNA is identical to chimps’.  Gorillas have slightly less DNA in common with humans, and orangutans understandably have the greatest divergence.  Humans also have one fewer chromosome pair than the other great apes, the result of an ancestral fusion. 
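
The differing percentages reflect different ways of counting.  A minimal sketch with toy aligned sequences (hypothetical, for illustration only; real comparisons use whole-genome alignment tools) shows how identity over aligned bases can be higher than identity when insertions and deletions are counted:

```python
# Toy aligned sequences; '-' marks an insertion/deletion (indel).
human = "ACGT-ACGTAC"
chimp = "ACGTTACGAAC"

# Pairs where both sequences have a base (indels excluded):
aligned = [(h, c) for h, c in zip(human, chimp) if h != '-' and c != '-']
matches = sum(h == c for h, c in aligned)

print(f"Identity over aligned bases: {matches / len(aligned):.0%}")  # 90%
print(f"Identity counting indels:    {matches / len(human):.0%}")    # 82%
```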

The terminology of the ape/human line can be confusing to a lay reader, as it gets sliced ever finer as humanity’s time is approached, and I will avoid some of the many “homi” and “homo” terms used to describe families, genera, and species.  Homo in Greek meant “same,” while homo in Latin meant “human,” which is the meaning used in ape taxonomy.  The ape clade is the superfamily called Hominoidea, and all of its branches have “hom” prefixes.  Members of the genus “Homo” are of the solely human line.  Homo habilis is perhaps the genus’s first member, although its status is still unsettled. 

Orangutans are the most arboreal great ape, and in Africa the great apes had definitely left the trees as their daytime residence, although they slept in trees to avoid predators.  Gorillas can primarily subsist on leafy vegetation, although the staple of the western lowland gorilla, which is the most prevalent gorilla species, is still fruit.  Mountain gorillas primarily subsist on leaves.  Gorillas usually have a smaller daily range than chimpanzees have and live in the heart of rainforests; what became chimpanzees were probably pushed to the margins by their larger cousins and live more along a rainforest’s woodland fringes.  They have to range relatively far to find their staple: fruit.  Since their diet is more diverse and they can survive in more varied environments, the chimpanzees’ range is far larger than that of gorillas.  Like the largest quadrupedal herbivores, gorillas ingest a great deal of low-calorie vegetation each day and are hindgut fermenters that extract energy from cellulose, which humans cannot do.  Chimpanzees are also hindgut fermenters.  As with all organisms, the ecological situation of great apes influenced their evolution, including social organization and behaviors.  This has been increasingly studied since the 19th century and has provided valuable insights into humanity, some of which follow.

The chimpanzee and human lines seem to have split between five and seven mya, and some recent estimates are as low as 4.6 mya.  The species found so far that is perhaps closest to that split dates to about seven mya, but the findings have also been used to argue for pushing the human/chimpanzee split back to 13 mya.  Whatever timing scientists eventually agree on, the splits of orangutans first, gorillas second, and chimpanzees last (and the bonobo split arguably about a million years ago) almost certainly will not change.  The end of the Tethys Ocean between 5.8 and 5.2 mya may have been the reason for the split, as the resulting droughts from those Mediterranean Sea drying episodes further shrank the African rainforest.  As with so many other evolutionary events, the line that led to humans began to leave the trees as the losers of rainforest life and adapted to new environments, probably out of necessity, not a sense of adventure and opportunity.  Those apes pushed to the margins learned to walk upright and learned to eat new foods such as roots.[447]

A recent find of a possible human-line ape may even displace australopithecines as humanity’s ancestors, relegating them to a side-branch that went extinct.  These are still the early days of investigating human ancestry, and ideas about the evolutionary path to humanity will continue to change rapidly and dramatically.  That is partly because the sparse fossil record has only recently been expanded by numerous teams digging around Africa, with dreams of the ultimate find haunting their sleep.  Darwin speculated that humans evolved in Africa, but in the early 20th century, Asia was considered the likeliest evolutionary home of humans.  In 1921, an early protohuman skull was discovered in a Rhodesian mine, and in 1924 an even more primitive protohuman skull was discovered in a South African mine.  Africa became the focus of investigating the human line, and the pace accelerated with the work of what became the Leakey dynasty, which began with Louis Leakey’s checkered but ultimately triumphant career.

That human/chimp find of 6-7 mya had thick teeth, which meant that it had abandoned the arboreal ape diet, and it brings up perhaps the single biggest question of the early human line: “When did our ancestors become bipeds?”[448]  One piece of evidence for bipedalism is where the spinal cord enters the skull; if it is underneath the skull, it suggests an upright posture and, hence, bipedalism.  There is disputed evidence that that seven mya ape had a skull opening positioned in a way that meant bipedalism.  Skull and vertebrae evidence, changes in the shoulders, arms, and hands of apes from Proconsul onward, as well as the pelvis, legs, knees, ankles, and feet, are used whenever relevant ape fossils are found to determine what kind of posture they had, all the way from swinging from branches to walking upright.  The great range of motion of the human arm has that arboreal heritage to thank.

Part of that late Miocene ground-foraging existence probably included digging roots, as chimpanzees do today.[449]  Around 4.4-3.9 mya came the earliest celebrity humanoid fossil, called Ardi today.  Ardi has an older cousin, maybe an ancestor, from 5.8-5.2 mya, but Ardi is the most complete early great ape fossil.  Ardi had about the same-sized brain as a chimpanzee, but she may have walked upright.  Ardi had relatively delicate features, which suggest that she did not eat roots and tough food, but soft fruits obtained by nimbly climbing trees.  Her canine teeth are markedly less prominent than chimpanzee teeth, which has led to speculation that her species was less aggressive than chimpanzees. 

Although the human lineage through those early protohumans can be shuffled, perhaps radically, with the next new finding, today’s anthropologists are fairly confident that the human line passes through australopithecines.[450]  The first ones appeared about 4.2-4.1 mya, and about 3.9 mya the most famous australopithecine species appeared, called Australopithecus afarensis, of which the original humanoid fossil rock star, Lucy, was a member.  She lived about 3.2 mya, and one of Mary Leakey’s greatest finds was biped footprints, probably of Lucy’s A. afarensis, dated to about 3.6 mya.  But all early humans up to australopithecines also had shoulder and arm adaptations for climbing in trees, bipedal or not, and all early humans climbed trees at least every night to sleep.  Among today’s great apes, only gorillas regularly sleep on the ground (though some chimps occasionally do); adult male gorillas are the most regular ground sleepers, while smaller gorillas sleep in trees.[451]  Gorillas are rarely preyed upon in their rainforest homes, other than by humans, rival gorillas, and the stray leopard, which generally avoids large males.  African predators made sleeping on the ground infeasible for primates, and none sleeps on the ground today in the kinds of woodland environments where early humans lived.  The human line may not have slept on the ground until it controlled fire.

The study of intelligence is a young science.  The relationship of brain size (both absolute and relative) and brain structure to what is called intelligence is currently the subject of a great deal of research and controversy, and even the definition of intelligence is hotly debated.  The cerebral cortex appeared with mammals, and it is the key structural aspect of brain evolution that led to human intelligence.  The mirror test attempts to determine which animals have self-recognition, and those suspected of being the most intelligent have passed the test, including all great apes, cetaceans, elephants, and even a bird.  Humans do not pass the mirror test until about 18 months of age.[452]  There is great debate between those embracing "rich" versus "lean" interpretations of observations of animal behavior and intelligence, in which seemingly complex thinking can be an illusion.[453]

Many human mental traits exist in more rudimentary form in other animals, but human thought seems far more complex and sophisticated.  Feats such as language with grammar may be uniquely human achievements, which provide evidence of humans’ greater mental ability, and our tools provide the best evidence of advanced human cognitive abilities. 

Intelligence can confer great advantages, and the encephalization of theropods is an early indicator of its benefits.  For instance, spider monkeys have brains about twice the size of howler monkeys’, which is thought to be due to their larger societies (about twice as large) and to their diet: more than 70% of the spider monkey’s diet is fruit, while less than half of the howler’s is, and leaves make up twice as large a proportion of the howler’s diet as of the spider’s.  Remembering where and when fruit is ripe, and navigating more complex social environments, takes greater thinking power.  Just as with howler and spider monkeys, chimpanzees have to range far to find fruit, which is their staple, while gorillas can more readily eat nearby leaves, and chimps have more complex social lives than gorillas do.  Chimpanzees also have proportionally larger brains than gorillas and are considered more intelligent.

Did the larger brain lead to the behaviors, or did the behaviors lead to the larger brain?  If other evolutionary trends have relevance, they mutually reinforced each other in a positive feedback that, down one evolutionary line, reached the runaway conditions that led to the human brain.  The initial behavior was probably the use of a body part (the brain) for a new purpose, and its success led to selective advantages that fed the mutual reinforcement.  Although it is by no means an unorthodox understanding, I think that the likely chain of events was this: walking upright freed the hands for new behaviors, which led to new ways of making and using tools, which enhanced food acquisition.  That allowed the energy-demanding brain to expand, along with related biological changes, which led to more complex tools and behaviors that acquired, and required, even more energy.  That, in short, defines the human journey to this day, which the rest of this essay will explore.  There has never been, and probably never again will be, an energy-devouring animal like humanity on Earth, unless it is a human-line descendant.

Many traits of apes, including humans, are evident in monkeys.  Sexual dimorphism, in which the sexes of a species differ in shape and size, is a minor phenomenon among prosimians.[454]  But it is pronounced in simians, especially apes, and is why men are larger and stronger than women.  Its ultimate cause is probably sexual selection: how females choose their mates.  A prominent hypothesis is that early monkey troupes had males as sentinels, guarding the territorial perimeter and protecting the female-dominated core where offspring were cared for and where the food was.  A defensible food source was the key attribute of any simian territory.  Most primates are territorial, and extreme territorial behaviors can be seen in monkeys and apes, including murder, with its apotheosis in humans.

Nursing led to more involved mammalian parenting behaviors and increased female participation, in addition to the great investment that females have in gestating offspring.  Larger simian males are more likely to become dominant, and dominant males often get the most and best food and have enhanced reproductive rights, as females are attracted to them.  Virtually all monkey and ape societies are male-dominated, and the modern ideal of human females freely choosing their mates (or, perhaps more importantly, non-dominant males choosing their mates, if they get to mate at all) is rarely in evidence among them; it is a new phenomenon for humans.  The phenomenon of attractive women mating with rich and powerful men has deep roots in the simian evolutionary journey. 

In addition to their Machiavellian social activities, monkeys are quite vocal, and a key social behavior is grooming, which is integral to forming social bonds.  In crab-eating macaques, grooming seems to be a form of foreplay or even a payment for sex, and male chimpanzees and capuchins have paid for sex, so the world’s oldest profession may be quite old indeed.  Vocalizations and grooming behaviors become more prominent in gorillas and chimpanzees (orangutan social organization is markedly different from that of African apes).  A recent hypothesis is that gossip largely replaced grooming in humans as a cheap way to form social bonds, and “cheap” is almost always measured in terms of energy, relating to how much metabolism is devoted to an activity.  Chimpanzees spend about 20% of their day grooming, and humans spend about 20% of their day in conversation.[455]  The more intelligent a primate is, the larger its society can be, because of all of the social relationships that must be navigated.  Chimp societies can reach 120 members, and humans can double that, to 250 or so, which probably not coincidentally is around the size of the group that geneticists think left Africa perhaps 60-50 kya and conquered Earth.
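
The arithmetic behind that last claim is worth making explicit: if every member of a society can potentially have a relationship with every other member, the number of relationships to track grows roughly with the square of the group size.  Below is a minimal sketch; the pair-counting formula is just the standard count of distinct pairs, and the group sizes are the ones mentioned above.

```python
# Number of distinct pairwise relationships in a group of n members:
# n * (n - 1) / 2, which grows roughly with the square of n.

def pairwise_relationships(n: int) -> int:
    """Count the distinct pairs in a group of n members."""
    return n * (n - 1) // 2

for size in (120, 250):  # chimp and human society sizes from the text
    print(f"group of {size}: {pairwise_relationships(size):,} relationships")

# group of 120: 7,140 relationships
# group of 250: 31,125 relationships
```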

There are three primary survival requirements for any species: obtain nutrients (always primarily energy), avoid becoming nutrients, and perform those first two tasks long enough to produce offspring.  If those requirements are not met, the species will go extinct.  The eating instinct outranks the sex drive, but avoiding becoming food is where the most energetic behaviors can usually be found.  Primal survival instincts take over during the fight-or-flight response.  In humans, that fear response shuts down the neocortex to enable the body to perform feats of physical survival.  That is when adrenaline pumps.  All evolutionary adaptations studied by scientists have those three primary requirements girding the explanatory framework.

Female simians usually stay within their society of origin, while males leave.  That is how simians prevented inbreeding, but the pattern is reversed in chimpanzee and gorilla societies, in which females usually leave.  Sexual coercion of females is common behavior among simians.  Bonobos and gibbons are among the few simians that overcame it, and that seems to have been due to ecological dynamics.  Humans have partially discarded that behavior during the industrial age.  Those are obviously highly charged areas of behavioral research, and sociobiology is a highly controversial scientific discipline.[456]  A falsifiable hypothesis is arguably the sine qua non of science, and the behavioral sciences have often been plagued with a lack of falsifiable hypotheses, going back to Freud, which has caused some to say that psychology is not really a science.  This essay will soon sail into some of those murky waters.

Becoming bipedal freed human-line hands for other uses.  The non-human great apes all have long fingers and short thumbs.  Below is a comparison of chimpanzee and human hands.  (Source: Wikimedia Commons)

Ardipithecus ramidus is an early example of the growing thumb in the ape milieu from which the human line descended.  Changes in australopithecine hands may have been at least partly adaptations to throwing and wielding clubs.[457]  Lucy’s species existed for about a million years and went extinct about 2.9 mya, but it might have been one of those “happy ending” extinctions in which the descendants eventually changed enough to become new species.  What seems clear today is that australopithecine species were scattered around Africa, as they were a highly successful line.  Lucy’s species lived in eastern Africa, around Ethiopia, while other australopithecines lived in southern Africa and others lived in central Africa, where Miocene ape fossils have also been found.[458]  Not long after Lucy’s species disappeared, an australopithecine line appeared called “robust australopithecines,” whose members have been assigned their own genus, while Lucy and her cousins are called “gracile.”  The robusts had huge jaws and teeth, and a dramatic sagittal crest anchored their powerful chewing muscles.  A member of the robust line is nicknamed “Nutcracker Man” because of its gigantic teeth. 

Several lines of evidence have converged, and more is regularly amassed, telling a story of dramatic and rapid climate change spurring vegetation changes that initiated evolutionary adaptations in the cradle of humanity.  Sediment cores from the Arabian Sea off East Africa and land sediment records in East Africa, combined with studies of carbon-12/13 ratios in fossil teeth, tell an interesting and familiar tale of human origins.  Three mya, as Earth was moving toward an ice age and the climate dried, the familiar grasslands of the Serengeti appeared for the first time.  C4 grasses have higher proportions of carbon-13, and so will animals that eat them.  The expanding C4 grasslands coincided with the disappearance of Lucy's species and the appearance of the robusts, which ate generous amounts of C4 plants (or perhaps animals that ate those plants), probably from those expanding grasslands.[459]
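
For readers who want the arithmetic behind those studies: isotope workers express carbon-12/13 measurements as delta-13C, the per-mil deviation of a sample’s 13C/12C ratio from a reference standard, and C4 grasses leave markedly less negative values than C3 woodland plants do.  The sketch below is illustrative only; the standard ratio, the example measurement, and the -20 per-mil cutoff are typical published values and assumptions, not figures from this essay.

```python
# Hedged sketch of the carbon-isotope arithmetic behind C3/C4 diet studies.
VPDB_13C_12C = 0.011180  # approximate 13C/12C ratio of the VPDB reference standard

def delta_13c(sample_ratio: float) -> float:
    """Express a sample's 13C/12C ratio in per mil relative to the VPDB standard."""
    return (sample_ratio / VPDB_13C_12C - 1.0) * 1000.0

def likely_diet(d13c: float) -> str:
    """Crude cutoff: C4 grasses run near -13 per mil, C3 plants near -27 per mil."""
    return "mostly C4 (grassland) foods" if d13c > -20.0 else "mostly C3 (woodland) foods"

sample_ratio = 0.010930  # hypothetical measured 13C/12C ratio from fossil material
d = delta_13c(sample_ratio)
print(f"d13C = {d:.1f} per mil -> {likely_diet(d)}")
# d13C = -22.4 per mil -> mostly C3 (woodland) foods
```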

Becoming bipedal allowed for far greater mobility than knuckle-walkers were capable of, and farther excursions from the safety of trees became possible.  But ranging farther from the trees was also dangerous.  As with Proconsul, key australopithecine fossils were apparently found where the remains of predator meals accumulated, usually in caves.[460]  Those early apes on the path to humanity were the hunted, not hunters.  Cats such as leopards feasted on australopithecines, and one robust skull showed leopard puncture marks.[461]  Most surviving bones were from body parts that were more difficult to eat, with less flesh on them, so predators left those parts largely intact.  Fossil hunters discovered body parts such as jaws, teeth, hands, and feet.  Skull finds are rare.

The woodland fringes that australopithecines and their relatives lived in were markedly different from where gorillas and even chimpanzees exist today.  Today’s most successful primates in fringe environments such as those that australopithecines operated in are macaques, which also suffer high rates of predation.  The social organization of humanity’s early ancestors may well have been more like macaques than chimpanzees.[462]

Earth’s evolutionary tree of life has many branches, so many that no one person can become intimate with all of them, and innumerable lines of animals arose, radiated, and died out, almost always going out with a whimper instead of a bang.  All australopithecine branches came to their ends, except perhaps for the line that led to humans.  About 2.6-2.5 mya, just as the current ice age began, a gracile australopithecine lived in eastern Africa, another in southern Africa, and the robust australopithecine with that amazing skull lived in eastern Africa.  The oldest manufactured stone tools yet discovered of a recognized culture were associated with that east African gracile australopith.  Earlier tools were likely made at least 3.4-3.3 mya, probably by australopiths of Lucy's species, and making them may well have been part of australopith culture for millions of years.  Many non-human animals use tools, and some even make them.  But all early tools would have been made of twigs, bones, sticks, unshaped rocks, and the like, and they have not left behind much evidence for scientists to study.  Stone tools were an energy technology that mimicked the teeth and claws of more specialized animals.

Chimpanzees are the most prolific tool users among the non-human great apes, and female chimps make and use tools more often than males do.  One problem with studying today’s animals and applying those findings to their ancestors is that their lines have evolved, too.  The chimpanzee ancestor at the split with the human line did not look like today’s chimpanzees, and probably did not act quite like them.  However, chimpanzees and gorillas adapted to environments that have not remarkably changed for the past 8-10 million years, and it is unlikely that they have dramatically changed over that time.  Orangutans are similar.  Scientists have argued that since there is little evidence of morphological change in those great apes in the years since they split from the human line, particularly in their cranial capacity, they probably act similarly today and have capacities similar to their distant ancestors’.[463]  Today’s chimps have nearly the same-sized brains as australopithecines did.  They make and use tools, and an orangutan in captivity was even trained to make stone tools.  All great apes have learned to use sign language, and some even invent their own signs. 

I think it very reasonable to believe that relatively sophisticated tool use among humanity’s ancestors predates, perhaps by several million years, those stone tools dated to 3.4-3.3 mya.  Tools may be hundreds of millions of years old, and insects, fish, cephalopods, and reptiles use tools today.  The protohuman equivalent of Nikola Tesla (although it may have been a female) discovered how to bang two rocks together to create a sharp edge used for cutting, perhaps with a little inventor’s serendipity.  It may not be possible to overstate the significance of that invention.[464]  More than a million years of free hands, due to australopithecine bipedal posture, probably led to the most significant tool-making event in Earth’s history to that time.  The shortening fingers and lengthening thumbs of australopithecines allowed more dexterity, and in training today’s great apes to make stone tools, their relative lack of dexterity has been noted as an impediment.  Also, the increasing dexterity of the protohuman hand is linked with neurological changes, from the hands to the brain, as early protohumans took tool-making to a new level, in another case of mutually reinforcing positive feedbacks.[465]

Although that australopithecine inventor may have been the smartest member of its species, with an ape IQ that went off the scale, his or her brain was the same size as those of fellow species members, but that would not last long.  The swift climb to the appearance of Homo sapiens had begun. 

 

Tables of Key Events in the Human Journey

Timeline of Humanity’s Evolutionary Heritage

Human Event Timeline Until Europe Began Conquering Humanity

Human Event Timeline Since Europe Began Conquering Humanity

Table of Humanity’s Epochs

Humanity’s Evolutionary Heritage

| Group Humans Likely Descended From | Direct Human Ancestor, or a Close Relative to It | Description | Time When Ancestor First Appeared |
| --- | --- | --- | --- |
| Earliest life forms | Last common ancestor of all life on Earth | A form of bacteria, with many traits unique to all life on Earth. | c. 3.8-3.5 bya |
| Bacteria and archaea | First complex cell | An archaean enveloped a bacterium, and both lived. | c. 2.1-1.6 bya |
| Eukaryotes | First sexually reproducing organism | This innovation accelerates evolution. | c. 1.2-1.0 bya |
| Motile eukaryotes | Choanoflagellate | A motile eukaryote that was an ancestor to animals. | c. 900 mya |
| Unicellular organisms | First multicellular organism | Was probably sponge-like. | c. 760-660 mya |
| Immobile animals | First mobile animal | Was probably like a jellyfish. | c. 580 mya |
| Motile animals | Flatworm | First animal with a brain. | c. 550 mya |
| Worms | Acorn worm | Early animals with breathing and circulatory systems. | c. 540 mya |
| Fish-like ancestors to vertebrates | Pikaia | First animal with a spinal cord. | c. 530 mya |
| Eel-like fish | Ostracoderm | First true fish.  Used gills exclusively to breathe. | c. 505 mya |
| Jawless fish | Placoderm | First fish with jaws. | c. 480 mya |
| Cartilaginous fish | Guiyu oneiros | First bony fish. | c. 420 mya |
| Bony fish | Coelacanth | First fish with lobed fins, which later became legs. | c. 410 mya |
| Lobe-finned fish | Panderichthys | First fish that begins developing legs. | c. 380 mya |
| Leggy fish | Tiktaalik | First fish to crawl on land. | c. 375 mya |
| Crawling fish | Ichthyostega | First fish to walk on land. | c. 374 mya |
| Tetrapods | Acanthostega | First amphibian. | c. 365 mya |
| Amphibians | Hylonomus | First reptile.  First amniote. | c. 312 mya |
| Reptiles | Archaeothyris | First synapsid. | c. 306 mya |
| Synapsids | Sphenacodonts (AKA pelycosaurs) | Lost their scales, and teeth began to become specialized. | c. 295 mya |
| Pelycosaurs | Raranimus | First therapsid.  Could breathe and eat at the same time. | c. 270 mya |
| Therapsids | Theriodonts | May have been warm-blooded. | c. 265 mya |
| Theriodonts | Cynodonts | Jaws changed, freeing up bones to eventually form the middle ear. | c. 260 mya |
| Cynodonts | Tritheledontids | More mammalian traits than reptilian. | c. 230 mya |
| Tritheledontids | Mammaliaformes | Nursed young, had one replacement of teeth. | c. 225 mya |
| Mammaliaformes | Mammals | Cranial features suggest the developing mammalian brain; the cerebral cortex is unique to mammals. | c. 225 mya |
| Mammals | Juramaia | First placental mammal. | c. 160 mya |
| Placental mammals | Euarchontoglires | Tree-dwelling ancestor of rodents and primates. | c. 95-90 mya |
| Euarchontoglires | Euarchonta | Direct ancestor of primates. | c. 88-86 mya |
| Euarchonta | Primates | Primates have unique features, such as forward-looking eyes and opposable digits, which are specializations for tree-dwelling. | c. 80 mya |
| Primates | Simple-nosed primates | More encephalized; lost the ability to produce vitamin C. | c. 63 mya |
| Simple-nosed primates | Old World monkeys | Called “higher primates”; split from New World monkeys. | c. 35 mya |
| Old World monkeys | Apes | Apes lost tails, became more encephalized and intelligent, and have tricolor vision. | c. 34.5-29 mya |
| Apes | Great apes | Male-dominated, most intelligent primates. | c. 14 mya |
| Great apes | African great apes | Evolved in African isolation.  Ground-dwelling by day. | c. 12 mya |
| African great apes | Chimpanzee and human line | Gorillas split from the line. | c. 10-7 mya |
| Chimpanzee and human line | Human line | Chimpanzee and human lines split. | c. 7-5 mya |
| Human line | Ardipithecus | Possible direct human ancestor.  Smaller canines probably meant reduced male conflict. | c. 4.4 mya |
| Human line | Australopithecines | Possible direct human ancestor.  Walked upright. | c. 4.1 mya |
| Gracile australopithecines | Homo habilis | First member of genus Homo. | c. 2.3 mya |
| Homo habilis | Homo erectus | First Homo species to widely migrate past Africa.  Used fire.  Inventors of Acheulean stone tool technology.  The first hunter-gatherers. | c. 2.0-1.8 mya |
| Homo erectus | Homo heidelbergensis | May have been the first humans to bury their dead. | c. 1.3 mya-600 kya |
| Homo heidelbergensis | Homo sapiens | First anatomically modern humans. | c. 200 kya |
| Homo sapiens | Behaviorally modern humans | Became the founder population of today’s humanity.  Replaced/displaced all other humans. | c. 60-50 kya |

A table like the above one is here. 

Human Event Timeline Until Europe Began Conquering Humanity

| Event | Date | Likely or Known Location | Global Human Population |
| --- | --- | --- | --- |
| First stone tool made | c. 3.4-3.3 mya | East Africa | |
| First control of fire | c. 2.0-1.0 mya | East Africa | |
| Appearance of Homo erectus | c. 2.0-1.8 mya | East Africa | |
| First migration from Africa | c. 2.0-1.9 mya | Across Asia, then Europe by 1.5 mya | |
| First Mode 2 (Acheulean) stone tools made | c. 1.7 mya | East Africa | |
| Appearance of Homo heidelbergensis | c. 1.3 mya-600 kya | Africa, and soon migrated to Eurasian vicinity | |
| Appearance of stone-tipped weapons | c. 500 kya | South Africa | |
| Neanderthal descent from Homo heidelbergensis | c. 500 kya | Europe and West Asia | |
| Appearance of thrown weapons | c. 400 kya | Germany | |
| Neanderthal invention of Mode 3 (Mousterian) tools | c. 300 kya | Europe | |
| Appearance of Homo sapiens | c. 200 kya | East Africa | |
| First heat-treated stone tools | c. 170 kya | South Africa – first seashore human community yet discovered | |
| First bedding and complex tool-making processes | c. 75 kya | South Africa | |
| First needle, and perhaps the first arrowheads | c. 60 kya | South Africa | |
| Behaviorally modern humans appear and a group of about 300 leave Africa and colonize the rest of Earth | c. 60-50 kya | East Africa | c. 5,000 |
| Humans reach Australia, and megafauna quickly go extinct | c. 48-46 kya | Australia, via boat | |
| Humans begin invading Europe | c. 45-40 kya | Via southeast Europe | |
| First cave paintings made | c. 40 kya | Europe | |
| First fisherman appears | c. 40 kya | China | |
| Dog domesticated | c. 33-15 kya | East-central Asia | |
| Mode 4 (Châtelperronian) stone tools invented | c. 30 kya | Europe and West Asia | |
| Humans begin hunting mammoths | c. 29 kya | Eastern Europe | |
| Neanderthals go extinct | c. 30-27 kya | Southern Europe is their last refuge | |
| First known inter-human violent conflict | c. 25 kya | Europe | |
| Pottery invented | c. 20 kya | China | |
| Mode 5 (microlith) tools invented | c. 17 kya | Europe | |
| Humans reach the Americas, and megafauna quickly go extinct on both continents | c. 15-11 kya | Via Siberia-Alaska (15 kya by boat, 11 kya by land) | |
| Pig domesticated | c. 15 kya | Tigris watershed | |
| Nuts first made into human staple | c. 13.5 kya | The Levant | |
| First sedentary village established | c. 13.5 kya | Euphrates watershed; became first agricultural settlement about 11 kya | |
| First known mass slaughter of humans | c. 13 kya | Egypt | |
| Slavery “invented” | c. 11 kya | Wherever sedentary populations appeared | |
| Humans reach Mediterranean islands and megafauna quickly go extinct | c. 11-9 kya | Mediterranean periphery | |
| Blond hair appears | c. 11 kya | Northern Europe | |
| Blue eyes appear | c. 10-6 kya | Baltic states region | |
| Cattle domesticated | c. 10.5 kya | Near Anatolia | |
| Goat domesticated | c. 10 kya | Today’s Iran | c. 5 million |
| Agriculture begins in Americas | c. 10-8 kya | Mesoamerica | |
| First city-sized settlement | c. 9.5 kya | Anatolia | |
| Agriculture begins in China | c. 9-8 kya | China | |
| Hook-and-line fishing invented | c. 8 kya | Eurasia, Western Hemisphere | |
| Plow invented | c. 7 kya | Fertile Crescent | |
| First city established | c. 5400 BCE | Mesopotamia | |
| First metal smelted: copper | c. 5000 BCE | Balkans | |
| Sailboat invented | c. 5000 BCE | Mesopotamia | |
| Writing invented | c. 5000 BCE | Eastern/Southern Europe | |
| Humans begin populating Caribbean islands, and megafauna quickly go extinct | c. 4500 BCE | Caribbean periphery | |
| Mass warfare begins | c. 4000 BCE | Mesopotamia | c. 7 million |
| White skin appears | c. 4000 BCE | Northern Europe | |
| Horse domesticated | c. 4000 BCE | Steppe region north of Black Sea | |
| Humans arrive at Saint Paul Island, and isolated dwarf mammoths quickly go extinct | c. 3800 BCE | Saint Paul Island | |
| Wheel invented | c. 3500 BCE | Mesopotamia or Europe | |
| Bronze invented | c. 3300 BCE | Fertile Crescent | |
| Harappan civilization appears | c. 3300 BCE | Indus River Valley | |
| Egyptian civilization appears | c. 3100 BCE | Nile River Valley | |
| Rice paddy system invented | c. 3000 BCE | China | |
| Camel first domesticated | c. 3000 BCE | East Africa or Arabian Peninsula | |
| First literate civilization | c. 3000 BCE | Sumer, in Mesopotamia | c. 15 million |
| Polynesian expansion begins | c. 3000-1000 BCE | Taiwan | |
| Construction of necropolis at Giza | c. 2570-to-2470 BCE | Nile River Valley | |
| Humans arrive at Wrangel Island, and the last mammoths on Earth quickly go extinct | c. 2500-2000 BCE | Wrangel Island | |
| Egypt’s Old Kingdom ends | c. 2200 BCE | Nile River Valley | |
| First civilization becomes depopulated | c. 2000 BCE | Mesopotamia, and environmental refugees disperse.  Intense deforestation of the region from Morocco to Afghanistan commences.  Today, only about 10% of that forest remains, and much has turned to desert. | c. 27 million |
| Bronze Age civilizations rise and collapse | c. 2700-to-1150 BCE | Mediterranean and periphery, including Egypt | |
| Harappan civilization collapses | c. 1800-to-1700 BCE | Indus River Valley | |
| Olmec civilization appears | c. 1600-1500 BCE | Mesoamerica | c. 38 million |
| Egyptian civilization at its height | c. 1350 BCE | Nile River Valley | |
| First iron age begins | c. 1300 BCE | Anatolia, Balkans, or Caucasus | |
| Trojan War fought | c. 1200 BCE | Mediterranean shore of Anatolia | |
| Peak influence of Phoenician civilization | c. 1200-to-800 BCE | Eastern Mediterranean, Levant | |
| Bantu Expansion begins | c. 1000 BCE | Equatorial Africa | c. 50 million |
| Madagascar discovered, and megafauna quickly go extinct | c. 1000 BCE | Madagascar, via Africa | |
| Rome founded | c. 750 BCE | Italian Peninsula | |
| Assyria destroys Kingdom of Israel | c. 722 BCE | The Levant | |
| Greece begins to recover from collapse of Mycenaean civilization | c. 700 BCE | Greece | |
| Gautama Buddha born | c. 560-480 BCE | Today’s Nepal | |
| Athens enters its classic phase | 508 BCE | Greece | |
| First Mesoamerican state appears | c. 500 BCE | Mesoamerica | |
| Victory in 50-year war with Persia marks height of classic Greek civilization | 449 BCE | Greece | |
| War with Sparta, and devastating epidemic, marks decline of Athens | 431-to-404 BCE | Greece | |
| Alexander the Great conquers numerous civilizations with a military prowess unsurpassed until industrialized warfare | 336-to-323 BCE | Eastern Mediterranean to India | |
| Watermill invented, probably by Greek engineers | c. 300-250 BCE | Greece | |
| Rome begins first war with Carthage | 264 BCE | Mediterranean periphery | |
| Paper invented | c. 200 BCE | China | |
| Rome destroys Carthage and Corinth, enslaving the survivors | 146 BCE | Northern Africa and Greece | |
| Roman civil wars begin that end the republic | c. 133 BCE | Mediterranean periphery | |
| Defeat of Mark Antony and Cleopatra marks end of Roman republic and beginning of Roman empire | 31 BCE | Mediterranean near Greece | |
| Jesus born | c. 7-4 BCE | Today’s Israel | c. 170 million |
| Rome invades Great Britain | 43 CE (all subsequent dates in this table are CE) | Island of Great Britain | |
| Windmill and steam engine invented | c. 50 | Roman Egypt | |
| Rome defeats First Jewish revolt against Roman rule | 73 | Today’s Israel | |
| Moche culture appears | c. 100 | Peru | |
| Antonine plague ravages Roman Empire, kills two emperors, and marks end of Peace of Rome | 165-180 | Mediterranean periphery | |
| Plague of Cyprian scourges Rome | 250-270 | Mediterranean periphery | |
| Polynesians discover Hawaii | c. 300-800 | Hawaiian islands | |
| Christianity becomes Rome’s state religion | 325-to-380 | Mediterranean periphery | |
| Roman imperial capital moved to Constantinople | 330 | Anatolia | |
| Horse collar invented | 5th century | China | |
| Rome falls to Germanic tribes | 476 | Italian Peninsula | |
| Teotihuacan declines from drought | c. 535 | Valley of Mexico | |
| Plague of Justinian kills up to half of Europe | 541-542 | Mediterranean periphery | c. 200 million |
| Muhammad born | c. 570 | Levant or Arabian Peninsula | |
| Cahokia settled | c. 600 | North America, on Mississippi River | |
| Arabs begin enslaving Africans | c. 650 | African periphery, other than equatorial West Africa | |
| Islamic Moors invade Iberian Peninsula | 711 | Iberian Peninsula | |
| Mayan civilization collapses | c. 750-to-950 | Mesoamerica | |
| Viking expansion | c. 787-to-early-1000s | Northern Europe, North Atlantic, North America, Eastern Europe | |
| Medieval Warm Period begins | c. 800 | Earth | |
| European watermills begin great proliferation | c. 1000 | Western and northern Europe | |
| Chinese horse collar used in Europe | c. 1000 | Europe | c. 400 million |
| Compass used for navigation | c. 1040 | China | |
| England conquered from France, and peasantry begins dispossession | 1066 | England | |
| Christian conquest of Toledo results in Greek teachings being reintroduced into Europe | 1085 | Iberian Peninsula | |
| First Crusade begins | 1096 | Europe to Levant | |
| Angkor Wat completed | c. 1150 | Today’s Cambodia | |
| Fourth Crusade sacks “ally” Constantinople | 1204 | Anatolia | |
| Albigensian Crusade begins | 1209 | Southern France | |
| Rise and fall of Mongol empire | 1206-to-1368 | China to Europe | |
| Mexica people arrive in Valley of Mexico, later known as Aztecs | c. 1248 | Mesoamerica | |
| Medieval Warm Period ends | c. 1250 | Globally | |
| Maoris discover New Zealand and drive megafauna to extinction in about a century, maybe less | c. 1250-1300 | New Zealand | |
| Queen Eleanor driven from Nottingham by cloud of coal smoke | 1257 | England | |
| Series of European famines mark prelude to Little Ice Age | 1304-1317 | Europe | |
| England and France begin more than 100 years of warfare | 1337 | England and France | |
| Black Death sweeps Old World | c. 1338-1350 | Eurasia | |
| Renaissance begins, rise of humanism in Europe | Late 1300s | Northern Italian Peninsula | |
| Cahokia abandoned, probably due to environmental overtaxation; Mississippian civilization begins its decline | c. 1400 | North America, on Mississippi River | |
| China mounts naval expeditions in Indian Ocean and in Pacific Ocean near Southeast Asia | 1405-to-1433 | Periphery of Indian Ocean and Southeast Asia | |
| Portugal begins sailing the Atlantic Ocean | 1420 | Atlantic Ocean | |
| Aztecs form the Triple Alliance that dominates the Valley of Mexico | 1428 | Mesoamerica | |
| Portugal initiates new era of slavery with captured Africans | 1434 | Iberia and West Africa | |
| Incan expansion begins | 1438 | Peru | |
| Printing press invented | c. 1439 | Germany | |
| Ottoman conquest of Constantinople | 1453 | Anatolia | |
| Portuguese naval expedition crosses the southern tip of Africa | 1488 | South Africa | |
| Columbus stumbles into Western Hemisphere, and European conquest of humanity begins | 1492 | Bahaman and Caribbean islands | c. 500 million |

 

Human Event Timeline Since Europe Began Conquering Humanity

| Event | Date | Likely or Known Location | Global Human Population |
| --- | --- | --- | --- |
| Columbus returns to Caribbean with invasion force | 1493 | Island of Española | |
| First gold strike on Española, initiating century-long quest for gold in Western Hemisphere | 1499 | Island of Española | |
| Portuguese Vasco da Gama expedition returns after reaching India by sailing around Africa | 1499 | African and South Asian periphery | |
| Portugal launches military expedition to conquer spice trade | 1500 | African and South Asian periphery | |
| Martin Luther begins the Reformation | 1517 | Germany | |
| Spanish conquest of Aztecs provides greatest proselytizing opportunity ever for the Catholic Church, the same year that it condemns Martin Luther | 1521 | Mesoamerica | |
| Magellan expedition is first to circumnavigate Earth | 1522 | Earth | |
| Spain invades Incan Empire | 1532 | Peru | |
| Henry VIII kicks Catholic Church out of England | 1534 | England | |
| English ironworks established for the first time since the Roman invasion | 1543 | Island of Great Britain | |
| First works of modern science published | 1543 | Europe | |
| Michael Servetus burned at the stake for his “heresies” in Protestant Geneva | 1553 | Geneva | |
| The Spanish crown goes bankrupt, in the first of several bankruptcies that mark its imperial decline | 1557 | Spain | |
| The Inquisition begins banning “heretical” books | 1559 | Europe | |
| French Wars of Religion begin | 1562 | France | |
| Spanish establish permanent presence in Philippines | 1565 | Philippine Islands | |
| Dutch revolt against Spanish rule begins | 1566 | Netherlands | |
| Portuguese nobility, including its king, annihilated by Moors when they invade north Africa | 1578 | North Africa | |
| Francis Drake returns from pirate expedition that circumnavigates Earth, and becomes England’s richest private citizen | 1580 | England | |
| Spanish armada destroyed engaging the English and Dutch | 1588 | England’s periphery | |
| Giordano Bruno burned at the stake for his heresies | 1600 | Rome | |
| English East India Company founded | 1600 | England | |
| Dutch East India Company founded | 1602 | Netherlands | |
| English make first visit to New England, and note the prodigious forests that could be used for sailing ship masts | 1602 | New England | |
| King James I campaigns against smoking tobacco | 1604 | England | |
| English establish Ulster Plantation | 1606 | Today’s Northern Ireland | |
| English establish Jamestown | 1607 | Today’s Virginia | |
| French establish Montreal | 1611 | Today’s Quebec | |
| Dutch establish Jakarta | 1619 | Today’s Indonesia | |
| English establish Plymouth | 1620 | Today’s Massachusetts | |
| Rembrandt van Rijn opens his first studio | c. 1624 | The Netherlands | |
| Dutch establish Fort Amsterdam | 1625 | Manhattan Island | |
| Galileo Galilei forced to recant his scientific findings by the Inquisition | 1633 | Italy | |
| English civil wars begin | 1642 | England | |
| The Maunder Minimum marks the heart of the Little Ice Age | c. 1645 to 1715 | Earth | |
| Thirty Years’ War ends | 1648 | Europe | |
| Western Hemisphere’s population about nine million, down from 30-100 million in 1491, for history’s greatest demographic catastrophe | 1650 | Western Hemisphere | c. 500 million |
| English and Dutch begin series of wars | 1652 | Europe | |
| Dutch establish Cape Town | 1652 | South Africa | |
| Isaac Newton invents calculus | 1666 | England | |
| Antonio Stradivari begins making violins | 1666 | Italy | |
| War between France and Netherlands ends, marking the decline of Dutch power | 1678 | Europe | |
| Scotland formally unites with England to become Great Britain | 1707 | Island of Great Britain | c. 600 million |
| Abraham Darby founds first successful iron-smelting operation based on coal | 1709 | England | |
| Thomas Newcomen builds first commercial steam engine | 1710 | England | |
| Voltaire imprisoned for his satirical writings | 1717 | Paris | |
| Isaac Newton loses life’s fortune speculating in the slave trade | 1720 | England | |
| Roller spinning machine for cotton patented, soon followed by many other models | 1738 | England | |
| Abolition movements begin in Europe | c. 1750 | Europe | |
| Great Britain wins first global war, defeating France | 1763 | Europe, North America, Asia | |
| Great Britain begins conquering India, beginning with Bengal | 1764 | Bengal | |
| First British-induced famine hits Bengal | 1770 | Bengal | |
| James Cook “discovers” Australia | 1770 | Australia | c. 800 million |
| James Cook nearly reaches Antarctica, turned back by ice | 1773-1774 | Antarctica | |
| James Watt installs first commercial application of his steam engine | 1776 | England | |
| Adam Smith publishes first work of classical political economy | 1776 | Scotland | |
| French-assisted American Revolution begins | 1776 | Eastern North America | |
| Antoine Lavoisier falsifies phlogiston theory of combustion | 1777 | France | |
| James Cook “discovers” Hawaii | 1778 | Hawaiian islands | |
| George Washington crafts plan to steal North America from its natives | 1782 | Eastern North America | |
| First steamboat built | 1783 | France | |
| French Revolution begins | 1789 | Paris | |
| Mozart dies, marking the beginning of the end of the Classical Period in music | 1791 | Vienna | |
| Cotton gin patented | 1793 | USA | |
| Great Britain unites with Ireland to become the United Kingdom (“UK”) | 1800 | | |
| First steam-powered railroad built | 1804 | Wales | c. 1 billion (estimated to have happened between 1800 and 1810) |
| Napoleon defeated at Waterloo | 1815 | Today’s Belgium | |
| First photograph made | 1822 | France | |
| Sadi Carnot publishes first work on thermodynamics | 1824 | France | |
| The USA steals more than half of Mexico | 1836-to-1848 | Western North America | |
| The UK invades China under principles of “free trade”; first use of steam-driven naval ships in warfare | 1839 | China | |
| Charles Dickens publishes A Christmas Carol | 1843 | England | |
| Ignaz Semmelweis pioneers sanitary medical practices | 1847 | Vienna | |
| American whaling peaks | 1847 | Global ocean | |
| Karl Marx publishes his Communist Manifesto | 1848 | England | |
| California Gold Rush begins | 1848 | California | |
| Herman Melville publishes Moby-Dick | 1851 | USA | |
| The USA invades Japan | 1853 | Japan | |
| First industrial war begins | 1853 | Crimea | |
| Darwin publishes Origin of Species | 1859 | England | |
| First commercial oil well drilled in the West | 1859 | Pennsylvania | |
| The USA’s Civil War begins | 1861 | USA | |
| John Rockefeller enters oil industry | 1863 | Ohio | |
| The USA’s transcontinental railroad is completed | 1869 | USA | |
| John Rockefeller’s empire controls 95% of the USA’s oil refining, and Rockefeller soon becomes history’s richest human | 1879 | USA | |
| Thomas Edison publicly demonstrates incandescent lighting | 1879 | Menlo Park, New Jersey | |
| Final large massacre of American Indians | 1890 | South Dakota | |
| Vincent van Gogh dies, marking the waning of the post-impressionism era | 1890 | France | |
| Nikola Tesla’s alternating current technology wins “war” with Edison’s direct current | 1891 | USA | |
| Americans overthrow Hawaiian monarchy | 1893 | Hawaii | |
| The USA steals last remaining shreds of Spain’s empire | 1898 | Caribbean, Philippines | |
| Wright brothers first fly | 1903 | North Carolina | |
| Ford Motor Company established | 1903 | Detroit | |
| Panama gains “independence” via robber baron swindling of the USA’s government | 1903 | Panama | |
| Tesla loses funding for his free energy tower | 1903 | USA | |
| Albert Einstein publishes first paper on relativity | 1905 | Switzerland/Germany | |
| Mark Twain publishes King Leopold’s Soliloquy to protest the “philanthropic” genocide in the Congo | 1905 | USA | |
| Method developed to artificially fix nitrogen | 1909 | Germany | |
| Greatest international balance-of-payments difference is between the UK and India | 1910 | UK and India | |
| Winston Churchill begins converting the British Navy from coal to oil | 1911 | UK | |
| Income tax amendment and Federal Reserve Act passed | 1913 | USA | |
| World War I begins | 1914 | Europe | |
| Company controlled by notable “philanthropist” John Rockefeller uses machine guns on striking coal miners | 1914 | Colorado | |
| Einstein publishes general theory of relativity | 1915 | Germany | |
| Russian Revolution | 1917 | Russia | |
| World War I ends, and oil-rich Ottoman Empire is dismembered by imperial nations | 1918 | Eurasia | |
| First confirmation of general theory of relativity | 1919 | South America and Africa | |
| Modern quantum theory invented | 1925 | Europe | c. 2 billion (reached in 1927) |
| Hitler publishes Mein Kampf and lauds Henry Ford for his anti-Jewish publications | 1925 | Germany | |
| Public relations campaign to addict American women to tobacco begins | 1929 | New York | |
| Great Depression begins with stock market crash | 1929 | USA, then the world | |
| Fluoride ion discovered as cause of tooth mottling | 1931 | USA | |
| Hitler comes to power | 1933 | Germany | |
| Attempted White House coup | 1933 | USA | |
| The American Medical Association helps provide “scientific” evidence to promote tobacco smoking | 1935 | USA | |
| World War II begins | 1939 | Europe | |
| World War II ends with nuclear weapons dropped on cities | 1945 | Japan | |
| Post-war boom of unprecedented prosperity begins | 1945 | USA, with the rebuilding West also benefiting | |
| Communist Revolution begins | 1946 | China | |
| Roswell UFO incident | 1947 | USA | |
| National Security Act passed, CIA founded | 1947 | USA | |
| Public relations campaign begins for putting fluoride ion in water supply as tooth “medicine” | 1947 | USA | |
| Transistor invented | 1947 | USA | |
| The CIA begins overthrowing elected governments on behalf of corporate interests | 1953 | Iran | |
| The American Medical Association stops promoting tobacco smoking in its journal | 1954 | USA | |
| Sputnik launch begins space race | 1957 | Soviet Union | |
| Revolution overthrows American-friendly dictatorship | 1959 | Cuba | c. 3 billion (reached in 1960) |
| World War III narrowly averted | 1962 | Cuba, Soviet Union, USA | |
| John Kennedy murdered | 1963 | USA | |
| The USA invades Southeast Asia | 1964 | Southeast Asia | |
| Apollo 11 lands on the moon | 1969 | The Moon | |
| Peak oil production reached | 1970 | USA | |
| West’s first oil crisis marks end of post-war boom; American energy consumption and wages peak and decline afterward | 1973 | Earth, USA | c. 4 billion (reached in 1974) |
| The USA lures the Soviet Union into invading Afghanistan | 1979 | Afghanistan | |
| Three Mile Island nuclear accident | 1979 | Pennsylvania | |
| The American Medical Association’s retirement fund is discovered to have more than $1 million invested in tobacco farms | 1979 | USA | |
| Revolution overthrows the USA’s puppet dictator | 1979 | Iran | |
| The American Medical Association’s board members are discovered to own tobacco farms | 1985 | USA | |
| Chernobyl nuclear disaster | 1986 | Soviet Union | |
| The USA uses the threat of trade sanctions to open Asian markets to tobacco companies, primarily to addict their women and children, using familiar “free market” principles | 1986 | Taiwan, South Korea, Japan, and Thailand | c. 5 billion (reached in 1987) |
| Microsoft makes its initial public stock offering, with Bill Gates soon becoming Earth’s richest human | 1986 | USA | |
| Soviet Bloc begins fragmenting, Berlin Wall falls | 1989 | Eastern Europe, Berlin | |
| The USA attacks Iraq to begin invasion of the oil-rich Middle East | 1991 | Iraq, Kuwait | |
| The Soviet Union collapses | 1991 | Soviet Union | |
| Internet revolution begins | c. 1996 | Industrialized nations | c. 6 billion (reached in 1999) |
| Terror attacks on September 11 | 2001 | USA | |
| The USA invades Afghanistan | 2001 | Afghanistan | |
| The USA invades Iraq, with imperial nation assistance | 2003 | Iraq | |
| Peak oil production reached | c. 2006 | Earth | |
| Financial panic | 2008 | Industrialized nations | |
| Gulf oil spill | 2010 | Gulf of Mexico | |
| Fukushima nuclear disaster | 2011 | Japan | c. 7 billion |

Humanity’s Epochal Events

This table summarizes each energy epoch by its primary energy sources, the approximate time when a pristine instance of the event began, its energy input as a multiple of dietary calories (see data derived here), its energy efficiency, the surplus energy produced, its societal attributes, and its environmental effects.

1: Making stone tools/controlling fire/growing the human brain
  • Primary energy sources: Scavenged/processed (cooked?) food and wood
  • Began: 3.4 mya for stone tools, 1-2 mya for fire
  • Input (multiple of dietary calories): 1 (see this discussion)
  • Energy efficiency: –
  • Surplus energy produced: 0
  • Societal attributes: Hand-to-mouth, organized like chimpanzees or perhaps macaques; male-dominated.
  • Environmental effects: No more than any other animal, at least before fire was harnessed.

2: Super-predator/hunter-gatherer
  • Primary energy sources: Cooked hunted and gathered food, and wood
  • Began: 60-50 kya
  • Input: 2.5
  • Energy efficiency: <5%
  • Surplus energy produced: 0.1
  • Societal attributes: Share the kills and gathering results; fight other bands in raids, especially as territories shrink.
  • Environmental effects: Anthropogenic burning alters ecosystems; megafaunal extinctions.

3.1: Subsistence agricultural
  • Primary energy sources: Cooked crops and wood
  • Began: 11 kya
  • Input: 5
  • Energy efficiency: 10%
  • Surplus energy produced: 0.5
  • Societal attributes: Village life, beginning of social hierarchies as economic redistribution becomes more complex; initially peaceful through chiefdom phase, but organized warfare between settlements develops as states begin to form.
  • Environmental effects: Plants and animals domesticated; environments around villages and herds are transformed into human-useful biota.

3.2: Advanced agricultural
  • Primary energy sources: Cooked professionally raised crops and wood
  • Began: 6 kya
  • Input: 10
  • Energy efficiency: 15%
  • Surplus energy produced: 1.5
  • Societal attributes: State formation, literacy, economic/social/political stratification: elites appear, mass slavery, pronounced subjugation of women, professions form, including soldiers, priests, and craftsmen.
  • Environmental effects: Plow agriculture disturbs soils, professional deforestation, irrigation, competing predators eliminated, urban environments formed, with commensurate large ecological footprints.

4.1: Early industrial
  • Primary energy sources: Coal
  • Began: 1700 CE
  • Input: 30
  • Energy efficiency: 25%
  • Surplus energy produced: 7-to-8
  • Societal attributes: Capitalist formation, end of pronounced subjugation of women, and slavery abolished.  Industrial working class appears, severed from land.  Warfare becomes industrialized.  Transcontinental, capitalist-based empires form.
  • Environmental effects: Heavy mining operations, increasing air and water pollution, increase in carbon dioxide content of atmosphere, disappearing forest and natural habitats in larger areas.

4.2: Advanced industrial
  • Primary energy sources: Coal, oil, and electricity
  • Began: 1860 CE
  • Input: 60
  • Energy efficiency: 35%
  • Surplus energy produced: 21
  • Societal attributes: Women liberated, capitalists dominate states, and global wars as empires fight over controlling subject peoples and their resources.
  • Environmental effects: Nature under siege.  Roads and expanding urban areas bring larger areas of nature under human control, with resultant destruction.  Conservation movements begin.

4.3: Industrial-technological
  • Primary energy sources: Oil, coal, and electricity, with nuclear power producing some of it
  • Began: 1950 CE
  • Input: 110
  • Energy efficiency: 36%
  • Surplus energy produced: 40
  • Societal attributes: Racism, sexism, and other discriminatory ideologies largely overturned in imperial heartlands, but exploitation exported to subject peoples.  Warfare to secure energy resources becomes more intense, but due to the threat of nuclear weaponry, warfare is not waged between industrialized nations, but against resource-rich but industrially poor nations.  As oil runs out, the standard of living in industrialized nations declines.
  • Environmental effects: Nature largely banished from urban environments, humanity’s ecological footprint encompasses the entire planet, species extinctions accelerate at biosphere-threatening rates, and human-induced climate change becomes dramatic.  As oil runs out, increasingly marginal sources are exploited, with resultant accidents.

5: Free energy
  • Primary energy sources: Zero-point field
  • Began: 2020?
  • Input: Virtually unlimited
  • Energy efficiency: Relatively unimportant
  • Surplus energy produced: 1,000 or 10,000 or 100,000 or more, reaching a "Type 1 Civilization"
  • Societal attributes: End of scarcity-based ideologies.  End of hierarchical societies.  End of urban societies.  Race disappears.  With the end of scarcity comes the end of war.  Heaven on Earth – or do we blow Earth up?
  • Environmental effects: Exploitation of nature no longer necessary to improve human standard of living.  No more destructive mining of materials or water tables, and the end of air and water pollution.  Nature reclaims ecosystems, ideally with human assistance. 
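
One internal relationship in the table above is easy to check: each epoch’s surplus figure is approximately its input multiple scaled by its energy efficiency.  Below is a minimal sketch using the table’s own numbers (epoch 2’s “<5%” is taken as 5% here, and epoch 1 is omitted because its efficiency is not given).

```python
# Minimal check of the table's arithmetic: surplus is roughly input x efficiency.
# Inputs are multiples of dietary calories; efficiencies are fractions.
epochs = [
    ("2: hunter-gatherer",       2.5, 0.05, "0.1"),
    ("3.1: subsistence agric.",  5,   0.10, "0.5"),
    ("3.2: advanced agric.",     10,  0.15, "1.5"),
    ("4.1: early industrial",    30,  0.25, "7-to-8"),
    ("4.2: advanced industrial", 60,  0.35, "21"),
    ("4.3: industrial-tech.",    110, 0.36, "40"),
]

for name, energy_input, efficiency, table_surplus in epochs:
    surplus = energy_input * efficiency
    print(f"{name}: {energy_input} x {efficiency:.0%} = {surplus:g} (table: {table_surplus})")
```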

 

Humanity’s First Epochal Event(s?): Growing our Brains and Controlling Fire

Chapter summary:

When that likely human ancestor made the first stone tool, it was the culmination of a process of increasing encephalization and manipulative ability that probably began its ascent with the appearance of apes and accelerated when humanity’s ancestors became bipedal.  Studying great apes today and applying those findings to humanity’s ancestors is problematic, but there has probably not been significant evolution in great apes since they descended from the last common ancestor that they shared with humans, particularly chimpanzees.  About one mya, bonobos split from other chimpanzee populations and became a separate species, but for many years scientists did not realize it.  Another chimpanzee split about 1.5 mya created east and west chimp species that are virtually indistinguishable today.  It is widely considered to be very likely that the last common ancestor of chimps and humans looked like a chimp.[466]

Other than humans, rhesus macaques are Earth’s most widespread primates, and both species are generalists whose ability to adapt has been responsible for their success.  Rhesus macaques are significantly encephalized, about twice as much as dogs and cats, and nearly as much as chimpanzees.  Rhesus macaques have what is called Machiavellian social organization, in which everybody continually vies for rank, and power is everything.  Those with rhesus power get the most and best food, the best and safest sleeping places, mating privileges, the nicest environments to live in, and endless grooming by subordinates, whom the dominants can beat and harass whenever they want, while those low in the hierarchies get the scraps and are usually the first to succumb to the vagaries of rhesus life, including predation.[467]  It is the same energy game that all species play.  But even the lowliest macaque will become patriotic cannon fodder if his society faces an external threat, as even a macaque knows that a miserable life is better than no life at all.  The violence inflicted seems economically optimized; within a society the violence is mostly harassment, but when rival societies first come into contact, the violence is often lethal, as the initially established dominance can last for lifetimes.  Within a society, killing a subordinate does not make economic sense, as that subordinate supports the hierarchy.  Potentates rely on slaves.  The human smile evolved from the teeth-baring display of monkeys that connotes fear or submission.[468]

For all of their seeming cunning and behaviors right out of The Prince, rhesus monkeys cannot pass the mirror test; they attack their images, as they see their reflections as just another rival monkey.  Chimpanzees, on the other hand, pass the mirror test, and the threshold of sentience, whatever sentience really is, may not be far removed from the ability to pass the mirror test, or perhaps humanity has not yet achieved it.  Capuchin monkeys, considered the most intelligent New World monkeys, have socially based learning, in which the young watch and imitate their elders.  Different capuchin societies have different cultures and different tool-using behaviors, reflected in different solutions to similar foraging problems.[469]  Capuchins, isolated from African and Asian monkeys for about 30 million years, have striking similarities to their Old World counterparts, with female-centric societies and lethal hierarchical politics.  As with chimpanzees and humans, ganging up on lone victims is the preferred method, which increases the chance of success and reduces the risk to the murderers.[470]  Unlike rhesus monkeys, for instance, capuchin males can help with infant rearing, but they will also kill infants that they did not father, as rhesus monkeys, chimpanzees, and gorillas also do (that behavior has been observed in 50 primate species).[471]  Those comparisons provide evidence that simian social organization results from the connection between simian biology and environment; their societies formed to solve the problems of feeding, safety, and reproduction.

Chimps and orangutans have distinct cultures and ways of transmitting knowledge, usually confined to observation.  They have regional variations in tool use, and orangutans can display startling intelligence in captivity that is not witnessed in the wild, which may be like country bumpkins moving to the city, where they can develop their intellects or get a chance to use them.[472]  Chimps can negotiate, deceive, hunt in ranked groups, learn sign language, use more than one tool in a process, problem-solve, and engage in other human-like activities.  Developmentally, a chimp is ahead of a human until about age two, and chimps can also express empathy.[473]  Research has suggested that imitation (performing somebody else’s actions) and empathy (feeling what somebody else feels) are related neurologically.[474]  Humans, however, are far better than chimps in their social-cognitive skills, which bring in the "theory of mind": inferring what others are thinking.  That is suspected to be the key developmental trait that set humans apart from their cousins.[475] 

Many observable common aspects of today’s simians probably reflect ancestral traits predating the evolutionary splits that led to humans.  A chimpanzee’s brain is about 360 cubic centimeters (“ccs”) in size, and the gracile australopithecine that probably made those early stone tools had a brain of about 450 ccs.  That brain growth reflected millions of years of evolution since the chimpanzee line split, at least a million years of bipedal existence, and hands adapted to manipulating tools.  The cognitive and manipulative abilities of the species that made early stone tools seem to have been significantly advanced over chimps’.  Below is a comparison of the skulls of a modern human, an orangutan, a chimpanzee, and a macaque.  (Source: Wikimedia Commons)

The human brain weighs more than three times the orangutan's and chimpanzee's, and more than ten times the macaque's.  Beginning about 2.5 million years ago, around when the first stone tools were invented, the human line's jaws became weaker and jaw muscles were no longer attached to the braincase.[476]  Some scientists think that that change helped the human line's brain grow.
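
Such cross-species comparisons are often normalized by body size with an encephalization quotient (“EQ”): actual brain mass divided by the brain mass expected for a typical mammal of that body size.  The sketch below uses Jerison’s classic expected-mass formula and rough textbook body and brain masses; none of those numbers come from this essay, so treat the outputs as illustrative only.

```python
# Hedged sketch of Jerison's encephalization quotient for mammals.
# The constant 0.12 and the 2/3 exponent are Jerison's; masses are rough
# textbook figures, all assumptions rather than figures from this essay.

def eq(brain_g: float, body_g: float) -> float:
    """Actual brain mass over the mass expected for a mammal of that body size."""
    expected_brain_g = 0.12 * body_g ** (2 / 3)
    return brain_g / expected_brain_g

for species, brain_g, body_kg in [
    ("human", 1350, 65),
    ("chimpanzee", 390, 45),
    ("rhesus macaque", 90, 8),
]:
    print(f"{species}: EQ = {eq(brain_g, body_kg * 1000):.1f}")

# human: EQ = 7.0, chimpanzee: EQ = 2.6, rhesus macaque: EQ = 1.9 (roughly)
```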

The rise of humans was dependent on numerous factors, but the most important may have been the ability to increase humanity’s collective knowledge.  If each invention during human history had to be continually reinvented from scratch, there would not be people today.  The cultural transmission of innovations was critical for growing humanity’s collective technology, skills, and intelligence.  Striking stones to fashion tools was new on Earth, and it was likely invented once, and then proliferated as others learned the skill.  The pattern of proliferation of stone tool culture in Africa supports that idea.

Those first stone tools are called pebble tools, and anthropologists have placed the protohumans who made them in the Oldowan culture (also called the Oldowan industry, or Mode 1 on the stone tool scale).  The rocks used for Oldowan tools were already nearly the needed shape; the tools were made by banging candidate rocks on a rock “anvil,” and the fractured rock’s sharp edge was the tool.  Those first stone tool makers were still largely the hunted, not hunters, and stone edges would have been like claws and teeth, making it easy to scavenge predator kills in a way that primates had never before experienced.  Modern researchers have used Oldowan tools to quickly butcher elephants.  Sawing a limb from a predator kill and stealing it would have been quick and easy.[477]  Stone tools also crushed bones to extract marrow, and would have made harvesting and processing plant foods far easier.[478]

Below are relics of the five stone tool cultures that scientists have discovered.  (Source for all images: Wikimedia Commons)

Scientists today think that, above all else, the first stone tools began humanity’s Age of Meat.  Meat is a nutrient-dense food and is highly prized among wild chimpanzees, which use it as a key social tool, and male chimps have used it as payment for sex.[479]  The human brain is more than three times the size of a chimpanzee’s, but recent research suggests that the human brain’s size is normal for its body size, and great ape brains seem relatively small because their bodies became relatively large, possibly due to sexual selection that resulted from vying for mates.[480]  Humans developed relatively larger brains and relatively smaller and weaker bodies, which was probably an energy tradeoff; something had to give.[481]  Protohumans began relying on brains more than brawn.  The studies of brain size, encephalization, neocortex function, intelligence, and their relationships are in their infancy.  The current leading hypothesis for the stimulus of simian brain growth is social navigation.  Larger brains were needed for navigating increasing social complexity, not only in the number of individuals in a society but also in the sophistication of interactions.[482]  It is also argued that smarter brains allowed for greater social complexity, in another possible instance of mutually reinforcing positive feedbacks.  Societies can perform tasks that individuals cannot.  Those Machiavellian rhesus macaques engage in wars and revolutions.  They can procure a food source and secure the territory, which creates the energetic means for developing a society.  Tool-making may have been a bonus of that enlarged brain needed for social navigation, and walking bipedally coincidentally provided new opportunities for hands.  Numerous hypotheses have been proposed to explain the rise of human intelligence, and all of the proposed dynamics may have had their influences.  Brains have very high energy requirements, about 10 times the energy needs of equivalent muscle mass, and primates cannot consciously turn their brains off any more than they can turn their livers off.  Few studies have been performed on the relationships between energy, brains, and sleep, but a recent one found that sleep seems to be how brains recharge themselves.[483]

Larger brains had to confer immediate advantages or else they would not have evolved, especially for organs as energy-demanding as brains are.  Evolutionary pressures ensure that there is no cost without an immediate benefit.  As humans have demonstrated, intelligence combined with manipulative ability led to a domination of Earth that no other organism ever achieved.  Humans weigh about 50% more than chimpanzees, but have brains three times the size.  A human brain comprises about 2% of the body’s mass, but uses nearly 20% of its energy at rest.  Growing an energy-demanding organ was funded with the coin of energy.  How did protohumans manage it?
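To put rough numbers on that bill, below is a minimal back-of-the-envelope sketch in Python, using the round figures just quoted; treating the rest of the body as having a uniform average energy density is my simplifying assumption, for illustration only.

```python
# Back-of-the-envelope sketch of the human brain's energy bill, using the
# essay's round figures.  All values are approximations for illustration.
brain_mass_share = 0.02    # the brain is ~2% of body mass
brain_energy_share = 0.20  # but uses ~20% of the resting energy budget

# Energy use per unit mass, relative to the body-wide average:
relative_burn = brain_energy_share / brain_mass_share
print(f"brain tissue burns ~{relative_burn:.0f}x the body's average per gram")
# ~10x, roughly consistent with the tenfold muscle comparison cited earlier.

# Brains ~3x a chimp's, in a body only ~1.5x as heavy:
brain_ratio, body_ratio = 3.0, 1.5
print(f"brain mass per unit of body mass: ~{brain_ratio / body_ratio:.0f}x a chimp's")
```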

There are a number of possible solutions to obtaining the energy to fuel the growing protohuman brain, and they all fall under these categories:

  • Increase total energy input;
  • Reduce total energy output;
  • Rob energy from other tissues and processes, which will become smaller or more energy efficient, or be discarded.

Studies have shown that humans and chimpanzees have the same basal metabolism, so the first possibility is considered very unlikely in our ancestors, although large brains in general seem to require higher metabolic rates.[484]  The subject of reducing energy output has an intriguing hypothesis: bipedal motion allowed humans to move by using less energy than our pre-bipedal ancestors did.  Human bipedal locomotion requires only a quarter of the energy that chimpanzee locomotion does, and chimps use about a quarter of their metabolism walking, although whether this was a key evolutionary event is controversial.[485]  Even though protohumans would have taken advantage of bipedal walking to range farther than chimps (humans can average 11 miles a day, while chimps can only achieve six[486]), and thereby spent a relatively larger proportion of their energy on locomotion, bipedal locomotion’s energy savings alone might largely account for the growing brain’s energy needs.  The Expensive-Tissue Hypothesis was developed to account for the required energy; it proposed that the energy to fuel the growing brain came from reducing digestion costs, which was initially achieved by eating more meat.[487]
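The locomotion arithmetic can be made concrete.  Below is a minimal sketch in Python that combines the round numbers above; the chimp-like baseline brain share of roughly 10% of resting energy is my assumption, added only to show that the freed-up energy is in the right ballpark.

```python
# Back-of-the-envelope sketch of the bipedal energy-savings argument, using
# the essay's round numbers.  These are approximations, not measurements.
chimp_walking_share = 0.25  # chimps spend ~1/4 of their metabolism walking
biped_cost_ratio = 0.25     # bipedal walking costs ~1/4 as much per distance
range_ratio = 11 / 6        # humans average ~11 miles/day vs. ~6 for chimps

# Share of the daily budget a biped spends covering nearly twice the distance:
biped_walking_share = chimp_walking_share * biped_cost_ratio * range_ratio
savings = chimp_walking_share - biped_walking_share

print(f"biped walking share: {biped_walking_share:.2f} of the budget")  # ~0.11
print(f"energy freed up:     {savings:.2f} of the budget")              # ~0.14
# ~14% of the budget freed up, in the ballpark of the brain's extra cost
# (from an assumed chimp-like ~10% of resting energy to the human ~20%).
```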

Gorillas and chimpanzees are hindgut fermenters and can digest cellulose while humans cannot.  The human digestive tract is only about 60% of the size expected for a primate of our size.[488]  Human guts are far smaller than chimp and especially gorilla guts, which process all of that low-calorie foliage.  Chimp and gorilla rib cages flare outward from top-to-bottom, like a dress, as did australopithecine rib cages, to accommodate large guts, as shown below. 

When chimpanzees eat meat, they put large, tough leaves in their mouths.  That helps them overachieve as meat eaters, as their teeth and jaws are poorly adapted for chewing meat.  Mountain gorillas eat no meat at all.  In the wild, great apes spend about half of their day chewing.  Chimpanzees are the most carnivorous great ape, and although meat is the greatest treasure in chimpanzee societies, they often stop eating meat after chewing it for an hour or two and revert to fruit and other softer foods if they can get them.  Chimpanzees hunt animals primarily during the dry season, when their staple, fruit, is scarce.  Chimps have been seen killing monkeys, eating their organs, and then abandoning the carcasses to find more monkeys to kill.  Organ meats and intestines are far easier to chew, and a poor meat chewer like a chimpanzee prefers soft meats.[489]  Just as chimpanzees prefer soft meats, predators will eat soft organs first and leave the tougher muscle for later, if they eat it at all.  It depends on how plentiful the available flesh is, but the pattern across all predator groups is clear: eat the best first, and leave the lesser-quality foods for the end or let scavengers have them.  It will always be a cost/benefit decision.  All things being equal, the less time and energy needed to eat something, the sooner it will be eaten.  If extra time and effort are needed to procure food, then the nutritional reward (primarily in energy) has to be exceptional to justify it.  Evolutionary pressures have made animals into excellent accountants.[490]  The human sweet tooth is a relic of humanity’s fruit-eating ape heritage, and the desire for fatty foods reflects an adaptation to prefer the energy-richest of foods.  Fat, with its energy-dense hydrocarbon chains, is the ultimate energy windfall among foods.

A recent study has challenged the Expensive-Tissue Hypothesis, at least as far as robbing energy from the digestive system to fuel the brain.[491]  The study compared brain and intestinal size in mammals and found no strong correlation, but it did find an inverse correlation between brain size and body fat, which suggests that fat stores and large brains are alternative insurance policies against starvation.  But since human fat does not impede our locomotion much, humans have combined both strategies for reducing the risk of starvation.  Whales have bucked the trend for a similar reason: being fatter does not impede their locomotion, and it provides energy-conserving insulation.  A human infant’s brain uses about 75% of the infant’s energy, and baby fat seems to be brain protection, so that the brain does not easily run out of fuel.  However, the rapid evolutionary growth of an energy-demanding organ like the human brain seems unique or nearly so in the history of life on Earth, and comparative anatomy studies may have limited explanatory utility.  There are great debates today on how fast the human brain grew and what coevolutionary constraints may have limited the brain’s development (1, 2, 3), and scientific investigations are in their early days.[492]

About a quarter-million years after Oldowan culture began, a new species appeared, called Homo habilis and named by Louis Leakey in 1964.  Whether Homo habilis is really the first member of the human genus has been debated ever since.  As with all of its primate ancestors, Homo habilis was adapted for tree climbing.  Virtually all apes and monkeys sleep in trees, especially those in Africa; silverback gorillas, along with some isolated chimps, are nearly the lone exceptions.  Homo habilis certainly slept in trees.  The predators of African woodlands and grasslands have been formidable for millions of years, and the predators of Homo habilis in those days included Dinofelis, Megantereon, and Homotherium.  Night camera footage readily available on the Internet today shows the nighttime behaviors of hyenas, lions, and others.  The African woodlands and plains are extremely dangerous at night, just from roving predators, not to mention being stumbled into by elephants, rhinos, and Cape buffalos.  Today’s African hunter-gatherers sleep around the campfire to keep predators and interlopers at bay; a sentinel keeps watch as everybody sleeps in shifts through the twelve-hour nights.  They are safer from predation at night in camp than they are in daytime as they roam.[493]

The anatomy of habilines (members of Homo habilis) spoke volumes about their lives.  They had brains of about 640 ccs, with an estimated range of 600 to 700 ccs, nearly 50% larger than their australopithecine ancestors’ and approaching twice the size of chimps’, and the artifacts they left behind denoted advanced cognitive abilities.  They stood about 1.5 meters tall (five feet), and weighed around 50 kilograms (110 pounds).  With the first appearance of habilines about 2.3 mya, Oldowan culture spread widely in East Africa and also radiated to South Africa.  Habiline skeletal adaptations to tree climbing meant that they slept there at night, just as their ancestral line did.  Their teeth were large, which meant that they heavily chewed their food.  Habiline sites have large rock hammers that they pounded food on, to break bones and crack nuts.  Those habiline stone hammers may well have also been used to soften meat, roots, and other foods before eating them.[494]  Sleeping in trees meant that habilines were preyed on, mostly by big cats.  Today, the leopard is the only regular predator of chimpanzees and gorillas, and leopards have developed a taste for humans at times.  But if modern studies of chimpanzees are relevant, our ancestors engaged in warfare for the past several million years, and monkeys have wars, so simian intra-species mass killings may have tens of millions of years of heritage.  Habilines were not only wary of predators, but also of members of their own species.

Monkeys, apes, and humans have many traits in common, and one is that members of “out-groups” are fair game.  Chimpanzees are the only non-human animals today that form ranked hunting parties, and they are also the only ones that form hunting parties to kill members of their own species.[495]  Distinct from the killer ape hypothesis, which posits that humans are instinctually violent, the chimpanzee violence hypothesis proposes that chimps only engage in warfare when it makes economic sense: when the benefits of eliminating rivals outweigh the risks/costs.  Macaque wars and revolutions appear spontaneously, but chimp wars have calculation behind them, which befits a chimp’s advanced cognitive abilities; they plan murderous raids and carry them out.  It is quite probable that the advancing toolset of protohumans was used for coalitionary killing when perceived benefits exceeded assessed risks/costs.  Just as with other behaviors that humans and chimps have in common, these traits probably also existed in our last common ancestor.  Other animals also engage in intra-species violence; spiders do it when key resources are scarce and contested, and a power imbalance between neighboring ant colonies can trigger invasion and extermination by the larger colony.[496]  But human and chimpanzee warfare is uniquely organized and calculating.

Habilines and australopithecines coexisted, and the last gracile australopiths discovered so far went extinct about 2.0 mya.  Robust australopiths survived to about 1.2 mya (1, 2), and habilines disappeared about 1.4 mya, so they overlapped the tenure of a species about whose genus there is no doubt: Homo erectus, which first appeared about 2.0-1.8 mya; the first fossils are dated to 1.8 mya.  Homo erectus is the first human-line species whose members could pass for humans on a city street, if they dressed up and wore minor prosthetics on their heads and faces.  Homo erectus had a protruding nose and was probably relatively hairless, the first of the human line to be that way.  That was probably related to shedding heat in new, hot environments, as well as cooling its large brain (molecular data from head and body lice supports arguments that the human line became relatively hairless even before australopiths).[497]  There are great controversies about that overlap among those three distinct lines that might all have ancestral relationships.  Oldowan culture was a multi-species one.  There is plenty of speculation that the rise of Homo habilis and its successors drove other hominids to extinction through competition, predation, warfare, or some combination of them.  What is certain is that “competing” protohumans went extinct after coexisting with the human line for hundreds of thousands of years.  The suspicion that evolving humans drove their cousins to extinction becomes more common as the timeline progresses toward today.[498]

The fossil record is thin for early humans, and any portrayal of the human family tree of those times always carries the disclaimer that it is speculative.[499]  Below is a current depiction of the human family tree, with geographical distributions presented.  (Source: Wikimedia Commons)

and below is one from a leading scientist of human evolution, Christopher Stringer.  (Source: Wikimedia Commons)

With the paucity of fossils, particularly between 2.5 and 1.0 mya, a timeframe in which the bones of only about 50 individuals have been found so far, discoveries are regularly announced that can be promoted as finds that will shake up the human family tree.  That recently discovered australopith kept evolving hands better suited for tool-making, in parallel with evolving humans, and is perhaps even a human ancestor, which would relegate Homo habilis to an extinct offshoot, not a human ancestor.[500]  With such a scanty existing record, such announcements can be more than hyperbole.  There are often heated controversies over the dates of fossils and artifacts, in which changing a date can radically alter how the evidence is viewed.  Many findings can change from minor curiosity to paradigm-shifting discovery and back again, depending on the dates assigned to them.

The most complete early fossil find for the genus Homo is called Turkana Boy, who lived about 1.5 mya.  He was a child or juvenile, and would have stood more than 1.6 meters tall as an adult, about as tall as an average woman today (earlier estimates that he would have reached more than 1.8 meters (six feet) in adulthood appear overstated today).  He is the definitive Homo erectus find so far, and the changes from his ancestral species were substantial.  His teeth were about 20% smaller than his predecessors’, the greatest between-species shrinkage in the entire line since the chimp/human split; his jaw shrank as well, and perhaps most importantly, his guts shrank, as his rib cage is nearly modern in being more barrel-shaped than flaring at the bottom.  That was also the most dramatic rib cage change in the human line.  His hips became narrower and he no longer had the shoulder, arm, and hand adaptations needed for sleeping in trees; he was fully adapted for living on the ground.  Here are skeleton comparisons between gorillas, chimpanzees, Homo erectus, and today’s humans.  (Source: Wikimedia Commons)

Homo erectus may have been the first member of its line since the chimp/human split to leave Africa, and was certainly the first to become widespread.  The Homo erectus story is a big one, and covers several subjects pertinent to this essay.

I am taking some liberties in calling Turkana Boy a Homo erectus; he is technically a member of Homo ergaster, which is often considered ancestral to Homo erectus, which is the Asian variant’s name.  There is great debate regarding how the human family tree branches between Ardi and Homo heidelbergensis.  Some call the various erectus-type species all subspecies of Homo erectus, while others argue for several distinct species.  I will not stray far from the orthodox narrative here, for good reason.  The reconstructed early human tale is based on very limited evidence, but that evidence will only grow over time, and the tools and techniques for analyzing it will become more sophisticated.  Although there may be some upcoming radical changes in the view of the early human journey, the lifetime efforts of countless scientists and fossil hunters support the narrative that this essay sketches, and I respect their findings and opinions, even though I acknowledge many limitations.  The human ego, it seems, becomes more involved as the story of life on Earth moves closer to its human chapters.

Some further examples of the complexity and debate follow.  About when Homo erectus is supposed to have appeared, a fossil formed in a similar location, of a creature that was at least contemporary with Homo habilis.  Where it fits in the human family tree is unknown at this time, but today it is called Homo rudolfensis, and it is perhaps a descendant of Kenyanthropus platyops, which Meave Leakey (who led the team that discovered it) argued is a member of a new genus.  Because there is Neanderthal DNA in the modern human genome, under the classic definition of a species, Neanderthals have been placed within Homo sapiens by some anthropologists.  Some small Homo erectus fossils in Georgia were initially classified in their own species, but are now designated as a Homo erectus subspecies.  The “hobbit” fossils recently discovered on Flores Island have been widely considered island-dwarfed Homo erecti, but they have features that suggest that they may have been habilines or even australopithecines, which would dramatically change the current view of the first migrations past Africa.  They may well have been Oldowan culture australopiths that migrated from Africa about when Homo erectus did, and they also controlled fire.  Similarly, a relative of Homo erectus that precedes Homo heidelbergensis is called Homo antecessor, but it may also be a Homo erectus subspecies.  The confusion and debate arise partly because the differences between those “species” are minor, more on the order of regional variation than any radical change.  They perhaps could have all interbred with each other.  Other than the “hobbits,” there are no great anatomical changes and few noticeable cultural ones among the various specimens for more than a million years of evolution, so I refer to them all as Homo erectus, as do many anthropologists, particularly when writing for the lay audience.[501]  For those who want to explore the relatively fine distinctions, the material is readily available for study and can be another useful example of the process of science, if one of the more heated illustrations.

The most-accepted hypothesis today is that Homo erectus evolved from Homo habilis and first appeared in East Africa between 2.0 and 1.8 mya.  If those are not the exact species that the human line descended through during those times, our actual ancestors were close cousins.  Early Homo erectus adults had brains of about 850 ccs, and some later specimens reached 1,100 ccs, or nearly triple the volume of a chimpanzee’s brain.  Today’s human brain only averages about 1,200 ccs (women 1,130 and men 1,260).  Homo erectus’s brain was thus another third larger than Homo habilis’s, and that brain probably was responsible for its relatively sophisticated material culture.  But important as its growing brain was, other anatomical changes were more telling.  Homo erectus was fully adapted for living on the ground and walking great distances.  For the first quarter-million years of Homo erectus’s existence, it lived in the Oldowan culture, which used tools and weapons that were little more than rocks with sharpened edges, and probably some shaped sticks.  They evolved in a highly dangerous environment, and all of their ancestors slept in trees.  How could they have slept on the ground?  In a word: fire.

More than any other technical innovation, the control of fire marked humanity’s rise.  In The Descent of Man, Darwin called making fire humanity’s greatest achievement; the only possible exception that he noted was the invention of language.  Even today, in our industrialized and technological world, almost all of our energy practices are merely more sophisticated ways of controlling fire.  The initial control of fire was at once a social act, a mental act, and a technical act.[502]  Although making stone tools represented the big break between the human line and its ancestry, it only allowed apes to mimic what other animals could do.  Stone tools represented the artificial claws, teeth, and jaws of animals far larger and more capable than apes at killing and eating flesh and bones.  Protohumans with stone tools could scavenge more effectively and maybe defend themselves and even attack others, but that was not initially different in kind from what other animals could do, and it was a pathetically small advantage when their first stone tools were merely rocks with sharpened edges, about on the order of brass knuckles.  Would you want to fend off a lion predation attack (and perhaps multiple lions) with a rock, and at night?  Controlling fire was the radical break from all other organisms that ever lived on Earth.

A bonobo named Kanzi built a fire (using matches) and roasted marshmallows on his own, and made Oldowan-style tools after being taught.  But those who invented stone tools and the control of fire were the Einsteins and Teslas of their day.  Hunter-gatherers today often start fires by banging flint against pyrite stones, which is a combination that produces generous sparks.  Habilines probably used such stones when making tools.  Even Darwin suggested that that may have been how protohumans discovered how to make fire, as they banged rocks together.[503]  I have not seen anybody else advocate it, but as with the likelihood that protohumans learned to make stone tools once and the practice then spread, I consider it very likely that the control of fire was learned only once, and then spread.  Richard Wrangham thinks that habilines first controlled fire, which led to the evolution of Homo erectus.[504]  He could be right, and my reasoning follows. 

First and foremost, I have a very difficult time imagining that Homo erectus could have slept on the ground without something to keep Africa’s predators at bay, and I am not the only one.[505]  I doubt that slender apes, much smaller than humans, swinging sharpened rocks and sticks at saber-toothed cats, hyenas, and the like (or throwing them) would have done much to scare them off.  Those days predated spears, arrows, and other sophisticated weapons by more than a million years.  The strongest plausible deterrent is fire, and I doubt that Homo erectus was simply vigilant and the sentry awoke everybody when the cats came and they all scrambled up trees (or lived in large enough groups so that they could mass attack any predators).  Those apes certainly could not have outrun them.  Cats are ambush predators, and woodland apes sleeping on the ground would have likely been easy meat.  Without fire, Homo erectus would have been in the same situation as its ancestors, going back tens of millions of years: they slept in trees and other lofty refuges so that predators could not attack them.  But all animals respect and fear fire.  Fire is the ultimate protection and weapon for humans, even to this day.

Wrangham made the ability to sleep on the ground a key part of his Cooking Hypothesis.  Homo erectus was not only adapted for ground living, its guts and teeth also shrank, which would have reflected eating soft and easy-to-digest food.  Along with organ meats, cooked food is the leading candidate for soft foods.  If habilines mastered fire, they would have almost immediately used it for cooking.

In the 1990s, Wrangham began to develop his Cooking Hypothesis, which he more fully elucidated in Catching Fire, published in 2009.  Wrangham marshaled numerous lines of evidence to support his hypothesis, which was widely pilloried by his colleagues.[506]  Wrangham conceded that the archeological record was scarce for the early control of fire, but he countered that evidence of early fires would rarely survive.  Most caves last a quarter-million years or so; they are made from soft stone, and the geological dynamics that create caves also destroy them.  Also, early humans, just like gorillas and chimpanzees today, and even early hunter-gatherers, would have been constantly on the move, never sleeping in the same place twice.  If the first fires were made in the African woodlands and grasslands, the evidence would not survive for long, just as the remnants of today’s hunter-gatherer fires on the African savanna quickly disappear.  The gist of Wrangham’s Cooking Hypothesis is this:

  • Humans cannot subsist solely on raw food today (they cannot get enough calories from it), but need their food cooked, and all human societies cook their food;
  • Cooked food reduces the energy required for digestion and also allows more calories to be absorbed from food, sometimes greatly more, such as doubling;
  • Anatomical changes, beginning with Homo erectus and perhaps even earlier, provide evidence that humans have cooked their food for a very long time, up to two million years; the control of fire may be responsible for the appearance of Homo erectus;
  • The control of fire allowed Homo erectus to leave the trees and sleep on the ground, which was a first for the human line (or perhaps habilines or australopiths were the first to sleep on the ground with fire, but Homo erectus was the first human-line member biologically adapted to it);
  • The energy boost from cooked food helped fuel the continued expansion of the human brain, from habilines to today’s humans;
  • Cooking reduced chewing time from the six hours per day that other great apes chew to less than an hour for humans, which freed humans to pursue other activities with their enlarged brains, in one of the positive feedback loops that led to modern humans;
  • Fire became the center of human social life after it was controlled, and the changes attending that development profoundly affected the human journey.

Wrangham’s hypothesis is more robust and subtle than this essay can do justice to, but I will survey some of the findings, implications, and controversy.  Raw food has some nutritional properties superior to cooked food, such as retaining more vitamins, but because cooked food provides more digestible calories for humans than raw food does, cooking represented an evolutionary advantage.  Meat, starches, and seeds are far more digestible when cooked, and are much easier to chew.  Today, chimps in Senegal will not eat the raw seeds of Afzelia trees, but when a fire passes through the savanna, they search the ground below the Afzelia trees and eat their cooked seeds.[507]
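The energy logic behind those digestibility claims can be illustrated with a toy calculation.  The sketch below is my own illustration, not Wrangham’s numbers: the absorption and digestion-cost fractions are invented round values, chosen only to show how modest-looking changes in each can more than double the net energy of a meal.

```python
# Toy model of cooking's energy logic: cooking raises the fraction of
# calories absorbed and lowers the energy spent on digestion.  The fractions
# below are assumed round values for illustration, not measured data.
def net_energy(gross_kcal, absorbed_fraction, digestion_cost_fraction):
    """Net calories after absorption losses and the metabolic cost of digestion."""
    return gross_kcal * (absorbed_fraction - digestion_cost_fraction)

raw    = net_energy(1000, absorbed_fraction=0.50, digestion_cost_fraction=0.15)
cooked = net_energy(1000, absorbed_fraction=0.90, digestion_cost_fraction=0.05)
print(f"raw: {raw:.0f} kcal net, cooked: {cooked:.0f} kcal net")
print(f"gain from cooking: {cooked / raw:.1f}x")  # ~2.4x with these fractions
```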

People and animals universally prefer the taste of cooked food over raw, except for fruit, which was designed by the plant to be eaten by animals; no other foods were designed to be eaten and digested (except nectar, blossoms, and mother’s milk).  The toxins created by cooking, such as Maillard compounds, can cause health problems in humans, including chronic diseases.  But cooking also destroys some toxins, making otherwise inedible food palatable.  Cooking also reduces collagen, which makes meat tough, to gelatin (the protein is “denatured” when it falls apart), and converts raw starch to a far more digestible form.  However, as far as species viability is concerned, humans only have to live long enough to produce offspring.  The degenerative diseases (especially artery disease, cancer, and diabetes) that shorten human lives today would have been irrelevant in the ancient past, when virtually nobody lived long enough to die of old age and people could reproduce long before the deleterious effects of cooked food caught up with them.  Many detriments of cooking and food processing have only become important to human welfare with the advent of civilization.  Cooking would have been an undisputed advantage long ago.

Were the dramatic changes in Turkana Boy’s anatomy a result of cooked food, or was Turkana Boy eating organs as his species became hunters instead of hunted, with stone tools softening the meat and plant foods so that he did not need to chew as much?  Wrangham co-authored a study on the shrinking teeth in the human line that began with Homo erectus, and it concluded that food processing, cooking in particular, accounted for the effect.[508]  The number of neurons that cooked versus raw food can support in a brain has also been the focus of recent research.[509]  The primary reason why Wrangham’s hypothesis was initially dismissed was that archeological evidence for fires that long ago is almost nonexistent.  When Catching Fire was published, the earliest evidence with wide acceptance only supported fires beginning around 800 kya, where Israel is today, which is more than a million years after Wrangham’s estimated timeframe.  Wrangham did what all bold scientists do: he made falsifiable predictions.  If no evidence of early fires was ever found, his hypothesis would begin looking shaky.

Animals can quickly adapt to changing environmental conditions that impact their food supply.  For example, in recent studies of Galapagos finches during a severe drought, small-beaked finches largely died out, because large and hard seeds became dominant.  The surviving finch population had measurably larger beaks in one year.  It took 15 years of normal conditions for finch beaks to return to their pre-drought length.[510]  Wrangham argued that the biological changes attending cooked food would have been immediately evident, and Homo erectus’s anatomy presented the most dramatic changes seen in the human line.  The only other plausible candidate would have been Homo heidelbergensis, but it was only a more robust version of Homo sapiens.

The derision from Wrangham’s colleagues was loud…until evidence of fire being used a million years ago was found at Wonderwerk Cave in South Africa, using new tools and techniques.  The chortling is subsiding somewhat, scientists are now looking for the faint evidence, and long-disputed evidence of controlled fires from 1.5-1.7 mya is being reconsidered, although his hypothesis is still widely considered only "mildly compelling" at best.[511]  New tools may push back the control of fire to a time that matches Wrangham’s audacious hypothesis.  Wrangham cited the Expensive-Tissue Hypothesis as partially supporting the Cooking Hypothesis, but as discussed previously, the energy to power the human brain may not have derived solely from cooked food’s energy benefits.  Wrangham has cited numerous lines of evidence, one of which is a bird called the honeyguide, which has coevolved with humans to find honeybee hives and smoke them out; the humans get the honey and the honeyguide gets the larvae and wax.  According to recent molecular evidence, the evolutionary split of the honeyguide from its ancestors happened up to three mya, which supports the early-control-of-fire hypothesis.  There is great controversy regarding these subjects, from recent findings that some chimps make ground nests today, to arguments that meat rather than cooking led to the anatomical changes, to the social impacts of campfires.  This section of this essay will probably be one of the first to be revised in future versions, as new evidence is adduced and new hypotheses are proposed.

Two major events happened soon after Homo erectus appeared, and their sequence seems to support the Cooking Hypothesis.  The first was the migration of Homo erectus from Africa as early as 2.0-1.9 mya; they spread to Georgia and Java by 1.8 mya (perhaps 1.6 mya in the case of Java), and to China by 1.7 mya.  It was the first mass migration of apes from Africa since the Miocene, and Homo erectus may have become the first multi-continental member of the human line, and was certainly the first widespread one.  Favorable climates and a lower Himalaya range and Tibetan Plateau may have encouraged that migration.[512]  Unlike the Miocene apes that began to migrate from Africa 16.5 mya, there was no unbroken forest to sustain Homo erectus’s journey to East Asia.  Those Homo erecti migrants would have had to sleep on the ground for much of the journey, and they were not adapted for sleeping in trees, as already discussed.  From today’s viewpoint, it may seem that they were adventurers, but as will also become obvious with the spread of Homo sapiens, in one individual’s lifetime there was probably only modest movement, expanding into the next uninhabited valley or two.  Such an expansion happened one valley at a time, one generation at a time, and could take a species across a continent in a few thousand years, for those that could adapt to changing biomes.  Migrating at the same latitude would not have presented great climatic issues; as those migrations happened during the ice age, they were along southern Eurasia.  There is no evidence yet that Homo erectus ever made it to Australia, probably because of the ocean crossing required for passage.

The other big event happened about 1.8-1.7 mya, when African stone tools took a leap in sophistication and Acheulean culture (also called the Acheulean industry, or Mode 2) appeared, lasting for more than a million years.  The quintessential Acheulean tool is the hand axe, and the makers used bone, antler, and wood to shape the axes.  Some argue that the axes were not really axes at all but were used for other purposes, or were even just the leftover cores after flakes were removed.  Some gigantic hand axes have been discovered that could not have been easily used by human hands, and they may have been early status symbols.[513]  Not only were axes made, but also flakes, scrapers, cleavers, and other relatively sophisticated tools.  There is almost no doubt among anthropologists that Homo erectus invented Acheulean tools and developed them from Oldowan tools.  The axes have a very distinctive shape and could even be called a product of craftsmanship, which reflected minds greatly advanced beyond those of today’s great apes.

A plausible series of events, with fire coming first and Acheulean industry second, is that the Homo erecti that traveled to East and Southeast Asia did not have Acheulean tools, but only the primitive Oldowan toolset, and the most remote ones never used Acheulean tools.  I consider it quite possible that early Homo erecti (and maybe even an earlier protohuman, if the “hobbits” were descended from habilines or australopiths) migrated from Africa wielding fire.  Cooking came with it, and hundreds of thousands of years later, those Homo erecti that stayed home in cosmopolitan Africa invented a new level of technology, Acheulean tools, and that culture never made it to the remote corners of East Asia.  Some have speculated that those East Asian Homo erecti used bamboo more than stone, which would not be preserved for study today, or that as they moved east they lost the art of making Acheulean tools.[514]  I think the likelier explanation is that they never had Acheulean tools, which means that they left Africa before those tools were invented, but they brought fire with them, which was the essential technology.

The Homo erecti that arrived in East Asia and the islands off of Southeast Asia persisted, with virtually no changes evident in their anatomy or technology for more than 1.5 million years, only to disappear about when Homo sapiens arrived.  Like the tarsiers that found refuge in the islands near Southeast Asia, those Homo erecti at the far end of the “known” world seem to have lived like country bumpkins for well over one million years, without any outside disturbances or benefits from their cosmopolitan homeland.  The foregoing is largely my speculation on the issue, which could collapse like a house of cards with the Next Great Finding, and the lack of evidence for early fires is the biggest hurdle.  Like Wrangham, I will follow those investigations of early fire with great interest.  I strongly doubt that any species that ever acquired the greatest technology in Earth’s history would ever lose it, as it would have quickly become indispensable.

Growing the human brain was about more than energy.  There is speculation that meat protein helped human evolutionary brain development, and there is also evidence that oils help.  There are surely nutritional requirements besides calories, but calories comprise the vast majority of nutrition.  About 80% of what is called human nutrition consists of calories.  If animals can obtain enough energy, the other dietary constraints are usually minor issues.

Apes make poor carnivores and are adapted for eating fruit as their staple, and fruit is the ideal human food.  The dietary shift to meat, probably out of necessity, came with a price.  If humans get more than half of their calories from protein, they will die from protein poisoning.[515]  Chimpanzees get about ten percent of their calories from protein today, which is about the same level that humans seem to need, but it is not necessary to get that protein from meat.  I have not eaten meat since the 1980s.

Moreover, the rise of the human brain was not only about size, even if the human brain turns out to “only” be a linearly scaled primate brain.  The human cerebral cortex is four times the size of a chimp’s, and the cerebral cortex is considered to be where all higher human brain functions originate.  For all the influences of using hands, tools, cooking, and the like, they largely only laid the foundation for the cerebral cortex to grow.  A mystic might say that the growing cerebral cortex allowed for the human brain to host a more sophisticated consciousness, which originates in other dimensions.  This is a question largely unanswerable by today’s mainstream science, although Black Science probably has some pretty good ideas.  As with mainstream scientists, I will not attempt to address that question, at least in this part of the essay.  In the final analysis, the cerebral cortex’s growth made humans radically different from any other land animal in Earth’s history.  Cetaceans may have similar levels of brain functioning, perhaps even greater, but they cannot manipulate their environments like humans can and they cannot make fires.  Humans are significantly juvenilized when compared to chimps, for instance; humans retained traits of chimp infants.  An infant chimp’s flat face appears far closer to a human’s than an adult chimp’s does.  That juvenilization is partly why humans are far weaker, physically, than other great apes.  As the human line increasingly relied on its brain, it lost even more of its brawn. 

In summary, becoming bipedal had great portent for evolving protohumans, and the suspicion is very strong among scientists that it led to feedback loops in which tool use became advanced, which allowed for a richer diet, which helped lead to larger and more complex brains, which led to more advanced thinking and behaviors, which led to more advanced tools, which led to more acquired energy, better protection, and larger brains, and so it went.  But the control of fire was a watershed event.  Although better tools improved the viability of early humans, nothing on Earth could challenge fire-wielding humans.  With the control of fire, humans never again had to worry about predation as a threat to species viability, except from other humans.  Naturally, fire was eventually used for offense instead of defense. 

What is fire?  That may seem too elementary a question, but understanding what fire is and where it came from is vitally important for understanding the human journey.  The first fires were the quick release of stored sunlight energy that life forms, plants in that instance, had used to build themselves as they made their energy budget “decisions,” and the fuel was vegetation that had recently died and was dry enough to burn.  Burning released the energy so fast that the fuel became far hotter (because the molecules were violently “pushed” by the reaction, which also released photons) than anything that biological processes such as warm-bloodedness can achieve.  It was hot enough, in fact, that the released photons’ wavelengths were short enough (energetic enough) for human eyes to see them, in a phenomenon called flames.  Flames are visible side effects of that intense energy release.  The rapid movement of molecules rocketing from that great release of energy is the motion that powers the industrial age.  Those rocketing molecules move pistons in automobile engines and turbine blades in electric plants, and they are behind the damaging explosions of bombs and the propulsive explosions of rockets.  For more than one million years, all human fires were made by burning vegetation, and wood in particular.  What was fire doing?  Energy stored by plants, trees in particular, was violently released by controlled fires for the human-serving purposes of warmth, light, food preparation (to obtain more energy from food), and protection from predation, and fire also became the heart of social gatherings.  Humans have stared into fires for a million years or more. 

The energy from controlled fire allowed humans to leave the trees, grow their brains, and socially organize in new ways.  Humans commandeered energy that otherwise fed ecosystem processes and used it for immediate human benefit.  It was also the first great human robbery.  All heterotrophs “rob” energy from other life forms to live.  The primary exception is the symbiosis that flowering plants enter into with animals.  But no animal had ever robbed energy from ecosystems on that scale before.  By making fires, humans were liberating many times the energy that their biological processes used - energy that could have fed forest ecosystems.  While humans only used deadwood, the practice was minimally destructive to forest ecosystems.  But when humans began burning forests to flush out animals to kill and to make biomes suitable for animals to hunt, they were destroying and altering ecosystems on a vast scale.  A cord of wood provides about four years of the calories that fuel a human adult’s body, and one hectare can provide a sustainable annual harvest of about ten years of human calories.  A family of four using a hectare for firewood on a sustainable basis would thus burn about two-and-a-half times its caloric intake as wood.  Very little of that released energy would benefit humans if they burned it over a campfire, as humans did for the entire epoch of the hunter-gatherer; that liberated energy largely went straight into the sky.  The direct benefits to humans would be the energy that went into cooking food, the warming of human bodies, the making of tools, and the advantages of scaring off predators and providing light at night.  More indirect benefits would have been ecosystem changes that provided human-digestible calories, such as American Indians burning the woodlands and plains to make environments conducive to animals that they could easily hunt.  In this table, the earliest epochs are the most uncertain, but saying that hunter-gatherer humans used 2.5 times their dietary calories in their economy is probably, perhaps greatly, understating the case.  That 5% efficiency number is also a rough estimate, and both numbers could be refined by a scientifically performed effort.  Maybe somebody has already done it.  The numbers in that table for subsequent epochs are more accurate, and the most accurate of all are those for industrial-technological societies, and I live in one.  The increases in efficiency became more modest with each epoch as the limits of entropy were approached.
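The firewood arithmetic above can be checked in a few lines.  This is a minimal sketch in Python using the essay’s round numbers; normalizing everything to “person-years of calories” is my own bookkeeping convenience.

```python
# Firewood arithmetic from the essay's round numbers, normalized to
# "person-years of dietary calories."  All figures are approximations.
CORD_PERSON_YEARS = 4.0        # one cord of wood ~ four years of adult calories
HECTARE_YIELD_PER_YEAR = 10.0  # sustainable harvest ~ ten person-years per year
FAMILY_SIZE = 4

dietary_need = FAMILY_SIZE * 1.0      # four person-years eaten per year
wood_burned = HECTARE_YIELD_PER_YEAR  # the hectare's full sustainable yield

print(f"wood energy / dietary energy: {wood_burned / dietary_need:.1f}x")  # 2.5x
print(f"cords burned per year: {wood_burned / CORD_PERSON_YEARS:.1f}")     # 2.5
```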

When humans began to raze forests and use the resultant soils to raise crops, they were working their way down through the food chain, no longer harvesting ecosystem detritus but destroying entire ecosystems literally at their roots for short-term human benefit.  That practice eventually turned forest ecosystems into deserts.  As this essay will survey, that was a rampant problem in all early civilizations.  Eventually, humans learned to reach even further back into the ecological horizon as they began burning energy stores that were hundreds of millions of years old; coal was first and oil and gas second.  They were burned a million times as fast as they were created.  In all instances, humans were releasing sunlight energy that had been captured and stored by organisms.  In the 20th century, when humans began using nuclear fission, they were going even further back in time and harvesting energy stored via fusion processes in stars billions of years ago.  With each new energy source, humans were harvesting older, more concentrated energy sources, which released far more energy than the previously used source.  In each instance, humans plundered the energy source to exhaustion.  Humans have not lived in “harmony” with nature since they learned to control fire.

Until now, I have generally used the word “epoch” in this essay as geologists do, to denote timeframes smaller than periods.  But in describing the rise of humanity, I will use “epochal” to denote gigantic events, after which the human condition became so radically different that the times before and after were like different geological epochs.  I consider making stone tools, growing the protohuman brain, and the control of fire to be the human journey’s first Epochal Events.  In fact, those events led to human existence.  They were all probably related, and tightly related, and with the current uncertainty I have made them all aspects of the same event.  They could arguably be split, but the energy advantages of stone tools and fire surely contributed to the expanding human brain, and the expanding human brain led to those inventions and more, in mutually reinforcing feedback loops.  The only things that scientists are certain exist, energy and consciousness, interacted to produce humanity’s first Epochal Event(s).

Stone tools and the control of fire had energy consequences to the human line far above all other effects.  Whether they happened within a few hundred thousand years of each other, or were separated by more than a million years, they were the key technical/mental/social advances in early humanity’s ability to survive on Earth and expand its range to eventually cover the planet.

If habilines began to control fire two mya, one thing is certain: the australopithecine Tesla who banged together the first rocks that fashioned a stone tool, and who was able to keep doing it and eventually taught others, probably via active demonstration and their observation, could not have imagined that his/her invention would lead to a relatively giant descendant (or cousin of a descendant) that slept on the ground, controlled fire, and would quickly migrate to the ends of Earth, traversing distances that were incomprehensible in australo-Tesla’s time.  That relatively quick series of innovations, never before seen on Earth, gave birth to a creature that would have been simply unrecognizable to that australopithecine Tesla; it would have appeared magical.  There have only been a few subsequent Epochal Events in the human journey, and like the first one(s), they were all energy events above all else, and all depended on humans gaining the technological prowess and social organization that enabled them to exploit a new energy source, which in turn depended on their increasing mental feats.  Each time, the human reality after the Epochal Event was unimaginable to the humans who lived immediately before it (1, 2, 3).  Also, the events and their aftermaths became far more dramatic each time: each event’s timeframe shrank, the interval until the next Epochal Event shortened, and the energy levels greatly increased, by an order of magnitude for the most recent event.

Did the control of fire lead to Homo erectus, as Wrangham thinks?  Or did Homo erectus merely use it to begin dominating the world?  Was cooking the seminal event in the appearance of humans?  Those questions may not be definitively answered in my lifetime, and they led to the somewhat uncertain title of this chapter.  Highly transformative developments coincided with the appearance and dispersal of Homo erectus, which was a radical break from all that came before – biologically, technically, and culturally – and strongly implies great cognitive enhancements.  I believe that the control of fire and cooking would have left deep cultural and biological imprints on the human journey, and because Homo erectus barely changed during its nearly two-million-year tenure on Earth, both in biology and in Acheulean artifacts, I favor Wrangham’s hypothesis, at least until the Next Big Finding.  Just as Einstein said that every theory is killed by a fact and that his theories would one day become obsolete, but that their best parts would survive in the new theories, I suspect that significant aspects of Wrangham’s hypothesis will live on in successor hypotheses, and other scientists have been following Wrangham’s lead.

From the initial appearance of Homo erectus about 2.0-1.8 mya, Europe was periodically buried under the ice sheets that began growing and receding when the first stone tools were made, so Homo erectus tended to appear and disappear in Europe.  The fact that humans evolved and spread during an ice age has led to competing hypotheses about many aspects of humanity’s rise.  Although the ice age began about 2.6-2.5 mya, and there have been 17 identified episodes of advancing and retreating ice sheets, particularly in North America and northern Eurasia, the early ones were not as severe, and they did not achieve clockwork-like regularity until the past million years, as the diagram below shows.  (Source: Wikimedia Commons)

But even though they were “regular” on the geologic time scale, driven by Milankovitch cycles, there would have been nothing “regular” about them to evolving humans.  When ice sheets advanced, global climate became cooler and dryer; rainforests shrank and deserts expanded.  Human adaptations to those changes, which could even be discerned in one human lifetime, must have had profound impacts on the human journey.  In short, humans had to readily adapt to rapidly changing conditions, and rapid adaptation would have had selective effects on burgeoning human intelligence and problem-solving ability; those that adapted, survived.  Also, scientists think that the rapidly oscillating climate resulted in migrations and pockets of isolated members of species that then underwent rapid evolutionary adaptation, the kind that leads to speciation.  This may have been partly responsible for the relatively rapid evolution of the human line, particularly in the past million years.[516] 

Although our species, Homo sapiens (named Homo sapiens sapiens if Neanderthals and certain early humans are considered subspecies of Homo sapiens, but I will use Homo sapiens in this essay to denote today’s humans), is the only survivor of the past several million years of human-line evolution, many of our cousins and ancestors were recognizably human.  When did language begin, especially spoken language?  Language certainly predated the appearance of Homo sapiens.  All great apes readily learn sign language, and even when monkeys chatter, the same parts of their brains that control human language are used, and there is plenty of evidence that great ape vocalizations can denote objects and other ideas.  The communicative abilities of crows and their corvid cousins can be hard to believe; they can solve some problems better than great apes can, and although birds do not have a neocortex, another part of their brain seems to function as the neocortex does.  Becoming bipedal created those neck/skull changes that began to form the structures needed for human speech.  If fossils are sufficiently preserved, important anatomical features can provide key evidence for human abilities and behaviors.  Turkana Boy, for instance, had his inner ear, which is responsible for balance, preserved well enough to provide more evidence that he did not spend time in trees (it is larger in primates that regularly climb).[517]  Similarly, the outer and middle ear of Homo heidelbergensis, which succeeded Homo erectus, apparently enabled keener hearing than its predecessors were capable of, and may have reflected the beginnings of spoken language.  There is strong evidence that Neanderthals were capable of using spoken language.  As with many other human traits, the potential for language seems to have existed in monkeys (and perhaps even in dinosaurs), and it kept developing more sophistication over vast stretches of time, with structural and cognitive changes interacting as human language developed into today’s version.

Although many traits that led to human dominance of Earth can be discerned in our distant ancestors, a pile of baggage came along with them.  All great ape societies except bonobos’ are male-dominated, and the most marginal macaque will quickly become patriotic cannon fodder when his society is attacked.  The traits almost always arose from economic costs and benefits, which were always rooted in energy.  How bonobos, also called pygmy chimpanzees, became the only great ape species that is not male-dominated is primarily an economic tale.

The bonobos’ scientific name is Pan paniscus, and they live in the range in red in this image.  (Source: Wikimedia Commons)

The other colors represent the ranges of common chimpanzee subspecies.  Bonobos are separated from all other chimps by the Congo River, which forms the northern, eastern, and western borders of their range.  When the current ice age began 2.6-2.5 mya, the current bonobo range began having droughts and the rainforest shrank.  Gorillas are masters of the rainforest, and when the rainforest south of the Congo disappeared during one of the dry periods (it seems to have been about one mya, when the ice age patterns became regular), gorillas left and never returned.  Humans are the only great apes that can swim, so the Congo was an impenetrable barrier for chimps and gorillas.

Chimpanzee social organization has male and female hierarchies, and societies of up to 120 members.  Fruit trees form the center of a chimp society’s territory, where females forage with their offspring and males form foraging parties that patrol the territorial perimeter.[518]  Chimps forage in parties of fewer than ten members; party size ranges between two and nine and fluctuates rapidly.  That is because chimps have to walk kilometers between food sources each day, primarily fruit trees, and varying harvests cannot reliably support larger groups.  In general, the larger a territory is, the faster its chimps breed, as they have more available energy.

Bonobos have an average party size of about 17, and their party sizes are consistent.  How can they have such large and stable foraging parties while no other chimps can?  Because they eat gorilla food.  With gorillas no longer living south of the Congo, the young leaves and herb stems that are unavailable to chimps where gorillas live make for pleasant bonobo traveling snacks.  Since the biomass concentration of gorillas and chimps is nearly the same where their ranges overlap, bonobos effectively have twice the food supply that chimps do.[519]  Bonobos also evolved to better digest gorilla foods, and larger parties put females on a more equal footing with males.  Bonobos, both males and females, did not tolerate the alpha male model of other chimp societies, in which male gangs dominated.

One chimpanzee and gorilla behavior that can be difficult to comprehend, mentally and emotionally, is the male murder of infants.  If a chimp or gorilla encounters an infant that he knows he did not sire, he will kill it if he can.  That behavior is also common in monkeys.  Gorillas have a potentate/harem social organization, and when a male matures, he is usually ejected from that gorilla society, though he might become subordinate to the silverback patriarch (some troupes have more than one dominant silverback, and up to seven silverbacks have been observed in one troupe).  Bachelor gorillas can try to unseat a silverback to steal his harem, and if successful, the new potentate will kill all the infants he can.  The average female gorilla will lose an infant to murder by a male in her lifetime.[520]  In chimp society, when a female is sexually receptive, she will mate with all males in the troupe, especially the dominant ones, so that every important male suspects that the infant might be his, and thus will not kill it.[521]  That strategy has been nicknamed, “Who’s Your Daddy?”[522]  The strategies of dominant males seem to work, as far as producing the most offspring goes.  Paternity testing of chimpanzees, for instance, shows that alpha males and their “lieutenants” sire nearly all offspring in a band.[523]

If a silverback dies, whether from natural causes or murder by rivals, or if chimps murder the males of a rival band and take the females as “booty,” the infants of the dead males will all be killed.  And the next activity tends to boggle people’s minds: the females who lost their infants will then mate with the killers.[524]  That behavior is not confined to great apes: lions (and housecats) and bears also do it.  Humans cannot imagine a woman mating with her child’s killer, but it is standard behavior in those species and provides stark evidence for the Selfish Gene Hypothesis.  A male chimp or gorilla will not invest time and energy in raising offspring that are not his.  Killing them makes the female sexually receptive, as she has a primordial urge to produce offspring.  Female chimps and gorillas need protection from other males, and a male strong enough to kill her mate gets the spoils, including her, and she will then mate with the killer and bear his young, and can stay mated for life.  Female chimps will sometimes kill each other’s infants as they play their own dominance games, but mating with the killers of their offspring stupefies humans and makes Darwin’s “war of nature” observations difficult to deny.  Male orangutans will not kill infants that they did not sire, but orangutan females are constantly under threat of being raped by non-dominant (unflanged) males.

Bonobos are the only non-human African great ape exception to infanticide, and they are also the only great ape species, humans included, that does not sexually coerce females.  The reason seems to be the social organization that arose from a plentiful food supply, which allowed for larger groups in which females and males actively reduced male violence.  Many behaviors seen within and between bonobo bands are unknown among chimps.  A male bonobo will remain with his mother for her entire life, and male bonobos do not vie for dominance.  Instead, bonobos have a sexuality that no other animal on Earth has remotely approached.  They settle nearly everything with sex.  Female-on-female sex is common, particularly when bands meet, but anything goes in bonobo society, with the sole exception of mothers and sons, as the aversion to inbreeding is rooted very deeply in animals and is also responsible for the human incest taboo.  Bonobo societies are peaceful and seem to live by the slogan, “Make love, not war.”  But it started with their economy, when their primary and dominant competitor moved away.  In recent studies, the only sexually coercive acts observed among bonobos are females abusing males, and even that is rare.[525]  A likely influence on ending infanticide is that female bonobos, like humans, conceal their ovulation, so males are not cued to compete to be the father.  Also, since virtually all bonobos have sex all the time, there is no way for bonobos to determine paternity.

Humans took a different path 2.5 mya.  Among scientists, there are generally two schools of thought regarding the appearance of Homo sapiens: one is called the Multiregional Model, and the other is called the “Out of Africa” Model.  In essence, the Multiregional Model has those Homo erectus migrants eventually evolving into today’s races, while the “Out of Africa” Model has humans evolving in Africa and then spreading across the world to replace/displace all other members of the Homo genus.  The rise of molecular biology and DNA testing has largely resolved the issue in favor of the “Out of Africa” Model.  There are also intermediate views and variations of each hypothesis, which generally relate to the invaders mating with the natives, even if they could be classified as separate species.  For instance, Neanderthal DNA is part of the human genome, which reflects interbreeding.  Since Neanderthals were largely confined to Europe and what became the Fertile Crescent, and the migration of the original Homo sapiens was from Africa, sub-Saharan Africans possess less Neanderthal DNA than any other humans.  Africans also have the most genetic divergence, which reflects the idea that humans have lived longer in Africa than anywhere else.  There is virtually no doubt that Homo sapiens evolved in Africa.

Although Acheulean hand axes are rather beautiful, anthropologists have lamented the “boring million years” that followed the first appearance of Acheulean culture about 1.8-1.7 mya.  It seems that not much was going on, anatomically or technologically, with the human line from the first appearance of Acheulean culture to about a half-million years ago.  There is evidence of Acheulean culture spreading in waves across Asia but never quite reaching the human populations of East Asia in what became their refugia.  Acheulean tools were even made by a likely Homo sapiens subspecies less than 200 kya.  The Acheulean hand axe is the longest-lived technology in the human journey, other than the stick and maybe the campfire.

The nexus of Europe, Asia, and Africa has been the site of great migrations, conflicts, extinctions, innovations, and the like, all the way to today.  The pattern began when Asian mammals probably drove half of European mammals to extinction 34 mya.  It continued with the invasion of Africa by Asian mammals 18 mya and with the travels of Miocene apes in and around Africa as they migrated outward and then back home, between 16.5 mya and nine mya.  The “friction” of colliding animal assemblages and human cultures, as well as the geographic and climatic variation of that region, not only gave rise to humanity, but human civilization also first appeared in that region, which remains at the heart of the world’s attention and woes today.  That is where the energy is.  Advancing and retreating ice sheets made Europe a difficult place for the human line to inhabit, and humans sporadically appeared and disappeared for more than one million years, all the way until this current interglacial period called the Holocene.  Homo erectus began appearing in southern Europe about 1.5 mya.  There were three basic routes to Europe.  The easiest would have been largely overland, crossing today’s Turkey to arrive in the region around today’s Greece.  The other two routes crossed the Mediterranean Sea: one via Sicily to today’s Italy, and the other across the Strait of Gibraltar to today’s Spain.[526]  Archeological sites show early humans using all three routes.  The mountains of Spain hold the earliest evidence of the human line in Europe, dating as far back as 1.2 mya.  The remains in that cave also show the first signs of human cannibalism.  Those cannibals are also thought to be human-line members, evolved from Homo erectus and called Homo antecessor.  Today, anthropologists are confident that Homo antecessor gave rise to Homo heidelbergensis, at least as confident as any early human ancestral relationships are, but as can be seen in Stringer’s graphic above, another school of thought has Homo antecessor being a dead end.  Chimps have also engaged in cannibalism, so it may be another ancient primate behavior.  With recent advances in human DNA studies, the human genome provides evidence that all early human societies engaged in cannibalism, possibly as a ritual of eating rivals vanquished by violence, maybe as food, and there is a great deal of archeological evidence as well.[527]

Homo heidelbergensis fossils and artifacts are prevalent in Africa, Europe, and West Asia, and they may have lived from 1.3 mya to 200 kya, making them another long-lived species.  They had about the same stature as modern humans, but were more robust, and their brains were about the same size as those of modern humans.  They may have been the first humans to bury their dead.  In this human-line narrative, species existences evidently overlap: ancestral species coexisted with their probable descendants for hundreds of thousands of years.  From the perspective of evolutionary theory, there is nothing unusual about it.  From a review of how speciation is thought to happen, it is apparent that genetically isolated populations can adapt to new environments and eventually become new species, while ancestral and sibling species can continue thriving, probably in the “homeland,” just as parents rarely die upon producing offspring, and the offspring eventually leave home.  The tree of life on Earth has many branches, and although all branches will eventually end, new twigs from the same branch can grow while the original branch continues growing.  Stephen Jay Gould suggested that a transition to a new species averages about 15-to-20 thousand years.[528]  That is under the “natural” effects of geological and climatic dynamics, with animals trying to survive.  But the human line has changed all that.  Animals make nests, burrows, and other structures that enhance their ability to survive, but humans began making radically different environments, called “artificial” today, and the first artificial environments were campfires surrounded by Homo erecti or close relatives trying to stay safe, warm, and well-fed.  Those humans not only used fire to help conquer the world, they also introduced “artificial” variables into human evolution, and the first may well have been the transition to ground-dwelling and the changes that derived from eating cooked food.  Humans introduced radical variables to evolution never seen before on Earth.

Humans are unique in many ways, although a healthy behavior among scientists is stating that humanity is “just another species.”  There is even an acronym used in scientific circles to emphasize our mundane status, no better or worse than that of any other organism.  Humans are different, but using that difference to justify our status bestride Earth is egocentric, and the humility of “just another species” scientists is badly needed in our world today.

During that boring million years, Homo erectus changed from hunted into hunter.  They did not dominate their biomes, but they came to be respected by local predators and feared by the animals that they hunted with their primitive weapons.  At what stage big cats and other megafauna in Africa learned to avoid Homo erectus and its descendants is not clear, but it happened, and most scientists today think that it is why Africa, and to a lesser extent Eurasia, retained its megafauna when the other continents quickly lost theirs soon after humans appeared, which is a subject for the next chapter.  But an early indicator of what probably happened, repeatedly, in the coming rise and dominance of humanity is when Homo erectus (or habilines or australopiths) first made it to Flores Island about 900 kya (scientists have found tools but no human-like fossils), perhaps by rafting: a pygmy elephant, a giant tortoise, and a giant lizard all quickly went extinct.[529]  It appears that once the migrants made it to Flores Island, they stayed and forgot how to leave.  They eventually became island-dwarfed, lived on Flores for nearly the next million years, and went extinct soon after Homo sapiens arrived.

The closer the timeline of life on Earth gets to the appearance of humanity, the less our ancestry is doubted among scientists, and there is virtual certainty that Homo heidelbergensis is humanity’s direct ancestor.  Their brains were nearly the size of modern humans’, and they inherited Acheulean tools from their ancestors and used them for hundreds of thousands of years.  There is plenty of evidence that Homo heidelbergensis migrated to Western Eurasia about 800 kya.[530]  But around 500 kya that began to change: there is evidence of Homo heidelbergensis using stone-tipped spears that long ago in today’s South Africa.  Wooden throwing spears were recently discovered in today’s Germany, along with butchered horses, dated to about 400 kya.  Scientists today are confident that Homo heidelbergensis was also the direct ancestor of Homo neanderthalensis, and the split began around 500 kya.  The range of Homo heidelbergensis was Africa, West Asia, and Europe, but the advancing and retreating ice sheets of Eurasia, Europe in particular, kept driving Homo heidelbergensis southward, and during one of the retreats, it seems that the ancestors of Neanderthals stayed.  Neanderthals became a cold-adapted species that specialized in hunting big game.  As the evidence demonstrates today, life was a brutal proposition in humanity’s early days, and it was particularly harsh for Neanderthals.  They probably could not throw very well and relied on ambush predation.  Scientists have studied Neanderthal bones and compared their injuries to those of rodeo riders, but a recent study cast some doubt on that comparison, partly in light of recent evidence that Neanderthals may have also developed wooden throwing spears.  But whether Neanderthals had to stab their prey in close quarters or eventually learned to throw weapons at them, the studies of early human bones describe a grim existence.  Broken bones were regular events, particularly skull fractures, and that was for trauma survivors.

Neanderthals invented more sophisticated stone tools about 300 kya, for the first significant advance in more than a million years, and their toolset is called Mousterian, or Mode 3.  Neanderthals had the largest human-line brains ever measured, and they may have also invented the practice of burying the dead and placing grave goods with them, although they could have inherited burial practices from their Homo heidelbergensis ancestors.  As with their ancestors, they cooked and ate vegetables and carved flesh from corpses, in either cannibalism or a funerary practice.  Neanderthals seem to have been a regional human variation, adapted to colder environments, and the fact that they interbred with Homo sapiens has caused some scientists to classify them as Homo sapiens neanderthalensis.  If they did not become a truly separate species, they were slowly speciating as they adapted to their ice age environment.  Neanderthals built shelters, may have drawn cave paintings, and engaged in activities comparable to those of Homo sapiens of the time.  There seems to be little reason to call them “primitive” when comparing them to Homo sapiens, particularly the early ones.  The last Neanderthals died out about 30 kya, about the same time that Cro-Magnon humans arrived in the region, and it was no coincidence.

To revisit the Neanderthal split from Homo heidelbergensis about 500 kya: Homo heidelbergensis stayed in West Asia and Africa.  When evidence of stone-tipped spears being made 500 kya came to light, some scientists placed the beginning of the Middle Stone Age at about 500 kya.  Stone tools have recently been dated using thermoluminescence dating, which works for stone tools heated by fires, and using obsidian hydration dating, which measures water absorption into the surface of obsidian tools.  For dating artifacts before the appearance of behaviorally modern humans about 70-50 kya, carbon-14 dating will not work, but other tests have been successful.  Neanderthals dominated Europe and today’s Middle East, while Homo heidelbergensis’s home was Africa, although they also ranged to Europe and West Asia.  Whether Homo heidelbergensis existed for only a half-million years or a million is controversial today, but what is not very controversial is that it is probably the direct ancestor of both Neanderthals and Homo sapiens, and the first members of our species appeared in Africa about 200 kya.  There is evidence that other descendants of Homo heidelbergensis may have existed, and a possible descendant was discovered in Siberia.[531]  It also could have been a Neanderthal descendant.  As with the discovery of the “hobbits” of Flores Island, it will not be surprising if scientists find more species that branched off of those early human and protohuman lines and died out when behaviorally modern humans spread across Africa and Eurasia.
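To make the obsidian hydration arithmetic concrete, below is a minimal sketch in Python.  It assumes the classic diffusion model, in which the hydration rim grows with the square root of time, so age is proportional to the square of the rim thickness; the rate constant is site- and glass-specific and must be calibrated, and the numbers used here are purely illustrative.

    def obsidian_age_ka(rim_microns, k_microns_per_sqrt_ka):
        """Estimate an artifact's age from its hydration-rim thickness.

        Assumes the classic diffusion model x = k * sqrt(t), so t = (x / k)**2.
        The rate constant k depends on the glass chemistry and the burial
        temperature and must be calibrated per site; the value used below
        is purely illustrative.
        """
        return (rim_microns / k_microns_per_sqrt_ka) ** 2

    # Hypothetical numbers: a 7-micron rim with a calibrated rate of 1.0
    # micron per square root of a thousand years implies roughly 49 ka.
    print(obsidian_age_ka(7.0, 1.0))  # -> 49.0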

When Homo sapiens first appeared about 200 kya, around when Homo heidelbergensis disappeared from the fossil record, it was in Africa, East Africa in particular.  A possible human subspecies, or intermediate species between Homo heidelbergensis and Homo sapiens, lived in today’s Ethiopia about 160 kya, and there is evidence of Homo sapiens in today’s Morocco about 160 kya.  From Proconsul to Ardi, Lucy, Turkana Boy, and Homo sapiens, East Africa, particularly around Lake Victoria and the Horn of Africa, seems to have been an auspicious place to evolve.[532]  Some have argued that it may only seem that way because that region preserved fossils better.[533]  But stone tools preserve well.  I doubt that the cradle of humanity will move much from where anthropologists have currently placed it, and there are ecological reasons for that region to have been so productive for fossil hunters. 

Advancing and retreating ice sheets had major impacts on the events of those times.  Milankovitch cycles have 26,000, 41,000, and 100,000-year oscillations, among others, related to Earth’s solar orientation.  The 100,000-year effect is the weakest of the three listed above, but for reasons still rather obscure, it has been the tipping point for advancing and retreating ice sheets during the past million years.  The million-year pattern has been creeping glaciation that oscillates; the glaciations reached their maxima soon before they rapidly retreated, and Earth had a warm respite that lasted for 10-20 thousand years before the ice sheets began to grow again, with another 100,000 years passing before the next interglacial interval.  Scientists expect that the current ice age will last for millions more years.  The most extreme glaciation during the current ice age happened between 475 kya and 425 kya.  The last glacial maximum, about 25 kya, soon after Neanderthals went extinct, looked like the below map, and the 475-425 kya glaciation looked similar.  (Blue is ice, green is land.  Source: Wikimedia Commons)
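For a feel for how those periods interact, here is a minimal, purely illustrative Python sketch that superposes the three cycles as equal-amplitude sine waves.  Real insolation curves weight the cycles very differently and include more components, so this only demonstrates how overlapping periods produce an irregular combined signal rather than a simple oscillation.

    import math

    # The three Milankovitch periods mentioned above, in thousands of years.
    # Equal amplitudes are an assumption for illustration only.
    PERIODS_KA = [26.0, 41.0, 100.0]

    def combined_forcing(t_ka):
        """Sum of unit-amplitude sinusoids with the three periods at time t (ka)."""
        return sum(math.sin(2 * math.pi * t_ka / p) for p in PERIODS_KA)

    # Sample the combined signal over the past 500 thousand years.
    for t in range(0, 501, 50):
        print(f"{t:3d} ka: {combined_forcing(t):+.2f}")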

Neanderthals seem to have become permanently separated from their Homo heidelbergensis kin when the ice sheets grew about 300 kya, and between 250 kya and 200 kya two glacial events happened, which roughly coincided with the final exit of Homo heidelbergensis and the appearance of either Homo sapiens or that possible transitional species.  The previous interglacial period began 130 kya and ended 114 kya, and it appears that Homo sapiens left Africa for the first time about then; evidence is found in a cave in today’s Israel, and they may have traveled much farther.  When ice sheets advanced, Neanderthals retreated southward, and one controversial area is the overlap of Neanderthals and Homo sapiens in Israel during that interglacial period.  Did Neanderthals wipe out those first Homo sapiens migrants from Africa?  Did they interbreed?

Whatever the case may be, it appears clear that the Homo sapiens population in Africa and the Neanderthal population in Europe and the Middle East were isolated from each other for tens of thousands of years, perhaps far more than 100,000 years, and humans used a toolkit like the Neanderthals’ until something happened between 70 and 50 kya.  Just what happened is a matter of great controversy, but in recent years, several disciplines have converged on the issue and are drawing a clearer picture today.  Some key findings that shed light came from global DNA studies, from linguistics partnering with evolutionary theory, and from brain studies.  In the past generation, as DNA sequencing has been applied to many areas, a startling picture of the human journey has emerged.  Mitochondria retained some of their DNA, probably for flexible power generation.  For animals that reproduce sexually, the mother’s mitochondria are passed to her offspring, while virtually none comes from the father, if any.  Geneticists can measure mutations in mitochondrial DNA and approximate when two different animals shared a common ancestor, whether they belong to the same species or not.  Similarly, regarding nuclear DNA, the Y chromosome produces a male mammal, and mutations in the Y chromosome can also be analyzed to estimate when two men shared the same ancestor.  Putting absolute dates on DNA results has been problematic, but scientists have been aligning DNA results with fossil dates, which are considered more reliable, and have been resolving some limitations.  But if the timing is suspect for such genetic analyses, far more confidence exists for descent relationships.  Human DNA testing is a burgeoning business, used for everything from freeing falsely convicted prisoners to determining paternity to examining the genetic heritage of the sitting U.S. president’s wife. 
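As an illustration of the molecular-clock arithmetic behind such estimates, here is a minimal sketch with made-up numbers.  Real studies use far more sophisticated mutation models and fossil calibrations, so treat every value below as an assumption.

    def divergence_years(diff_sites, seq_length, mu_per_site_per_year):
        """Rough divergence time from sequence differences.

        Under the simplest molecular-clock model, if two lineages differ at
        diff_sites positions out of seq_length, and each site mutates at rate
        mu per year, then t = (d / L) / (2 * mu); the factor of 2 is because
        mutations accumulate independently along both lineages.
        """
        return (diff_sites / seq_length) / (2.0 * mu_per_site_per_year)

    # Hypothetical: 20 differences in a 1,000-site mitochondrial region, at an
    # assumed rate of 2e-7 mutations per site per year, gives 50,000 years.
    print(divergence_years(20, 1000, 2e-7))  # -> 50000.0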

The picture emerging from global DNA testing is that all humans on Earth today are descended from a founder population that lived in East Africa, again near the Horn of Africa, around 60-50 kya.  Geneticists think that the founder population amounted to about five thousand people, and of that population, a few hundred humans at most left Africa about 60-50 kya and conquered the world.[534]  If any people ever lived up to Genesis’s instructions to subdue Earth, it would have been them.  It may have been because they were members of the first primate species to master language.

 

Humanity’s Second Epochal Event: The Super-Predator Revolution

Chapter summary:

  • Humanity's progress up to the founder group that left Africa

  • Thin fossil evidence for early human travels and development

  • Global success of the elephant family

  • Wild card of human consciousness

  • Basics of human existence

  • How right Darwin was

  • Universal human traits

  • Human mastery of language

  • Evolutionary impacts on language and brain development

  • Great Leap Forward to behavioral modernity, and Missing Links

  • Founder group's exit from Africa

  • Energy return on hunting large animals

  • Tendency of hunters to overkill

  • First megafauna mass extinction, in Australia

  • Opposition to human agency in megafauna extinctions

  • Myth of the peaceful savage

  • Beginnings of warfare

  • Beginnings of religion

  • Universal human tendency to punish cheaters

  • Unprecedented threat that humans posed to megafauna

  • Human migration pattern from Africa, as established by DNA testing

  • Short-lived Golden Age of the Hunter-Gatherer

  • Why the African and some Asian megafauna survived, and human DNA mixing with other humans

  • Humans invade Europe, and Neanderthals quickly go extinct

  • New stone tool culture is developed from Neanderthal tools

  • Climate change and Neanderthals

  • "Hobbits" are the last surviving non-sapiens humans

  • Humans and the mammoth extinctions

  • First fishermen

  • Increase in human-on-human violence

  • Debates on Western Hemisphere megafauna extinctions

  • Native American migration from Asia

  • Western Hemisphere's megafauna extinctions

  • Humans making the world safe for themselves

  • Megafauna survival on island refuges

  • Human migration after the ice sheets melted, as determined by DNA testing

  • Great increase in human violence once the megafauna were gone

  • Energy superiority of farming over hunting and gathering

  • High death rates of "primitive" warfare

  • Economic basis of all warfare

  • Dramatic effects of the melting ice sheets

  • Why social animals are social, and hunter-gatherer social organization

  • Resource-competition basis for warfare

  • Societal male dominance and societal violence

  • Peaceful beginnings of agricultural societies

  • Fifth (microlith) stone technology culture appears

  • Limbic conditioning of religion and warfare

  • How unimaginable the result of the founder group's journey would have been to the founders

  • Dogs are the first domesticated animals

Anthropologists and primate researchers long believed that culture was the unique province of humanity, but relatively recent scientific findings have disproven that notion.[535]  Capuchin monkeys have cultural learning, and it is more sophisticated with great apes.  It took a few million years after the human/chimp split for our ancestors to learn to make stone tools, and that culture then spread widely in Africa.  The control of fire, the appearance of Homo erectus, and the development of a new toolset were probably all closely related and at least partly interdependent, but little seemed to change for the next million years or more.  Then the next version of humanity appeared, possessing a larger brain, and new tools and behaviors are evident beginning about a half-million years ago.  The timeframes between major events in the human journey continually shrank.  Only 200 thousand years later, Neanderthals appeared and created a new toolset, and new behaviors were in evidence.  Only 100 thousand years after that, anatomically modern humans appeared.  Only 30 thousand years after that, about 170 kya, new tool-making techniques appeared, as well as humanity’s first known exploitation of the seashore biome, probably out of necessity, as life was once again eked out on the margins, and those humans may have decorated their bodies.  About 100 kya, innovation seems to have accelerated again, and by 75-60 kya there is evidence of bedding and of sophisticated tools made with complex processes.  Needles and perhaps even arrowheads first appeared about 60 kya.  There is no doubt among scientists that members of Homo sapiens made those advances, and their artifacts provide evidence of increasing cultural and technical sophistication, which soon left Neanderthals and all other land animals far behind.  About 75-70 kya, a volcanic eruption in Indonesia was Earth’s largest in tens of millions of years, and there is controversy today over whether that eruption was partly responsible for the genetic bottleneck that Homo sapiens passed through not long afterward.  What became today’s humanity seems to have nearly gone extinct at that bottleneck.

Those issues will not be resolved in my lifetime, but Homo sapiens migrated beyond Africa in the interglacial period of 130 kya to 114 kya.  There is evidence and speculation that those humans may have bred with Neanderthals, were killed off by them, migrated across Eurasia, or some combination of those events.  There is evidence that heidelbergensis or Neanderthal descendants, the Denisovans, also migrated across Eurasia, perhaps expanding to Southeast Asia as Homo erectus did.  The Denisovan evidence arose from analyzing DNA from teeth and bones, which is the only physical evidence of Denisovans discovered so far, and their genes are most prevalent in aboriginal Australians and Melanesians.  To summarize, there is substantial evidence that the human line probably populated Eurasia in significant numbers by 200 kya, with perhaps even anatomically modern humans arriving around 100 kya.  They could have driven vulnerable species to extinction, with their advanced toolkit and hunting behaviors, long before behaviorally modern humans left Africa about 60-50 kya.  Homo erectus became extinct less than 150 kya in East Asia or the islands off of it, and the largest primate ever, Gigantopithecus, disappeared about 100 kya.  Those two primates coexisted for more than a million years and disappeared concurrently with the rise of humans with sophisticated toolsets.  They may well have been early casualties of humanity’s success. 

To briefly revisit conflicts between specialists and generalists: regarding the speculation above, scientists ideally want persuasive evidence that humans drove Homo erectus and Gigantopithecus to extinction.  They want Acheulean or later technological artifacts associated with kills of those species.  All that scientists have found of Gigantopithecus so far are some teeth and jawbones.  Although such deductive reasoning is sound, the fossil and artifactual record is so thin that such evidence will probably never be adduced, even if killing them was a common event 150-100 kya.  Gigantopithecus survived for nine million years and disappeared around when more lethal humans arrived, and a camel that roamed today’s Syria went extinct about 100 kya, soon after anatomically modern humans arrived in the vicinity.  Is that a coincidence?  There is genetic evidence that behaviorally modern humans interbred with Neanderthals, Denisovans, and perhaps other early humans, and they all went extinct soon after those behaviorally modern humans arrived.  That they interbred puts to bed the hypotheses that they went extinct before Homo sapiens arrived on the scene.  If they went extinct after behaviorally modern humans arrived, as the genetic evidence clearly tells us, the implications are obvious, and any extinction hypothesis that invokes climate change or some other natural catastrophe has some high hurdles to overcome.  Those events were probably early salvos of the Sixth Mass Extinction.

As will be seen in this chapter, the spread of behaviorally modern humans closely coincided not only with the extinction of humans and primates that had existed for hundreds of thousands and even millions of years, but also with the disappearance of virtually all of the world’s large animals, which went extinct almost exactly when behaviorally modern humans arrived, all except those that had evolved alongside the human line for millions of years in Africa and Eurasia.  Some vanished animals were among the most successful in Earth’s history.

After Africa began colliding with Asia about 18 mya, Asian animals quickly invaded and dominated Africa.  The two primary exceptions were proboscideans and apes, both of which prospered at home in Africa and in Eurasia.  Proboscideans did even better; they not only became prominent in Eurasia, but they also migrated to North America by 16.5 mya.  They migrated to South America about three mya, as soon as they could, and quickly succeeded in all South American biomes, from rainforest to grasslands to mountains.  They beat apes to the Western Hemisphere by 16.5 million years.  Elephants have passed the mirror test and mourn their dead.  Their huge size and prehensile trunks, as well as their ability to eat a wide variety of vegetation, let proboscideans flourish everywhere that they possibly could.  They even formed biomes as a terraforming force.  Until humans arrived, proboscideans were the most intelligent, adaptable, and successful land mammals ever, and they arguably outperformed the dinosaurs.  But after nearly 20 million years of global success, they nearly all went extinct soon after encountering behaviorally modern humans.  They went completely extinct in the Western Hemisphere, and there has long been controversy among scientists over whether humans caused it, although the debate is fading as evidence of human agency becomes clearer.

Some scientists treat every proboscidean extinction as a unique mystery, unrelated to other proboscidean extinctions, and climate and resulting vegetation changes are hypothesized as agents of extinction (or other causes are invoked), when the most probable cause stares at them each morning in the mirror.  The devil is in the details, but regarding the megafauna extinctions, some specialists cannot seem to discern a very clear pattern.  Scientists, because they are human, have an inherent conflict of interest when attributing such catastrophes to non-human causes.  During the remainder of this essay, it will become evident that there is a human penchant for absolving one’s in-group of responsibility for catastrophes and crimes committed against the out-group, and historians, scientists, and other professionals regularly engage in such interest-conflicted acts, whether defending their species, race, gender, nation, class, ideology, ethnicity, or profession.  That in-group/out-group difference in treatment has a long history and probably goes back to the beginnings of territorial social animals.

As scientific investigations deal with the human line, the issues become increasingly complex and difficult to untangle and assess.  This is largely because of human consciousness, which is a wild card, something that, if not different in kind, is vastly different in degree, at least for land animals; cetaceans may well be another matter.  Designing falsifiable hypotheses for testing human behavior and consciousness has posed challenges not seen in other sciences, and experiments performed on our primate cousins have also become more humane.  Dissecting chimp brains while they are still alive is as ethically unacceptable today as doing it to humans.  Even today, data on the effects of cold and altitude on humans comes primarily from Nazi experiments on prisoners.  Today’s scientists who study human consciousness and its relationship to physical reality have been limited by ethics and by what is perhaps the primary limitation: in studying human consciousness, scientists are studying themselves.  The ideal of objective examination of the material world is hampered by unresolved paradoxes right at the bedrock, and an objective examination of human consciousness, by humans, may well be an impossible goal.

However, studying the human line is in many ways little different from studying other organisms.  Maybe there was love among the protists and trilobites, but life’s journey on Earth rarely strayed far from the essentials of acquiring energy, preserving it, and procreating.  It is little different with today’s humans, even in the most “advanced” civilizations.  Whatever means humans have used since that founder group left Africa 60-50 kya, the primary goal was always the same: survive long enough to produce offspring.  All human societies had to meet that goal first.  There are no hungry philosophers, and the concept of Maslow’s Hierarchy of Needs can help rank human needs and desires.  If a human does not receive adequate food, water, air, shelter, sleep, and sex, the rest does not matter.  Once those needs are met, social needs become important.  Virtually all higher primates are intensely social.  But for nearly the entire human journey, the primary preoccupation of all peoples for all time was food security.  Until the Industrial Revolution, few humans ever rose much past that most fundamental need of getting enough energy to power their biology.  When preindustrial societies ascended past that level, it was never for long, as famine and civilization collapses always brought humans back to the basics of securing food.  This essay’s primary purpose may be helping humanity past that threshold, where survival needs are paramount and rarely recede from the forefront of human awareness, even when people pretend that they are not.  I live in history’s richest and most powerful nation, near the world’s richest man, and I pass by homeless people each day.  Just as the journey of life on Earth has always been primarily about physical well-being, which is always rooted in the energy issue, so has the human journey, and the study of human well-being is called economics.

While I performed the studies that became my website and this essay, one figure loomed, both within orthodoxy and on the fringes: Charles Darwin.  Perhaps because I live in the USA, which may harbor more hostility toward evolutionary theory than anywhere else on Earth, with Biblical literalism still so popular, I have encountered many attacks on Darwin and Alfred Russel Wallace’s theory of evolution.  I have continually read recent scientific works in which the authors remark, “Darwin was right again!”, as another one of his hypotheses was confirmed by modern scientific investigation.  When Darwin wrote that the cradle of humanity was probably Africa, because that was where the most human-looking apes were, his position was dismissed for generations, and most contemporary scientists suspected that humans evolved in Asia.  Among Darwin’s many contributions to science, the most enduring may be the idea that all life on Earth has an ancestry, which can be traced all the way back to the beginning.  Biology’s tree of life is only a more elaborate version of what Darwin began sketching long ago.[536]  When DNA analysis became feasible in my lifetime, the findings again led scientists to say, “Darwin was right again!”  Darwin died without ever hearing about genetic theory, but his theory of descent from common ancestry has become the bedrock of evolutionary theory.  In addition, scientists are using that idea to reconstruct trees of human language and religion, among other human constructs, in which all languages and religions today are descended from those of that founder group of 60-50 kya, and the results are impressive.

Donald Brown published Human Universals in 1991, which noted traits found among all human societies.  That book was published more than a decade before the human genome was sequenced and before scientists amassed the genetic evidence that traced the human lineage to those five thousand people in East Africa 60-50 kya.  Those universal human features were almost certainly possessed by that founder population.  The primary traits of “the Universal People” (“UP”) are listed at this footnote.[537]  Some traits not on that list, such as women terminating unwanted pregnancies, the killing of unwanted children, and capital punishment, were close to universal.  That may mean that some societies discarded those behaviors over time or that most adopted them later; the former situation seems more likely.

There have been many interesting divergences in descendants from that founder population, such as the way that the West emphasizes linear time while the East emphasizes circular time, and some scientists wonder if that has been reflected in the DNA of those peoples by now.  How many of UP’s traits are biological?  How many are culturally and economically dependent?  What is human nature, and can our seeming sentience change or overcome our natures?  Some of UP’s traits are evident in today’s monkeys and apes and are deeply ingrained into human consciousness, if not necessarily human biology.  Others have declined in prominence or seeming importance in the historical era, particularly since the Industrial Revolution began.  However, when I compared that list to my American society, which is history’s most “advanced,” I found that all UP traits still exist, to one degree or another.

What heads that list may well be the primary trait that led to UP’s dominance of Earth: their mastery of language.  Social communication via sound may have begun with dinosaurs and perhaps even earlier, Homo heidelbergensis had biological features that would have made vocal communication more sophisticated, and Neanderthals had biological features that further enabled speech, but scientists strongly suspect that the mastery of language that today’s humans display is what allowed humans to rapidly develop their technology and culture.  It was humanity’s first Internet: a way to communicate ideas and information that was previously unfeasible and even unimaginable, at a level of sophistication that no other land animal ever achieved.  That invention provided the opportunity for sharing complex ideas, which created positive feedback loops that allowed for quicker cultural and technological advances.  That is not fanciful speculation; linguistics, the study of brain abnormalities, and genetic testing have converged on what seems the most plausible hypothesis today, although in these areas the controversies can be fierce. 

Noam Chomsky has been called “the Einstein of linguistics.”  His influence on my political-economic thought has been profound, and it has been interesting to stumble upon his work in diverse fields, largely related to linguistics and psychology, but he is also a major figure in philosophy.  Chomsky did not find an intellectually satisfying connection between his scientific and political work, but others have.[538]  Chomsky has had an outsized influence on linguistics since the 1950s, his interactive style can be polemical, and his tremendous influence arguably delayed some directions that linguistics has since taken.[539]  Darwin’s observations again found new relevance, this time in linguistics; he noted that language acquisition seemed instinctual.  Chomsky observed that any infant on Earth can be placed in any society and will master the language that he or she is raised with, which is one of UP’s traits.  Darwin thought that human mental traits were developed through natural selection, and although Chomsky thought that there was an innate language “organ” in human biology, he did not pursue its evolutionary implications, and linguistics neglected that connection until recently.[540]  Since the rise of DNA analysis and new directions in linguistics that even Chomsky began taking in his old age, scientists have been finding genes and brain regions closely related to language.  The predominant evolutionary models have linked language with other forms of communication such as gestures, and Broca’s area in the frontal lobe is closely associated with those activities.  One way that scientists have linked brain regions with activities and traits is by studying cases in which those areas were damaged by accident or disease.  In 1990, a scientist reported on a London family in which a large fraction of the members had severe language deficits.  In 1998, geneticists studied the DNA of that family and isolated the FOXP2 gene as the cause.[541]  Neanderthals shared the same gene with Homo sapiens, and, together with other anatomical similarities, this suggests that Neanderthals may have had spoken language.

However, the only function of DNA that scientists have determined so far is providing the “blueprint” for making proteins.  Proteins have four levels of structure, and the science of epigenetics studies the highly complex ways in which genes express themselves.  DNA provides the foundation for life’s structures, and as with Hox genes, the FOXP2 gene is highly conserved in humans, which means that it rarely changes.  Similar to my analogy of a house’s foundation determining what kind of house can be built on it, those genes form the foundation of the biological structures built from them, and if the foundation is damaged, the resulting house will be defective.  Epigenetics and other factors are important, but if the foundation is sufficiently flawed, the house may not stand at all.

The Great Leap Forward has been a prominent hypothesis, which posited that behaviorally modern humans suddenly appeared.  The event was once considered abrupt, beginning about 50-40 kya, but as new archeological finds are amassed, along with recent advances in genetic research and other areas, a familiar story has emerged.  Although on the geological timescale the event was abrupt, radical, and unprecedented in life’s history on Earth, the “ramping” period seems to have lasted longer than initially thought.  A likelier story is that Homo sapiens first appeared about 200 kya in East Africa, which conforms to a 25-million-year primate pattern of evolutionary innovation.  Homo sapiens inherited culture and tools from their ancestors and continued along the path of inventing more complex technologies and techniques, exploiting new biomes, and reaching new levels of cognition.  There does not seem to be any Missing Link or development that requires invoking divine or extraterrestrial intervention to explain the appearance and rise of Homo sapiens.  Some Homo sapiens migrated past their African homeland during the previous interglacial period of 130 kya to 114 kya and brought along their technology.  Although they may have disappeared, and perhaps became Neanderthal prey, vestiges of their fate are probably yet to be discovered.  They may have contributed to the biological and technological wealth of Eurasian humans and may have begun to drive vulnerable species to extinction with their new tools and techniques.  However, Africa remained the crucible of primate biological and technological innovation, as it almost always had been to that time.  By 70-60 kya, isolated African humans reached a level of sophistication called behavioral modernity.  Art was in evidence, needles were used to make clothes and other sophisticated possessions, and those humans mastered language, which was probably a unique trait among land animals.  They made tools of a sophistication far advanced over those of other humans, which probably included projectile weapons that radically changed the terms of engagement with prey animals, predators, and other humans. 

Those events happened during a glacial interval; the global ocean was about 70 meters lower than today, and today’s 18-kilometer gap at the Red Sea’s mouth was far narrower about 60-50 kya.  Today, that strait seems to have been the founder group’s point of exit from Africa.  That route seems likely for a few reasons: one is the DNA evidence in the peoples living along southern Asia’s periphery all the way to Australia, and another is that Homo sapiens were the first humans to arrive in Australia, which could only be reached by boat.[542]  Here is the map of the settlement pattern of Eurasia from the founder group, as determined by DNA testing.  (Source: Wikimedia Commons)

Taking a sea route was a new accomplishment for those behaviorally modern humans, and they probably reached Australia about 48-46 kya, because the Australian megafauna began going extinct about then, and that event begins a long and bloody tale that continues to this day.  Earlier extinctions, such as those of the Flores Island megafauna nearly a million years ago, or of Homo erectus and Gigantopithecus between 150 kya and 100 kya, can be considered more equivocal, but there is virtually no doubt among today’s scientists that the Sixth Mass Extinction began in earnest with the human invasion of Australia. 

Before examining the details of the barrage of extinctions that followed behaviorally modern humans wherever they appeared during the next 50,000 years, a brief review of key dynamics is in order, and energy trumps all, as always.  All predators eat the easy meat first, and a cost/benefit calculation drives the process, which today’s analysts call EROI.  It is an instinctual process with most animals.  Many human practices today are similar; members of traditional societies cannot provide answers for their mass behaviors other than, “We always did it this way,” or, “It is part of our religion,” but scientists who study their practices find them energetically, even ingeniously, ideal, although nobody in those societies is consciously aware of it.[543]  Societies without such energy-efficient practices failed, and those that religiously followed them survived.

In today’s hunter-gatherer societies, the EROI for killing large animals dwarfs that of all other food sources.  The EROI (calories produced divided by calories burned during the hours of labor invested) for large game such as deer is more than 100: on average four times that of small game, fifteen times that of birds, about eight times that of roots and tubers, and 10-15 times that of seeds and nuts.[544]  The hunter-gatherer EROI for seeds, nuts, and birds is around ten-to-one.  An average-sized adult African elephant carcass provides about 13 million calories, which would sustain a band of 12 people for a year, if they could eat it all before it rotted and avoid protein poisoning.  The EROI for those easily killed proboscideans when humans invaded the Western Hemisphere could have been in the hundreds, and even more than one thousand.  Large animals have always been the mother lode of hunter-gatherer peoples, and the consensus among anthropologists is that no instinct urges a hunter to kill only what is needed; a hunter will kill whatever he can.[545]  That finding partly derives from studying modern hunter-gatherers.  There is no doubt that when early humans intruded into environments that had never before encountered humans, where animals would have had no intrinsic fear of humans, people would have had an exceptionally easy time killing all large animals encountered.  Animals without experience around humans, such as Antarctic penguins, are easily approached and killed.  As happened innumerable times in the historical era, intruding humans killed all the naïve animals that they could.  The only animals that survived developed a healthy fear of humans and avoided them, but how many could develop that fear before they were all killed?  From the very beginning of the eon of complex life, large size was an evolutionary advantage.  More than 500 million years later, a new kind of animal appeared that turned that advantage into a fatal disadvantage, as it found a way to mine the energy stored in large animals, and it quickly plundered it to exhaustion whenever it could.
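As a quick sanity check on the elephant-carcass numbers above, the arithmetic works out to roughly a normal adult daily ration, as this minimal Python sketch shows; the inputs are the essay’s round figures, not field data.

    # The essay's round numbers: a 13-million-calorie carcass shared by a
    # band of 12 people over one year.
    CARCASS_CALORIES = 13_000_000
    BAND_SIZE = 12
    DAYS_PER_YEAR = 365

    per_person_per_day = CARCASS_CALORIES / (BAND_SIZE * DAYS_PER_YEAR)
    print(f"{per_person_per_day:.0f} calories per person per day")  # ~2968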

As repeatedly seen in the historical era, if a new technology enabled great numbers of animals to be killed, hunters quickly adopted the practice of killing the most animals that they could while harvesting only the choicest cuts, as with bison tongues in North America.  The North American bison was quickly driven to the brink of extinction by American “pioneers.”  When Indians obtained horses from Europeans, they too killed all the bison that they could; stampeding them off cliffs had been a common practice for thousands of years, and it accelerated when horses made the job easier.[546]  Some Indians used all parts of a bison, but that seemed a minority practice, particularly after horses made hunting far easier, and it was probably economically mandated.  Cultural differences between Plains tribes began disappearing with the radical changes that horses and firearms brought to bison hunting, and stealing horses from neighboring tribes became a predilection.  Even today, in 2015, fishermen procuring shark fins for soup just cut off the fins and throw the sharks back into the ocean to die, which is driving shark species to extinction.

In the Western Hemisphere, Africa, and Eurasia, the five-to-seven-metric-ton herbivores and the predators that hunted them formed guilds that stretched back to the dinosaurs, but in marsupial-dominated Australia the animals were a little smaller, and the largest marsupial ever, Diprotodon, reached “only” about three metric tons.  Australian animals enjoyed about 45 million years of isolation from the rest of Earth’s ecosystems, and large herbivore/predator guilds thrived there as they did elsewhere.  After appearing about 1.6 mya, Diprotodon went extinct about 46 kya, soon after humans arrived, and their bones have been found with what appear to be butchering marks on them.  The next largest denizen of Australia, Zygomaturus, weighed about 500 kilograms and went extinct when Diprotodon did.  Megafauna are variously defined as animals weighing at least 45 or 100 kilograms, which is about as massive as humans.  About 90% of Australia’s megafauna went extinct soon after humans arrived.  Lizards of up to two metric tons disappeared, as did a 500-kilogram, three-meter-tall flightless bird whose family had a 15-million-year existence, a gorilla-sized kangaroo, and so on.  A number had fossil records of more than 10 million years, only to go extinct shortly after humans arrived.  The list of suddenly extinct Australian megafauna is horrifically impressive.  I have yet to see a disinterested scientist or academic deny the idea that humans were primarily responsible, and almost certainly solely responsible, for the Australian megafauna extinctions.[547]  When a “referee” paper was published in 2006, which assessed the state of the debate, the authors attributed the Australian megafauna extinctions entirely to humans.[548]  There is evidence that those early Australians engaged in setting great fires.  On Borneo, about the same time that humans first invaded Australia, near Niah Cave, humans also burned the forests with abandon, as they probably tried to transform the rainforest environment into something friendlier to humans.[549] 

Along with the huge Australian herbivores, their predators went extinct.  Australia’s “marsupial lion” appeared about 1.6 mya and went extinct about 46 kya, when its prey did.  There is a “loyal opposition” to the idea of human agency in the Australian megafauna extinctions that regularly produces papers attributing the extinctions to climate change.[550]  However, the vast majority of research results points very clearly to human agency in the extinctions.[551]  The battle of the scientific papers will not end soon, but this is where the pattern recognition of the generalist can greatly assist, while the specialists’ obsession with minutiae can get them lost in the details.  Although human-agency skeptics sometimes seem to look at the big picture, their picture is not nearly big enough, in my opinion, and a generalist analysis follows. 

What I have yet to see human-agency skeptics discuss is that guilds of multi-metric-ton herbivores and their attendant predators appeared hundreds of millions of years ago, and attaining large size was both an offensive and a defensive strategy that goes back to the “arms races” of the Cambrian Explosion.  When mass extinction events happened, the race began anew.  Ornithischians rose to prominence along with flowering plants.  When the Cretaceous extinction wiped out those dinosaur guilds, it was not long before large herbivores began to reappear, with mammals in those niches.  Within 25 million years of the bolide event, mammals reached their maximum size and remained there for the next 40 million years, until humans arrived.  Although species emerged and went extinct, just as they had for the entire eon of complex life, that guild stayed relatively constant in size.  When humans arrived, entire guilds disappeared.  The five-to-seven-ton herbivores and their predators vanished and were replaced by guilds a tiny fraction of their size.  Car-sized glyptodonts inhabited a niche that ankylosaurs once resided in, and soon after humans arrived, only dog-sized armadillos remained.

While examining minutiae, human-agency skeptics have ignored or argued around unique features of the megafauna that went extinct and of the humans that preyed on them.  Proboscideans were Earth’s most successful land mammals ever before humans arrived.  As modern research has discovered, African elephants help create the biomes they live in, as terraforming agents.  They were far from idle browsers and grazers, but had outsized impacts on the vegetation, soils, and geological features such as water holes.  Dinosaurs may have had similar biome impacts, and such impacts were probably a feature of that large herbivore guild.  Scientists have been finding plenty of evidence that the vegetation changes that human-agency skeptics attribute to climate change may well be largely the result of the guild’s disappearance, not a cause of it.  Researchers in Africa have also discovered that changes wrought by elephants created biomes dependent on elephant management.  When elephants disappeared, so did the biomes that they created, which is why smaller species could also disappear when the large herbivore guild vanished.  Although Australia was the only non-Antarctic continent without proboscideans 50 kya, and its guilds were comprised of somewhat smaller animals, probably reflecting inherent differences between placental and marsupial mammals, Australia’s large herbivores probably had similar biome impacts.

Human-agency skeptics emphasize climate change above all other factors, but that seems a weak argument.  Many of the suddenly extinct Australian megafauna had existed for more than 10 million years, such as that family of large, flightless birds.  Many others appeared during the current ice age and lived for more than a million years, only to suddenly go extinct when humans appeared.  Scientists have counted 17 glacial episodes during the current ice age, and they have had a clockwork-like regularity for the past million years.  The most severe episode yet was more than 400 kya.  How could guilds that lived uninterrupted for at least 40 million years, in an increasingly cold and arid world, and that survived the many climate fluctuations of the current ice age in fine shape, suddenly go extinct, worldwide, wherever humans appeared, if it was all due to climate?

As impressive as the capabilities and survival history of the global megafauna were, what seems far more difficult to explain away are the humans that arrived when the global megafauna went suddenly extinct.  The only megafauna of note to survive were those that had lived with humans in Africa and Eurasia for more than a million years and learned to avoid them, as almost all game animals do today.

During the early days of imperialism, Thomas Hobbes argued that so-called primitive peoples lived lives that were “nasty, brutish, and short.”  Jean-Jacques Rousseau later promoted the idea that humanity’s natural state was gentle.  During the imperial “explorations,” conquests, and early anthropological studies, Hobbes’s vision prevailed, until the world wars of the 20th century.[552]  Anthropologists reacted to the devastating world wars of the 20th century’s first half by creating a neo-Rousseauian “peaceful savage” meme that dominated thinking regarding “primitive” and ancient peoples.  In the early 1990s, as I began the study that became my website, which eventually led to this essay, I was influenced by “the peaceful hunter-gatherer” meme.  However, evidence has been adduced from numerous disciplines that the idea is false and can even be seen as a romantic notion of looking back to a vanished golden age.  Archeological evidence helped overturn the “peaceful savage” myth.  Often it was not new evidence coming to light; rather, archeologists were no longer blinkered by their indoctrination and no longer denied what their eyes told them.[553]  The evidence finally prevailed, and on the heels of that overturned dogma, genetic evidence provided new insights into the human journey since that founder group left Africa.

When protohumans mastered stone tools and fire, they eventually transformed from hunted to hunter, and there is no persuasive reason to believe that an early application of their new ability to inflict wounds would not have been used on each other.  As this narrative reaches the rise of Homo sapiens and the archeological record’s changing toolsets before people became sedentary, those artifacts may well reflect the conquerors’ toolsets, while the vanished toolsets represented vanquished and exterminated peoples.

To briefly revisit UP: men have always committed vastly more violence than women, were the primary hunters, and almost always dominated their societies.  In general, the higher women’s status, the healthier the society.  The !Kung people of Africa have remained isolated hunter-gatherers to the present day.  Their click language, which shares its click sounds with other African groups such as the Hadza (the last full-time hunter-gatherers left in Africa), probably resembles the language that the founder group left with, which has since been lost beyond Africa.[554]  Genetic testing has demonstrated that the !Kung and related groups remained in Africa when that founder group left, and their geographic isolation and warlike ways kept them genetically isolated.  Genetic testing has also traced the migration path to Australia and found peoples that stopped along the way, as part of a coastal migration that eventually reached the Pacific side of Asia and maybe all the way to the end of South America.  One reason why the coastal route was probably the first is that it was warm and relatively easy.  Around 60 kya, the global climate warmed a little; it was about the warmest period of the 100 thousand years preceding the current interglacial period, before the climate began oscillating toward the glacial maximum of around 20 kya.

The Andaman Islands lie in the Bay of Bengal, off the coast of Myanmar.  Sailors avoided the islands for centuries, as the natives killed anybody who landed and burned their bodies.  The aboriginal Andamans looked like African pygmies.  The British established a penal colony on the Andaman Islands in the late 1700s, when about five thousand aboriginal Andamans lived on the main islands.  The Andaman population collapsed from the usual diseases, mayhem, and alcohol that Europeans brought with them, and the Andamans were nearly extinct within a century of British contact.  Fewer than one thousand aborigines survive today.  The genetic and other evidence makes a convincing case that the aboriginal Andamans were island-dwarfed descendants of the original inhabitants.  The Andaman Islands were never connected to the mainland, so the aborigines probably descended from people who stopped and stayed during that founder migration from Africa.

The Andamans are members of a racial group called Negritos, who appear to be remnant populations of the original migration from Africa.  They all survived in marginal environments where they subsisted as hunter-gatherers, while later agricultural immigrants dominated the arable lands.  About 50 kya, a few thousand years before the migration to Australia, the sea level was lower and the islands of Sumatra, Java, and Borneo formed a contiguous peninsula today called Sundaland.  New Guinea, Australia, and Tasmania were also connected and formed a continent today called Sahul.  Deep water lay between those two “lost continents,” and biologists drew lines between them that marked the distribution of animals and plants that did not cross open water.  Wallace’s Line is farthest west, followed by Weber’s Line, and Lydekker’s Line is farthest east.  Those lines mark the limits of migrations from Sundaland toward Sahul, which followed sea level changes.  About 48-46 kya, behaviorally modern humans crossed the water in boats to Sahul, and the peoples of New Guinea, Australia, and Tasmania largely lived in isolation until Europeans arrived.  Those peoples have Denisovan remnants in their DNA, which probably means that their ancestors interbred with Denisovans while driving them to extinction in Sundaland and Southeast Asia, before some migrated to Sahul.  Aboriginal Australian isolation was almost certainly maintained in the way that the Andaman Islanders maintained theirs: by killing strange peoples who came ashore.  However, in 2012, a paper was published presenting evidence of contact about four kya with people probably from India, when the dingo, microlith technology, and some Indian DNA admixture were introduced into Australia, which seems related to a colonization of northern Australia by an immigrant population.[555]  More of those kinds of migrations of human DNA, technology, and domesticated species have yet to be discovered, and some may even prove significant.

When Europeans invaded Australia in the late 1700s, the aborigines they encountered were in a state of almost constant warfare.  What seems to have happened is that Australia’s founder group arrived in the easy meat days, and it grew and spread across the continent in a few thousand years.  Once the golden age based on easy meat ended, the people reverted to their territorial natures and formed 600 separate societies, with between 500 and 1,000 people in each one.  They all had unremitting hatred for their neighbors, with whom they were in constant warfare.  Aboriginal genetic diversity supports the idea that those societies did not interbreed with each other, but stayed insular.  They were all patrilocal and violent.[556]

About 43 kya, sea levels lowered by increasing global glaciation formed a land bridge to Tasmania.  People migrated there, too, to become isolated when the seas rose again.  The peoples of New Guinea’s highlands were the world’s most isolated, not “discovered” until the 1930s.  When Europeans first encountered them, the highlanders did not know that a world existed outside of their highland home; they thought that they were Creation’s only people.  Unlike other relict populations of the original African migrants, New Guinea highlanders practiced agriculture and lived in villages, but they were as violent as the others.

Initial European contact with those relict populations was disastrous in nearly every case (New Guinea’s highlanders fared somewhat better), just as it had been in the Western Hemisphere and elsewhere for centuries.  Those initial contacts happened in anthropology’s early days, and Alfred Radcliffe-Brown studied the Andamans in the early 20th century, when they were tattered remnants of the people of a century earlier.[557]  The San people were also devastated by the European invasion.  When the Dutch invaded what became South Africa, the Southern San were driven to extinction, while the !Kung survived in the Kalahari Desert.  The Andamans, !Kung, and Aboriginal Australians all had, and in some cases still have, strikingly similar religious ceremonies: marathon singing and dancing sessions that could last all night, and some rituals lasted for months.  Their rituals are very likely what the first religions looked like: strenuous ordeals in which people reached frenzied states that left them exhausted.  Today’s leading hypothesis is that those rituals created the group cohesion that held their societies together.[558]  The social glue of monkey and ape societies is grooming, but humans seem to have replaced grooming with conversation when they mastered language, and those early rituals further cemented the bonds.

Social cohesion was attained not only through the benefits of social interactions, but also through punishments when freeloaders did not pull their economic weight in a society.  Scientists have developed a concept called reciprocal altruism, which is not altruistic (giving without an expectation of individual gain) at all, but is more of a societal accounting concept.  Universal cooperation is seen as good for all of a society’s members, and acts of “altruism” will eventually be “reciprocated” by some member of the society, if not by the member initially helped.  A trait of UP is ensuring that exchanges are fair and that cheaters are punished.[559]  The carrots and sticks of rewards and punishments are probably as old as the earliest social animals, but as in so many other areas, humans have developed the most sophisticated versions of those behaviors.

What all early UP societies had in common was that although their rituals formed in-group cohesion, out-groups were thereby fair game, and the connection between religion and warfare precedes that migrating founder group, perhaps reaching back more than 70 kya.  As will be discussed later, warfare and violence have been enduring human behaviors for the entire human journey, spanning from before the human/chimp split to today, with a brief hiatus when Homo sapiens spread to open lands.  Monkeys have wars, so that primate behavior probably has a history of tens of millions of years.

To my knowledge, nobody has ever invoked a climate change hypothesis for the mass extinction of South American mammals that followed the formation of the land bridge from North America, even though that land bridge’s formation probably triggered the current ice age as well as the North American invasions of South America.  Most South American mammal species quickly went extinct when outcompeted by more cosmopolitan invaders that had survived many millions of years of intercontinental invasions.  It was a purely Darwinian event in which animals with greater carrying capacities prevailed.  There was no big-picture awareness of events by the invaders or the invaded, just as there had never been during life’s history on Earth.  They all just tried to survive, and the previously isolated South American mammals quickly lost the game.  The survivors, such as New World monkeys, lived in niches that no North American animals occupied.

Earth had never before hosted anything like behaviorally modern humans; nothing came close.  They wielded fire and began using it for offensive purposes, to shape environments to their liking.  They had sophisticated stone tools and weapons, and they mastered language and could engage in group behaviors that no other land animal remotely approached.  They probably had sophisticated projectile weapons, and if the !Kung example is instructive, they may have also known how to put poison on their weapons.  One !Kung arrow can bring down a 200-kilogram antelope in less than a day.[560]  What kind of animal in the Western Hemisphere or Australia, one that had never seen anything like a human before, that would have been the invaders’ mother lode kill, and that reproduced slowly as all the large ones did, could have withstood that onslaught?  None that I can think of.  Neanderthals were ambush predators of megafauna that were wary of humans, and whatever projectile weapons Neanderthals may have had would have been inferior to those that behaviorally modern humans left Africa with about 60-50 kya.  Neanderthals still lived off of those animals, with many broken bones and undoubted deaths suffered during hunts.  That would have been nothing like what the invaders of the Western Hemisphere and Australia encountered.  The invaders could have walked right up to all of those animals, which had no conditioned fear of humans, and stuck their spears into them, perhaps not even needing to use projectile weapons, much less poisoned ones.  That scenario has been called the Blitzkrieg Hypothesis, but it would not have seemed a rapid event to the invaders.  It would have been a butcher shop’s version of the Garden of Eden.  Farther than they could imagine, in every direction, were animals with no fear of humans that could be killed so easily that it may have literally become child’s play.[561]  One argument by human-agency skeptics is that continental animals were subject to predation and would have quickly begun fleeing humans.  That seems like a weak argument, and here is why.

The genetic testing performed on humanity in the past generation has shown that the founder group’s pattern of migration was to continually spread out, and once the original settlement covered the continents, people did not move much at all, at least until Europe began conquering the world (and there were some farmer displacements/absorptions of hunter-gatherers).  There is little sign of warfare in those early days of migration, and the leading hypothesis is that people moved to the next valley rather than live close enough to fight each other.  Any conflict could be easily resolved by moving farther out, where more easily killed animals lived.  Also, on those virgin continents, people need not have roamed far to obtain food.  Today, a !Kung woman will carry her child more than 7,000 kilometers before the child can walk on its own.  If a !Kung woman bears twins, it is her duty to pick which child to murder, because she cannot afford to carry two.  That demonstrates the limitations of today’s hunter-gatherer lifestyle, but in those halcyon days of invading virgin continents (which had to be the Golden Age of the Hunter-Gatherer), those kinds of practices probably waned and bands grew fast.  When bands reached their social limit they split, and the new group moved to new lands where the animals, again, had never seen people before.  Unlike the case with humans, there would not have been a grapevine by which animals told their neighbors about the new super-predator.  The first time that those megafauna saw humans was probably their last time.  It is very likely, just as with all predators for all time, and as can be seen with historical hunting events such as the decimation of the bison or today's shark finning, that those bands soon took to killing animals, harvesting the best parts, and moving on.  To them it would not have been a “blitzkrieg,” but more like kids in candy stores.  After a few thousand years of grabbing meat whenever the fancy took them, or perhaps less, those halcyon days were over as the far coasts of Australia were reached and the easy meat was gone.  When that land bridge formed to Tasmania about 43 kya, people crossed and were able to relive that “golden” time for a little while longer, until all of Tasmania's megafauna were gone.  The invaders also may have worked their way down the food chain, in which the first kills were the true mother lode.  Perhaps nobody deigned to raise a spear at anything less than a Diprotodon or similar animal until they were gone.  Then people started killing smaller prey, which eventually did wise up and became harder to kill, so humans had to work at it again and the brief golden age was over.  The huge fires that accompanied the Australians as they shaped the new continent to their liking, maybe recreating the savanna conditions that they had left in Africa, may have also been used to flush out animals once they began to avoid humans.

All continental and even most island ecosystems that humans invaded contained predators, but the predators would have been as ill-equipped to deal with the newcomers as their prey was.  Because they could defend themselves, they were probably rarely hunted, except by the most foolish young men.  No predator on Earth would have wanted to confront fire-wielding humans with their array of weaponry and group skills.  The megafauna predators went extinct when their prey did, and probably not because of much direct human violence.

When the first Europeans arrived in Australia, islands off its coasts had been uninhabited for about 10 thousand years, ever since the oceans rose and cut them off from the mainland.  On King and Kangaroo islands, the wombats, emus, kangaroos, and other animals were so tame that people killed them with no effort whatsoever.  Europeans on one island even built a hut for wombats to sleep in at night, and they just pulled one from the hut when needed and slaughtered it.  No mainland animals acted remotely like that.  They are shy and furtive around people, for good reason.[562]  It did not matter whether the environment was warm, dry, wet, or cold; all large animals quickly died off when humans arrived.  New Guinea had a similar megafauna extinction pattern: sudden and total.

Africa and Eurasia were another matter, as humans had been living and evolving there for around two million years and had been hunting for at least several hundred thousand years.  Like those Negritos and other relict humans, some animals found refugia, or were lucky enough to live in them, and did not immediately go extinct.  That coastal route was only the founder group’s initial path.  The Fertile Crescent, India, and Southeast Asia were key points of settlement and radiation, from which the founder group’s descendants further spread across Eurasia.  There are traces of other relict human species’ DNA in the modern human genome, which some think might be from Homo erectus.[563]  Denisovan and Neanderthal DNA are in the human genome, which probably means that humans interbred with them as they drove them to extinction.  Their contribution to today’s human genome is small, on the order of a few percent.  Many thousands of years later, as Europe conquered the world, Europeans interbred with the peoples that they drove to extinction, and there is little reason to doubt that something similar happened as that founder group’s descendants conquered Earth.

The African and Eurasian megafauna learned to fear and avoid people after hundreds of thousands of years of human hunting, but those now-extinct humans may have been worthy adversaries, and unwary behaviorally modern humans could have been occasional prey.  Whatever behaviorally modern humans did to those virgin continents, there is no doubt that they moved to the top of every food chain that they encountered.  Nothing could withstand those people, and the easy meat fueled humanity’s initial global expansion.  That may be when humans became “energy windfall opportunists.”  All animals took energy windfalls when they found them, which fueled all those “golden ages” of the evolutionary past, but humans have quested after them to this very day.  The first instance was probably when that founder group exploded across the planet, got all the easy energy that it could with its new, irresistible methods, and drove all other human competitors to extinction.  It seems that Denisovans quickly succumbed in the warmer climates, and the “hobbits” effectively hid in their refugia for tens of thousands of years while behaviorally modern humans passed them by.  But Neanderthals were a different matter.  There are no known Neanderthal sites on the Mediterranean’s African shores, which would certainly have been quite habitable, yet contemporary remains of modern humans are found on those southern shores.  The divide is evident where Spain and Morocco meet at the Strait of Gibraltar.  I consider it quite likely that the inhabitants of the northern and southern shores were mutually hostile, and that the Mediterranean formed a frontier.  They did not fail to mix because they had adapted to different biomes and lifestyles; rather, they adapted to different biomes because of their mutual hostility.  That did not mean constant warring, but they knew enough to avoid each other, and conflicts were not worth it.  The arrival of behaviorally modern humans, however, changed the terms of engagement.

By about 45-40 kya, a northward-migrating band from the founder group reached Europe.  Although the exact route is in dispute, the genetic evidence supports the idea that the group originated from a migration into the Levant, probably via the east end of the Arabian Peninsula.  Those invaders are called Cro-Magnons today.  When they reached the Levant, they began migrating along the Mediterranean’s northern and southern shores, and Neanderthals began disappearing.  The process took several thousand years at minimum; some have called it a border war with Neanderthals, while others have called it a genetic assimilation.[564]  The way that humans drove the megafauna to extinction, and engaged in warfare while doing so, seems to favor a violent end for Neanderthals, although the “blitzkrieg” of humans migrating across the length and breadth of Australia in a few thousand years was not in evidence for the migration/invasion around the Mediterranean’s periphery.  Neanderthals (and maybe ancient Homo sapiens along the southern shore) do not seem to have gone quietly or easily, and they may have been the biggest obstacle to Earth’s conquest by behaviorally modern humans.

About 40-35 kya, a new stone-tool technology, developed from the Neanderthals’ Mousterian technology and called Châtelperronian, appeared in today’s France and Spain.  It was succeeded by a new stone-tool technology that appeared about 30 kya, called Mode 4, or Aurignacian, and the people making those tools also made cave paintings.  Aurignacian technology was a Cro-Magnon invention of unprecedented sophistication, producing blades instead of flakes.  There is considerable uncertainty about the exact dates when those two technologies appeared, but the consensus is that the Aurignacian succeeded the Châtelperronian, that Cro-Magnons invented the Aurignacian, and that Neanderthals used the Châtelperronian.  There seems to have been cultural interaction between the two peoples, as well as genetic interchange.[565]  The controversies regarding Neanderthal and Cro-Magnon interaction and mutual influence will not end in my lifetime, but virtually everybody agrees that by about 30-27 kya, and maybe even thousands of years before that, Neanderthals were extinct.  The Cro-Magnon/Neanderthal controversy is one of the more heated in anthropology, and there are two basic camps on the Neanderthal extinction, just as with the Australian megafauna extinctions: behaviorally modern humans did it, or climate did it.

In the historical period, when technologically advanced humans encountered less advanced ones, there was cultural and genetic interchange, but in the end, the technologically advanced peoples marginalized the less advanced ones or drove them to extinction.  If any place on Earth could illustrate the climate change hypothesis for the megafauna extinctions, ice age Europe would be it.  Ice sheets extended so far southward that Neanderthals lived in relatively few refugia, but I highly doubt that climate caused their extinction.  Neanderthals existed for at least 300,000 years and survived radical climate changes just fine.  Human-agency skeptics have invoked unusually violent climate changes that coincidentally appeared wherever behaviorally modern humans arrived around the world, but that seems to be grasping at straws.  Again, there is nothing climatically unique about the past 60,000 years compared to the past million, so invoking climate-change effects on humans and animals that had weathered the ice age’s vagaries just fine seems to be a huge conjecture that may be politically motivated.  Human-agency skeptics have crafted a different climate explanation for each major extinction: drying in Australia, growing colder and drier in Europe, or growing warmer and wetter when most of the extinctions happened.  At most, climate was a proximate cause, not the ultimate one.  The ultimate one was people, virtually every time.

About 30-27 kya, after Neanderthals made their final exit, the only other humans on Earth were the “hobbits,” hiding in their refugia.  They, too, disappeared around the time when behaviorally modern humans arrived.  For the “hobbits,” a volcanic explanation has been proffered for their extinction, although they probably coexisted with modern humans.  A problem I have noticed with the arguments of human-agency skeptics is that the fossil and archeological record is currently too thin, and the dates too equivocal, to confidently place closely occurring events in sequence and establish causal relationships that preclude the influence of behaviorally modern humans.  In all such extinctions, I have seen no convincing arguments or evidence that rule out the involvement of behaviorally modern humans, and their “contribution” to those extinctions is perfectly logical and understandable, if not something to beam with pride over.

As scientists have been putting this picture together, one irony is that Cro-Magnons had black skin, while Neanderthals might have had light hair and eyes as an adaptation to the cold climates that they lived in, as with Europeans today.  That irony turns the racist aspect of Europe’s conquest of the world on its head, and it has been noted in some scientific corners.  For the remainder of this essay, as all other human species but the “hobbits” were extinct by 30 kya, the word “human” will refer to behaviorally modern Homo sapiens.

From about 32 kya to 22 kya, the Gravettian culture prevailed in Europe.  That culture produced the first ceramics, and art such as the Venus of Willendorf.  By 20 kya, pottery appeared in China.  But as far as human expansion is concerned, the Gravettian (and the related Pavlovian) cultures are most notorious as mammoth hunters extraordinaire among those that lived on the mammoth steppe near the ice sheets.  To revisit proboscideans, they could not swim to Sahul, but they flourished everywhere else that they could reach.  At more than 10 million calories per carcass, they were the ultimate hunter-gatherer kill.  Also, near the ice sheets, meat could be stored in the ground.  Cro-Magnons did just that, and that “freezer” full of meat led to the first seasonally sedentary humans.  It long predated the Domestication Revolution, when people could become sedentary year-round, but while the megafauna lasted, the first signs of what came later appeared as Cro-Magnons created villages around frozen mammoth meat.  Gravettians hunted along migration routes and set traps and ambushes for mammoths.[566]  For thousands of years, mammoths were the primary focus of Gravettian hunters, and many scientists believe that humans at least helped drive European mammoths to extinction.  Gravettians probably used the bow and arrow, and using poisoned arrows on mammoths would have been child’s play, not a hazardous undertaking.  They also tended to focus on the easy meat: the young, relatively defenseless, tender mammoths.  Killing the offspring alone would have driven the slowly reproducing mammoths to extinction, and as the interglacial period began around 15 kya, there were new pressures on mammoths.  One was that fewer mammoths meant that they were no longer terraforming their environments as they once had, and the warming climate probably reduced their range.  For a mammoth facing humans, there was literally no place to hide (except maybe in the living room), and there is little reason to think that hunters would have eased up when mammoth numbers dwindled.  If anything, their efforts would have increased to get the last ones, as they competed and fought over the final mammoths.  In one lifetime, or even several, the changes would have been barely noticeable, if at all.  There was simply no way out for mammoths, and they went extinct south of the European ice sheets under the ministrations of Cro-Magnon hunters.  More evidence of their fate comes from mammoths that survived in refugia: islands where humans did not arrive until thousands of years later.  Island-dwarfed mammoths survived on St. Paul Island in the Pribilof chain off of Alaska until less than six kya, and they went extinct when humans arrived.  Several hundred apparently full-sized mammoths survived on Wrangel Island near Siberia and went extinct less than five kya, when humans arrived.  In today's France and Spain, Gravettians also semi-settled along the migration routes of reindeer and red deer.  From Spain across Europe and into today's Russia, Gravettians hunted migrating herds, and not only the mammoth, but also the woolly rhino, the Irish elk, the musk ox, and the steppe bison were driven to extinction as the ice sheets retreated.[567]  Neanderthals had been ambush hunting in similar fashion, and those animals, like the African megafauna, grew wary of humans, so killing them probably took planning and guile.
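To put that 10-million-calorie figure in perspective, here is a back-of-envelope conversion (the 2,000-calorie daily intake is a round illustrative number of mine, not a figure from the cited sources):

$$ \frac{10{,}000{,}000\ \text{calories per carcass}}{2{,}000\ \text{calories per person per day}} = 5{,}000\ \text{person-days of food.} $$

By that rough arithmetic, one mammoth could feed a band of 25 people for around 200 days, if the meat could be kept from spoiling, which is exactly what those in-ground “freezers” accomplished.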

The earliest evidence of fishing comes from a man in China 40 kya who apparently subsisted on freshwater fish, and evidence of harpooned seals in Southern France dates from 16 kya.  The techniques of today’s fishermen, such as hook-and-line tackle, did not appear until well into the Neolithic, which began about 12 kya.

This chapter’s reconstruction is largely based on the latest scientific findings as of 2013, along with a little interpretation and speculation on my part.  For instance, those mammoth villages have been discovered, with pits for mammoth meat and houses built from mammoth bones.  But scientists argue over whether Cro-Magnons killed those mammoths or merely scavenged them.  I have little doubt that the mammoths were primarily hunted, not scavenged, and that they were killed along their migration routes.

There is great controversy regarding how violent Neanderthals were and how much Cro-Magnons hastened their demise, but there is no doubt that sophisticated stone tools were used for far more than hunting mammoths.  A Gravettian child’s vertebra was found with a projectile point lodged in it.[568]  Flint blades and projectile points lodged in vertebrae and other bones, and skulls broken by stone weapons, are common finds in Stone Age graves around the world.[569]  Before the Paleolithic Era’s end about 12 kya, cave paintings depicted torture, people pin-cushioned with arrows, and other violent scenes.[570]  In 1964, a cemetery was discovered in Egypt that held the remains of dozens of slaughtered people, and it has been dated to about 13 kya.[571]  Such finds are increasingly common, and they are often situated near coveted resources such as riverbanks.  The Golden Age of the Hunter-Gatherer was long gone by 15 kya in the Eastern Hemisphere, but it would briefly flourish again in the last continents that humans conquered.  Here is a map, based on DNA analysis, of the dates of human migration to the Americas.  (Source: Wikimedia Commons)

The idea that the American mastodon was killed off by hunting was first proposed by George Turner in 1799, and Jean-Baptiste Lamarck, an early evolutionist, thought that humans exterminated the extinct ice age mammals.  By 1860, Richard Owen wondered whether anything but humans could have caused that mass extinction.[572]  Therefore, when Paul Martin proposed his Overkill Hypothesis in 1966, it was by no means novel, but he started the modern debate, and the controversy quickly focused on North America and the extinctions that began there about 15 kya.

As this narrative shows, the North American extinctions came relatively late in the process, but they have been by far the most controversial.  There appear to be several reasons for that.  One is that North America is the home of history’s richest and most powerful nation, and when Martin first published his proposal, the USA was in the midst of a cultural awakening (as well as an imperial slaughter), and the awesome crimes that Europeans committed against indigenous Americans were brought to widespread public awareness for the first time.  The American Indian activist Vine Deloria dismissed the idea of overkill and, until his death, attributed the extinctions to catastrophic celestial events; he was a follower of Immanuel Velikovsky’s work.[573]  The idea that American Indians hunted North American mammoths to extinction also conflicted with the then-prominent “peaceful savage” and “ecological Indian” themes, so the denial partly reflected political bias.

Scientists are unanimous that the Western Hemisphere’s indigenous peoples primarily came from East Asia, but for centuries there has been a cottage industry proposing other ideas.  When Thomas Jefferson sent the Lewis and Clark expedition to North America’s west coast in 1804 to sketch the ultimate reach of empire, its members were told to be on the lookout for the lost tribes of Israel.[574]  But genetic, anatomical, archeological, and other evidence has long since settled the issue of where American Indians came from, and by far the leading hypothesis is that humans migrated to North and South America beginning about 15 kya, perhaps with a migration along the Pacific coastline that continued the pattern established about 60-50 kya out of Africa.  As the ice sheets began melting about 15 kya in North America, a corridor formed between them, and humans walked to North America about 11 kya.  Those arrivals founded the Clovis culture.  The sudden disappearance of virtually all the megafauna of North and South America followed those humans, particularly the ones that came by land and spread.  That situation is where the original “Blitzkrieg Hypothesis” label was first used.

Proboscideans abounded across the Western Hemisphere’s length and breadth; camels and horses had evolved in North America tens of millions of years earlier and still lived there.  North America hosted giant beavers, the largest carnivore ever, the largest cats and wolves ever, and a stunning variety of huge animals that must have been a wondrous sight.  South America had a less spectacular assemblage, but it was still highly impressive.  Earth’s largest land animal during the Pleistocene, next to the largest mammoths, was the ground sloth, which survived the Great American Biotic Interchange after its lineage had existed for about 30 million years.  Within a couple of thousand years of initial contact with those humans that came overland, virtually all of the Western Hemisphere’s megafauna went extinct.  In South America, the invading humans made dwellings out of proboscidean hides, so proboscideans certainly did not go extinct before humans arrived.  The South American Toxodon also survived the Interchange, only to go extinct after humans arrived, and projectile points are found with many Toxodon skeletons.

The Western Hemisphere, more than anyplace else, has been the focus of human-agency skeptics, who claim that climate change did it all or that the human contribution was insignificant.  Their arguments have failed to convince me or virtually anybody else looking into the issue.  There has been a trend to put “nuance” into evolutionary dynamics, in which multiple causes for events, particularly extinction events, are considered.  People can go overboard on nuance, which can obfuscate important issues and confuse ultimate and proximate causes.  I wonder whether some of the “nuance” I am seeing is intentionally misleading or is merely another scientific fashion run amok.[575]  If an ultimate cause overwhelms everything else, focusing on the “nuance” of dynamics that are minor at best is misleading.  North America had the greatest ice sheets of the current ice age, and Arctic proboscideans would have been somewhat affected, but they had survived the previous 16 glacial events just fine and were a minor aspect of the Western Hemisphere’s megafauna holocaust.  South America did not have ice sheets, merely larger glaciers than today’s along the Andes.  From Arctic tundra to Tierra del Fuego, from desert to rainforest, all large animals quickly went extinct soon after humans arrived, even as the climate got warmer and wetter.

My initial 1990s interest in climate change and other hypotheses for megafauna extinctions has gradually turned into skepticism and dismay.  At the footnote that ends this sentence, I provide some context for my skepticism of climate change and other non-human-agency hypotheses.[576]

The Clovis culture’s killing implements abruptly appeared in the archeological record and disappeared just as fast, after the easily killable megafauna went extinct.  Today’s North American megafauna are nearly all migrants from Asia, not native survivors that learned to avoid humans.  Bison are the only significant exception, although they came from Asia, too.  Explaining their survival remains a minor curiosity, but it is about the only circumstance not neatly aligned with the overkill scenario.  The “referee” paper concluded that although the South American extinction was the greatest of all, it is the most poorly investigated, and that the overkill hypothesis cannot yet be attached to South American extinctions.  That may be a prudent position for a specialist who pronounces judgment only when all the evidence is in, but I will be among the most surprised people on Earth if the pattern of 50 thousand years did not continue there, especially since South America had no ice sheets.  There can be no more pertinent example than comparing Africa to South America.  They occupy the same latitudes and have similar climates, separated by the Atlantic Ocean.  Africa was the home of humanity, where its animals had millions of years to adapt to the human presence, and Africa lost only about 10% of its megafauna (probably to human hunters with their advanced weaponry), while South America lost nearly all of its megafauna, and quickly.  Climate change did it?  How could it have even contributed?

Gorillas and chimpanzees suffer very little from predation in their rainforest homes.  Maybe they made it that way by ridding their environments of threats; today, other than humans, the greatest threats to gorillas and chimps are other gorillas and chimps.  Humans began their journey when a chimp-like ancestor left the dwindling rainforest, probably because it was the loser of rainforest life and was forced to live on the margins.  It learned to walk upright as a result, and it began to make tools and grow its brain.  Several million years later, it reacquired the level of security that those gorillas and chimps had, but it mastered a global environment, and the only real threat that humans faced afterward was each other.  Achieving global mastery was humanity’s Second Epochal Event, as humans became the greatest predators that Earth has ever known.

As the megafauna that fueled humanity’s global expansion went extinct, all human populations became relatively immobile, and even hunter-gatherers had circumscribed ranges.  There were no more virgin continents to fill with people, and humans began to turn on each other in earnest as they fought over their reduced energy supplies.  Between the coalitionary killing of chimps and the human warfare of the late hunter-gatherer phase, there seems to have been an intermediate stage, beginning up to a million years ago among Homo erectus.  Until hunter-gatherers began forming segmented societies (with some hierarchy) in the past 30 thousand years or so, the risks of killing one’s neighbor outweighed the advantages, especially when resolving conflicts meant easily moving to new, unoccupied lands.  Although there was probably plenty of interpersonal violence, warfare did not appear until there was resource competition among the humans that conquered Earth.[577]  It has even been speculated that when Homo erectus left Africa nearly two mya, leaving was the path of least resistance for resolving local resource competition.

Some island refuges that humans had not yet invaded, such as New Zealand, Madagascar, the Caribbean, and Polynesia, including Hawaii, retained their megafauna (the large birds were often not quite large enough to meet the megafauna definition, but they were relatively large).  That huge sloth survived in the Caribbean for several thousand years after its mainland brethren went extinct upon meeting humans, and when humans reached the Caribbean islands, the sloth quickly made its final exit.  Most of those islands were not invaded by humans until the historical era, and the story was always the same: the rapid extinction of all easy meat.  The pattern has been painfully clear ever since that founder group left Africa, and it continues apace and accelerates.  The Sixth Mass Extinction is well underway; more than half of Earth’s remaining species may go extinct in my lifetime, and almost certainly will by the year 2100 on the current trajectory.  Today’s primary culprit is habitat destruction, as the world’s poor raze tropical forests to raise crops and procure firewood.  But humans have been working their way to the bottom of Earth’s food chains as they scrape the last morsels from its ecosystems.  When Europe learned to sail the oceans, an oceanic holocaust began, starting with the global ocean’s primary megafauna: whales.  Whales comprised an energy mother lode that was plundered once humans reached the level of technical sophistication that could exploit it, and the plunder continued until the 1960s, when nearly all of Earth’s whales were at the brink of extinction.  Humans have never practiced conservation except when forced into it, as they began feeling the pinch of exhausting their energy resources.  It is no different today with the exploitation of fossil fuels.  Humanity’s energy windfall opportunism may be its most characteristic behavior.

As the ice sheets retreated and today’s interglacial period began, humans already at the margins of those ice age environments simply spread toward the Arctic as far as they could.  From then until Europe began conquering the world 500 years ago, there were few mass migrations of note; the exceptions included the Bantu expansion in Africa, the conquests of parts of Eurasia by pastoral nomads, and the displacement of hunter-gatherers by agricultural peoples, particularly in Australia and the Americas.  But even those migrations could be more cultural and technological than human, in which the “invaded” peoples adopted the often energetically superior practices of the “invaders” rather than being replaced by them.  Genetic testing has shown that this was largely the case in Europe (although perhaps more for the women than the men), which has been one of the greater surprises of global genetic testing, although the research is in its early days, and more controversial findings are sure to come.

The evidence of inter-human violence, both between early Homo sapiens groups and in what happened to Neanderthals, Denisovans, Homo erectus, and the “hobbits,” is more circumstantial than a corpus delicti.  But after the megafauna were gone, the evidence for universal human violence, the kind that anthropologists pretended was not there for generations, becomes staggering.  Those early slaughters before the Holocene were only a prelude.  When those relict populations were discovered by Europeans, they were all tremendously violent, both the hunter-gatherers and the agriculturalists of New Guinea’s highlands.  Their warfare technology obviously could not compete with Europe’s, and their style of warfare was dismissed by early European observers as ineffective and ritualized, but those Europeans understood “primitive” peoples’ warfare about as well as they understood their cultures, which was not well at all; such ethnocentrism is also a trait of UP.[578]

Hunter-gatherer lands are far more sparsely populated than agricultural or industrial lands because of how much energy people can extract from their environments.  Japanese rice farmers can extract 10 thousand times as much food energy from a hectare of land as Cro-Magnon hunter-gatherers could.[579]  At Japanese rice farmers’ levels of productivity, the yard of the home I was raised in could have met my family’s food requirements.[580]
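To make that 10,000-fold ratio concrete, here is a rough sketch; the per-hectare rice yield is an illustrative round number of my choosing, and a person is assumed to need about 2,000 food calories per day (roughly 730,000 per year):

$$ \frac{10^{7}\ \text{calories/ha/yr (rice)}}{7.3\times 10^{5}\ \text{calories/person/yr}} \approx 14\ \text{people per hectare}, \qquad \frac{10^{7}/10^{4}\ \text{calories/ha/yr (foraging)}}{7.3\times 10^{5}\ \text{calories/person/yr}} \approx \frac{1\ \text{person}}{730\ \text{hectares}}. $$

Whatever the exact numbers, magnitudes like those are why a farming family could live off a village plot while a hunter-gatherer band needed a territory of many square kilometers.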

The hunter-gatherer means of production and style of warfare seemed hopelessly inept when compared to European methods, but proportionally, the violence of those societies was more than an order of magnitude greater than that seen in European societies.  As an example, the Chippewa people lived near North America’s Great Lakes, and European expansion decimated them.  Yet their population attrition rates from warfare (between Indians) were about four times those of Germany and Russia in the 20th century.  The 20th century is history’s most war-torn in terms of absolute war deaths, but if it had experienced the war death rates of typical nonliterate societies, there would have been about two billion war dead instead of around 100 million.[581]  The high death rates of pre-state warfare were due to pre-state political economies.  In “civilized” war, the goal was conquering the enemy and extracting taxes from them, which could range from women (fertile females have been war booty for at least ten million years), to fighting-age men, to food, gold, and other economic windfalls that are often called “tribute” in preindustrial societies.  That can only be accomplished with sedentary populations.  For hunter-gatherers, about the only “asset” other than useful women would have been the rivals’ lands, so complete genocide, taking the vanquished’s lands and women, was a goal met often enough, although a phenomenon appeared that continues to this day: the economic bounty was often taken almost absent-mindedly, while the obsession was with humiliation, acquiring war trophies, and the like.  Modern statesmen play a similar game, but it is for show, not taken seriously by the war planners, and used to manipulate the masses.
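The scale of that comparison is easy to check with the two figures just given:

$$ \frac{2\times 10^{9}\ \text{hypothetical war dead}}{1\times 10^{8}\ \text{actual war dead}} = 20, $$

that is, typical nonliterate societies lost people to warfare at roughly twenty times the per-capita rate of the 20th century’s industrial states, despite the latter’s vastly deadlier weapons.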

The vicious, take-no-prisoners approach of pre-civilized (also called “nonliterate” or “native,” formerly called “primitive”) warfare was typical, and surprise raids that killed everybody in their beds were the preferred means of attack.  That graveyard in Egypt is probably an early example of that mode of warfare, in which nobody was spared.  About a thousand years later, around 12 kya, arrows, slings, and maces were added to humanity’s growing arsenal, and the deadliness of warfare escalated.  Projectile wounds and all manner of trauma became common findings in excavations of European graves from eight kya to four kya.[582]  As anthropologists gradually abandoned their “peaceful savage” meme, they began sorting out the ultimate and proximate causes of pre-civilized warfare.

Anthropologists have derived these reasons why societies go to war: defense, plunder, prestige, and control.[583]  Only states have control as a motive, because only sedentary peoples with economic surpluses can be taxed.  “Defense” is a territorial motive (retaliation and revenge also fit neatly into this category), which is economic; plunder is nakedly economic; and prestige only reinforces or enhances a man’s economic status.  So, all motives for war are ultimately economic in nature.  All wars had some kind of proximate cause, some triggering event that began the hostilities, and feuds could last for generations, but when the bickering and noise are removed from the signal, nonliterate societies, just like civilized ones, fought primarily for economic reasons, and resource access was always first and foremost.[584]  Because land is the source of all wealth (particularly in preindustrial civilizations), as it is where the energy comes from, all societies, from the smallest band of hunter-gatherers to today’s modern states, fight over territory, which is no different in kind from what chimps do.  It has been that way at least since macaques, and it may have begun with some of the earliest social animals.  At the bottom of it all, all people instinctually know that it is all about economics, with the rest just noise, if sometimes pleasantly diverting noise.

This interglacial period has also witnessed spectacular geophysical events, largely related to ice sheet effects.  The ice sheets of the Northern Hemisphere sequestered an immense volume of water, which lowered sea levels.  When the global ocean rose with the melting ice sheets, portions of continents such as Japan and New Guinea became islands.  The land bridge of Beringia slipped beneath the seas and isolated the Western Hemisphere’s aboriginal peoples, who were not “rediscovered” to any significant degree until Columbus sailed in 1492.  Near my home in Washington State is one of many stark remnants of the prodigious floods that attended the melting ice sheets.  When the obvious evidence of vast floods was linked to ice age melting nearly a century ago, the idea was initially dismissed as crazy speculation, as it conflicted with the uniformitarian beliefs of the day.  It is now universally accepted as one of many events in which ice dams broke and awesome floods scoured the Northern Hemisphere.  Catastrophists have long invoked celestial explanations for the demise of the mammoth, while others have cited glacial floods.  Those buried mammoths that keep appearing as the Arctic melts due to global warming were probably killed in the innumerable floods that attended the melting ice sheets, but that is a far cry from driving the species to extinction.  The most violent glacial event may have created a global climate change about 12 kya, and the resulting thousand-year period is called the Younger Dryas.  The current ice age began in the North Atlantic, and when the Laurentide ice sheet melted, vast floods into the North Atlantic may have disrupted Earth’s oceanic circulation and dramatically changed Earth’s climate systems.  If agriculture had developed by then, the Younger Dryas would have created epic famines, wars, and population displacements.  But because few humans were sedentary at that time, that did not happen.  However, that event may have been responsible for humanity’s next Epochal Event: the Domestication Revolution, which is the next chapter’s subject.

As can be seen with bonobos, economics is the foundation of social organization.  Social animals are social because their survival chances increase when they combine their efforts.  All animal social behaviors are interpreted as strategies for meeting life’s essential requirements.  How far has human behavior “risen” past those basics?  In this narrative of the journey of life on Earth, where can we draw the line of when the human line became “sentient”?  Are we sentient even now?  That question is not easily answered.  Bonobo females can be brutal in coercing males back in line with their social plan, and chewed-off fingers have been noted.  Although bonobo life is filled with sex and cooperation, social enforcement can be savage.  Hunter-gatherer social organization is egalitarian, partly because when people carry all of their possessions on their backs, there is little opportunity for economic inequality to gain political power.  However, hunter-gatherers have very strict and vigilantly enforced social norms, with ancient roots, that ensure that level economic and political playing field.  A successful hunter must share his kills with all, and anybody who tries to dominate the band immediately has a coalition formed against him (it is always a male, reflecting that ape heritage), putting him back in line.  If a man gets too far out of line, there are two fatal punishments.  One is banishment from the band; the life expectancy of a solitary hunter-gatherer is minimal, so banishment is usually a death sentence.  The exile may try to join another band, but that is highly risky, partly because any man banished from his band is going to be suspect.  The second is capital punishment, and to avoid initiating a feud, the band will “hire” kin of the condemned to perform the execution.[585]

Studies of warfare have shown that absolute population density has little influence on how warlike societies are.[586]  However, the proper way to analyze population density and conflict is probably not in absolute terms, but in relative terms.  Hunter-gatherer bands slaughtered each other over access to resources such as waterholes, stone quarries, and salt deposits.[587]  Ancient states of the Fertile Crescent and Mediterranean fought over access to forests, arable lands, and low-energy transportation lanes (usually waterways), and no informed observer thinks that the USA would have invaded Iraq in 2003, after more than a decade of genocidal economic sanctions and after helping to bankrupt its own economy by hosting a huge military presence in that region, if the USA had been sitting atop enough high-EROI oil to power its economy for centuries.  It is the relative abundance of the resources that support a people’s means of production that largely determines how warlike they are going to be.  Scarcity leads to violence, whether it is a gang of chimps looking for a neighbor to murder or history’s richest and most powerful nation invading peoples half a planet away to steal their energy resources.

Some nonliterate societies do not engage in warfare.  They are a vanishingly small proportion of the world’s native societies, but almost without exception, they are not warlike because they are geographically isolated.  The most important variable in predicting a society’s level of internal and external violence is male dominance.  Monkeys are matrilocal (matrilineal), and males leave their society of birth to mate; gorillas and chimps are patrilocal (patrilineal), and females leave their natal society to mate; and humans have both kinds of pre-state societies, along with some minor variations.  Patrilocal societies are run by gangs of related men, are by far the most violent, engage in the most warfare, and subject women to the most violence.  Patrilocal societies can also have harems or many “wives” for the alpha males.[588]  Patrilocal societies make up nearly 70% of the world’s documented native cultures.  Neanderthals and australopiths appear to have been patrilocal.  The primary determinant of patrilocal or matrilocal residence in humans is the economic contribution of women.  In general, where gathering and horticulture brought in more calories than hunting, women had more influence and the society tended to become matrilocal.  Those relationships only hold for societies that are not economically centralized.  When surplus redistribution appeared, men began to dominate, and the chiefdom was the first step toward state formation.[589]  Organized violence only began increasing as states began to form.

The next chapter will explore the issue more fully, but the rise of agriculture was a peaceful process, and all “pristine” civilizations began peacefully.  They only became violent when early states formed and men dominated.  The formation of urban communities and states always followed the invention of agriculture.  The pattern seems to have been that women dominated food production in pre-state horticultural/agricultural societies, so they were matrilocal (or women had high status).  With the formation of states and the rise of male domination, women’s status universally declined, and did not rise again until industrialization.

Internally, pre-state societies could be quite violent, even the “peaceful” ones, usually with men going at it.  In matrilocal societies, the fighting was more like wrestling matches than deadly encounters.  When women fought each other, it was often in “cat-fighting” style, in which they tried to disfigure each other’s faces as a way to make them less attractive to men.[590]

The Solutrean culture (c. 22 kya to 17 kya) succeeded the Gravettian culture, to be succeeded in its turn by the Magdalenian culture (c. 17 kya to 12 kya), when Mode 5 tools (also called microliths) appeared.  Those cultures were in France and Spain, the Neanderthals’ former range, in the refugia from the ice sheets that blanketed northern Europe, and the Magdalenian culture spread northward as the current interglacial interval began.

Europe was probably a crucible of violence ever since the human conquest of Neanderthals, and evidence for warfare and mass violence increases as the timeline progresses from then.  But going back to those chimp gangs, violence is not so much instinctual as calculated, and it is a response to economic scarcity above all else.  However, those early religious rituals were not only a method of forming group cohesion; they were also a way to condition men to throw their lives away while trying to take the lives of others.  The rituals and rites of passage for men were often extremely painful ordeals that conditioned them for the short life of a warrior and formed highly contrasting in-group/out-group beliefs that facilitated killing other people.  The portion of the human brain where emotions appear to be seated, the limbic system, is no larger than it is in our great ape cousins.  It is well known that fear shuts down the neocortex as animals prepare for fight-or-flight responses, and it is no different with humans.  However, the response is much more dramatic with humans, with their huge neocortexes and frontal lobes, so in responding to fear, humans lose much of what makes them seemingly sentient.  Those religious rituals seem designed to bypass the neocortex and form a bridge to the limbic system, where emotions rule.  Religion seems to have arisen as a response to warfare, but that will be explored in the next chapter, which covers the civilizing of humanity: the Third Epochal Event.

Just as our ancestors may have left the trees, and Homo erectus may have left Africa, out of desperation, that founder group may well have left Africa as an act of desperation, driven to the margins by its neighbors.  If the group left about 60-50 kya, as seems the most likely timeframe in light of today’s evidence, then by 10 kya the entire planet had been conquered.  Behaviorally modern humans were atop all terrestrial food chains outside of Africa, and in Africa the megafauna avoided them, so nothing on Earth threatened human existence except other humans.  Just as the australopithecine Tesla who made the first stone tool could not have imagined the Homo erectus that emerged from that act a half-million years later, the founder group could not have imagined its descendants’ world of 10 kya, which was reached after about a tenth of the previous epochal innovative interval.  By then, several million descendants of that founder group were spread across the planet, from tundra to desert to rainforest, and they filled all inhabitable continents.  The people of 10 kya would have been anatomically recognizable and all had UP’s traits, as they do today.  However, with their cave paintings and microliths versus what the founders left Africa with, several million people versus a few hundred, immensely diverse climates and the tools used to survive in them, and mutually unintelligible languages, the founder group’s members would not have comprehended a tour of their descendants’ world.  The founders’ descendants even began to look different as evolution marched onward, and many racial differences would have been noticeable, although the bizarre white skin, blond hair, and blue eyes had yet to appear in Homo sapiens.  Some people of 10 kya even had companions called dogs (wolves were domesticated more than once, perhaps as early as 33 kya, and the modern dog was domesticated about 15 kya), which would have seemed a miracle, a terror, or something strange beyond imagining.  The world’s large animals paid the ultimate price for fueling that expansion, and the Sixth Mass Extinction thus began.

 

Humanity’s Third Epochal Event: The Domestication Revolution

Chapter summary:

  • Seasonal variation beyond rainforest homes of humanity's ancestors

  • First permanent human settlements

  • Natufian culture

  • Humanity's four pristine civilizations

  • Connection between agriculture and civilization

  • The absorption/displacement of hunter-gatherers by farmers

  • Earth's carrying capacity and the human population before the Domestication Revolution

  • Group selection and human societies

  • Çatal Höyük

  • Climate change and the beginnings of agriculture

  • Developing class systems, and differences in the diets of the sexes

  • Animal domestication

  • Plundering Earth's forests begins

  • Plow and smelted metal

  • Quick environmental demise of early civilizations

  • Energetic benefits of water travel

  • "Tyranny of distance" and civilization

  • Cities led to professions

  • World's first city

  • Invention of the sailboat

  • Bronze Age

  • Wheel

  • Mass warfare begins, and its brutality

  • Repression of original religions in cities

  • Divine status of elites

  • Appearance of the palace

  • Legitimacy of pristine states

  • Religious indoctrination and justification of elites and states

  • Using abstract symbology to manipulate the mass mind

  • Invention of writing

  • Epic of Gilgamesh

  • Salination and siltation wreck Sumer

  • Akkadian conquest of Sumer, resurgence of Ur, and barbaric laws

  • Floods and droughts of deforested Mesopotamia

  • Modern debates over societal collapse

  • Cities and the obsolescence of clan organization

  • Domestication around the world

  • This essay's departures from orthodoxy

  • Reasons for civilization, and the thin agricultural surplus of early civilization

  • Early civilizations' effect on atmosphere

  • Basics of civilization

  • Scarcity assumption beneath all ideologies

  • Early conservation efforts

  • Mesopotamia's environmental refugees, including Abraham, and the hazards of taking the myths of religion literally

  • Old Testament's purpose

  • Reliable food supply of the Nile River Valley

  • Extinction of the Nile's megafauna

  • A pharaoh's "job" of controlling the Nile's flood

  • Egypt's Old Kingdom and building the necropolis at Giza

  • Extinction of the Mediterranean megafauna

  • Mesopotamia's scarcity of wood, and settling the Eastern Mediterranean

  • Rise and fall of the Minoan civilization of Crete

  • Rise and fall of civilization on Cyprus

  • Iron Age

  • Rise of Mycenaean civilization

  • Troy and the Trojan War

  • Collapse of Bronze Age civilizations of Eastern Mediterranean

  • Peak of Phoenician civilization

  • Rise of Greek civilization after centuries of forest recovery

  • Athens enters its classic period

  • Greek wars with Persia

  • Athenian war with Sparta begins

  • Desertification of Athenian hinterland

  • Athens's Sicilian expedition

  • Greeks discover conservation, but too late

  • Greek classic period ends, and Alexander the Great conquers most known civilizations

  • Rome founded

  • Thick forests near early Rome

  • Roman Republic founded

  • Rome expands and conquers Carthage and Corinth

  • Rome controls entire Mediterranean

  • Roman civil wars begin

  • Roman Republic ends and Roman Empire begins

  • Rome's brutality toward people, animals, and hinterland

  • Roman astonishment that Italy was once heavily forested

  • Romans discover conservation, but too late

  • Rome's rape-and-plunder economy

  • Today's arid moonscapes of the Mediterranean periphery, where lush forests once stood

  • Rome invades the British Isles and establishes short-lived iron industry

  • Cyprus is again deforested to provide Roman bronze

  • Rome's bath fleet scours Mediterranean periphery for wood

  • Importance of the Roman Emperor's wheat fleet

  • Unsustainability of all early civilizations

  • Manipulating economic yardsticks

  • Two centuries of Roman "peace" ends as Rome begins running out of energy

  • Rome debases its currency

  • Declining EROI and declining energy surplus of declining civilizations

  • Rome's decline and fall

  • Civilizations' unsustainable energy practices, not climate, were always primarily responsible for their demise

  • China's developmental trajectory

  • The invasions of pastoral societies into agricultural societies

  • Human genetic adaptations to northern climates – blond hair, white skin, blue eyes

  • Decline of women's status in civilization

  • European invasions and the rise of violence

  • Bantu expansion

  • The lack of rainforest civilizations

  • Mesoamerican civilization's developmental trajectory

  • Andean civilization's developmental trajectory

  • Relative environmental gentleness of the Western Hemisphere's Stone Age cultures

  • Mississippian culture

  • Escalation of indigenous violence with European weapons

  • Unsustainability of deforestation and agriculture

  • The similarities of all agrarian cultures

  • Inability of hunter-gatherer peoples to imagine civilization

There are dry and wet seasons in the tropical rainforests where gorillas and chimpanzees live, and they must seasonally change their diets to adapt to the available foods.  Beyond those rainforests, seasonal variation is more pronounced and, once the easy meat was gone, people survived by engaging in the hunter-gatherer lifestyle familiar to today’s humans.  A sexual division of labor existed: men hunted and women gathered.  Men had the strength and speed required to hunt wary animals, particularly large game, while women were less mobile, partly due to caring for children. 

Gravettian mammoth villages probably hosted humanity’s first semi-sedentary populations, but that short-lived situation ended when mammoths did.  The primary necessity for a sedentary population’s survival was a local and stable energy supply.  One energy supply tactic, as could be seen with those mammoth hunters, was storing food in permafrost “freezers.”  Seasonal settlements existed where people subsisted on migrating animals or when certain plants had a harvestable and seasonal stage of development. 

Although eating roots has a long history in the human line, permanent sedentism began with harvesting nuts and seeds.  In the Levant, in a swath of land that includes today’s Israel and Syria, about 13.5 kya the Kebaran culture (c. 18 kya to 12.5 kya) made acorns and pistachios a dietary staple.[591]  Mortars and pestles were in the Kebaran toolkit for processing acorns, which must be pounded into a paste and soaked to leach out the tannins, and that work fell exclusively to women.  Domestication often meant artificial selection to reduce or remove plant features that protected against grazing.  That made the plants more palatable to humans, but it also made them more attractive to other animals.  What all major human crops had in common was that they were domesticated from plants of tropical or warm temperate regions with a dry season.  Those plants developed strategies to survive the dry season and stored energy in seeds, roots, and legumes.  People learned to exploit that stored energy, and they domesticated those plants.  Many of today’s domestic crops could not survive in the wild, and protecting crops from other animals and from competing plants has been an integral part of the Domestication Revolution.[592]  Similarly, many domestic animals would have a difficult time surviving in the wild, including people.

The Natufian culture (c. 15 kya to 11.8 kya) succeeded the Kebaran culture.  The Natufian village at Tell Abu Hureyra in today’s Syria was established about 13.5 kya and was situated on a gazelle migration route.  The residents of that village of a few hundred people also harvested “wild gardens” of wheat and rye.  Those villagers became Earth’s first known farmers, and they had dogs.  The original settlement was abandoned during the Younger Dryas and resettled after it ended.  A harsher climate may have spurred the origin of agriculture, which began there about 11 kya.  By seven kya, the settlement had grown to several thousand people, and it was then abandoned due to aridity.  No evidence of warfare is associated with the settlement.  A compelling recent hypothesis is that agriculture could not have developed in warfare’s presence, as farmers would have been too vulnerable to raids by hungry hunters.[593]  In the four places on Earth where civilization seems to have independently developed (the Fertile Crescent, China, Mesoamerica, and the Andes), no evidence of violent conflict exists before those civilizations, fed by the first crops, began growing into states.  Those states are called “pristine” states, as no other states influenced their development.  It is also considered likely that a primary impetus for beginning agriculture in those regions was the decimation of animals to hunt.  Not only was the easy meat rendered largely extinct, but those animals would also have been competitors for crops.  The peaceful agricultural villages that feminist authors have long written about, in which women's status was closer to men's than at any time before the Industrial Revolution, actually existed, if only for a relatively brief time and in only a few places.

Only when economic surpluses (primarily food) were redistributed, first by chiefs and then by early states, did men rise to dominance in those agricultural civilizations.  Because the rise of civilization in the Fertile Crescent is the best studied and had the greatest influence on humanity, this chapter will tend to focus on it, although it will also survey similarities and differences with other regions where agriculture and civilization first appeared.  Wherever agriculture appeared, cities nearly always eventually appeared, usually a few thousand years later.[594]  Agriculture’s chief virtue was that it extracted vast amounts of human-digestible energy from the land, and population densities hundreds of times greater than those of hunter-gatherers became feasible.  The debates on the subject may never end, but today it is widely thought that Malthusian population pressures led to agriculture's appearance.[595]  The attractions of agricultural life over the hunter-gatherer lifestyle were not immediately evident, at least after the first easy phase, when intact forests and soils were there for the plundering.  On the advancing front of agricultural expansion, life was easy, but as forests and soils were depleted, population pressures led to disease, "pests" learned to consume the human-raised food, and agricultural life became a life of drudgery compared to the hunter-gatherer or horticultural lifestyle.[596]  Sanitation issues, disease, and environmental decline plagued early settlements, and humans became shorter and less healthy not long after they transitioned from hunter-gatherers to farmers, but the land could also support many times the people.  Another aspect of biology that applies to human civilization is the idea of carrying capacity.  Over history, the society with the higher carrying capacity prevailed, and the loser either adopted the winner’s practices or became enslaved, taxed, marginalized, or extinct.  On the eve of the Domestication Revolution, Earth’s carrying capacity under the hunter-gatherer lifestyle was around 10 million people, and the actual population was somewhat less, maybe as low as four million.[597]  On the eve of the Industrial Revolution in 1800, Earth’s population was nearly a billion, which again was considered to be about half of Earth's carrying capacity under that energy regime.  No matter how talented a hunter-gatherer warrior was, he was no match for two hundred peasants armed with hoes.  
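To make those magnitudes concrete, here is a minimal arithmetic sketch using only the figures cited above; the variable names are mine, and the calculations are illustrative rather than sourced:

    # Rough comparison of Earth's supportable population under the two energy
    # regimes, using only the figures cited above; illustrative arithmetic only.
    hg_capacity = 10_000_000         # carrying capacity, hunter-gatherer lifestyle
    hg_actual = 4_000_000            # actual population, perhaps as low as 4 million
    agrarian_actual = 1_000_000_000  # population in 1800, on the eve of industrialization
    agrarian_capacity = 2 * agrarian_actual  # 1800's population was about half of capacity

    print(agrarian_capacity / hg_capacity)   # 200.0: ~200x the carrying capacity
    print(agrarian_actual / hg_actual)       # 250.0: ~250x the actual population

Either ratio lands in the “hundreds of times greater” range that the population-density claim above rests on.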

The Selfish Gene Hypothesis explains plenty, and one reality is that women will always have a genetic investment in their offspring, no matter who the fathers are.  As civilizations rose and men climbed atop the hierarchies, those men all had enhanced reproductive rights (many wives, harems, etc.), and many women found the situation tolerable and even attractive, although there could be coercion in the unions and there are many obvious disadvantages to being a "kept" woman.  However, being a wife or concubine of an elite man usually meant a pretty good life and provided-for children.  The biggest losers in such societies were non-dominant men, who had diminished procreation opportunities (eunuchs guarded harems, for instance).  With the rise of DNA testing, a repeating dynamic can be seen: when one people at a higher economic level (energy use) encountered another, the women from the poorer culture bred with the men from the richer culture, and the men from the poorer culture began vanishing from the gene pool.  It is particularly noticeable in agriculturalist expansions into hunter-gatherer lands, such as the Bantu Expansion and the expansion from the Fertile Crescent into Europe and North Africa, and it seems to be implicated in the spread of Mesoamerican farmers into the USA's Southwest.[598]  The general pattern during the Neolithic Expansion seems to have been farmers migrating to arable land and establishing agricultural communities that were surrounded by hunter-gatherers, and it seems to have been more common for the farmer populations to expand, displacing the men and absorbing the women of the hunter-gatherer populations, than for hunter-gatherers to learn agriculture.  After a career of studying human migrations, Peter Bellwood had this to say about what motivated them:

 

"Why did ancient populations commence their migrations?  My instinct would be to place a need for land and resources as the most common causal factor in situations in free and considered migration (not forced by war or other sources of desperation) both in the prehistoric past and in more recent history."[599]

 

In other words, the motivation was primarily economic, usually after depleting the energy resources of the lands that they migrated from, whether those resources were megafauna, forests, or soils.  After the Neolithic Expansion, migrations that displaced the natives seem rare, at least until Europe began conquering the world.  That is the general pattern that I have noticed, but controversies are ongoing in 2015 as I write this.  During Spain's genocidal invasion of the Caribbean, in which about the only immigrants were European men with dreams of riches or captured African men facing short lives of slavery, the surviving native women became concubines for the invaders, and native male DNA vanished from the genome.  Recent research regarding Puerto Rico showed a complete eradication of native men from the genome.[600] 

Today, people practicing the hunter-gatherer lifestyle are usually dependent on the production of nearby agricultural societies.  Pure hunter-gathering, of the kind performed before the Domestication Revolution, has almost entirely vanished.[601]

Darwin made the case for group selection, but he believed that natural selection primarily worked at the individual level.  The idea of group selection has become prominent in my lifetime, if controversial.  Anthropologists and biologists see evidence of group selection, not only in social creatures such as termites, but also in the ability of human societies to survive competition with their neighbors.  Hunter-gatherer societies eliminated disruptive members by banishment or death, which has been argued to have culled “uncooperative” genes from the human herd.[602]  When Europe conquered the world, it had by far the highest energy usage of any peoples on Earth, which was why it always prevailed.  When high-energy societies met low-energy societies, the results were almost always catastrophic for the low-energy societies.[603]  Hunter-gatherer societies have no chance in a competition with societies possessing domesticated plants and animals, much less with industrialized societies.  Whether they are species or human civilizations, the generation of energy surplus determines their viability.

Another early Fertile Crescent village, Çatal Höyük, in today’s Turkey, existed from 9.5 kya to 7.7 kya and was another peaceful agricultural settlement, with inhabitants numbering several thousand people.  It was arguably Earth’s first city, but it was more like a large village, without the civic features typically associated with cities.[604]  The society seemed classless, and women and men had roughly equivalent status; this is one of the brief social golden ages that feminists have studied.  The first domesticated sheep appeared at Çatal Höyük, and the beginnings of cattle domestication appeared there as well.  Çatal Höyük’s residents raised wheat, barley, and peas.  Pottery, obsidian mining, and tool-making were major crafts, and those people made the world’s first known map.  Çatal Höyük did not have walls, there was no sign of warfare, and many “shrines” dotted the settlement, which probably supported a hunter-gatherer religion.  Çatal Höyük was abandoned in a pattern that would repeat itself in the Fertile Crescent and the Old World many times in succeeding millennia; deforestation and the resultant desertification may have spelled the end of Çatal Höyük, as was probably also the case with Tell Abu Hureyra.

In an event that favors the hypotheses of climate-change advocates, there was a dip in global temperatures beginning about 8.2 kya, which lasted for a few centuries.  It was probably caused by remnants of the North American ice sheets melting and the resultant flush of freshwater into the North Atlantic.  It was a less severe event than the Younger Dryas, but it still caused epic droughts around the world.  Some scientists think that the uncertainty caused by those cooling events helped spur agriculture, to enhance food security.  Climate change from that event could be why Çatal Höyük was abandoned; Tell Abu Hureyra survived the event, only to be abandoned several centuries later, when another major dip in global temperatures occurred. 

Those two early settlements may have been abandoned partly due to those climate events, but their residents would also have deforested their hinterlands and desertified the region, and the settlements were permanently abandoned.  In the Jordan Valley, settlements were abandoned at the same time, which is thought to be because a thousand years of agricultural settlement had eroded and deforested the land, and sufficient crops could no longer be grown.  That pattern of population growth and apparent overtaxing of the environment was common all across the Fertile Crescent around eight to nine kya; populations migrated away from the first settlements in search of new lands to exploit, and animal herding became a more commonplace method of sustenance.[605]  Environmentally harmful practices combined with droughts destroyed many civilizations in the millennia after those early abandonments, including the Mayan, Anasazi, and Harappan civilizations.[606]

A contemporary of Çatal Höyük, Çayönü Tepesi, in Anatolia, had indicators of developing class systems and male/female differences in diet.[607]  Cattle seem to have been first domesticated in its vicinity about 10.5 kya, and pigs may have been first domesticated there as well.  Many progenitors of cereal crops still grow wild in the region.  The apple may have been the first domesticated tree fruit, and it was raised in that region as early as 8.5 kya.  Early on, people also began to domesticate fiber-producing plants, and flax was among the first.  Fiber crops often competed with food crops for field space, especially when foreign conquerors reoriented a subject population’s efforts, which could lead to starvation.  A recent example was when the British forced Bengal to grow jute, indigo, and opium instead of food; Bengal suffered a huge famine soon after the British conquered it.

Goats were first domesticated in today’s Iran about 10 kya.  Pigs were first semi-domesticated in the Fertile Crescent as long as 15 kya, and were independently domesticated in China about eight kya.  Combining domesticated plants and animals appeared fairly early.  Farmers realized that animal manure could fertilize crops, so the close association of pastures and cropland became a standard feature of Fertile Crescent civilizations.  Early domestic animals were all herd animals, and humans replaced herd leadership.  Since humans are herd animals, their understanding of herd behaviors probably made their efforts more successful.

Just as growing large became a strategy for extinction among the world’s megafauna when a super-predator appeared that could kill them, Earth’s “megaflora” met the same fate wherever civilization appeared.  Forests are the greatest biological energy stores that Earth has ever seen, and trees suffered as the megafauna did.  When humans became sedentary, they razed local forests to gain building materials and fuel, and the freshly deforested land worked wonderfully for raising crops, at least until the soils were ruined by nutrient depletion, erosion, salination, and other insults.  Domesticated cattle pulled the first plows, beginning more than seven kya.  When humans began to smelt metal, about eight kya, deforestation became easier, and a dynamic arose in the Fertile Crescent in which bronze axes easily deforested the land.  The exposed soil was then worked with draft animals pulling bronze plows, which increased crop yields but also increased erosion.  That complex of deforestation, crops, draft animals, and smelted metals yielded great short-term benefits but was far from sustainable, as it devastated ecosystems and soils and also impacted the hydrological cycle, which gradually turned forests into deserts.  Earth was also deforested by the enormously energy-intensive Bronze Age smelting of metal.  During the Mediterranean region's Bronze Age, the standard unit of copper production was the oxhide ingot (so called because it was worth about one ox), which weighed between 20 and 30 kilograms.  It took six tons of charcoal to smelt one ingot, which required 120 pine trees, or 1.6 hectares (four acres) of trees.[608]  Kilns for making pottery also required vast amounts of wood.  Wood met many energy needs of early Old World civilizations, which were all voracious consumers of it.
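To put those smelting figures in perspective, here is a minimal back-of-the-envelope sketch using only the numbers cited above; the per-tree charcoal yield is derived from them, not independently sourced, and the ingot mass is taken as the midpoint of the cited range:

    # Energy accounting for one Bronze Age oxhide ingot, using the figures
    # cited above; illustrative arithmetic only.
    charcoal_kg = 6000   # six tons of charcoal per ingot
    ingot_kg = 25        # an oxhide ingot weighed 20-30 kg; take the midpoint
    trees = 120          # pine trees felled per ingot
    hectares = 1.6       # forest area consumed per ingot

    print(charcoal_kg / ingot_kg)  # 240.0: ~240 kg of charcoal per kg of copper
    print(charcoal_kg / trees)     # 50.0: ~50 kg of charcoal per pine tree
    print(trees / hectares)        # 75.0: ~75 trees per hectare of that forest

Two orders of magnitude more charcoal than finished metal, per ingot, is the kind of ratio that turned Bronze Age metallurgy into an engine of deforestation.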

In the Fertile Crescent today, the ruins of hundreds of early cities sit in their self-made deserts, usually buried under silt from the erosion of exposed forest soils.  As the Mediterranean Sea’s periphery became civilized, the same pattern repeated; forests became semi-deserts and early cities were buried under silt.  Before the rise of civilization, a forest ran from Morocco to Afghanistan, and only about 10% of the forest that still existed as late as 2000 BCE remains today.[609]  Every place where civilization exists today has been dramatically deforested.[610]  Humanity has reduced Earth’s plant-based biomass by more than a third since agriculture began.  The only partial exceptions are places such as Japan, which regenerated their forests by importing wood from foreign ones; North America and Asia have been supplying Japan with wood for generations.  As civilizations wiped themselves out with their rapaciousness, some people were aware enough to lament what was happening, but they were a small minority.  Usually lost in the anthropocentric view was the awesome devastation inflicted on other life forms.  Killing off the megafauna was only a prelude.  Razing a forest to burn the wood and raise crops destroyed an entire ecosystem for short-term human benefit and left behind a lifeless desert when the last crops were wrenched from the depleted soils.  In the final accounting, the damage meted out to Earth’s other species, not to other humans, may be humanity’s greatest crime.  Humanity has been the greatest destructive force on Earth since the asteroid that wiped out the dinosaurs, and our great task of devastating Earth and her denizens may be far from finished.

Humans have exchanged advanced tools and valuable goods since as early as 150 kya, and cities have always been situated on low-energy transportation lanes.  Before the Industrial Revolution, those lanes were almost always bodies of water; moving goods across a lake or ocean took only about 1-2% of the energy of moving them overland.  A peasant in Aztec civilization, for instance, could as easily and quickly bring more than 40 times the weight of goods by canoe across the Valley of Mexico’s lakes to Tenochtitlán as he could by carrying a load on his back along the causeways.[611]  In 1800, it cost as much to ship a ton of goods more than 5,000 kilometers from England to American shores as it did to transport it 50 kilometers overland in the USA.[612]  In 13th-century England, it cost about as much to transport coal across five hundred kilometers of water as across five kilometers of land.[613]
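Those examples all imply roughly a 100-to-1 cost ratio between land and water freight.  A minimal sketch of the arithmetic, using only the numbers cited above (the unit costs are arbitrary placeholders of mine):

    # Preindustrial freight costs implied by the figures above; the ~100:1
    # land-to-water ratio is taken from the 13th-century coal example.
    water_cost = 1.0   # cost per ton-km by water, in arbitrary units
    land_cost = 100.0  # cost per ton-km by land, ~100x water

    sea_leg = 5000 * water_cost  # England to America: ~5,000 km by sea
    land_leg = 50 * land_cost    # then 50 km overland in the USA
    print(sea_leg, land_leg)     # 5000.0 5000.0: the two legs cost about the same

The 1800 shipping example is thus internally consistent with the 1-2% figure: a transatlantic voyage and a two-day wagon haul cost about the same per ton.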

Cities sat on low-energy transportation lanes mainly so that energy supplies (primarily food and wood) could feed them, and that inward flow of energy was often reciprocated with an outward flow of manufactured goods.  The standard pattern of early cities was energy supplies flowing in and city-manufactured goods flowing out, and cities thereby became hubs of exchange.  The so-called “tyranny of distance,” which refers to how far goods could be effectively transported to cities, limited the size of a city’s hinterland and thus limited the city’s size.[614]  More energy-intensive and energy-efficient transportation enlarged the exploitable hinterland, which allowed cities to grow.  The introduction of the wheel could improve matters, but not always; in preindustrial Islamic cultures, the camel was often a more energy-efficient form of transportation than wheeled carts.[615]

Sedentism was the primary outcome and benefit of agriculture.  When people became sedentary, they could accumulate possessions, develop new skills, sleep under the same roof all year, and engage in daily communication with many others.  Just as language was the first “Internet,” cities provided a quantum leap in the quick dissemination of information and ideas.  The development of professions is the most important feature of urban life.[616]

The world’s first true city is widely considered to be Eridu, which was established near the mouth of the Euphrates River about 7.4 kya, or about 5400 BCE (“Before Common Era,” also called BC, for “Before Christ,” but BCE is today’s convention, just as “CE” has replaced “AD”).  Eridu had a population of about 5,000 people at its peak.  Eridu was the first city of what became Sumer, an agglomeration of city-states.  Sumer was established along and between the Tigris and Euphrates, and the ancient Greeks called the region Mesopotamia, which meant the land between the rivers.  Çayönü Tepesi was in the Tigris’s watershed, and it and many settlements like it engaged in deforestation, agriculture, and raising domestic animals.  Their practices were not sustainable, as the newly exposed soils washed away and what remained was depleted of nutrients, although from agriculture’s early days farmers used manure, both human and from domestic animals, to restore soil fertility.  Eridu engaged in a practice that has characterized cities to the present day: it harnessed gravity, as upstream flows supplied the city with water and goods were brought down the rivers.  But in what became Mesopotamia, the rivers also brought silt and salt from upriver deforestation and erosion.

Sumerian city-states engaged in irrigation, which raised the water table.  When the water table in those waterlogged soils reached the surface, the soils turned white with salt, especially with the high evaporation of those hot lands, and they would no longer support crops.  The only solution was to stop irrigating and let the land lie fallow as the water table fell, but population pressures did not allow for it, so the process inexorably created saline soils and canals filled with the silt of upland deforestation, and today those Sumerian cities are all buried in silt in a desert.  Eridu was a seashore city, and today its ruins lie more than 200 kilometers inland.  But before silt and salt wrecked that civilization, many seminal inventions appeared.  The sailing ship appeared in early Sumer: gravity took a ship downstream, and wind power helped it move back upstream.

About 3800 BCE, the Sumerian city of Ur was established at the new mouth of the Euphrates; Eridu was already becoming an inland city, although at that time more from a decline in sea level than from silt.  The ruins of seaside Ur reside more than 200 kilometers inland today.[617]

The word “urban” is derived from the Sumerian “ur.”[618]  About 5000 BCE, the Sumerian city of Uruk was established, upriver on the Euphrates from Eridu, and Uruk became Sumer’s first great city, with a population of about 50,000 at its peak.  About 5000 BCE, people began smelting copper.  The earliest evidence for copper smelting currently comes from a mountain in today’s Serbia.  In the Fertile Crescent, inventions quickly spread, and by about 3300 BCE, smelters learned to add tin to copper and the Fertile Crescent’s Bronze Age began.  Metal had obvious advantages over stone, and Bronze Age civilizations in river valleys quickly appeared; the Harappan Civilization formed in the Indus river valley about 3300 BCE, and the first civilization in the Nile river valley formed about 3100 BCE.  The wheel was invented around 3500 BCE and immediately spread.  Whether it was invented in Sumer, the Indus river valley, or somewhere else in the region is still debated, but its advantages were instantly obvious, particularly where draft animals could pull them.  When the Spanish conquered the Aztecs, they found that Mesoamerican peoples had independently invented wheels, but just had them on children’s toys, and the likely reason was that they had no draft animals, not after the megafauna holocaust of several millennia earlier.

Warfare, in which polities fought over water and land, began in earnest in southern Mesopotamia about 4000 BCE, and the third millennium BCE (2999 to 2000 BCE) was a time of constant Mesopotamian warfare.  The sieges that city-states inflicted on each other were brutal.  When one city conquered another, the men were all killed or blinded and enslaved, and the women and children were enslaved.[619]  Slavery appeared at the beginning of the Domestication Revolution; it only made economic sense in sedentary populations, as enslaving somebody in a nomadic band would have been impractical, and by the time of early civilizations and writing, slavery was a universal institution.

Making mounds from corpses of defeated soldiers was common in official accounts of battles during the third millennium BCE.  One of the first walled cities was Uruk’s colonial settlement Habuba Kabira, which was founded around 3500 BCE along the Euphrates in today’s Syria, but it was abandoned after several generations.  Those wars led to the first written treaties, which were largely concerned with citizens who found themselves on the wrong side of the new border.[620]  Conscription was an early feature of civilization, closely akin to slavery, although the arrangement was temporary and conscripted soldiers were often promised land for their coerced services.  Draft-dodging became one of early civilization’s art forms.

Stratified urban populations and the agricultural hinterlands that they exploit comprise civilization’s primary structure to this day.  Soldiers, craftsmen, merchants, priests, and other professions appeared with urban civilization, and forced servitude was a hallmark of early civilizations.  The singing and dancing rituals of hunter-gatherer peoples were repressed by the priesthoods of urban religions for thousands of years.  On early Fertile Crescent pottery, scenes of dancing people proliferated, which depicted a tradition that had probably lasted unbroken for more than 60,000 years.  By about 3500 BCE, those dancing scenes began to disappear from pottery as professional priesthoods conquered the ancestral religion.  Western religions have been stifling “ecstatic” religions ever since; today’s Pentecostals and Shakers have rituals that hark back to religion before civilization.[621]  The professional urban priesthoods became spiritual middlemen, and direct interactions with other dimensions and “ecstatic” states were discouraged or forbidden.  Belief and “faith” replaced direct experience, and later, “sacred” texts recorded the alleged deeds and words of spiritual leaders, who were usually religious rebels themselves and did not leave any writings behind.  The priesthoods not only monopolized the texts but also their interpretation, and again became well-paid middlemen between the divine source and the flock.

Early elites claimed divine status, the priesthood abetted the fiction, and erecting monumental architecture was a universal practice among early civilizations.  The ziggurat was the first such structure.  Anthropologists think that monumental architecture may be a form of societal/elite display, by which a society flaunts the resources used to make such overawing showings, both to encourage submission to the society's obvious wealth and power and to discourage attempts to compete with it.  In Sumer, ziggurats were not only the center of the state religion, but they also held precious metals such as gold.  The priesthood directed mass economic activity, such as organizing irrigation projects.  In some ways, the priesthood was only adapting to urbanization.  Their professional ancestors developed calendars and other methods of synchronizing vital activities such as plantings and harvests, with their attendant festivals; mistimings by mere days could lead to famine.  Sumerian temples had statues in their central places of worship, in human form, bedecked with jewels and other precious adornments.  Offerings of food were presented to the statues, which temple personnel ate that night.  In the third millennium BCE, temples owned land and had their own workforces, which were again “voluntary” ones that discharged religious obligations.  Although those temples performed valuable societal functions, such as taking in orphans, the earliest urban religions were obviously businesses and could become rackets, in a pattern that continues to this day.[622]

Later, palaces appeared, and Sumerian palaces and their related elites are seen today as more of an intrusive dynamic from rural societies, as a kind of invasion and conquest rather than a natural outcome of Sumerian urban life.  The elite arguably performed some kind of exchange function, but a common idea among anthropologists is that elites became elites because they could, not because they performed a necessary societal function.  In early cities, elites usually arose from new professional classes that created and controlled markets.[623]  In early Mesopotamian states, palace activities were largely centered around elite lifestyles, not administering state functions.[624]  Sumer was the first pristine state, and when other pristine states arose, something like convergent evolution happened.  They all had similar features, which included: male domination, divinely sanctioned heads of state with harems and other extravagances in their capital cities, including elite-aggrandizing monumental architecture, forced servitude, human sacrifice and/or public executions to terrorize the populace into submission, conscripted “cannon fodder” infantry led by elite officers, fortified cities, taxation, and so on.[625]  All pristine states passed through similar developmental stages, and some features appeared earlier or later than others, with minor variation among their attributes, but they all had remarkable resemblances, which probably reflected human “nature,” in which UP everywhere reacted to analogous economic conditions in comparable fashion.

After consolidating their ill-gotten positions, elites could rule more gently.  Sociologist Steven Spitzer stated:

 

“Pristine states, precisely because they lack legitimacy, must develop and impose harsh, crude, and highly visible forms of repressive sanctions; developed states, having successfully ‘re-invented’ consensus, can achieve social regulation through a combination of civil law and relatively mild forms of ‘calculated’ repression.”[626] 

 

The greatest threat to all ruling classes has almost always been those whom they rule.  Only after their rule was secure, usually via bloodshed, did Sumer’s elites perform state duties to provide some superficial legitimacy for their status, and priesthoods attributing divine status or divine sanction to secular elites has always been an effective strategy.  The close relationship of secular and religious authority is evident at the very beginnings of civilization; even today, the British Queen heads the Church of England, a European tradition that goes back to the Roman emperors.[627]  The laborers drafted to build cathedrals, palaces, and monuments that aggrandized the elite always performed more efficiently when they worked from religious belief rather than coercion, and the world’s monumental architecture was primarily built with “free” labor, not slave labor, as a way of performing religious duties.  Combining religious and secular ideologies can be seen even in supposedly secular civilizations, such as when American schoolchildren are trained to worship flags, with the words “under God” as part of their daily recitations.

The human ability to think abstractly was exploited by social managers from civilization’s earliest days.  Fixating people on irrational symbols, and then manipulating those symbols for elite benefit, is arguably a universal trait of civilized peoples.  Even today, a great deal of politics is the rational manipulation of irrational symbols; as with the earliest religion, the neocortex is bypassed in favor of connecting with the limbic system, and people are easy prey to the cynical manipulation of emotionally charged symbols.  The effects of childhood indoctrination and conditioning can last for the victim’s lifetime.  When people mistake symbols for reality, they are easily manipulated.  Large-scale ideological indoctrination probably began in Sumer, as the priesthood concocted and promoted various beliefs.  Symbology replaced reality, including the acceptance of the secular elite as deific, getting slaves to accept their status, and getting commoners to give food to the priesthood to fulfill some divinely ordained obligation.  Religion passed from experience to belief with the rise of civilization.  I am not suggesting that pre-civilized religions were necessarily enlightened.  They had shamanic intermediaries too, but with the rise of civilization, the priest class had to work hard to justify the obviously unfair social organization that accompanied stratified populations.  Direct religious experience was disparaged and suppressed while the priesthood’s religious indoctrination was promoted.

Although there is evidence that writing began about 5000 BCE, Sumer became the first literate civilization about 3000 BCE, after its invention of cuneiform around 3300 BCE.  Mesopotamian peoples had used clay tokens for accounting since about 8000 BCE, and the first writing systems were typically devoted to elite accounting or to tales that aggrandized the elite.  For instance, the quipu of the preliterate Incas was an accounting tool.  By the Third Dynasty of Ur, silver became the official unit of accounting, to be supplanted by gold a millennium later, probably due to Egyptian influence.[628] 

One of the earliest known works of literature is the Epic of Gilgamesh, dating to as early as the Third Dynasty of Ur, which began about 2100 BCE.  A brief review of the epic highlights elite themes and dynamics of early civilization.  Gilgamesh was a king of Uruk around 2500 BCE, and in the epic he was one-third man and two-thirds god.  In the epic’s first tablet, he used his kingly prerogative to sleep with Uruk’s young women on the night before their marriages, and his subjects beseeched the gods for assistance.  The gods responded by creating a “wild man” to distract Gilgamesh, and after Gilgamesh defeated him in battle, they became friends.  Gilgamesh then suggested that they travel to Lebanon’s cedar forest and kill the demigod guardian of the forest.  They journeyed to the cedar forest, killed the demigod, deforested the groves, and rafted back to Uruk with the demigod’s head and a particularly large tree to be used in a temple.  After the wild man’s untimely death at the hands of the gods, as punishment for killing the demigod, Gilgamesh made otherworldly journeys to learn how to become immortal.  After defeating stone giants and felling more than a hundred more trees for the voyage, Gilgamesh crossed the waters of death to reach Utnapishtim, the immortal survivor of the great flood that the gods had once sent.  In a story that almost certainly inspired the Old Testament’s tale of Noah, Utnapishtim had built a boat and survived the flood along with the animals he saved, and the gods gathered around the sweet smell of his sacrifice.  After more adventures in an attempt to become immortal, Gilgamesh lamented his folly. 

The writers of the Epic of Gilgamesh knew that deforestation led to droughts, and Gilgamesh’s war against the forest foreshadowed the fate of numerous Old World civilizations.[629]  The city-states of southern Mesopotamia made regular journeys to Lebanon’s cedar forest.  The ruler of Lagash, not far from Uruk, leveled cedar forests and rafted their logs downriver to Lagash to fulfill his grandiose, legacy-aggrandizing schemes.[630]  The city-states of southern Mesopotamia deforested upstream river valleys and rafted the logs to their downstream cities.  Wars between the city-states, and wars of foreign conquest to secure forests and navigable rivers (particularly the Tigris, the Euphrates, and the Karun of today’s Iran), were common then.  Wood became such a coveted commodity that it could approach the value of precious metals and stones, and Akkad’s rulers named mountains after the tree that predominantly grew on each one.[631]

What came with the logs, however, was silt and salt.  Southern Mesopotamia practiced irrigated farming, so salination and siltation eventually wrecked Sumer.  By the Third Dynasty of Ur, around 2100 BCE, the king Ur-Nammu made dredging silt from the canals a high priority, and his dredging initiative temporarily revived agriculture and made Ur’s silt-choked port navigable once again.[632]  Wheat is more sensitive to saline soil than barley is.  In 3500 BCE, wheat and barley were grown in equal amounts, but salination began taking its toll.  By 3000 BCE, when Sumer became the world’s first literate society, its tablets recorded Sumer’s decline.  By 2500 BCE, wheat amounted to only 15% of the total crop, and by 2100 BCE, it comprised only 2% of Sumer’s crops.  Wheat was not the only casualty.  Salt-tolerant barley did better, but crop yields began falling precipitously around 2400 BCE, and by 1700 BCE they had steadily declined to only a third of their 2400 BCE level.[633]  Sumerians began migrating upriver to lands that had not yet been devastated, Sumer’s population declined by more than half, and famine was a regular visitor as croplands became white with salt.

Upriver from Sumer, the Akkadian Empire, the world's first empire, began to form.  Akkadians began defeating Sumer around 2300 BCE.  Akkad’s first king was Sargon, who bloodily came to power, captured Uruk, and dismantled its walls while conquering Sumer.  That began a pattern of rising and falling empires in the Fertile Crescent that characterized the region for thousands of years.  The Akkadian Empire collapsed after only 180 years of existence, and there was a resurgence of Ur under its Third Dynasty around 2100 BCE, when the oldest preserved laws were written.  The Code of Hammurabi, written when Babylonians ruled in their turn a few centuries later, reflected earlier Sumerian laws, and those laws notably documented the barbarity of their times.  Murder and robbery were capital crimes, but capital punishment was also meted out for offenses such as stealing a slave, deflowering another man’s wife before her husband could (the deflowerer was killed), or a wife’s infidelity (the wife was killed).  A boy who struck his father would lose his fingers or hand.  “An eye for an eye” came from the Code of Hammurabi.

Just as precipitation ran to the ocean in floods before plants colonized land, denuded lands and razed forests no longer held water like a sponge, and transpiration no longer contributed to the hydrological cycle.  Rampant deforestation contributed to the flooding of Mesopotamian rivers, and the region also became drier.  The flood of the Gilgamesh epic, which is evident in the archeological record, was probably related to deforestation, although a great deal of speculation exists regarding the origins of flood myths.  The Black Sea is one candidate for the flood legends; the rising interglacial ocean flooded that former lake to levels far above those of the glacial period.  Another hypothesis has rising seas flooding the lower end of Mesopotamia.  There are arguments that the legend of Atlantis related to a seashore civilization drowned under the rising interglacial ocean, but I think that an increasingly deforested Sumerian hinterland gave rise to the floods of legend.

Just as with the megafauna extinctions or the Neanderthal extinction, there are plenty of scientists and scholars who argue that human agency is not responsible for the decline and collapse of civilizations, question whether they collapsed at all, assert that climate change or invasion did it, and so on.  The battle of competing hypotheses is part of the process of science, but all scientists whose hypotheses deflect responsibility from humanity (their in-group) have an inherent conflict of interest, and their work should be examined with that in mind.  In the historical era, particularly when Europe conquered the world, the rapid deforestation and desertification of newly conquered lands was evident.  Within a century of the Spanish conquest of the Aztecs, a valley of verdant forests and fertile farmland had been turned into a semi-desert by deforestation and sheep grazing.  That valley is known as the Mezquital Valley today, because the desert-dwelling mesquite is the dominant tree there.  British invaders of Australia did the same thing to New South Wales within 50 years, via deforestation and sheep grazing.[634]  Streams quickly dried up, but flooded when it rained, as the “sponge” of the forest ecosystem had been removed, so flood and drought accompanied deforestation.  Atlantic islands were quickly denuded and desertified by invading Spaniards and Portuguese.

Since 2003, I have been a student of collapsed civilizations, and there are vigorous academic disputes on the subject.  Jared Diamond sees collapses as the result of environmental degradation, while Joseph Tainter perceives them as the result of declining marginal returns on investments in complexity.[635]  Thomas Homer-Dixon views them as declines in a civilization’s EROI.[636]  Other scientists propose climate explanations, particularly droughts.[637]  What they are all stating, in one fashion or another, is that those civilizations ran out of energy.  All resources are either energy or are made available by energy, whether they are food, timber, water, metal, or today’s hydrocarbon deposits; wars are once again fought in Mesopotamia to secure energy.[638]  Tainter’s idea of declining marginal returns on investments in complexity is perhaps the most prominent current explanation, but it did not engage the dynamic’s physics, as others have done.  Homer-Dixon has perhaps elucidated the energetics most clearly with his concept of declining EROI, for which he writes articles and gives public speeches, and his ideas also incorporate C.S. Holling’s ecosystems theories.  Whether climate change did it, humans wiped out their environments, or humanity has reached Peak Oil and a global collapse is just around the corner, collapse always meant a decline in energy-delivered resources as well as in energy itself.  Tainter’s moment of a civilization’s collapse came when a hungry urban professional returned to rural life to gain greater energy (food) security, but a long, often slow decline usually led to that moment: a society’s return on investments in complexity declined or, as Homer-Dixon stated it, the EROI and resiliency declined to a disruptive level where the energy surplus dwindled and civilization eventually collapsed.  Just as with wars, the ultimate cause was economic, but some kind of triggering event was the proximate cause, and often enough it was warfare.  But Rome was sacked three times in less than two centuries only after centuries of declining EROI and surplus energy.[639]  That pattern of deforestation, agriculture, and resulting environmental degradation that reduced a society's EROI and surplus energy is common to the decline and fall of all early civilizations that have been studied.[640]

When historians debated the causes of Rome's decline and fall, for instance, they were merely debating proximate causes, which was understandable, as the science of energy did not yet exist when Edward Gibbon wrote his tomes.  Once scientists began to study the issue, running out of energy came to be seen as the ultimate cause.  Scientists still argue over environmental causes, for instance, but what some seem to miss is that their arguments are all just ways of saying that the civilization ran out of energy, whether or not humans contributed to the environmental failure (and the declining EROI and surplus energy).[641]

One key feature of Mesopotamian life resulted from wars and migrations: in cities, social organization along family or clan lines became obsolete, and professional associations became prominent.  Mesopotamian cities absorbed invader cultures while also adapting to them, and ancient Mesopotamian civilizations became multicultural.[642]  The first cities also had many problems to solve, such as sanitation, in which the water supply and the sewage system had to be separated.  Also, in a pattern that continues to this day, upriver settlements usually flushed their sewage into the rivers, as they no longer had to concern themselves with it, but it obviously affected downstream settlements.  In many poor nations today, major rivers become increasingly polluted as they pass settlements and cities, and they are virtually open sewers by the time they reach the ocean.  Also, the domestication of animals is generally considered to be the origin of many epidemic diseases, and the close quarters of urban living often meant epidemics that decimated urban populations; the Plague of Athens in 430 BCE, during the Peloponnesian War, was one of the earliest recorded epidemics.  Filth, pollution, and crowding were major problems for early cities, and life expectancy was always lower in the cities than in the hinterland; it did not match the hinterland’s until the 20th century.[643]  Until the 20th century, every city in history was repopulated by surplus population from its hinterland.

Fertile Crescent civilizations are universally regarded as humanity’s first.  In China, people began to domesticate millet around eight kya, about 3,000 years after Fertile Crescent farming began.  Some scientists are skeptical that Chinese domestication really developed without any Fertile Crescent influence, even if it was just the idea of domestication.  Similarly, agriculture began in the Western Hemisphere in Mesoamerica, where people domesticated squash about 10-8 kya.  The potato may have begun to be domesticated in Peru at about the same time.  Those are the primary places where plants were domesticated independently in the Western Hemisphere, and the practice spread.  Plants were independently domesticated in only a handful of regions on Earth.[644]  Whether or not the idea of domestication passed between regions where it is thought to have appeared independently (the pig, for instance, may have been domesticated independently in the Fertile Crescent and China), nearly all domesticated plants and animals were probably domesticated once, and the idea/technique/offspring spread.  The horse, first domesticated about 4000 BCE, is an instance in which genetic evidence points to domestication happening once, with a limited number of stallions, and wild mares were subsequently incorporated into domestic herds.  Once a herd animal was domesticated in the Fertile Crescent, the idea of domesticating herd animals certainly made subsequent domestication events less innovative.  Even if the Domestication Revolution happened independently in as many as nine places, the people who initiated the Third Epochal Event were relatively few, as with the previous two Epochal Events (stone tools/controlling fire, and that founder group that left Africa).  Added together, probably only a few hundred people, or maybe even only a few dozen or fewer, were the beacons of innovation; the domestication of animals in the Fertile Crescent may have had a lone inventor, or a handful of them, who initiated the process, and the domestication of plants may have had similarly few inventors.

As has been evident in this essay so far, and will become more evident, scientific orthodoxy and I do not agree on everything, far from it.  Not only is mainstream science imprisoned by barriers erected by a faction of the global elite, as paradigm-shattering scientific findings and world-changing technologies are ruthlessly suppressed, but all of my fellow travelers were, to one extent or another, mystical in their orientation.  Their mystical persuasion had nothing to do with beliefs, studying sacred texts, or other indoctrination, but their experiences.  Brian O’Leary was a staunch advocate of scientific testing of “paranormal” phenomena.  After I had dramatic experiences that initiated my mystical awakening, I also performed experiments and witnessed many undeniable events that clearly demonstrated that the materialistic models of consciousness that dominate mainstream science rest on false foundations.  Brian nearly lost his life, courtesy of the USA’s military, when he looked into the UFO phenomenon, after being made an offer he could not refuse, and the attack shortened his life.  Far more is happening than the TV news tells us. 

The physical dimension is not the only one, and accomplished psychonauts can visit others, some of whom I know, and some have even brought back designs for inventions used in every Western home today.  Scientists call flashes of inventive insight “the creative moment,” but there is often far more to it than novel and poorly understood brain activity.

When scientists attribute all “beliefs” in the “supernatural” to superstition, wishful thinking, reaching a delusionary “high” by stressing the body to exhaustion, like a substance-induced state, and other human foibles, they err.  Instead of considering that accomplished mystics can visit other dimensions or gain perspectives regarding this one that could be called “magical,” scientists tend to see those “primitive” states that may provide windows to other dimensions as nothing more than “a distorting mirror.”[645]  There is something real at the root of religious behavior and belief, but just as with everything else in a world of scarcity, people corrupted it into a way of getting fed, men used it to gain sexual access to women, and the like.  The same scandalous behaviors haunt today's New Age community.  No worthy mystic is going to ask people to “believe,” have “faith,” memorize “sacred” texts, and the like.  Those are the tools of religious racketeers.  People can seek their own experiences, and there is a mountain of scientific data that supports the reality of “paranormal” phenomena.  Even calling it “paranormal” is misleading.  Those abilities of consciousness are normal, if only underdeveloped in the West and abused by charlatans and other opportunists.  Many “mystics” have faked such abilities, but relatively few in the milieu do.  For all the many failings of organized religion and the rampant mystical hucksterism that abounds, materialism is a religion and not much different from the world’s religions, but its founding articles of faith are called “assumptions.”  I understand and can even appreciate the seductions of the rationalist-materialist paradigm, but it rests on a false foundation.  There are some highly sophisticated ways of viewing the cosmos and the human role in it that have little to do with dogma and the usual trappings of organized religion; a lot of it can be tested, even scientifically.[646]

One enduring question about civilization is “Why?”  Why would somebody leave a village for a shortened life expectancy in a city?  That question has been asked ever since the ancient Greeks and Confucius.  There are two basic theoretical camps: integration theory and conflict theory.  Integration theories have people moving to civilization because of the attendant benefits, which are obviously many.  Conflict theories, of which Karl Marx was a proponent, have elites exploiting civilizations in service of greedy and vain motivations.  Academics have written that integration theories best account for why the masses migrate to civilizations, which provide life’s necessities, while conflict theories best explain elite appropriation of economic surpluses.[647]

In Sumer in the third millennium BCE, about 80% of the population lived in cities so that they could sleep behind fortifications to protect against attack.[648]  However, about 80-90% of the population was engaged in agriculture.  Before industrialization, the vast majority of civilized populations were involved in agriculture, as the surplus could only support a small non-agricultural population of professionals and the elite.  All elites for all time have engaged in conspicuous economic consumption as a mark of their status and a form of display.  Until the Industrial Revolution, except for the brief Golden Age of the Hunter-Gatherer, the primary preoccupation of all people for all time was food security, as hunger was a constant specter.[649]  Just as the energy surplus defines the fortunes of individuals and species, it also defines the fortunes of civilizations.

People on the edge of starvation will rarely if ever display enlightened behavior toward their environment or each other, as they battle for survival.  Early farmers could see the effects of deforestation, erosion, and soil exhaustion, but gentle, sustainable practices were often defeated by market forces, imperial prerogatives, and warfare.[650]  What could be obvious to farmers was not evident to potentates sitting on distant urban thrones, to merchants, or to money-changers, and as the city conquered what became the hinterland, short-term economic plunder took precedence over long-term environmental management far too frequently.

Until the 20th century, people had no idea how their activities impacted a portion of their environment that may end up hastening humanity’s demise more than self-made deserts do: the atmosphere.  Agriculture and civilization meant deforestation, and there is compelling evidence that the Domestication Revolution began altering the composition of Earth’s atmosphere from its earliest days.  The natural trend of carbon dioxide decline was reversed beginning about 6000 BCE.  Instead of declining from about 260 PPM at 6000 BCE to about 240 PPM today, which would have been the natural trend, the concentration began rising and reached 275 PPM by about 3000 BCE.[651]  At the beginning of the Industrial Revolution, atmospheric carbon dioxide concentrations were about 40 PPM higher than the natural trend would suggest.  When a forest is razed and the resultant wood is burned, which is usually wood’s ultimate fate in civilizations, it liberates carbon that the tree absorbed from the atmosphere during photosynthesis.  Methane is a potent greenhouse gas, and human activities began measurably adding methane to the atmosphere by about 3000 BCE, which coincided with the rise of the rice paddy system in China.[652]  In nature, methane is primarily produced by decaying vegetation in wetlands, both in the tropics and the Arctic, and human activities have increased wetlands even as they made other regions arid.  Domestic grazing animals and human digestive systems also contribute to methane production.  Atmospheric alteration by human activities has only come to public awareness in my lifetime, but human activities have had a measurable effect on greenhouse gases since the beginnings of civilization, even though the effects were modest compared to what has happened during the Industrial Revolution, as humans burn Earth’s hydrocarbon deposits with abandon.
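
To make those figures concrete, here is a minimal arithmetic sketch of the gap between the natural trend and the observed record.  The linear shape of the natural decline is my assumption for illustration; only the PPM endpoints come from the figures cited above, plus the standard ~280 PPM preindustrial concentration.

```python
# A minimal sketch of the early anthropogenic CO2 excess, using the figures
# cited above.  Assumption (mine, for illustration): the natural decline from
# ~260 PPM at 6000 BCE to ~240 PPM today was linear.
NAT_START, NAT_END = 260.0, 240.0   # PPM at 6000 BCE and today
SPAN = 8000.0                       # years from 6000 BCE to the present

def natural_ppm(years_after_6000_bce: float) -> float:
    """Interpolate the assumed natural CO2 trend."""
    return NAT_START - (NAT_START - NAT_END) * years_after_6000_bce / SPAN

# Observed ~275 PPM by 3000 BCE (3,000 years into the span):
print(275.0 - natural_ppm(3000))   # 22.5 PPM above the assumed natural trend

# A preindustrial concentration of ~280 PPM around 1800 CE (a standard figure):
print(280.0 - natural_ppm(7800))   # 39.5 PPM, i.e., the ~40 PPM excess cited above
```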

All early cities were built in warm climates, to take advantage of their “energy subsidy.”  Heating cool-climate buildings is extremely energy-intensive, and growing seasons are shorter farther from the equator, which explains why cool-climate civilizations developed much later than warm-climate civilizations.[653]  From its beginnings in the region that included Anatolia, Mesopotamia, and the Levant, agriculture made its inexorable march across the land masses and spread to the farthest arable reaches of Europe before 3500 BCE.[654]  As agriculture spread, so did warring empires.  What is called the Near East and Mediterranean region was soaked with blood early and often, as empires rose and fell.  Sumer was conquered by Akkad, and when Akkad fell, Ur had a resurgence, to be supplanted by Babylon, which was supplanted by Assyria, which was supplanted by a neo-Babylonian civilization, which was supplanted by Persia, which was supplanted by Macedonians led by Alexander the Great, whose military methods were unsurpassed for the remainder of humanity’s preindustrial times.  Alexander’s forces could arguably have defeated Wellington’s forces at Waterloo in 1815.[655]  The wars over control of Mesopotamia have continued to this day.  History’s richest and most powerful nation recently invaded the region to secure hydrocarbon energy while purveying blatantly fraudulent rationales that fooled nobody except the imperial citizenry, and even they largely winked at the “noble” rationales given.

The rest of this chapter will trace many important preindustrial developments that helped set the stage for the Industrial Revolution, which is humanity’s fourth and most recent Epochal Event.  But until the last few centuries in Europe preceding the Industrial Revolution, the basics among all civilizations did not appreciably change.  Agriculture provided a local and stable energy supply that allowed for sedentism, forests were removed to make way for crops, and domestic animals were used to provide labor and/or flesh products, while their manure helped replenish soil nutrients depleted by agriculture.  Virtually everywhere that agriculture appeared, so did civilization, with varying levels of urbanity.  Elites dominated all civilizations, and they almost always invoked either a divine nature or divine sanction to justify their status, and they always engaged in conspicuous economic consumption.  Cities situated on low-energy transportation lanes, which were almost always bodies of water, exploited forested and agricultural hinterlands that were worked by peasants and slaves, while the cities housed professionals and the elite.  Forests and agriculture provided the primary energy supply of all preindustrial civilizations, usually supplemented with the products and services of domestic animals.  All preindustrial civilizations were steeply hierarchical, economically, socially, and politically, and the means of production provided small surpluses that supported a small elite and professional class.  Fighting over resources and plunder has been the primary predilection of all civilizations for all time, except for a very brief interlude at the beginnings of pristine civilizations.

Those basics never really changed, and environmental destruction accompanied all civilizations, as razing forests and growing crops could never really be sustainable and certainly could not form the foundation for economically abundant societies.  Economic scarcity, which is always rooted in energy scarcity, was ingrained into all ideologies as thoroughly as those early religions that accessed the limbic system to reinforce group cohesion.  Economic scarcity was and is so pervasive that it is an assumption of all of today’s dominant ideologies.  As with all assumptions, scarcity has become a barely visible framework to adherents of all dominant ideologies.  If energy were abundant, scarcity-based realities and ideologies would quickly become obsolete, as would many societal features that are scarcity’s side-effects, such as elites, greed, warfare, exchange professions, and environmental destruction.

In the waning days of early Mesopotamian civilizations, conservation became a concept.  That pattern was repeated innumerable times over the succeeding millennia: an early golden age of civilization, with the lands blanketed in forests and fertile soils, gave way to increasingly desertified lands, and only then did a conservation ethic begin to take root.  It was always too little and too late, however; the civilization collapsed and left behind a wasteland that did not recover for centuries, often to be devastated once again should civilization reappear.  That dynamic should be added to those other universal aspects of preindustrial civilizations.

As southern Mesopotamia slowly became a wasteland, people began migrating away as environmental refugees, and perhaps the most famous was Abraham, the Old Testament’s founder of the Israelites.  Abraham migrated from Ur around 2000 BCE and ultimately settled in Canaan.  I respect the inspiration likely behind Jesus, who was a historical figure, but modern archeologists and historians have not been able to establish much historical accuracy in the sacred texts of Judaism, Christianity, or Islam.  There is little or no evidence that Moses existed, or that the Exodus, the conquest of Canaan (the evidence is that Israelites were Canaanites), and many other Old Testament events really happened.  If there was any historical truth at all, the facts were inflated into fantastic stories designed to serve various agendas.

Key events in the popular story of Jesus’s life, such as the virgin birth and resurrection, were already circulating in other religions of the day.  There is little evidence that Muhammad existed, and if he did, he probably lived around Jerusalem, not on the Arabian Peninsula.[656]  After a career of archeological investigation in the region where the Biblical Israel was founded, one anthropologist likened the Hebrew Bible to propaganda with tiny bits of historical truth in it, as some facts are needed to help people swallow fanciful stories.[657]  To modern observers not in the thrall of limbic conditioning, tales of people living to be nearly a thousand years old (Old Testament) or more than 40,000 years old (Sumerian King List) are not taken seriously.  But literalist interpretations of ancient texts abound, whether they come from religious fundamentalists or scholars such as Velikovsky and Sitchin, who tried to explain mythical events as if ancient texts depicted literal truth.  Promoting symbols and myths as literal reality is a major component of how modern populations are controlled.

The first five books of the Bible, called the Pentateuch, are considered by today’s scholars to have been a political tract written centuries after the alleged events occurred.  It was like the fabrications in the American history taught to today’s schoolchildren as a way to cultivate blind obedience to the state.  Early Israel and Judah were tiny kingdoms in the hills, sandwiched between Assyria and Egypt, which were warring regional powers.  Israel was destroyed about 722 BCE after the Israelite king defied the Assyrian king, and ten of Israel’s tribes were forcibly relocated by Assyria and became lost to history.[658]  The Assyrians forcibly relocated more than four million people.  Those “lost tribes” became the focus of all manner of fantasy for millennia.  Writing the Pentateuch was an understandable effort to help Israelites survive, as a kind of nationalistic parable.  The New Testament and Koran were also written long after the alleged events, accompanied by huge political battles over what the official story would be.  Whatever divine inspiration Jesus (“love the enemy,” AKA the “out-group,” is perhaps the most enlightened message ever given to humanity) or other figures in Judeo-Christian or Islamic tales may have had access to, what is certain is that priesthoods and rulers shamelessly distorted their messages to serve agendas of amassing and maintaining wealth and power, in a pattern that began with the first civilization and lasts to this day.

The Nile River’s valley made the rise of Egyptian civilization possible, and it had the Old World’s most reliable food supply.  Even today, half of Egypt’s population lives on the Nile’s delta.  Annual floods brought silt from deforestation and erosion in the highlands down to the delta, which kept the fields fertile.[659]  Unlike the Mesopotamian disaster, salination was not a major problem for Egyptians, except at Faiyum and irrigated areas above the flood line.[660]  The Egyptian and Harappan civilizations were not pristine, as they were beneficiaries of Fertile Crescent innovations, and they arose from hunter-gatherer societies that did not pass through the learning and evolutionary curve of domesticating their own plants and animals.  The pristine civilizations may have arisen in the only places on Earth where civilization could first appear.  If not for those regions where people domesticated plants, humanity might still be living as aboriginal Australians did for nearly 50,000 years.

The peoples of the African rainforests found life relatively easy, mainly because of the same bounty that kept the gorillas and chimps at home in them, while loser apes were driven to the margins and learned to walk upright.  For those reasons, agriculture and civilization came late to the rainforests.[661]  Domestication in equatorial Africa was likely not pristine but the result of diffusion from the Fertile Crescent.

Although Africa did not lose its megafauna as Australia and the Americas did, a visitor to North Africa today from 10,000 years ago would be amazed, and not just because of modern civilization, but because of all the megafauna that disappeared from North Africa and how desert-like the environment became.  Before Egyptian civilization arose, the Nile valley hosted nearly the full complement of iconic African megafauna, with elephants, hippos, lions, rhinos, giraffes, and many others, and a staggering abundance of waterfowl lived in the Nile valley and on its delta.  That early graveyard of slaughtered humans (which was only discovered because the dammed Nile would soon submerge it) was on the Nile’s banks, so humans had been fighting over the Nile’s resources for many millennia before civilization appeared there.  Migrants from the Fertile Crescent began settling in the Nile valley beginning about 6000 BCE, not long before Çatal Höyük and Jordan Valley settlements were abandoned.  Around 3600 BCE, the Nile’s villages began their rise to civilization, and about 3100 BCE the first polity that controlled Upper and Lower Egypt appeared and dynastic rule began.  Gold was mined on an industrial scale for the first time on Earth, and Egypt set the standard for labor brutality in gold mining, not pyramid building.  In one of many juxtapositions of the “divine” and profane that would be seen in subsequent civilizations, gold was a sacred metal in Egypt, and pharaohs were literally depicted as sons of the solar deity Ra.  A pharaoh’s primary “job” was interceding with the gods to ensure a proper annual Nile flood.  When the floods failed, so did the peasantry’s faith in the nobility, and droughts brought an end to pharaonic dynasties; subsequent rulers were more modest about their divine abilities to affect the Nile’s annual flood.[662]

By the end of the Old Kingdom around 2200 BCE, elephants, rhinos, wild camels, and giraffes were locally extinct in the Nile valley or on the brink of it.[663]  Old Kingdom ships sailed to Lebanon to raze its trees by 2650 BCE, a century before the Great Pyramid of Giza was built.  Slaves do not seem to have built the pyramids; the builders were mainly agricultural workers earning a wage during the off-season.  The entire Giza complex was built in about a century and remains the ultimate elite-aggrandizing monumental architecture.  It has been estimated that all the energy of Egypt’s agricultural surplus for a century was devoted to building the complex at Giza.[664]  Ancient Egypt reached the height of its power during the reign of Amenhotep III in about 1350 BCE.  Amenhotep III claimed that he personally killed 102 lions; hunting lions was the ultimate sport of pharaohs, after playing with their harems.  Tutankhamun, the pharaoh with the resplendent tomb, ruled a generation after Amenhotep III.  The lives of thousands of slaves paid for the solid gold coffin and the mask of Tutankhamun’s mummy.

Nubian gold mines were filled with the skeletons of dead miners.  Nobody survived mining for the pharaohs; the miners were uniformly worked to death, whether they were men, women, children, elderly, or disabled, and an endless supply of new slaves replaced the dead ones.[665]  The Incas also had a Sun god religion, and gold became a sacred metal reserved for royalty (silver was also sacred and represented the Moon, from which Incan royalty also claimed descent), but they did not work people to death to obtain it.[666]  A great deal of Nubian gold ended up in royal tombs, to be looted after the New Kingdom collapsed at the end of the Twentieth Dynasty, in about 1060 BCE.

The relatively gentle river valleys of Mesopotamia and Egypt saw long, slow declines in their environments, but when civilization came to the more mountainous periphery of the Mediterranean Sea, environmental damage came much faster and more dramatically, particularly as the Stone Age gave way to the Bronze and Iron ages.  Before civilization arrived, the Mediterranean’s periphery was heavily forested and, as with Lebanon, cedars were plentiful.  Today, Lebanon has several small groves of cedar, as a kind of museum of former greatness, and efforts to regenerate the cedar forests are ongoing.  The Mediterranean islands had their own megafauna extinctions about 12 kya, and island-dwarfed hippos and elephants went extinct soon after humans arrived.  Any land that can support hippos is blessed with an abundance of water, and islands such as Crete and Cyprus were blanketed with verdant forests before the rise of civilization. 

As people fled from the increasingly barren and devastated Fertile Crescent, Bronze Age settlements began growing on the Mediterranean’s east end.  During the Babylonian reign of Hammurabi, wood was extremely scarce, and his agents were charged with finding more.  Under Hammurabi, illegal woodcutting was a capital crime.  The search for wood extended past deforested Lebanon to the Mediterranean’s periphery, and Crete’s inhabitants began to trade wood for luxury items with Near East civilizations.  The nearly extinct Near East cedar was reserved for palaces and temples in Mesopotamia, but on Crete, cedar was so abundant that it was used for tool handles and other mundane purposes.[667]  Trade with the Near East quickly boosted Crete from a forested hinterland, isolated in the eastern Mediterranean, into a powerful state, at least while its forests lasted.[668]  In early Minoan civilization, wood was used lavishly.  The Minoan success influenced the nearby Peloponnesian peninsula, and Mycenaean civilization began about 1600 BCE.  Minoans developed the still-undeciphered Linear A script.  Mycenaean Greeks developed Linear B, which has been largely decoded and contained nothing but elite accounting; it is likely that Linear A was also only accounting.  About 1700 BCE, the Minoan palaces were destroyed, probably by an earthquake.  The palaces were rebuilt on a grand scale, and settlements expanded in the Minoan golden age, which lasted about three centuries.  Then a swift decline collapsed Minoan civilization by 1200 BCE, and Mycenaeans annexed the island.

Many reasons have been proffered to explain the Minoan decline and collapse, including the now-rejected idea that a volcanic eruption did it.  What is increasingly cited as the reason for the Minoan decline (and was probably the ultimate reason for its collapse) is that Minoans depleted their energy supply, primarily via deforestation.  Minoans, as many other collapsed civilizations did, exceeded their land’s carrying capacity.  For organisms, carrying capacity always meant food and the ability to reproduce, but for civilizations, it also meant the energy needed to run the civilization’s moving parts, including transportation and the energy used to build structures and goods.  If we revisit the “decision” that life faces, whether to use energy to fuel biological processes or to build biological structures, civilizations faced the same choice.  Humans commandeered the energy that a tree invested in its growth, and there were two basic ways to use it: liberate the energy in the structure by burning it, or use that structure for building human-usable tools or structures, which included buildings and ships.  Metal smelting used stupendous amounts of wood, as did pottery-making and the fireplaces and furnaces that heated buildings.  Minoans also built a tremendous fleet of ships for trade and military dominance.  When rebuilding Minoan palaces, Crete’s inhabitants used wood exuberantly, but by 1500 BCE, the use of wood in palaces declined precipitously, and when Mycenaean Greece annexed Crete, the forests were gone and Greeks used Crete for pasturing their sheep.[669]

In relatively recent history, deforestation and the introduction of sheep were an effective method of turning forests into deserts.  Within a few centuries, Crete was turned from thick forest to sheep pasture, and a civilization arose, briefly flourished, and vanished.  In the Fertile Crescent and the Mediterranean’s periphery, introducing goats was another way to ensure that forests never reappeared.  Goats even climb into trees to eat their foliage, but the primary damage that goats and sheep inflicted was eating any attempt by the forest to regenerate.  Also, their hooves pounded the ground, flattening and compacting the soils, which completed the process, begun with deforestation, of destroying the soil’s role in the hydrological cycle.[670] 

As Minoan civilization collapsed and Mycenaean civilization expanded, the forests of Cyprus were the next to go.  Beginning around 1300 BCE, Cyprus, with largely intact forests and rich copper deposits, became the center of bronze production, and a deforestation effort even more spectacular than Crete’s commenced.  Again, Crete and Cyprus once hosted hippos, and in the pre-deforestation period on Cyprus, pigs roamed the forests.  As the moist woodlands quickly disappeared, pigs could no longer be raised, and goats and sheep were introduced to graze the denuded hillsides.[671]  Mycenaean Greeks also rapidly deforested the Peloponnesian Peninsula, but they took steps to at least try to protect their urban areas from the flooding and erosion that deforestation caused, by building dams and dikes to prevent and redirect floods.  The Cypriots took no such measures, and torrents and silt washed down the exposed hillsides, quickly burying and washing away towns and filling harbors.  By 1100 BCE, the harbor at Hala Sultan Tekke was completely filled with silt and its use as a port ended.  Similarly, Enkomi quickly silted up and changed from a coastal city to an inland one that was often flooded with mud and debris from the hillsides.  In 1050 BCE, the town was abandoned, along with 90% of Cyprus's settlements.[672]  In less than three centuries, Cyprus was turned from a heavily forested island into a deserted wasteland, and the collapse of Cyprus ushered in the Mediterranean’s Iron Age.

Copper, silver, and gold are in the same elemental family, and all are relatively non-reactive and can be found in nuggets; they were worked before humans learned to smelt metal.  Copper melts at 1,085° C, and a good campfire is still a couple hundred degrees Celsius short of that, so it is thought that copper was first smelted in pottery kilns.  Iron, however, melts at 1,538° C, and was not smelted until people made blast furnaces, which force air into the fire to reach those high temperatures and must use charcoal.  Iron smelting was probably first accomplished around Anatolia, maybe as early as 2500 BCE, but smelted iron did not begin to become common until a thousand years later, and the Iron Age of Mesopotamia did not begin until around 1300 BCE.  The earliest remains of an iron-smelting operation yet found are in Jordan and date to 930 BCE.  Iron is lighter than bronze and can better hold an edge when made into steel.  When the Iron Age appeared, cultures changed.  Felling trees became easier, warfare became deadlier, and plows became more effective.  The sword did not become ubiquitous until the Iron Age, as iron was much more abundant than the tin required for bronze.[673]

Mycenaean Greece arose from Minoan influence, and the Greeks quickly set about reproducing the Minoan “success.”  There was a seductive logic to deforestation and agriculture.  The products of deforestation were the very stuff of civilization, as cities were built from and supplied by plundered wood and crops raised on the exposed soils.  Goats and sheep were pastured on the former forest’s soils.  It all made great sense, if only in the short term.  As Mycenaean civilization quickly expanded via those dynamics, people only saw it as “progress” and something to be celebrated, not viewed with alarm.[674]

As the Peloponnesian plains near the shore were deforested, settlements expanded into the hills.  Pottery operations began relocating far from settlements so that they could have unchallenged access to fuel for their kilns.  The deforested hillsides of Mycenaean Greece unleashed torrents of mud during the rainy season.  The Mycenaean port of Pylos was surrounded by barren lands, and the pine forests were long gone.  Mycenaean engineers built earthworks that rerouted the local river around Pylos.[675]  Today, the typical Mediterranean “soil” is either limestone bedrock or the reddish “soil” that lies atop the limestone and remains after forests and brown topsoils are removed.  The Mediterranean’s “soils,” climate, and biomes are not “natural,” but are the result of millennia of Mediterranean civilization.  In its turn, Mycenaean civilization collapsed, for the same reasons as the others, as its energy practices were anything but sustainable.

In the late Mediterranean Bronze Age, Troy, situated on the waterway between the Black Sea and the Mediterranean, became a coveted port that sat near Scamander Bay.[676]  The Trojan War, made famous by Homer, was fought about 1200 BCE.  It was long thought to be a fanciful tale, but archeologists doggedly searched for Troy and excavated it in the 19th century.  Scamander Bay is long gone, filled with silt, and Troy was buried by nearly ten meters of silt.  The wars that Mycenaean Greeks fought with their neighbors, as with all wars, were primarily resource-based, as the environmentally devastated homeland could no longer support the people in their accustomed style. 

By 1150 BCE, the civilizations of Mycenaean Greece, the Hittite Empire of Anatolia, and the New Kingdom of Egypt’s holdings in Syria and Canaan had all collapsed, and many causes have been considered, but the deforestation and desertification of the region must have been a major influence and was probably the ultimate cause.  In Pylos, post-Mycenaean farmers began planting olive trees instead of farming grain, as olive trees can grow in depleted soils and even in the limestone bedrock.  Olives became a famous Greek crop because of Greece’s lost soils.  Contemporary observers noticed the environmental devastation that Mycenaean civilization inflicted, and the epic Greek tale Cypria clearly attributed the decline and collapse of Mycenaean civilization to overpopulation and the related environmental ruination, with Zeus saving the land by ridding it of humans.[677] 

Greece entered a 300-year Dark Age and its forests began to recover; a great migration of Greeks to the Anatolian peninsula commenced, and the pattern of deforestation, siltation, and desertification was repeated.[678]  Myus was a port city founded by fleeing Mycenaean Greeks, and today that port sits more than 20 kilometers inland, buried beneath the silt of upriver deforestation and agriculture.  Ephesus suffered an identical fate, and that pattern repeated across the entire Mediterranean’s periphery and reached its peak with Roman civilization.

As Mycenaean and other civilizations declined and fell, Phoenician civilization peaked between 1200 BCE and 800 BCE, and its great fleets ruled the Mediterranean.  As with the preceding powers, Phoenicians established colonies on the parts of the Mediterranean’s periphery that had not yet been devastated, and they established Carthage about 850-810 BCE. 

After centuries of ecological recovery, Greek civilization began to rise again beginning about 700 BCE, and it was an Iron Age civilization, not a Bronze Age one.  Those Greeks were humble farmers, able to use partially regenerated forests for a self-sufficient lifestyle of a kind later seen in the Protestant work ethic and the pioneering spirit.  The poet Hesiod hectored his farmer audience with homilies that could have been uttered by Ben Franklin’s Poor Richard.  Athens was established before 1400 BCE and became an important Mycenaean city.  It began its resurgence in the late years of Greece’s Dark Age, and between 900 BCE and 300 BCE it became one of the more remarkable experiments in the human journey.  By 600 BCE, the reviving civilization had once more eroded the Greek countryside, and Peisistratus, also known as the Tyrant of Athens, offered farmers a bounty to plant olive trees, as olives were about the only crop that could grow on the badly eroded hills, and farming them did not increase erosion.  Greek cities never became very large because the environment could not support large cities.  When Greek cities reached about 20,000 to 30,000 people, new colonies were established.  That practice led to the Greek colonies that dotted the Mediterranean’s periphery.[679]  Also, those colonies founded during the Greek classic era became a hinterland that helped support Athens.  There is still debate over whether commercial, military, or Malthusian incentives and pressures led to Greek colonization, but with the obvious environmental degradation of Greece, I lean toward Malthusian dynamics being the impetus, with the other factors making the best of the situation.  People rarely leave their homelands if they do not have to.

In 508 BCE, Athens entered its classical period, which lasted for nearly two centuries.  In those two centuries, Greek philosophers and proto-scientists invented so much that scholars have studied them for thousands of years.  One provocative question that scholars have posed is why the Industrial Revolution did not begin with the Greeks.  The answer seems to be that Classic Greeks did not have the social organization or a sufficient history of technological innovation before wars and environmental destruction ended the Greek experiment.  The achievements of Greece over the millennium of its intellectual fecundity are far too many to explore in this essay, but briefly, the Greeks invented democracy, Western philosophy, Western medicine, the watermill, a monetized economy, systematic historical thought, and heliocentric astronomy, which included the idea that Earth is spherical; they also founded branches of mathematics such as geometry while developing other branches to unprecedented sophistication.  Long after the Classic Greek period was over, Hellenic intellectuals and inventors kept making innovations that had major impacts on later civilizations, such as Heron of Alexandria (or some other Greeks) inventing the windmill and steam engine.

For all the nascent enlightenment fermenting in Greece, it was still limited by its resource situation and was regularly at war with its neighbors.  Greek colonies along the Anatolian peninsula’s edge were conquered by Lydia, led by Croesus, who minted the first standardized coins, of electrum, which is a naturally occurring gold/silver alloy.  Croesus was defeated in his turn by Persians led by Cyrus the Great.  In 499 BCE, Anatolian Greeks waged a war that threw off Persian rule but started a series of wars with Persia that lasted until 449 BCE.  Building the fleets that defeated Persia began decimating Greece’s forests once again, and much of the diplomatic wrangling and outright battling was aimed at denying the belligerents access to forests with which to build their fleets.[680]  Conquering and then destroying entire cities was a Persian tactic and common for the time, and the Persian extermination of the Spartan-led Greek force at Thermopylae is one of history’s legendary battles.  When Athens emerged victorious (after the Persians sacked and burned Athens), it probably had the world’s greatest navy.  Building the Parthenon was one of many civic undertakings during Athens’s golden age, but that age lasted only a generation, and few today would call it very golden.  In the world’s first “democracy,” slaves outnumbered citizens and women were virtual prisoners in their homes.

Athens began a war with the Spartan-led Peloponnesian peoples that lasted from 431 BCE to 404 BCE.  The war was largely another naval one, and fighting over forest access was the prominent dynamic; Spartans invaded Attica and leveled its trees, turning it into a barren wasteland.[681]  In the aftermath of Attica’s destruction, a disease broke out, accompanied Attica’s refugees to an increasingly overcrowded Athens, and initiated one of the world’s first recorded epidemics, today called the Plague of Athens.  Historians and scientists have made many guesses as to the disease’s identity.

As the war continued, the Athenian hinterland was turned into a desert.  Plato described the deforestation of Mount Hymettus, which remained barren until my lifetime, when the Greek government began to reforest it; many trees could only be planted by blasting holes in the limestone bedrock.[682]  When Attica's residents returned home after the Spartan occupation, they built their homes with a southern orientation to take advantage of sunlight, as wood was scarce.  After five years of peace with Sparta subsequent to signing a treaty in 421 BCE, Athens took to the offensive again and pretended to intervene in a war in Sicily to protect Ionian colonists, but it really did so to conquer Sicily, plunder its forests and other resources, and thereby build another naval fleet with which to conquer Sparta.  The Sicilian Expedition was a catastrophe for Athens, and it lost most of its navy.  There were other setbacks and victories, but a starving and besieged Athens finally surrendered to the Spartans in 404 BCE.  The environment around Athens could feed nothing but “bees,” and where wolves once abounded, not a rabbit could be found.  As Athens slowly became the center of a wasteland, the changing perceptions could be seen in contemporary writing.  When forests were plentiful in 700 BCE, Greek authors wrote of trees in pragmatic fashion or as impediments to progress.  As the forests disappeared along with the ecosystems that they supported, an ecological consciousness began to appear.  Plato and Aristotle placed forests at the root of a civilization’s health, and Plato gave trees a major role in his Utopia.  Conservation only became an idea when the environment had already been ruined by “progress.”[683]  Numerous commentators of the day wrote about the connections between forests and a healthy water supply, and many clearly saw the relationship between deforestation, erosion, and desertification, including Plato.[684]  Aristotle and his professional heir Theophrastus wrote about ecological ideas.  Theophrastus could be considered the first ecological writer, and he had the beginnings of an ecosystems approach.  He noted that when the region surrounding Philippi was deforested, it became drier and warmer.[685]

By 395 BCE, Athens had joined the Corinthian War against Sparta, and the diplomatic maneuvering included Persia and Egypt.  Sparta prevailed with the treaty signed in 387 BCE, but Athens also began recovering, and Persia had unchallenged rule over Ionia for more than 50 years, until Alexander the Great of Macedonia conquered them all; he began his rise to empire in 336 BCE.  Possessing a military prowess unsurpassed until the advent of industrialized warfare, Alexander's troops conquered all early civilizations of note that sprang from the Fertile Crescent, including Greece, Egypt, Anatolia, Mesopotamia, and Persia, all the way to the edge of India and the Himalayas.

Alexander died in a Babylonian palace in 323 BCE, perhaps from poisoning, and his legacy created new connections between East and West, widely spread Greek culture, and helped inspire the next imperial aspirant: Rome.

Rome began as a settlement of shepherds’ huts and became a city around 750 BCE, and its people naturally fought their neighbors.  Etruscan civilization dominated the northern Italian Peninsula during Rome’s early years.  But as with all civilizations previously reviewed, Rome only appeared where the essentials of a stable and relatively abundant energy supply could be exploited, which consisted of a navigable body of water, exploitable forests, and arable land that was usually exposed for agriculture after the forests were removed.  Greek colonies on the southern end of the Italian peninsula influenced Etruscan culture, which in turn influenced Rome.  The Italian Peninsula and its vicinity were about the last region in southern Europe that had timber suitable for shipbuilding, and forests near Rome boasted fir and silver fir, which were ideal for building naval ships.  Some of Rome’s hills were named after trees that grew on them, such as oak, laurel, and willow.  Thick forests grew near Rome in its early days; a warring tribe was able to elude the Roman army by disappearing into a forest near Antium (now called Anzio), and near today’s Naples were the “Avernian” woods, which meant “birdless,” because the trees were so thick that birds did not enter them.  A little north of Rome sat the Ciminian forest, a deep and dark forest that no Roman dared enter before 310 BCE, when a Roman expedition explored it.  The Senate had forbidden such a dangerous expedition into the unknown, but the intrepid party investigated the forest, and the Roman public avidly followed news of its findings.[686]  Like early Crete, early Rome’s most important export was wood, sold to obtain finished goods from the more developed eastern Mediterranean civilizations that had already lost their forests. 

Between 540 BCE and 535 BCE, Carthage and Etruria combined to fight the Greek colonies at today’s Marseille and on Corsica.  The Greeks won, but it was a Cadmean “victory” that ended their Corsican settlement.  Etruscans ruled Rome in its early days.  Around 509 BCE, Rome overthrew its monarchy, established its independence from Etruria, and formed what today is called a republic.  It held a tension between the aristocratic ruling class (patricians) and the commoners (plebeians).  Centuries of interactions and wars with Etruria concluded with the final battle in 282 BCE, and Etruscans were absorbed into Roman culture and disappeared as a people.  Etruscan cities became Roman cities, and Etruria’s fate was a preview of the polyglot empire that Rome would become, as it absorbed conquered peoples.

As with Spartans, Macedonians, and other contemporary cultures, military prowess was greatly honored in that “might makes right” Roman world.  Rome began battling its neighbors early and often, in wars of both offense and defense.  Its strategies and tactics borrowed from the Greeks, and it expanded its control over the Italian Peninsula.  Other than an invasion and sack of Rome around 390 BCE by Celts, Rome was usually on the winning side.  The bountiful forests in the vicinity allowed Rome to rebuild after it was sacked and burned.  Even when Rome lost, it exacted a price: after the Greek Pyrrhus defeated Rome in 280 BCE, he remarked that he could not withstand another “victory” like that, and that comment immortalized him.

As Rome rose, it subdued its neighbors with a mix of diplomacy, alliances, and military superiority.  Once it conquered and digested the Italian Peninsula, it played on a larger stage, and the first war with Carthage began in 264 BCE; it was initially fought over Sicily but came to decide Mediterranean dominance.  Rome built its first navy during that First Punic War, and local forests provided the wood.  The First Punic War lasted until 241 BCE and ravaged both sides, but Rome prevailed.  Rome’s success partly relied on its ability to attract private investment for building its navy.  Once Carthage was dealt with, Rome began a series of wars across the Adriatic that lasted for generations, and by 218 BCE, it was at war with Carthage again.  Hannibal led elephants through the Alps in the Second Punic War, and an axiom of warfare was born in that war: “The only battle that you have to win is the last one.”  Hannibal defeated the Roman armies in his battles, but he had logistical problems and could not gather sufficient forces to conquer Rome.  Rome simultaneously fought a war in Macedonia, which was a preview of the imperial troubles that it would have centuries later, when it became an empire.  Carthage was a merchant power and hired mercenaries to fight its wars, which has rarely proven effective.  The Second Punic War ended in 201 BCE, and Carthage’s Mediterranean influence became a shadow of its former glory.  A couple of generations later, Rome completely destroyed Carthage in a “war” that was essentially an extermination campaign.  Rome burned Carthage to the ground in 146 BCE, Carthage’s 50,000 surviving citizens lived short lives of slavery after that, and Carthage’s settlements became Roman settlements.  The same year, the Greek city of Corinth suffered an identical fate at Rome’s hands, and Rome thereafter ruled the Mediterranean virtually unchallenged.  Ancient warfare had always been savage, but the fates of Carthage and Corinth marked a change in how Rome conducted its wars and helped set the stage for Rome’s transformation into an empire.

The lake that surrounded Tenochtitlán greatly increased its effective hinterland, as the lake was one big low-energy transportation lane.  The Mediterranean Sea was essentially one huge lake that provided a low-energy transportation lane to all civilizations along its periphery.  Rome was the only power to ever really control all of it for any length of time, and that was a key to its dominance.  Romans invented the lateen sail, which made it easier to sail into the wind, and lateen sails were used in the first ships that Europe used to conquer the world.

In 112 BCE, Rome fought a war against the last resistance in Northern Africa, but the war displayed signs of internal corruption in the Roman Republic, where officials were for sale.  Military conquest, with its resultant spoils of plunder, quickly became the Roman way.  Rome eventually became a huge parasite that provided almost nothing of value to the world while sending its soldiers to distant lands to conquer and rape them, and plunder routes into Rome’s maw covered vast distances.[687]  During the height of the Roman Empire, about 50 million imperial subjects were exploited to essentially feed the capital city’s residents, of whom hundreds of thousands received free food.  As the Republic became more far-flung and dominated the Mediterranean’s periphery, soldiers began having more allegiance to their generals than to the Republic, and that situation contributed to the civil wars that ended the Roman Republic and began its status as an empire.

Scholars have argued over whether the civil wars began in the second or first century BCE, but political strife began with a proposal for land reform, tendered in 133 BCE.  After Rome’s republican conquests, it was flush with slaves, and rich landowners began to create great plantations.  The farmers who had been Rome’s backbone were outcompeted by slave labor and pushed off the land.  The situation was a preview of today’s agribusiness conglomerates.  The land reform measure tried to reverse that trend, which enraged rich landowners.  Slaves also began rebelling; the first slave revolt began in 135 BCE, and the third and last one, led by Spartacus, ended in 71 BCE.  Those slave revolts cost about a million lives.  Roman politics was a very bloody affair; the losers of political contests would be murdered, along with their entire families and supporters.  The man who proposed the populist land reform law, Tiberius Gracchus, was murdered in the Senate in 133 BCE, along with more than 300 of his supporters.  A decade later, his brother, Gaius Gracchus, was elected to office and pursued the same land reforms, and he was murdered along with 3,000 of his followers.  That was the beginning of the Roman Republic’s end.

In 63 BCE, a conspiracy to overthrow the Republic was exposed by Cicero, and in 60 BCE the First Triumvirate was formed; its three members, including Julius Caesar, all came to violent ends, and then the Roman civil wars began in earnest.  The Second Triumvirate was formed in 43 BCE and included Augustus Caesar and Mark Antony, of Cleopatra fame.  After Augustus defeated Mark Antony and Cleopatra’s fleet in the Battle of Actium in 31 BCE, the Roman Republic ended and Rome became an empire, the greatest that humanity has known.  At its height, it governed a quarter of humanity.  From the beginnings of the Roman Republic in 509 BCE to the fall of Constantinople to the Turks in 1453 CE, Rome as a republic or empire lasted for nearly two millennia.  Its impact on Western Civilization, and hence the world, has been incalculable.  There are far more important lessons to be learned from the Roman experience than this essay can explore, but I will try to keep the lessons within this essay’s theme and purpose, which is humanity’s relationship to energy and our collective future.

To modern observers, Imperial Rome’s rapaciousness and brutality may be its most notable aspects.  Rome’s favorite entertainment was watching people being forced to murder each other.  The practice was originally an Etruscan funerary rite, but it began getting out of hand by 200 BCE, and by 100 BCE the “games” were state-sponsored.  By Rome’s imperial days, emperors tried to exceed their predecessors with gory spectacles.  The Coliseum, built at the height of the “Peace of Rome,” became the center of that imperial entertainment, but arenas dotted the Empire.  The Roman Empire’s gladiatorial games consumed at least a million lives.  With such blatant disregard for their innumerable victims, Romans could not be expected to display much enlightenment in their relationship with their fellow species, and in fact, the Roman Empire was by far the most environmentally destructive polity of the ancient world.  The environmental devastation that previous civilizations imposed on their environments was merely a preview.  This litany will start with animals.  Although Egyptian civilization drove all megafauna and many other species to extinction in the Nile Valley, Rome initiated waves of wildlife extinctions that covered all of North Africa, and a great deal of it was for entertainment in the arenas. 

Mock “hunts” were staged in the arenas; a law forbade using African animals for that purpose, but in 170 BCE an official exemption was issued, and animals then died in the arenas in mind-boggling numbers.  Crocodiles and hippos from the Nile, elephants and lions from northern Africa, tigers from India, polar bears eating seals, and leopards, bears, bulls, and other animals unfortunate enough to be caught ended up in the arenas.  They were often used as instruments of execution of condemned people, including criminals, Christians, and other enemies of the state.  Cicero mentioned one lion that executed 200 people in the arena.  But there was also a professional class that “hunted” those animals, and the animals were also regularly pitted against each other.  Augustus had 3,500 animals killed in 26 such events; 9,000 were killed to dedicate the Coliseum, and Trajan’s victory over Dacia was celebrated with 11,000 wild animals killed.[688]  Elephants, rhinos, and zebras went extinct in North Africa, but some lions survived in the Atlas Mountains until the 20th century.  Lions, leopards, and hyenas once lived in Greece, and leopards lived in Anatolia as late as the first century BCE.  The Roman arenas were primarily responsible for their extinction.[689]  Hunting animals to extinction is a rare event today; most animals go extinct due to human-caused habitat destruction, but the Roman arenas were a kind of continuation of the Golden Age of the Hunter-Gatherer, at least until the animals went extinct.  But habitat destruction was also widespread during Rome’s reign.

The EROI of elephant flesh easily explains the Cro-Magnon obsession with mammoths, as well as why they disappeared so quickly along with the other megafauna, but the really big game were whales.  Whales are an order of magnitude larger than elephants; the largest blue whale is about 25 times the size of today’s largest elephant.  Claudius played “gladiator” with a trapped killer whale at the Roman port of Ostia, and by about 500 CE, whales had been hunted to extinction in the Mediterranean.  Until humans achieved the social organization and technological prowess that allowed them to sail the seas and hunt whales, that energy source remained unexploited.  After Rome collapsed, professional whaling did not resume for another millennium (other than Basques in the Bay of Biscay, beginning around 1050 CE), when Europeans learned to sail the oceans with history’s greatest energy technology to that time: sailing ships that could navigate the Atlantic and Pacific Oceans.

Livy, writing during the reign of Augustus (27 BCE to 14 CE), recorded the astonishment of his contemporaries when they learned that the Ciminian forest had once been as dense as those at the Empire’s edge, in today’s Germany.  Augustus and Agrippa had just returned from the frontier in Germany, and the Roman public was amazed that such a forest existed; nobody suspected that only a few centuries earlier, the Italian Peninsula had had such forests.[690]  When the soils of the deforested hillsides came down, they often formed marshes and swamps.  Malaria is Italian for “bad air,” and by about 300 BCE, Greece had contracted malaria from its deforestation and marshes; the Italian Peninsula followed a couple of centuries later.[691] 

Compared to the Greeks, Romans were not very innovative; they largely copied the peoples whom they conquered, although the Romans did invent window glass in the first century CE.  Just as with those earlier civilizations, as Rome began turning Italy into an arid land shorn of its forests, Romans began to learn conservation, and they used glass panes and oriented their homes to the Sun to reduce fuel use.[692]  Just like the Greeks, as the forests disappeared, the day’s writers developed a romantic view of forests as places for quiet contemplation, and, as in Hammurabi’s time, wood rustling became a lucrative pastime for the Italian Peninsula’s thieves.[693]  The first technology suppression stories that I have heard of came from Rome.  Pliny wrote that when Tiberius heard that unbreakable and flexible glass had been invented, he suppressed it, as it would be more valuable than gold and would wreck the monetary system.  Vespasian was rumored to have rejected a column-moving machine because it would eliminate the need for strong backs and produce unemployment.[694]  The stories were probably not true, but such technology suppression “conspiracy theories” have existed for millennia.

Rome had an underdeveloped economy.[695]  It largely relied on military conquest and plunder rather than developing its domestic economy.  Nevertheless, on an absolute scale, Rome was unprecedented; a lead spike in Greenland’s ice cores in the first century CE provides evidence of Rome’s level of industrial activity.  The world’s lead mining did not reach Roman levels again until the 1700s.  The arenas were only one venue of many where slaves died.  The mines in Spain were also charnel houses that consumed lives at an astonishing pace.[696]  Many Carthaginians ended up in the mines, and Spain was deforested just like Greece, Italy, Anatolia, and the like.  Modern observers, like those first-century Romans, would scarcely believe that those arid nations hosted lush forests not long ago.  I spent two months in Europe when I was 16 and traveled the length of Italy, Greece, and the former Yugoslavia, and sailed through the Greek isles.  I vividly recall the tremendous olive groves of Delphi and starker scenes, in which islands were nothing more than barren rock.  The mountains could possess an austere beauty like a moonscape, and had I been told that all of those places hosted thick, moist forests a few millennia ago, I might not have believed it, either.

The Italian Peninsula, during Rome’s Republic days, hosted ceramic and glass-making industries.  Those industries died out in Etruria and moved to today’s France, to the Rhone river valley in particular, and by 300 CE, the industry had died in France due to deforestation and moved to today’s Germany, before the Western Roman Empire collapsed.  Rome invaded the British Isles, too, and leveled about a thousand square kilometers of Great Britain’s forest for its iron industry.  Within a century, the region was deforested and mining collapsed.[697]  As with those earlier civilizations, silt filled ports.  Ravenna was a coastal town before the Roman conquest, near the mouth of the Po River, and today it sits several kilometers inland.  Ostia, Rome’s port at the Tiber’s mouth, was abandoned after numerous dredging and earthworks projects failed; it filled with silt and became a malaria-infested marsh, and Claudius built an artificial harbor at Portus.  Trajan enlarged it, but he ultimately built a new port at Civitavecchia, 80 kilometers away, which proved a very costly move.[698]  Numerous Roman ports suffered similar fates, such as Paestum.[699]

Cyprus’s inhabitants learned the lesson of the first forest holocaust, and for the next millennium they carefully managed their wood resources.  Then the Romans arrived, and two centuries of Roman copper operations completely deforested Cyprus.  It was not the last time that Cyprus’s forests became the focus of imperial plunder, because after a couple of centuries of recovery from Rome, Islamic and Christian empires fought over its forests.[700]

North Africa was treated the same way.  The Carthaginian environs became one big plantation for Rome, and centuries of Roman farming and deforestation turned the region around Carthage into today’s desert nation of Tunisia.[701]  Rome ruthlessly deforested North Africa, especially near Morocco’s Atlas Mountains.  The contrast between the lavish lifestyles in Rome and the short lives of misery of those who supported them has no greater example in the ancient world, and arguably in world history.  The Romans loved their baths and bacchanalian delights, and a fleet of 60 ships sailed the Mediterranean to find wood to heat Rome’s baths.  Most of their loads came from Africa’s forests.[702]  I believe that it is the only time in world history when firewood was freight for seagoing ships, and the relatively calm “lake” of the Mediterranean made that enterprise feasible.  The energy density of wood and the energy costs of shipping it make firewood uneconomical for shipping by sea, except in the Roman Empire’s insane economy.  Roman aristocrats developed a fetish for a type of sandarac tree, and within a century that species was driven to extinction.[703]

After defeating Antony and Cleopatra’s fleet, Augustus and succeeding Roman Emperors made the Nile’s breadbasket their personal preserve.  The arrival of the emperor’s wheat fleet from Egypt each year was a big event for Rome’s hungry mouths.  It is somewhat fitting that the last remnant of the ocean that has provided humanity with most of its oil was also the low-energy transportation lane for Earth's greatest empire.

Economics is the study of humanity’s material well-being, but humans have rarely thought past their immediate economic self-interest, even when the long-term prospects were obviously suicidal, as with today’s global energy paradigm.  Because environmental issues affect humanity’s material well-being, they are economic in nature.  As can be seen so far in this essay, early civilizations showed little awareness of, or seeming care about, whether they were destroying the very foundations of their civilizations.  Even if they did not care how much other life forms suffered, they did not seem to realize that those oppressed and exterminated organisms and wrecked environments would not provide much benefit to humanity in the future, especially energy, whether as food or wood.

Far more oblivious, however, is not failing to use yardsticks to measure economic reality, but manipulating the yardsticks themselves.  From the earliest days of using “precious” metals as a medium of exchange, humans have been obsessed with cheating the system.  As Adam Smith once noted, so-called precious metals are only “precious” because they are scarce.  The obsession with gold did not even rise to the level of economic short-sightedness; people questing after easy gold were thieves trying to steal from their societies to get a free ride.  When nations invaded others to steal their gold, it was naked, aggressive theft.  With the economic logic that had a fleet sailing around the Mediterranean seeking firewood for hot baths, Rome’s invading other peoples to steal their gold makes a certain absurd and evil sense, and Rome did it regularly.  Rome’s invasion of Dacia (today’s Romania) in 105 CE was one such instance, undertaken by one of the “good emperors,” Trajan, during the Peace of Rome.  After conquering Dacia, filling Rome’s coffers with gold, silver, and plunder, and razing the capital city to the ground, Trajan’s troops brought 50,000 prisoners to Rome to be sold as slaves.  In the ensuing celebration, 10,000 captured Dacians were forced to fight to the death in gladiatorial combat, 11,000 animals died, and a still-standing monument commemorates Trajan’s heroic deed.  During the Peace of Rome, Jews rose up against Roman rule, and Rome brutally put down the final revolt in 135 CE.  What Assyria and Babylonia began, Rome finished.  Slaves comprised a quarter of the Roman Empire’s population and more than a third of the Italian Peninsula’s.

The Roman Empire relied on the plunder that it could seize, and scholars have studied Rome in light of where it obtained that loot, as the map below demonstrates.[704]

After two centuries of “peace” and good times, the Empire began unraveling.  The debate surrounding the Roman Empire’s collapse has been a far larger cottage industry than the debate over why the megafauna went extinct, but I think that Thomas Homer-Dixon has it right that Rome ran out of energy; stated more precisely, its EROI and surplus energy declined to a level where the Empire became vulnerable to disruptions.  When Rome crashed, it crashed hard.

Rome debased its currency.  Rome’s denarius was nearly pure silver when it was first minted in 211 BCE; by 269 CE, as the Empire was slowly coming apart, the coin contained almost no silver at all.[705]  Money is only accounting, and debasing it was only a symptom of Rome’s decline, not a cause.  Currency debasement is a governmental choice by which the government essentially steals from those who use the currency.  There are arguments that it can forestall a deflationary trend, but it has always been used as an easy form of taxation that benefits those creating the money.  All currencies for all time have eventually been made worthless by those creating the money, as the short-term temptations for debasing a currency are too seductive.  Energy, not money, is the meaningful measure of macroeconomic health.

Energy is the master resource of all organisms, all ecosystems, and all economies.  When a civilization concentrates its energy consumption, which in preindustrial civilizations meant food and wood, in a central city, and has to keep expanding farther and farther from that city to obtain that energy, the tyranny of distance reduces the EROI of those increasingly distant energy resources, and hence the energy surplus.  Also, deforestation and agriculture provided short-term yields, but the wood was almost instantly used (about 90% of the wood imported to Rome was burned, which was the typical ratio for ancient cities[706]).  The soils became eroded, depleted, and often abandoned as the land could no longer support farming, partly because the entire process made the land more arid.  Importing water for irrigation (usually a rare option) could ameliorate the process, but it took more time and energy, which further cut into the surplus.  There were no accountants, scientists, or engineers monitoring and measuring the process, but all of those dynamics reduced the system’s EROI and surplus energy and made it less resilient, so it was vulnerable to disruptive shocks.

After newly exposed forest soils have produced a few crops, the yield declines due to nutrient depletion.  When croplands receive less precipitation, yields drop.  When soils wash away due to erosion, crop yields decline further.  All of those effects reduce the EROI and surplus energy of farming those lands.  When cropland was abandoned due to aridity, nutrient depletion, and erosion, and lands farther from Rome were conquered, deforested, and farmed, it took more energy to transport those crops to Rome than from farms closer by.  That also depressed the EROI and surplus energy.  When harbors silted up and needed dredging, or were eventually abandoned and a port was built farther away, that also reduced the EROI and surplus energy of Rome-bound food.  When food fed soldiers who traveled increasingly vast distances to conquer and plunder peoples and their lands, those were lower-EROI ventures than conquests closer to Rome.  That dynamic has been called imperial overreach in academic parlance, but in scientific terms, it is really just sucking the dregs of low-EROI resources after the high-EROI energy sources have been depleted.  Rome’s decline was really just another resource-depletion dynamic.  Humanity’s first one was killing off the megafauna, and Rome only experienced what Sumerian, Minoan, Mycenaean, Athenian, and numerous other early civilizations had already suffered.  Rome just did it on an unprecedented scale.
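
That distance dynamic can be made concrete with a little arithmetic.  Below is a minimal sketch in Python, with purely hypothetical numbers (an EROI of 10 at the source and a transport cost of 0.1% of the delivered energy per kilometer); the essay provides no such figures, so the sketch only illustrates the shape of the decline, not Rome’s actual energetics.

```python
# Illustrative model of how transport distance erodes EROI and surplus
# energy.  All numbers are hypothetical, chosen only to show the shape
# of the dynamic, not Rome's actual energetics.

def eroi_after_transport(eroi_at_source: float, distance_km: float,
                         transport_cost_per_km: float = 0.001) -> float:
    """Effective EROI after hauling energy (food, wood) overland, where
    each kilometer consumes a fixed fraction of the delivered energy."""
    energy_out = 1.0  # one unit of energy harvested at the source
    energy_in_harvest = energy_out / eroi_at_source
    energy_in_transport = energy_out * transport_cost_per_km * distance_km
    return energy_out / (energy_in_harvest + energy_in_transport)

for km in (0, 100, 500, 900):
    eroi = eroi_after_transport(10.0, km)
    surplus = 1.0 - 1.0 / eroi  # fraction of delivered energy left over
    print(f"{km:4d} km: EROI {eroi:5.2f}, surplus {surplus:6.1%}")
```

Under those toy assumptions, the surplus vanishes entirely at about 900 kilometers of overland haulage, which is one way to see why water transport, with its far lower cost per kilometer, was so decisive for Rome and every other preindustrial empire.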

Rome's imperial logic of devouring ever-more territory and people was doomed to end.  The world’s first great epidemic, the Antonine Plague, began in 165 CE.  It seems to have carried off one emperor in 169 CE and may have killed Marcus Aurelius in 180 CE.  In a possible case of the chickens coming home to roost, the plague was carried to Rome by troops returning from a “good emperor” campaign in Mesopotamia.  Marcus Aurelius’s death marked the Peace of Rome’s end.  Another epidemic appeared in 250 CE and claimed another emperor in 270 CE as it still raged.  After Rome was no longer the Empire’s capital, the Plague of Justinian hit Constantinople in 541 CE and killed up to half of Europe; it may have been bubonic plague.  Along with the Plague of Athens, those are about the only known epidemics of ancient times.  By 600 CE, Rome’s population had collapsed to about 100,000 people, from one million in 100 CE.  In 1084, during the peak of the Medieval Warm Period and a time of great city-building in Europe, Rome's population was 15,000, as people lived among the ruins of the greatest civilization that Earth had yet seen.[707]  After Rome collapsed, the entire Mediterranean periphery went moribund for centuries and slowly recovered from the environmental and human devastation of Rome’s reign.

When scientists and scholars discuss megafauna extinctions, the demise of Neanderthals, or the collapse of civilizations, some will always attribute such events to climate change, deflecting responsibility from humans.  Climate change has probably never been the ultimate cause of such events.  The ultimate cause was probably always humans, and everything else was a proximate cause, at most.  In the past several hundred years, there have been clear instances when deforestation and sheep grazing quickly turned moist forests and/or fertile farmland into semi-deserts in less than a century, particularly in the kinds of temperate regions where the first civilizations arose.  When scientists have investigated and reconstructed the dynamics that led to the collapse of Cahokia, the classic Maya, or the Anasazi, the story was always the same.  Human civilizations altered the ecosystems, usually via deforestation and agriculture, the ecosystems lost their resilience, and a drought did them in.  Those urban areas were permanently abandoned.  That civilization-collapse dynamic is like the hypothesis for why mass extinctions have punctuated the eon of complex life: those multi-tiered energy systems are inherently unstable and susceptible to collapse.

The rise and fall of Rome is an iconic example of the trajectories of preindustrial civilizations.  Only so much surplus can be skimmed from economic systems based on the energy of wood, food, and muscle power.  I wanted to cover some civilizations in detail to make the pattern clear, and will largely just survey the other preindustrial civilizations, as the dynamics were similar, but with some important variations.

China’s was the second pristine civilization to rise, and although the Tibetan Plateau and Himalayas separated China from the Fertile Crescent and India, there was cultural and technological diffusion.  At times, China was ahead of Fertile Crescent civilizations in technological and cultural innovation.  By eight kya, agriculture was firmly established in China.  China’s prehistory has received less investigation, but it seems clear that China’s deforestation began with agriculture, just like everyplace else, and by 1000 BCE, China was largely deforested.[708]  The East Asian food complex is markedly different from the Fertile Crescent's, largely because East Asia relied on summer monsoons for its water, while winter rains provided water for the Fertile Crescent and westward, although all civilizations were primarily based on seed and root crops.[709]  The rice paddy is the most sophisticated preindustrial agricultural system ever created.  It began adding to Earth’s methane concentration by 3000 BCE, and rice paddies bred malaria to the extent that the paddy system in southern China was not successful until the local populace had partially adapted to malaria.[710]  Deforested lands alternately flood and desertify, and managing water became the foundation of imperial practice in China like nowhere else in history.  Although pharaohs claimed divine control over the Nile’s flood, in practice they did nothing at all.  Chinese emperors and the states they controlled, however, owed their legitimacy in their subjects’ eyes to how well they controlled flooding and drainage.[711]  The Yellow and Yangtze rivers carried more than 30 times the silt that the Nile did, and deforestation, with the resulting flooding, siltation, and desertification, has been a major Chinese problem for thousands of years.  Although it has been challenged, the idea that China reached early political unity due to few geographic barriers has merit.[712]  China has been politically unified almost continually for more than two millennia.  The Han Chinese who dominate China are like white Americans, Canadians, or Australians, in that they invaded, conquered, and came to dominate lands initially settled by others.[713]

Northern China practiced dry farming, beginning with millet, while the rice paddy system dominated southern China.  China and East Asia never had the level of animal domestication that Fertile Crescent and European civilizations had.  For millennia, until industrialization, human excrement was used to fertilize East Asian crops and even to feed pigs.[714]  The lack of domestic animals in China meant that almost any kind of wild animal, including arthropods, became food.

Many important early innovations can be traced to China, and the earliest pottery yet discovered was made there about 20 kya.  The Chinese invented paper about 100 CE, the fishing reel in the fourth century CE, toilet paper in the sixth century CE, paper money and porcelain in the seventh century CE, gunpowder in the ninth century CE, movable type and the compass for navigation around 1040 CE, and bombs and hand cannons in the 13th century CE, along with other weaponry such as land and water mines.  Chinese innovations helped lead to Europe’s rise.  Horses were not used much for plowing until the Chinese invented the horse collar in the fifth century CE, and it was used in Europe by 1000 CE.  Horse-drawn plows could move 50% faster than ox-drawn plows, which increased Europe’s agricultural surplus.  China mounted the largest oceanic naval excursions to that time, between 1405 CE and 1433 CE, but China soon became insular for reasons still debated.  Europe then took the technological lead and soon conquered the world.  China’s political unity was a key reason for its change in direction: whoever controlled the throne controlled the empire’s direction.

China followed the developmental trajectory of other pristine states: it was initially peaceful for thousands of years, until chiefdoms began giving way to early states, and then potentates were men, they had harems, and so forth.  Early Chinese elites relied on their family descent lines for their status, and early settlements relocated often, which likely reflected shifting cultivation as soils were depleted.  Even those settlements that had some urban features quickly came and went, and those movements seemingly had more to do with political reasons than ecological/economic ones, although politics are always proximate causes, not ultimate ones.  While investigations of China's early civilizations are still in their early stages compared to other inquiries, that variation from the other pristine states persisted until the Warring States period, more than two kya, not long after Confucius lived.  Chinese civilization then began to resemble other Eurasian civilizations.  Today's scholars suspect that the change reflected influences from other Eurasian civilizations.[715]

China and Fertile Crescent civilizations both suffered from intrusions by pastoral societies from Eurasia’s grasslands.  Marija Gimbutas presented her Kurgan Hypothesis in 1956 to explain the spread of Indo-European languages, and its primary thrust was that male-dominated pastoral peoples, with their male sky-god religions, conquered agricultural peoples with their Earth-based goddess religions.  Feminists have used her hypothesis ever since, and it created a firestorm of controversy.  As with many radical hypotheses, the initial version was found deficient during testing, but variants survive today, and the Kurgan Hypothesis may not be as wrong as its detractors allege.

Nomadic pastoral societies did attack and invade settled agricultural ones, and the connection between the spread of Indo-European languages and pastoralism is probably valid, but seemingly for different reasons than Gimbutas hypothesized.  Human herder societies independently developed the ability to digest milk past infancy, which increased their carrying capacity five-fold versus raising animals for meat, and then they became a threat to sedentary civilizations.  Not only was digesting milk a huge energy advantage for pastoral societies, but it made many marginally fertile environments habitable and made already settled societies more energy- and nutrient-secure.  Also, when peoples began to rely more on cattle than crops, they could become more mobile.  Pastoral societies of the steppe were patrilineal, and patrilineal societies are the most violent; they indeed invaded settled societies and often set themselves up as the new elite.[716]  Peoples who could digest milk not only came to dominate grasslands, but they also did well on marginal agricultural lands, such as those in northern Europe.[717]  The allele that allows lactose digestion reaches nearly 100% frequency in northern Europe, but peoples who evolved without milk-producing animals, such as Chinese and Native American peoples, generally cannot digest lactose as adults.  Lactose tolerance appeared about eight kya in pastoralists and spread with their migrations.  In places such as northern Europe, where there were no vast grasslands to roam, pastoralists became sedentary, and the combination of farming and dairy cows was northern Europe’s staple for millennia.

Other genetic adaptations happened in the same region around the same time.  Blue eyes are blue due to a lack of iris pigment, and first appeared between six kya and ten kya.  The region around the Baltic states is thought to be the home of blue eyes, as it has the highest blue-eye frequency on Earth.  Blond hair first appeared in northern Europe about 11 kya, and first became prevalent around Lithuania about five kya.  Those pigment losses are related to light skin, which was an adaptation to reduced sunlight in regions farther from the equator.  Lighter skin evolved independently in Europe and East Asia, may have evolved numerous times, and in Europe it seems to have evolved about six kya.[718]  Racism is a relatively recent phenomenon because race itself is recent on the evolutionary scale, as geographically isolated humans began the process of speciation.

It is generally accepted today that the original pristine states were based on agriculture, and that before those societies became states, when they were at the village level of social organization, they were largely classless and women had high status, related to women’s economic (caloric) contribution.  As agriculture became masculinized, probably due to the physical requirements of forest clearance and handling draft animals, men ascended in importance and women’s status declined.[719]  The general thrust of the Kurgan Hypothesis is probably accurate in that pastoralists invaded agricultural societies.  Violent patrilineal nomadic societies invaded sedentary societies and set themselves up as the elite, and the religion of the conquerors became the religion of the conquered.  The agriculturalists of Europe were largely farmers who had spread from the direction of the pristine states, and they invaded hunter-gatherer lands and displaced/absorbed the “natives.”  A European mass grave from today’s Germany dates to about seven kya, about the time of the alleged Kurgan invasion, and debate has raged as to whether that grave was due to endemic violence or invaders, as hunter-gatherers, farmers, and pastoralists met.[720]  Beginning about 3500 BCE, archeological examinations of European graves show a dramatic increase in evidence of violent death, particularly for males.[721]  Between 3500 BCE and 2000 BCE, the rise of the professional warrior can be seen in Europe’s artifacts and grave goods, and the concept of the hero emerged.[722]  The Iceman died in the Alps about 3300 BCE, and he died violently.  New Guinea’s highlanders lived in isolation for many millennia and adopted agriculture, but as with other relict populations of the founder group migrations, they were in continual warfare, and straying into another village’s territory meant risking a violent death.  Anthropologists looking for an epoch in the human journey when neighboring peoples coexisted peacefully have always come away disappointed.[723]  There have been brief non-violent phases of the human journey, usually when there was relative economic abundance.  When resources became scarce, and theft, coercion, and violence became profitable, bloodshed usually attended the situation.

About 1000 BCE, one of the largest migrations in the human journey, the Bantu Expansion, began, and it expanded because of the Bantus' use of iron and agriculture, and they displaced or absorbed hunter-gatherers as they expanded across sub-Saharan Africa.  In a dynamic too common in human history, invading men mated with women of the invaded; mitochondrial DNA studies provide evidence of this.[724] 

Equatorial rainforests never produced civilizations, whether in South America, Africa, or Oceania.  All three rainforests were penetrated by agricultural humans late, were sparsely populated, and settlements rarely if ever extended far past the riverbanks.[725]  However, New Guinea was an exception, and its agriculture, based on bananas, taro, sugarcane, and yams and developed primarily within the past six millennia, may have been independently invented.[726]  But the highland altitude made New Guinea different from the typical tropical rainforest.  The banana was domesticated there perhaps as early as 10 kya.  New Guinea's highlanders lived in isolation from the rest of the world until the 20th century.  But nothing that could be called an urban environment ever developed in Oceania or the African and South American rainforests.  There were always villages at most, although parts of Amazonia that were likely influenced by Andean civilizations had connected villages, and some could have been called towns, which reflected a kind of urban planning.

Mesoamerica’s Domestication Revolution was one of the two certainly pristine ones known, and the one around today’s Peru may have been another.  The human journey’s other two pristine states arose there, and they followed the same general patterns as Sumer and China, in that they began peacefully with no classes and, as they grew into states, men came to dominate, elites appeared with monumental architecture devoted to them, potentates had harems and divine sanction, and there were other features that seemingly evidenced universal human traits and/or reactions to similar conditions.  The development of religion in what became Mesoamerica’s pristine civilization, the Zapotec state, has been documented by archeologists, who traced a seven-thousand-year progression from hunter-gatherers to egalitarian early agriculturists to an elite-dominated society to a pristine state.[727]  It was similar to how Mesopotamian civilization developed, including the replacement of singing and dancing by priestly rituals (today’s rock stars have been likened to new shamans, as their concerts revive pre-civilized gatherings and rituals).  Controversial aspects of Mesoamerican societies have included human sacrifice and cannibalism.  They definitely happened, and human sacrifice was practiced on a pretty grand scale at times.  The question of Western Hemispheric cannibalism has touched on the lack of domestic animals, so it may have had nutritional aspects, or what is called culinary cannibalism.  But most seeming cannibalism is of the cultural variety, in which eating flesh has symbolic meaning, whether it is eating somebody to keep their spirit in the family/tribe or to gain spiritual dominance over a fallen foe.[728]  Cannibalism was a common charge made against peoples that Europe conquered, but it was usually a sensational allegation intended to remove their humanity and justify their bloody treatment by Europe.  Columbus’s cannibalism accusations against the Caribs were made from whole cloth.

It took about two millennia to domesticate maize in Mesoamerica (wheat may have only taken a couple of centuries or less to domesticate), in one of humanity’s greatest feats of domestication.  Maize was a near-universal staple among the Western Hemisphere’s agrarian natives in 1492.  Anthropologists have surmised that the Western Hemisphere was a few thousand years “behind” Old World civilizations in 1492. 

While there is evidence of agriculture along the Andes before 4000 BCE, it was not until about 2500 BCE that agriculture in Peru began in earnest, and Peruvians farmed maize by 2000 BCE.[729]  The potato was the Andean peoples' greatest culinary contribution to the world.  There is evidence of Peruvian warfare and population collapse by 1000 BCE, probably due to the familiar environmental degradation that civilizations have always inflicted.[730]  In another millennium, the Moche culture appeared, which produced the Western Hemisphere’s other pristine state.  It began smelting bronze about a thousand years before Europeans arrived, mainly for elite prestige goods.

The Incan Empire that the Spaniards conquered was merely the latest in a series of rising and falling polities over several millennia, which likely influenced Amazonian cultures.  While the markets in Aztec-run Tenochtitlán were incomparable, and conquering Spaniards had an appreciation for the materialistic and greedy aspects of Aztec culture, Incan culture was another matter entirely.  There were no vast markets in Incan society, which was run more like a communist regime, with central planning of the economy.  The Incas had ornate rituals combined with feasts and festivals, in which religion, warfare, economic reciprocity, and an elite-justifying ideology were inextricably linked, and which formed the empire’s social cohesion.  They naturally had human sacrifice to appease the gods of their Sun-worshipping imperial religion.  The Incan Empire, by far the Western Hemisphere's largest, stretched along the Andes Mountains for thousands of kilometers and was continually subjected to El Niño's vagaries.  The Incas had novel means of dealing with it, including a vast network of storage facilities along the Incan "highway" on the Andes's high western slopes, which, like those Gravettian mammoth villages, took advantage of the "freezer effect" (and drying) for preserving food, and the Incas advertised their ability to provide for their subjects.  The empire's taxation was often more in the form of services than food.[731]  Those peoples were arguably the greatest agricultural experimenters among preindustrial peoples, getting the most out of their challenging environments.

Because the Western Hemisphere’s inhabitants were virtually all in their Stone Age, they did not ravage their environments as greatly as Old World civilizations did, and many societies were environmentally sustainable and provided seeming answers to questions that scientists have asked about Old World civilizations’ development.  The natives of coastal California were familiar with agriculture, as nearby inland tribes practiced it, but they never adopted it.  California was so bountiful, and its climate was so human-friendly, that its natives retained their hunter-gatherer lifestyle.[732]  Similarly, northward on the Pacific Northwest's coast, natives created an economy in which half of its calories derived from salmon runs, and those peoples were relatively sedentary without agriculture.  Natives turned the Great Plains into a big pasture for bison, and the biome was partly maintained by annual burning of the grasslands.  In Mesoamerica, milpa farming has been sustainable for thousands of years.  In the Amazon, the natives transformed the rainforest, and a higher proportion of plants and trees provided human-digestible foods than in any other “wild” place on Earth.  Those natives also terraformed thin tropical soils with ceramics (perhaps unintentionally) and charcoal (intentionally) and made super-soils called terra mulata and terra preta.[733]  In summary, native practices in the Western Hemisphere were often sustainable, if not quite abundant.  But when civilizations arose, they had problems like their Old World counterparts'.  Their problems were also environmental, not just the injustices of their often steeply hierarchical societies.

In the Eastern Woodlands of North America, natives began domesticating plants before 2500 BCE.  It may well have been an independent domestication event.[734]  Those horticulturalists largely became matrilineal societies.  The Adena culture was succeeded by the Hopewell culture, during which maize seems to have made its way from Mesoamerica.  Around 500 CE, the Late Woodland period began, the bow and arrow supplanted the spear and atlatl, and the "three sisters" - maize, beans, and squash - began dominating food production.  When the Medieval Warm Period began around 800 CE, intensive maize production began and spread, which led to rapid population growth and the rise of Mississippian culture, which produced the only pre-Columbian North American city, at Cahokia.  Cahokia collapsed, almost certainly from environmental over-taxation and a cooling climate, before 1400 CE.  The mound-building Mississippian culture had a familiar trajectory, as intensive agriculture led to an agricultural surplus.  Men, who controlled the surplus and rose to dominance, commandeered the local religion into granting them divine status or sanction and erected monumental architecture to themselves and their divine yet invisible patrons.  As in Sumer, they made their structures from earth instead of stone.  Soil fertilization for maize-growing was not practiced, which rapidly depleted the soils (there were no domestic animals to provide manure, and the Indians did not adopt the night soil practices of East Asia), and the cooling of the Little Ice Age, along with declining soil fertility, spelled the decline of Mississippian culture before Europe's first invasions of the Columbian era.  The de Soto entrada and its aftermath were a catastrophe for Mississippian peoples.[735]  Later European invaders encountered lands long since depopulated.  By the 1600s, when England began invading the Eastern Woodlands, the Mississippian culture had vanished, and by the late 1700s, the Southeastern Indians retained no memory of who made the mounds that they lived near, nor of the social order that built them.  The Cherokee seemed to retain some vestigial memory of Mississippian culture, as they had stories of despotic Indians whom the Cherokee annihilated, but the mounds had become the source of a myth that spirit warriors lived within them and could issue forth and fight Cherokee enemies.[736]

Anasazi civilization also overtaxed its environment and collapsed in a drought, as did the Classic Maya.  The lauding of Native American environmental conscience seems largely a romantic invention, like the “peaceful savage” fantasy.  Although Native Americans obviously had a far gentler tenure on the land than what happened in the Old World, it may have been only a matter of time before they “progressed” to metal smelting, rampant deforestation, and the like.  Without draft animals (bison were probably the only candidate, and turning them into domesticated draft animals may not have been feasible), their civilizations might have taken very different paths from the Old World’s.  What kinds of civilizations might have emerged from the Western Hemisphere had Europe not intervened will always remain a tantalizing question; those civilizations did show different ways to do it, even if what the Spaniards stumbled into seemed familiar, with cities, markets, elites, monumental architecture, warriors, priests, peasants, slaves, and so on.

Along with the disruptions that Europe caused to the world’s peoples, it was depressingly common how often the natives used the newcomers to conquer their neighbors.  Although Spaniards inflicted history’s greatest demographic catastrophe onto the Western Hemisphere in the 1500s, they often had native assistance.  The Aztecs were anything but benevolent rulers; their bloody altars constantly hosted sacrifices of prisoners (it was an endemic practice in Mesoamerica), and when Cortés commandeered the invasion that ultimately conquered the Aztecs, his native allies did most of the fighting.  Any natives who helped the Spaniards helped depopulate their hemisphere.  When the French allied with the Huron, the first thing that the Huron did was use them as a secret weapon against the Mohawks.  That backfired on the Huron, as their tribe became extinct within 40 years.  In Africa and North America, when European slavers came, the natives were often only too happy to sell their neighbors into slavery, and some American tribes made brief livings as slavers for Europeans before they themselves became extinct.  With a few possible early exceptions, natives almost never realized what the coming of Europeans ultimately meant.  With some notable exceptions, such as Pontiac and Tecumseh, natives could not put aside their differences and try to rid their lands of the invaders, and when some tried, it was already too late.  When the British began “settling” the South Pacific, the natives used European weapons to slaughter or enslave their neighbors or build empires.

Jared Diamond and Alfred Crosby noted that Eurasia spreads along an east-west axis, while Africa and the Americas run north-south, which made Old World diffusion easier, but that idea also has problems, as Fertile Crescent crops did not spread to East Asia due to rainfall timing differences (winter rains in the west and summer monsoons in the east).  Mesoamerican and Andean civilizations had dramatic geographic limitations, which was their greatest contrast with Eurasian civilizations.[737]  However, like the migration of Asian mammals into Europe or the exchange when Africa collided with Eurasia, it was easier for cultural innovations to spread along the same latitude, as they moved through similar biomes.  North-south diffusion is far more difficult, as it moves through different biomes, such as tropical forests and temperate deserts.  Eurasia's geography was more conducive to communicating innovations, which made it more cosmopolitan than sub-Saharan Africa or the Americas and helped it advance technologically at a faster pace.  Isolated peoples are usually culturally and technologically backward compared to nearby peoples who are more cosmopolitan, and peoples isolated by mountainous geography, such as those of the Scottish Highlands, the Balkans, Appalachia, and Southeast Asia, were relatively primitive compared to those around them.[738]  Negritos and aboriginal Australians are classic instances of isolated peoples keeping their cultures intact, which provided a window into the human past, but their cultures also did not "progress," which included their technology.

The orientation of the Americas meant that few innovations traveled between continental civilizations.  The only pack animals in the Americas, llamas and alpacas, never made it past South America before the European invasion.  But there was a continual migration of innovations between China, Europe, and the Fertile Crescent.  That is thought to be partly why Eurasian cultures became technologically advanced over those of sub-Saharan Africa, the Americas, and Australia.

This essay’s purpose, regarding the human journey’s epochal phases, is to show how humans achieved each Epochal Event, which was always about exploiting a new energy source, and how each event transformed the human journey.  Although the civilizations of India and Southeast Asia had unique qualities and achievements, and the Buddhist religion has a great deal to commend it (founded, as Christianity was, in the name of another “rebel”), as do other world religions, the primary preoccupation of all peoples for all time before the Industrial Revolution was avoiding starvation.  Industrialized peoples seem to have partly forgotten this motivation.

The methods of preindustrial civilizations, with deforestation and agriculture, were never really sustainable, as they disrupted ecosystems and even affected local climates.  The milpa system, for instance, was only sustainable because farmers let the land lie fallow for eight years after two years of crops, letting the damage heal before repeating the cycle.  Only when practices were intermittent, allowing ecosystems to recover somewhat, could they be called sustainable, and even then the idea is somewhat misleading.  It was an ecosystem commandeered for human benefit at the expense of the original ecosystem’s denizens, and the practice never approached true abundance.  Those civilizations were all mired in scarcity, with only about one person in a thousand living to a ripe old age, and only about one in 100,000 “making it” economically (the potentate).  In such a world of scarcity, life was often cheap, and virtually every preindustrial civilization had forced servitude, from forced marriages to debt bondage to chattel slavery to human sacrifice and other forms.

Across all preindustrial civilizations, UP reacted in different ways to the energy surplus that domestication afforded, usually depending on environmental variables, such as whether the arable land was bounded or whether shifting cultivation (as the soils were depleted) was feasible for relatively sedentary populations.  The early states that arose where cultivation of a plot of land could be continual (through fertilizer and other methods) and that were geographically bounded by barriers such as mountains, deserts, and bodies of water (Peru and Egypt) were generally dominated by an elite in a steeply hierarchical society, in what has been called the "exclusionary domination" model.  The "corporate" model was more feasible where shifting cultivation could be practiced and geographical boundaries were minor (pre-state China, the ancient Yoruba culture in today's Nigeria); such societies were less dominated by "great men" (monarchies) and more by groups that shared power (oligarchies, while constantly jockeying for it), and their control was more over labor than land.  Most states arose where the arable land was both unbounded and permanent, or at least relatively permanent.[739]  In anthropological circles, the corporate and exclusionary domination models of early civilizations often seemed to vie and interact, with one succeeding the other at times.[740]  However, whether under monarchy or corporate oligarchy, the surplus was so small in agrarian civilizations that only a small elite and professional class could exist.  Freedom was always a scarce commodity that primarily resided with the elite.  While there was some variation in social organization across the world's agrarian cultures, the basics were identical for all of them, with elites and professionals riding atop the peasant class and extracting the agricultural surplus from it via a variety of carrots and sticks.  Without the energy that agriculture provided, large sedentary populations were not possible, and without an agricultural surplus, civilization could not have formed.  Everything about the formation and trajectory of all civilizations depended on those energy dynamics.  Without those levels of energy generation, the game simply could not be played.  In their most essential fundamentals, they were all the same.

With contributions from China and other civilizations, many of them coerced, such as what the Western Hemisphere provided to the world (potatoes, cassava, maize, sweet potatoes, a functioning democracy, mountains of gold and silver, and many other benefits), Europe conquered the world, and along the way it tapped a new energy source.  The early days of exploiting a new energy source were characterized by relative abundance that led to golden ages, which people in later times, after the energy supply was depleted (easy meat, pristine forests and soils, oil), looked back to with yearning, if they could even recall those days in their cultural memory.  I live in a declining society that is looking back to its golden age, which I grew up in, when energy was cheap and plentiful.  Those days are long gone.  The middle-class lifestyles of my childhood are scarcely imaginable today, 50 years later.

I earlier compared people from different epochs.  That stone tool Tesla could not have imagined what his/her invention would lead to a half-million years later, and members of the founding group could not have comprehended what their journey led to 50,000 years later.  Imagine a hunter-gatherer of 10 kya being dropped into Rome in 100 CE or London in 1500 CE.  History has some relevant examples.  When Ishi, perhaps the last of his people, came out of hiding in his dying world and strode into civilization, it caused a sensation.  He soon died of tuberculosis, but his encounters with civilization were recorded.  He attended an opera, and the popular account portrayed his rapport with the diva, but Ishi actually stared in amazement at the audience, as he had never before seen so many people in one place.  When he saw an airplane in flight, he laughed in amazement.  Now imagine that hunter-gatherer of 10 kya in imperial Rome.  He had probably seen dogs, but horses, cows, sheep, and the like would have been astounding, and watching a horse or ox pull a cart would have been stunning.  Crops would have been an amazing sight.  Imagine that hunter-gatherer at the Coliseum.  The building and crowd alone would have boggled his mind, even if the festivities might have been horrifically familiar.  Metals and glass would have seemed magical.  Writing had not yet been invented in that hunter-gatherer’s world, so even the concept would have been difficult to grasp.  Imagine him trying to learn math.  There were no more singing and dancing religious rituals, and no wide-open spaces to hunt a meal.  Imagine that hunter-gatherer visiting a Roman bath.  Hot water alone would have been surreal, while the cavorting might have been delightful.  What would his reaction have been to Rome’s markets?  Rome was also loud and could be hellish, so the hunter-gatherer might have longed to flee to the countryside before long, but the countryside would have little resembled the one he knew.  He obviously would not have understood anything that anybody said, but the Romans were also all members of UP, so he would have seen many behaviors and traits that he eventually understood.  But how long would his shock have lasted?  Could he have ever really adapted to Roman society (if he did not quickly end up on the arena’s stage as a novelty)?  Another surprise for that hunter-gatherer would have been seeing people interact who did not know each other.  People were interacting with out-group members and not trying to kill them on sight, which became standard behavior in most hunter-gatherer societies that battled over territory (their food supply).  Civilized life was all made possible by the local and stable energy source that agriculture provided, which led to an epoch that changed very little until the next energy source was tapped: the hydrocarbon energy that powered the Industrial Revolution.  The next chapter will survey the developments that led to that momentous event.  It is the only Epochal Event with historical documentation showing how it developed, which makes it easier to reconstruct than events studied through stones and bones.

Few dates before 1 CE will be used beyond this point, so for the remainder of this essay, all dates are CE unless marked “BCE,” with “CE” dropped from them.

 

Epochal Event 3.5 – The Rise of Europe

Chapter summary:

  • Emperor Constantine tries gambits to hold the Roman Empire together

  • Rome falls; Germanic tribes invade Roman lands, Islam rises, and Moors invade the Iberian Peninsula

  • Dark Ages, followed by the Medieval Warm Period and the beginning of Viking invasions

  • High Middle Ages begin in Europe

  • "Pagan" technologies such as horse collar and watermill used in Europe

  • Explosion in watermill use, and windmills introduced in Europe

  • Reintroduction of Greek teachings to Christian Europe

  • Catholic Church's struggles with secular rulers, and the Crusades

  • Mongol invasions and the devastation of Islam

  • Medieval Warm Period ends, and the catastrophic 1300s begin

  • Renaissance begins in northern Italy

  • Turks conquer Constantinople

  • Portugal learns to sail the Atlantic Ocean

  • Portugal initiates a new era of slavery

  • Portugal's and Spain's pursuit of slaves and gold in the 1500s

  • Europe turns global ocean into low-energy transportation lane and begins conquering the world

  • English and Dutch rise to imperial dominance, and France dominates Continental Europe

  • England's path to industrialization

  • Deforested English countryside

  • Coal pollution in England

  • England resumes its iron industry where Rome's left off, and the area is again quickly deforested

  • A pirate becomes England's richest private citizen, and England conquers Scotland

  • Deforested England invades and conquers Ireland, and establishes Ulster Plantation

  • English invasion of North America begins

  • Dispossessed English peasants become coal miners and live in new coal towns

  • The importance of mast wood for ocean-going ships

  • Struggle over European mast wood

  • English invasion of New England for mast wood

  • Dispossession of English peasants via Game and Enclosure laws creates the workforce for the Industrial Revolution

The Roman Empire suffered from the devastation that it inflicted on Europe and the Mediterranean’s periphery, as its EROI and surplus energy steadily declined, and Emperor Constantine tried some gambits.  One was moving the capital from Rome: the Greek city Byzantium became Constantinople, which remained the imperial capital for more than a millennium, until it fell to pastoral invaders, the Turks, in 1453.  Another ploy was uniting the fragmenting empire under one religion, and Christianity became the Roman Empire’s state religion.  Christianity by 300 would probably have been largely unrecognizable to Jesus, especially after it became a state religion a generation later.  Rome officially fell in 476, when Odoacer deposed Emperor Romulus Augustulus and became Italy’s first king.  Germanic tribes conquered Europe’s Roman lands, and in the Near East, Islam began its rise in the 600s.  In 711, Moors invaded the Iberian Peninsula and overthrew Visigothic rule, and Islamic Iberia became Europe’s most civilized location for centuries.  While the Roman Catholic Church specialized in burning libraries and “pagan” literature, Islamic culture preserved it.  The Church completely eradicated Classic Greek writings in Western Europe, and although there is vociferous debate on the issue, in many ways medieval Europe was in a dark age for centuries.  The Dark Ages were related to the devastation that Rome inflicted on its subject peoples and environments.  After centuries of recovery, around 800 Europe’s Medieval Warm Period began (although some put the date in the 900s), and the Frankish king Charlemagne tried to revive the Western Roman Empire.  The Medieval Warm Period was a time of unprecedented prosperity and progress in northern Europe, and it led to widespread Viking invasions among other usually violent migrations, but the climate that made northern Europe amenable to civilization-building caused epic droughts around the world, helped lead to the fall of the Classic Maya and Anasazi and the decline of Angkor Wat, and may have been responsible for initiating the devastating Mongol invasions.[741]

The Medieval Warm Period led to the High Middle Ages, which began around 1000.  It was a time of great city-building in northern Europe, and about 75% of northern and central Europe’s forests were razed and the land put under the plow.  The success of northern Europe was partly attributable to its heavy ice age soils, which did not erode as rapidly as the thinner southern soils of the Fertile Crescent and Mediterranean regions.  Not until the adoption of the horse-pulled heavy plow did northern Europe’s soils become sufficiently arable to feed Europe’s High Middle Age peoples.[742]  The teams pulling heavy plows cost more than a single farmer could afford, so the communal financing of horse teams for heavy plows has been considered a proto-capitalistic development.  Even so, rivers filled with the mud of erosion, and the same deforestation and soil-loss process happened in northern Europe, though arguably more slowly than in those earlier civilizations.

Although the Church obliterated “pagan” teachings, it did not defend Europe from pagan technology.  The Chinese horse collar arrived in Europe by 1000 and quickly became the standard.  As the Roman Empire became depopulated, the Greek watermill helped compensate for labor shortages.  Watermills were active across Italy in the Roman Empire’s early days, for running hammers, and were heavily used in Rome’s mines.  Constantine’s predecessor Diocletian issued a price edict regarding watermills.  The advantages of motive power not produced by muscles were obvious.[743]  The thick forests of northern Europe had steady Atlantic precipitation (as well as the warm Gulf Stream) to thank, and Central and Western Europe were blessed with streams and rivers in abundance.  The spread of the watermill marked the first time that humanity widely harnessed non-muscle energy (other than with sailboats, which were far less widespread), and it helped propel Europe’s rise.  Humanity learned how to exploit the hydrological cycle’s energy in an unprecedented way, but not everybody embraced it as Europe did.  In eighth-century China, using water for irrigation and transportation had higher priority than mills, and mills were regularly dismantled.[744]

But in medieval Europe, the watermill reached its peak use in the preindustrial world, beginning with Germanic lords as Rome was falling.  Not only did the watermill spread throughout Europe, but new mills such as the ship mill and tide mill appeared.  Today’s France is where most medieval mill innovations appeared, but watermills became universal on the streams and rivers of Europe.  In 800, only a few watermills existed in Western Europe, but by 1000 there were hundreds.  The Domesday Survey of 1086 recorded nearly six thousand watermills in England alone, and the true number was some thousands more.  The Kingdom of France had 10 thousand watermills at that time, and their number doubled in the next two centuries, as did England’s.[745]  Each mill produced at least two to three horsepower, the equivalent labor of about 50 men.  In 11th-century France, mills produced the equivalent labor of a quarter of its population.  Medieval European watermills performed the work of millions of people and reduced the need for slaves.  It was a prelude to the Industrial Revolution.  When Columbus sailed in 1492, watermills performed the work of at least 10 million people in Europe, which had a population of about 75 million.[746]  When watermill sites became filled, Europeans began using windmills, which first appeared in France in 1080, although the first undisputed European windmill appeared in Yorkshire in 1185.[747]  The social organization of medieval Europe was feudal; peasants labored for landowners in return for a portion of the harvest.  The watermill became the center of a struggle between feudal and Church authorities and the peasantry; the windmill was established partly to circumvent lordly claims on waters that passed over their lands, as nobody yet owned the air.[748]
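
Those mill figures can be sanity-checked with the essay’s own ratios.  The Python sketch below converts mill counts into human-labor equivalents, using the 50-men-per-mill figure from the two-to-three-horsepower estimate above; the mill counts are the ones just cited, and nothing else is assumed.

```python
# Back-of-the-envelope check of the watermill figures above, using only
# this essay's own numbers: each mill ~2-3 horsepower, the equivalent
# labor of roughly 50 men.

MEN_PER_MILL = 50  # from the two-to-three-horsepower-per-mill estimate

mill_counts = {
    "England, 1086 (Domesday Survey)": 6_000,
    "France, 11th century": 10_000,
    "France, two centuries later (doubled)": 20_000,
}

for region, mills in mill_counts.items():
    laborers = mills * MEN_PER_MILL
    print(f"{region}: {mills:,} mills ~ {laborers:,} laborers")
```

By that reckoning, France’s mills stood in for roughly half a million laborers in the 11th century and about a million two centuries later, labor that consumed no food, which is why the watermill era reads as a prelude to the Industrial Revolution.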

A seminal event in Europe's rise was the reintroduction of Classic Greek writings.  It happened during the conquest of the Iberian Peninsula by Christian armies in what is today called the Reconquest.  Islamic libraries housed Greek writings, and when the library at Toledo was captured in 1085, Christian scholars from across Europe traveled to that library, where those works were translated, and Europe was never the same.  The rise of science and reason in medieval Europe thus began.

When that australopithecine Tesla made the first stone tool, his/her invention was transmitted via culture, probably by demonstration.  When Homo erecti made Acheulean hand axes, they were engaging in a craft that lasted more than a million years; the training was obviously standardized, as all of the axes looked similar.  When that founder group left Africa, its members had full command of language and a sophisticated toolset, and ideas were readily communicated, although it can be interesting to wonder what their beliefs were, if they had many.  Those indoctrinating priests concocted complex thought forms to seduce and control the masses.  Monumental structures in early civilizations were often architectural and engineering marvels, and the ancient Greeks began thinking in ways that could be called scientific.  When that approach took root in Europe, which already used Greek technology to great benefit, it led to the Scientific Revolution, which accompanied and mutually stimulated the Industrial Revolution.  In short, along with greater energy usage, mental feats also increased, and they were usually required for the next Epochal Event to manifest.  The Teslas and Einsteins of their day initiated the breakthroughs, and the masses took the ride in the subsequent epoch and raised their level of mental prowess.  Calculus was only invented once (twice, really, as Leibniz and Newton did it independently), but it has been taught to students ever since as part of the mathematics curriculum.  Each energy epoch was initiated and accompanied by increased mental accomplishment, and each breakthrough helped form the foundation of the next one, as Newton stated most famously.

The medieval Catholic Church owned about a quarter of Europe’s land and constantly vied for power with secular rulers.  The Church became infamously corrupt, called for Crusades that helped thin out the ranks of its ecstatic members, and even called Crusades down on its own subjects when they strayed from the flock.  In the 1200s, Thomas Aquinas, among others, attempted to reconcile Church dogma with the rediscovered Greek teachings.  High Middle Ages Europe also saw the troubadour phenomenon, with its themes of chivalry and courtly love.

Islamic culture enjoyed humanity’s highest standard of living in about 1200, and although Europe was rising in that period, it was still seen as backward compared to the refined cultures of the Eastern Roman Empire (which never lost the ancient Greek teachings) and Islamic lands.  But late Medieval Warm Period droughts may have unleashed a scourge that would be unsurpassed in ferocious destruction until the Nazis in the 20th century: the Mongol invasions initiated by Genghis Khan.  Islam never fully recovered from the Mongol invasions.  Persia’s population declined by about 90%, and Baghdad was Islam’s leading city before its virtually complete destruction and the wholesale slaughter of its residents.  Places such as China, Russia, and Hungary lost up to half of their populations.  A recent study suggested that the tens of millions of deaths at the Mongols' hands may have initiated reforestation that absorbed enough carbon dioxide from the atmosphere to help end the Medieval Warm Period.[749]  The estimated impact was only about 1 PPM, however, and the coming Little Ice Age has several proposed causes, including the Western Hemisphere’s depopulation and reforestation due to the Spanish invasions of the 1500s.

By 1300, Earth was cooling, High Middle Ages Europe was largely deforested, and nearly all arable land was under the plow.  Europe had reached the Malthusian limit of its preindustrial means of production.  The 1300s were a century of unending calamity for Europe, beginning with famines in 1304, 1305, and 1310, and a major famine began in 1315 that lasted three years.  Famines visited Europe at least once a generation in the 1300s.  In 1337, England and France began a series of wars that lasted more than a century.  Those events were only a hint of what lay ahead.  Plagues and famines tend to be conjoined: weakened bodies are susceptible to disease.  The Black Death pandemic probably originated in war-torn and famine-plagued China as early as 1338.  In 1346, it reached Europe.  By 1350, around half of Europe had died, and the plague kept reappearing.  War, famine, and epidemics were so prevalent in the 1300s that the Danse Macabre became an art form in the 1400s and 1500s, after the troubadour profession died out with the Black Death.  Europe became a hell on Earth.  But the work that watermills performed was not subject to famine and disease, and the work of millions of “energy slaves” surely helped hold Europe together.  Labor was in such short supply after the catastrophes that wages rose dramatically.[750]

In the late 1300s, in northern Italy’s city-states, the ferment initiated by the rediscovery of ancient Greek teachings flowered in the Renaissance as humanism began its rise in Europe.  Constantinople, which helped preserve ancient Greek teachings instead of destroying them, never fully recovered from the sacking that its “allies” gave it during the Fourth Crusade, which led to Venice’s lucrative dominance of Europe’s spice trade.  In 1453, Constantinople fell to Ottoman Turks, ending the Roman Empire’s last vestige (other than the Roman Catholic Church), and humanist scholars fled to Europe, which further reinforced Renaissance humanism.

When the Turks conquered Constantinople, Venice lost its spice monopoly, and perhaps the seminal event of Europe’s rise happened: attempts to find another route to obtain spices.  Spices often consist of defensive chemicals that plants produce to protect themselves from animals, and many have antibacterial properties.  Those properties were important for preserving food, particularly animal products (mainly meat), in warm climates before the advent of refrigeration, and they remain important in warm-climate nations even today.  Spices essentially preserved food energy so that humans could consume it rather than microbes.

The Iberian Peninsula had been the site of wars for several centuries by the Fall of Constantinople, and the Christian/Islamic animosity there was pronounced; enslaving captured opponents was standard practice.  Portugal began the maritime innovations that would see it seize the spice trade from its Islamic rivals.  Henry the Navigator is closely associated with the rise of Portuguese maritime knowledge and practice.  How responsible Henry was for Portugal’s maritime prowess has long been debated, but what is not debatable is that Portugal began developing the necessary knowledge and skills for accomplishing an unprecedented feat: sailing the world’s oceans.  Until that time, only the Indian Ocean was regularly traveled, because of its relatively gentle and predictable nature.[751]  Not until Europe’s rise were the Pacific, Atlantic, Arctic, and Antarctic oceans regularly traveled.  Genoese sailors had unsuccessfully sought India via the Atlantic since the 1200s, and even settled some Atlantic islands, but Portugal became humanity’s first successful practitioner of transoceanic navigation.  Many technical issues were resolved, and Portuguese sailors with Henry’s patronage sailed down the Atlantic coast of Africa and across the Atlantic.  The Portuguese began colonizing the Madeira Islands in 1420 and the Azores in 1433, and in 1434, Portugal became the first European power to sail south of Cape Bojador on the African coast.

Serfdom had largely replaced slavery in Europe by about 1000, but it was still a form of forced servitude.  By 1434, the first captured Africans to be used as slaves were delivered to Lisbon.  The sitting pope officially approved of enslaving non-Christians in 1452, and one of humanity’s greatest disasters began.  Portugal dominated the transatlantic slave trade for more than three centuries.  Portugal’s other commercial obsession, before it seized the spice trade beginning in 1500, was gold.  African gold began pouring into Lisbon when the slaves did, and the Portuguese began minting gold coins in 1452.  The pursuit of slaves and gold characterized Portuguese and Spanish efforts in the Western Hemisphere during the 16th century, which caused history’s greatest demographic catastrophe: most of a hemisphere’s population died off within a century.  Life was also cheap in the imperial nations.  During the centuries that Portugal used its spice route, the average mortality rate for a voyage’s crew was about 25%.  Scurvy was the primary cause, and Europe ignored the cures for centuries.

When Pangaea formed and the Permian extinction wiped out nearly everything, long eons of evolution on separate continents came to an end as one supercontinent formed, and Lystrosaurus became Earth’s dominant land animal for a brief time.  The Great American Biotic Interchange was another example of merging continents spelling the extinction of less adaptable species.  Some have argued that the biological effect of Europe’s conquest of the world was like continents merging, but it happened about 250 million years before the next supercontinent is expected to form.[752]

Europe’s rise was made possible when it turned the global ocean into a low-energy transportation lane.  Portugal took the early lead, Spain was close behind, and within a century they were both caught and surpassed by English, Dutch, and French efforts.  The oceanic sailing ship was by far the greatest energy-capturing technology in world history until the steam engine appeared.  Europe’s watermills achieved an average of three horsepower per mill by the 17th century’s end.[753]  When Columbus stumbled into the New World in 1492, the day’s 100-ton sailing ships generated between 500 and 700 horsepower when traveling at 10 knots, which was more than 50 times the power that the muscles of the 80-man crew could generate.[754]  By the 1800s, the most efficient sailing ships generated more than 200 times the human power needed to operate them.  Using bodies of water as low-energy transportation lanes was one of civilization’s most important inventions, from Sumer to Rome to Tenochtitlán to Europe’s global dominance.
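
That 50-fold figure is easy to check with a back-of-the-envelope calculation.  The sketch below assumes a sustained human work output of about 0.1 horsepower per sailor, a common rule of thumb that is my assumption, not a figure from the cited sources:

    # A rough check of the sailing ship's power multiple.  The 0.1 hp
    # (about 75 watts) of sustained output per man is an assumed rule
    # of thumb, not a figure from this essay's sources.
    crew_size = 80
    hp_per_man = 0.1                       # assumption
    crew_hp = crew_size * hp_per_man       # about 8 hp of muscle power
    for ship_hp in (500, 700):             # the essay's range at 10 knots
        print(ship_hp, "hp is about", round(ship_hp / crew_hp), "times the crew's muscles")
    # Prints multiples of roughly 62 and 88, comfortably "more than 50 times."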

Other traits that led to the dominance of Europeans were their violence and greed.  Europe’s 16th century in the New World was essentially a century-long gold rush.  Europe’s incessant wars and technological advances devoted to inventing ever-deadlier weaponry, as well as its group fighting tactics and insatiable greed, made it an irresistible force that swept over the world’s peoples.  Greed was transformed from a vice into a virtue by Europe’s economic ideologists.  That dynamic will be explored in the next chapter.

Rome was a huge parasite.  Its citizens did not understand that their methods were unsustainable, not to mention evil, and would lead to their civilization’s collapse.  The Spaniards, whose obsession with gold was responsible for exterminating a hemisphere’s peoples, suffered from a similar blindness.  Although Spanish scholars warned that importing mountains of gold and silver would do little for Spain economically other than create inflation, the Spanish sovereigns did not heed the advice.  The first Crown bankruptcy, which marked the effective end of Spain’s imperial aspirations, came in 1557, a mere generation after the initial Incan plunder began arriving in Spain.  Crown bankruptcies continued, and Spain in 1600 was arguably worse off than in 1500.  Spain and Portugal became the first imperial also-rans during Europe’s rise.  Portugal’s violent seizure of the spice trade did acquire some real if ephemeral wealth during its century of dominance.  Portugal also imperially overreached, but closer to home.  When its ruling class was decimated by an ill-advised invasion of Northern Africa, Spain annexed Portugal.  With their imperial fortunes thus conjoined, they declined together.

The English and Dutch dominated the high seas during the 1600s.  The Netherlands declined in the late 1600s, and France replaced it as England’s rival in the 1700s.  The French lost their wars against the British and exacted vengeance by helping Great Britain’s most successful colonies become independent via the American Revolution.  After the humiliating War of 1812, the Americans settled into a friendlier rivalry with the British in the late 1800s and took the imperial crown in the early 20th century as the USA became history’s richest and most powerful nation.  When the imperial latecomers arrived (primarily Germany and Japan), other imperial nations had already laid claim to nearly the entire planet.  Earth’s industrialized nations then fought two devastating wars that determined global plunder rights, and the USA emerged with unprecedented dominance.  The USA was really an empire by the early 20th century, but its social managers always promoted the fiction that America was not playing Europe’s imperial games, even though those games were obvious to everybody on Earth except perhaps the empire’s equivalent of plebeians and naïve patricians who actually believed the propaganda.

While European powers plundered the planet, something happened in one of them that led to its dominance and eventually transformed the world with the Fourth Epochal Event: harnessing the energy of hydrocarbon fuels.  It began by mining coal laid down in the Carboniferous Period, and after a couple of centuries of rising industrialization, oil deposits were mined.  Oil has been the primary focus of geopolitical conflict ever since the British Navy adopted oil as its primary fuel in 1911, on Winston Churchill’s initiative.  The imperial powers have not allowed Middle Eastern peoples their de facto independence ever since.

The rest of this chapter will survey the path that led to England’s initiation of the Industrial Revolution, and the next few chapters will tend to focus on England and its succeeding states, called Great Britain (1707 to 1800) and the United Kingdom (“UK” - 1800 to present, after adding Ireland to its polity), and its rebellious colonies in North America, today called the USA.  They may well seem an unnecessary focus to many global readers, but I do it for a few reasons.  One is that England was the first nation to industrialize and helped set the pattern for other industrialization events.  England’s industrialization, with its attendant capitalism, was the only pristine one.  Another is that England became Earth’s greatest imperial power since Rome.  It had a truly global reach and altered the societal development of most of the world’s peoples, sometimes profoundly.  Another is that England’s descendant, the colonies that became the original USA, is the world’s leading power as of 2014.  As a presidential advisor bragged soon after the unprovoked 2003 invasion of Iraq, the USA is an empire, and arguably was one long before it obliterated temperate North America’s natives.  The USA’s first president set the blueprint for stealing a continent, and it wrested lands from everybody in the way.  The theft of most of Mexico, in two steps (1, 2), added to the USA’s plunder in the first half of the 19th century, and its land grabs and imperial behavior only increased afterward.  A century after its early larcenous acts, it emerged from history’s greatest imperial war with unchallenged global hegemony.  But the primary reason why I focus on those nations/empires is that they were history’s greatest energy users, and the USA has used more energy than any other nation (it was passed by China in 2010, but uses four times as much per capita).  Far more than any other dynamic, humanity’s energy practices will determine its future.  Although Americans are not my target audience and I doubt that the energy breakthroughs for initiating humanity’s Fifth Epochal Event will originate from within the USA, America has been leading global energy trends for more than a century, and most attempts to initiate the Fifth Epochal Event have originated within the USA.  Also, I am an American and know my nation better than any other, so it is the nation that I am best qualified to write about.  One day, perhaps soon, the USA will no longer be the focus of so much global attention, and if humanity experiences its Fifth Epochal Event instead of meeting its demise in the Sixth Mass Extinction, I expect that nations will become obsolete political entities and take their place among other relics of the human journey.

The developments that led to England’s use of coal in industry arguably began when the first sailboats plied Mesopotamian rivers, as it was the first time that non-muscle power was significantly harnessed.  When Hellenic innovators developed the watermill, the windmill, and the first steam engine, they laid the path to the Industrial Revolution.  The rise of waterpower and wind power in medieval Europe, first with watermills and windmills and then with oceangoing sailing ships, already had Europe riding an obvious energy wave, even if thermodynamics and the other energy sciences had not yet been invented.

The Domesday Survey, published in 1086, recorded that 85% of the English countryside was deforested, as was 90% of England’s arable land, and the remaining forests were largely reserved for royal and noble hunting.  But studies of lake and river sediments show that most of England’s deforestation had been accomplished before Rome invaded two millennia ago.[755]  By 900, the brown bear was nearly extinct in the British Isles, and the wolf was not far behind.  The Romans had mined English coal, and China also mined some coal, but deforested England became the world’s first nation to rely on coal.  As the High Middle Ages were ending in the 1200s, deforested and cooling England began turning to coal.  Most of Earth’s coal came from a brief geological period before any organism learned to digest lignin, and geological processes made those trees into today’s coal deposits.  The level of geological “processing” determines the grade of coal, and the typical progression is from peat to lignite to bituminous coal to anthracite, which is rock-like and the cleanest-burning.  Pennsylvania’s anthracite deposits were the most desirable coal in the USA, and Wales also has anthracite deposits.  But England generally burned bituminous coal, and pollution issues were obvious from the beginning.  In 1257, Queen Eleanor visited Nottingham, and the smoke from the coal used in local industry drove her away; she could not stand the smell and feared for her health.[756]  In 1285, a commission led by Eleanor’s son Edward I was established in London to address the coal smoke problem.  In 1306, coal was banned in London, to little practical effect.  Coal smoke was so noxious that coal was not yet used in homes.  England’s early coal pollution originated in fuel-hungry operations such as blacksmithing and brewing.[757]  As with the “green effect” of the Mongol hordes on much of Asia, the Black Death gave England’s forests a brief reprieve when half of England’s population died.  England’s population did not begin to grow again until the 1500s, by which time England was in the grip of the Little Ice Age, which lasted until the 20th century.

The Catholic Church owned England’s coal mines until Henry VIII expelled the Church from England, partly because it would not grant him an annulment; he appropriated its English assets, including its mines.  During Elizabeth I’s reign, England began its ascent to industrialization, and its woods were once again decimated.  Elizabeth established commissions to investigate the dire state of England’s woods, and the results were unanimous: the woods were largely gone.[758]  Until Elizabeth I’s reign, England was relatively backward, and the Netherlands was far ahead in economic development.  The geographic isolation of the British Isles made them culturally quaint compared to their continental neighbors, which can still be seen today in the British reverence for their royalty.[759]  Japan, on the opposite side of the Eurasian landmass, is the other isolated, industrialized island nation, with a similar religious fervor toward its royalty.  The Netherlands was Europe’s most urbanized place; although it was resource-poor, it began intensive agricultural efforts to reduce its dependence on imported food, grain in particular.  The land-poor Dutch even began to claim land from the North Sea, in history’s greatest effort of oceanic land reclamation.  During Henry VIII’s reign, England had a primitive economy that provided raw materials to the Low Countries, which dyed English cloth and sent it back, while southern England exported wood to deforested France.[760]

England imported its munitions from the Low Countries, and when the Continental wars began that would culminate in the devastating Thirty Years’ War, Henry noted England’s vulnerability and began developing its arms industry.  England’s modern iron industry began in 1543.[761]  When Rome invaded, it established iron operations in what became Sussex, which deforested the area within a century.  In the same place, more than a millennium later, Henry revived England’s iron industry.  Sussex was quickly deforested again, and hearings were held only five years later, in 1548, regarding the deforestation and the ruination of the commoners by the new iron industry, as the price of wood skyrocketed.  Although the commission was concerned, the Crown did nothing about the situation, as an important industry could not be thwarted.  Sussex’s residents took matters into their own hands and attacked a local forge, which coincided with rebellions in other counties; the lords and Crown brutally suppressed them.[762]

While Spain and Portugal were busy plundering humanity, England was still getting its domestic house in order and began emulating Dutch practices.  During the last half of the 1500s, England’s “contribution” to the world’s rape was largely limited to harrying the Spanish.  England’s richest private citizen was the pirate Francis Drake, whose claim to fame was stealing Spanish silver in surprise raids on Spain’s Pacific ports and circumnavigating Earth as the only way to return home with the loot.  While Drake was sailing around Earth with his booty, Martin Frobisher hauled thousands of tons of fool’s gold back to England from a bay named after him.  England’s first colony in the Western Hemisphere disappeared without a trace.  Such were the follies of England’s early imperial efforts.  Before England became an imperial aspirant, it conquered its neighbors.  Roman Emperor Hadrian built a wall to keep out the “barbarians” of what became Scotland, and a second wall was built farther north a generation later.  England first invaded Scotland in 1296, and the Scots were subjected to incessant warfare thereafter.  The Scots fought alongside France in the Hundred Years’ War, and my family name reflects that heritage; I have a surname with French roots and spelling, but my direct ancestor came from Scotland.  A period of Scottish peace with England began in 1560, and Scotland formally united with England in 1707 to become Great Britain, although risings against England continued until 1745.  As England ran out of wood, it invaded Ireland, and the conquest was not completed until 1603.  An English businessman first suggested moving wood-hungry English glassworks to Ireland in 1589; after the conquest was complete in 1603, the rapid decimation of Ireland’s remaining forests commenced.[763]  Ireland has yet to recover its forests.  The English established a colony in Ireland at Ulster and used Borderer Scots and other lower-class subjects to populate the colony as a kind of cannon fodder, promised land for “settling” where the fiercest resistance to the English invasion had been.  That colony formed the toehold that became Northern Ireland, and the post-colonial strife with Ireland lasted into this century.

England began invading North America with the fort at Jamestown in 1607.  Wayward religious fanatics got lost on the way to the mouth of the Hudson River in 1620, stumbled into today’s Massachusetts, and became the “pilgrims” of American lore.  The witch-hunting craze followed them, and “witches” were executed in trumped-up trials until 1693.  North America was “settled” in a fashion similar to Ireland’s invasion, in that the English gentry got the best land in the valleys while the Scots-Irish “settlers” populated the hills as a buffer people.  If they could violently wrest land from the rapidly dispossessed Indians, they were welcome to it, until they lost it to arriving gentry once the frontier was settled.[764]  That is where America’s “hillbillies” came from, and the borderer culture of the British Isles, with its constant warring, gave birth to the USA’s preferred infantryman.  That is part of my family’s heritage and that of the USA’s white underclass.  Often-pejorative terms such as “redneck,” “cracker,” and “Hoosier” originated in the British Isles to describe residents of the borders and highlands.[765]  The word “lynching” came from the vigilante “justice” that those border and backwoods peoples engaged in.  They largely settled the western USA as they sought free land and gold, and they performed some of the greatest atrocities against Indians in the final days of the Western Hemisphere’s conquest.  The genocide of inland tribes in California was inflicted by poor rural whites with dreams of easy gold.  Even though it is part of my heritage, I bore the brunt of Appalachian xenophobia when locals tried to get me fired from a temporary job that I had at a bank in southern Ohio (by lying to my supervisor about my actions) before I secured permanent employment at a trucking company.  Most of our drivers were from Appalachia; I understood their miserable existences and longed to remedy them.

By the early 1600s, coal was England’s primary fuel, and “coal towns” formed where the workforces for new mines lived.  Mining towns were ramshackle affairs populated by migrant workers, and the English class system became pronounced due to the gulf between coal miners and the rest of English society.  That ghetto-like existence was new in the British Isles.  In Scotland, coal miners were actually slaves, even wearing collars that identified their owners.[766]  Coal mining was hellish work, particularly in underground mines, which were dominant on the isle of Great Britain.  Miners were killed in mine gas (methane) explosions, were asphyxiated by mine gases (carbon dioxide and carbon monoxide, which is why canaries were used in coal mines), died in cave-ins, and suffered myriad other horrific fates.  As mines dug below the water table, drowning became a common way to die, and solving the water problem became a key event in the Industrial Revolution, arguably the key event, as the next chapter will explore.  Coal miners eventually organized to get better working conditions, and they were prominent in the USA’s labor movement.[767]

From civilization’s earliest days, the sailing ship was humanity’s greatest energy technology.  Today, the term “prime mover” refers to an engine’s component that transforms one kind of energy into another, usually by converting heat energy into mechanical energy (and since heat is molecular motion, it is the energy of motion in both instances).  Capturing environmental energy and turning it into mechanical energy is also accomplished via a prime mover.  In that regard, the water wheel and crankshaft are a watermill’s prime mover, and a windmill’s sail and crankshaft are its prime mover.  The prime mover is the machinery’s most important component and its heart, where the most advanced technology and materials are brought to bear, as that part endures the greatest stresses.  In an automobile, for instance, the prime mover is the combination of combustion cylinders and their attached crankshaft.  Chemical energy in gasoline is thereby transformed into mechanical energy via the controlled explosions of rapid fuel combustion, which liberates solar energy captured so long ago.  In a sailing ship, the prime mover is the sail and mast, where wind energy is transferred to the ship.  The mast is a sailing ship’s most important component and is like an engine’s crankshaft.

The two primary uses of wood in civilizations have always been fuel and building structures.  Just as 90% of Rome’s wood was used for fuel, burning wood has always been its greatest use on Earth, even to the present day.  Firewood does not need to be long and straight, and coppice and “waste” wood have long been used for firewood and in pulp mills.  Other stands of trees were allowed to grow for a century and more to provide long, straight wood for building structures.  For seafaring nations, that always meant ships; securing wood for shipbuilding was a major goal in the earliest seafaring civilizations and became an obsession during the rise of Mediterranean civilizations.  The war between Athens and Sparta largely centered on wood to build navies.

As Europe learned to sail the high seas, ships became larger, and so did their masts.  The naval ship was humanity’s highest-performance equipment well into the industrial age, and technological innovations were adopted first in Europe’s navies whenever they could be, as naval ships were the key equipment in vying for imperial dominance.  Military ships were the largest ones on the high seas, and their masts needed to be the largest.  A military ship’s mainmast was the greatest energy-generating technology on Earth, and research showed that single-tree masts were superior for military ships, partly because they held together better when hit by a cannonball and weathered storms better.[768]  Although the English began deforesting Ireland as soon as they could, mast wood was largely supplied by Scandinavian and Baltic polities (Norway, Denmark, Sweden, etc.).  By the late 1600s, after centuries of providing most of Europe’s mast wood, the Baltic nations not only refused to sell England trees greater than about 0.56 meters (22 inches) in diameter, they no longer had trees greater than about 0.7 meters (28 inches).  By 1850, Sweden was deforested and starving, and a great wave of migration from Sweden to the USA began.[769]  That environmental catastrophe is also part of my heritage, as a Swedish-American ancestor married into my mother’s Norwegian family, which migrated to the USA in the late 19th century.  The Pacific Northwest had a fishing industry and environs that reminded my ancestors of their homeland.  Europe could not provide mast wood large enough to meet England’s needs in its imperial arms races.  While the Dutch and English were both fighting Spain, they were friendly, but by the middle of the 17th century they had become bitter rivals, and their first war began in 1652.  The day’s naval ships carried up to 100 guns.  England’s “ships of the line” needed to be increasingly large to defeat its rivals, and ever-larger masts were critical to their success.  The British Navy began adopting steam power before the mid-19th century, and by 1900, masts for merchant ships reached 60 meters tall.

In 1602, the first Englishmen visited what became New England, and the expedition’s primary discovery was that the gigantic trees there, particularly the tall, straight white pine, could provide England with an independent source of mast wood.  By 1634, mast wood was being shipped to England from New England, and within a generation, several hundred masts per year were shipped.  The Netherlands tried to deny England access to Baltic mast wood in 1658, between their first two wars, and seized some of New England’s first mast wood shipments.  Eventually, as with the Spanish silver fleet, which was an armada designed to fend off piracy of Spain’s New World plunder, England developed its mast fleet, whose arrival was anticipated with nearly as much anxiety as Rome’s wheat fleet from Africa had been.  By 1700, all English “ships of the line” were masted with New England’s timber.[770]  The Dutch won their wars with the English in the 1600s but lost to France, and by the late 1600s they were on their way to becoming another imperial also-ran.  A seemingly minor outcome of their wars against England was that the Dutch lost their North American colonies, but it can be seen as an early step in the development of the polity that became the USA.  After defeating the Dutch, France became the premier Continental power and England’s primary imperial rival.  The late 1600s and early 1700s also marked the heart of the Little Ice Age, as sunspot activity fell to a nadir called the Maunder Minimum.

No historian has argued that England had a grand plan of industrialization, but the Epochal Event was the culmination of several trends.  Although the science of energy had yet to be invented, the obvious advantages of watermills, windmills, and sailing ships were not lost on people, and the control of arable land, forests, low-energy transportation lanes, workforces, and markets was always the road to riches from Sumer onward.  People knew what they were doing, even if they had little or no long-term perspective.

A key trend in England’s industrialization was removing peasants from the land so that they could no longer feed themselves.  Those dispossessed peasants became the Industrial Revolution’s workforce, and the dispossession began in England with the forest laws enacted by William the Conqueror; deer were reserved for hunting by the elite, not commoners.  Sherwood Forest was one of many royal forests where “criminals” such as Robin Hood hunted the King’s deer.  Modern English Game Laws began in 1671, and in 1723 the infamous Black Act was passed, which made “poaching” a capital crime.[771]  Europe’s feudal era was anything but halcyon, but slaves became serfs, and as bad as serfdom was, serfs still had some rights, and provisioning themselves from the “commons” in the open field system was a universal right in feudal Europe.  As England began its rise to dominance, English landowners began removing peasants from the land via enclosure, beginning in the 1200s, usually to establish “deer parks” as elite hunting grounds.  In the late 1400s, enclosure was stepped up with Enclosure laws.  The first anti-enclosure rebellion began in 1549, and revolts continued into the 1600s.  But the landowners won and became England’s first capitalists, as they raised food for sale after the “primitive accumulation” gained by dispossessing the peasantry.  With the lands cleared of peasants, the mechanization of farming began in earnest, and England’s agricultural revolution was underway.

Agricultural output increased, England’s population rose, and those dispossessed peasants toiled in English mines and mills.  A common misconception regarding the Industrial Revolution is that it was an urban phenomenon, but it really began in the countryside, where the energy was.[772]  England’s watermills, necessarily located along rural rivers and streams, powered the cotton-spinning machines tended by dispossessed peasants, which turned England into the world’s workshop well before 1800.  England had nearly a century’s lead on its rivals, and was eventually supplanted atop the global imperial hierarchy by its descendant and rival, the USA.  Like the parasitic Rome, London played little role in early industrialization.  The cotton-spinning machine was the iconic technology of the early Industrial Revolution, but two events in the early 1700s had greater ultimate importance: using coal to smelt iron in 1709, and creating the first commercial steam engine in 1710.  The stage was thus set for machines that could be built and powered by hydrocarbon energy, which is still the foundation of today’s global industrial economy, more than three centuries later.  With those events, the Industrial Revolution began.

 

Humanity’s Fourth Epochal Event: The Industrial Revolution

Chapter summary:

  • Relationship of England's deforestation and coal use

  • Use of coal for smelting metal, and the explosion in English iron production

  • First commercial steam engine

  • Life of English peasants in 1500, and how unimaginable 2014 London would be to them

  • How cognitive and social changes are predicated on the economic situation

  • Europe's fitful rise of science

  • Elite control of society, and the rise of the classical economist

  • Economic uselessness of gold rushes

  • Rise of mercantilism

  • Classical economists, their notions of wealth, and their service to the elite

  • The British "free trade" invasion of China in its Opium Wars

  • Karl Marx names capitalism and provides the first honest explanation of capital accumulation

  • Retail politics and political systems competition

  • Voltaire and the Enlightenment

  • Isaac Newton and the slave trade

  • Contrast of industrial and preindustrial economies

  • Energetic basis of World Wars

  • Slavery ends with industrialization

  • Rise of the corporation

  • Benefits of England's invasion of North America

  • English invention of spinning machines

  • Victory of the capitalist press over the working-class press

  • Hellish conditions of early industrial England

  • Genocidal English invasions of Virginia and New England

  • Rise of racism with Europe's conquest of the world

  • English/American deforestation and resultant extinctions of North America

  • English deportation of "criminals," and invasion of Australia and Tasmania

  • Great Britain's global imperial conquests and subjugations

  • British rule leading to famines

  • Overcoming malaria and scurvy in global conquest

  • Fake imperial "philanthropy"

  • Fake robber baron "philanthropy" and the rise of public relations

  • World's poor ship food to world's rich

  • Insider revelations of the real game being played by the West

  • Purpose of imperialism

  • Imperial indoctrination, public relations, and "education" making FE unimaginable

  • Damage that the UK and USA have inflicted on humanity and the world

  • Rise of American whaling

  • The USA's quick industrial rise

  • Coal power overtakes water power in the USA

  • Deforestation and environmental devastation of New England and its conversion to coal

  • American competition between canals and railroads

  • Richness of the North American continent, and how the USA quickly became the leading industrial nation

  • Dramatic changes in making a living in the industrialized world

  • False self-image of American pioneers

  • The USA's anachronistic embracement of slavery

  • Rise of science and its relationship to imperialist and capitalist interests

  • America's Founding Fathers and the fairy tales of American nationalism

  • Mixed results of investigating life processes

  • Mixed results of using fossil fuels

  • Role of carbon dioxide in the current ice age and the current global warming

  • World's first industrial wars

  • Real economy has always run on energy

The previous chapter surveyed some English trends that led to industrialization, and one controversial subject is whether England turned to coal because of deforestation.[773]  The mainstream view is that they were directly related, and I agree.  Nobody used coal if they could avoid it.  The first ironworks in England almost immediately caused protest and rebellion because they led to rapid deforestation and rising wood prices.  Metal smelting is very energy-intensive, as Cyprus and many other places discovered the hard way, but coal could not be used directly for metal smelting because of its impurities, primarily sulfur.  Those impurities also produced the noxious stench that made coal so infamous, and produced acid rain, among other effects.  London in the mid-1600s had Earth’s worst air quality, by far.  In 1661, in one of the earliest works on air pollution, John Evelyn wrote that Londoners had more lung disease than the rest of humanity combined.  London Fog was coal smoke, and London was legendary for its coal pollution until the mid-20th century; 4,000 people died in a few days during a pollution event in 1952.  Many years ago, when I first viewed casual photographs of residents of early 20th century European cities, I was struck by how everybody was covered in soot.

In 1600, England produced about 18,000 tons of pig iron; a century later, it produced only a little more, while it imported nearly 10,000 tons, mainly from Sweden, which still had plentiful forests if not much mast wood.  Swedish iron was price-competitive with English iron, even with a stiff tariff imposed on it.[774]  English ironworks competed for wood with breweries and cider and cheese producers, as well as textile manufacturers and related businesses.  Canal builders and wagonway builders (both built low-energy transportation lanes, and wagonways were railroad predecessors) also competed for wood in a rapidly industrializing England.

Coke is coal with its impurities, mainly sulfur, “baked” out, and it burns like charcoal.  Coal was used in the British Isles as early as 3000 BCE, and the Chinese used coal to smelt metal by about 1000 BCE.  Coke was made in China in ancient times, but that practice did not migrate to Europe.  In 1589, a patent was granted in England for using coal to smelt iron, and there is other evidence of coke’s use in 1600s England, but by brewers.  In the 1600s, coal became a nearly universal industrial fuel in England, while wood was still used in homes.  In 1709, Abraham Darby built the first commercially successful coke-fueled blast furnace.  Until that time, not only was wood expensive, but charcoal was so fragile that it could not be shipped far.  Coalbrookdale, where Darby’s furnace resided, had England’s greatest density of ironworks.  Darby combined his knowledge of using coke in brewing, the low-sulfur coal in Coalbrookdale, and his newcomer status, which limited his access to exorbitantly priced charcoal, and gave coke a try.  As usual, necessity was the mother of invention.  Others had tried coke-fueled smelting before, but nobody had lasted long.  Darby’s furnace, however, became so successful that he could sell his iron much more cheaply than his competitors could.  For the first time ever, cast iron became a household consumer good, used for items such as kettles, stoves, and pots.  In the 1740s, Darby’s son helped invent a method of using coal to further refine pig iron into wrought iron, and his grandson built the world’s first iron bridge in 1779, which still stands.

In 1750, only 5% of England’s pig iron was produced with coke, but by 1800, with new processes and the continually rising price of charcoal, British pig iron production was 150,000-200,000 metric tons annually, and almost all of it was coke-smelted.  That was ten times the annual production of the 18th century’s first half, and the steep ascent began in the 1770s.[775]  In the first decade of the 19th century, it doubled again.[776]  During the 18th century, British coal production increased five-fold, to more than 15 million metric tons, and it doubled again by 1830.[777]  Producing iron took about ten times its weight in fuel, and copper took about twenty times.[778]  One reason for iron’s relative “cheapness,” energy-wise, is that life processes had already partially refined the ore into oxides.  In 1900, the British produced five million tons of pig iron annually, the USA produced twice as much, and Germany produced more than six million tons.[779]  In 2011, the UK produced only seven million tons of pig iron, China produced nearly a hundred times as much, and global production was 1.1 billion tons, several thousand times what England, the early leader in industrialization, produced two centuries earlier.  In 2008, global coal production was estimated at 5.8 billion metric tons, nearly 400 times what the UK mined in 1800.
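
Those multiples can be checked from this paragraph’s own figures; the sketch below simply does the division, assuming midpoints where the text gives a range:

    # Checking the ratios above, using this essay's own figures and
    # assuming midpoints where the text gives a range.
    uk_coal_1800 = 15e6            # metric tons ("more than 15 million")
    world_coal_2008 = 5.8e9        # metric tons
    print(world_coal_2008 / uk_coal_1800)    # ~387, i.e., "nearly 400 times"

    uk_pig_iron_1800 = 175e3       # midpoint of 150,000-200,000 metric tons
    world_pig_iron_2011 = 1.1e9    # metric tons
    print(world_pig_iron_2011 / uk_pig_iron_1800)   # ~6,300, "several thousand times"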

A careful estimate as of 2013 determined that humanity has reduced Earth’s plant-based biomass by more than a third since the beginnings of agriculture.[780]  Humanity certainly could not have industrialized by using wood.  Arguments that deforestation was not why England adopted coal are shaky, and they are also irrelevant to the fact that England could not have industrialized with wood.  Iron operations regularly shut down during England’s early industrial history due to wood shortages.  The economics of coal were evident even to imperial Romans, but nobody would use coal if they could avoid it.  Some ironworking operations used wood until the late 19th century.  But using sunlight energy captured during a tree’s life could not compete for long with mining the ancient sunlight trapped in coal, which was collected over tens of millions of years, even if nobody initially knew how coal was formed.  Even today, the British Isles’ grassy hills provide austere evidence of the rampant deforestation that those lands have yet to recover from.  That the British Isles have any woods at all is a testament to using fossil fuels to power the Industrial Revolution.

The other critical innovation was the modern steam engine, which was intimately related to coal.  Burgeoning coal mines quickly exhausted the deposits above the water table and began digging deeply into the earth, and water in the mines became a great problem.  Not only did floods kill miners, but standing water made mines inoperable.  The Romans pumped water from their mines (water pumps may have been another Hellenic invention).[781]  So did British mining operations, and around 1710, Thomas Newcomen combined the ideas of a French inventor and an English inventor to make the first industrial steam engine, built to pump water from coal mines.  In a parallel with coal’s adoption for smelting, the coal-fired Newcomen engine was common in mining by 1725.  It was the first of its kind, primitive compared to later engines, and its spread was gradual.  James Watt was asked to fix a Newcomen engine in 1763, and he eventually invented an improved version with a separate condenser, which was first commercially installed in 1776.  The steam engine that powered the Industrial Revolution was thus born, although, as with coal, its spread was gradual, and wind and water power remained competitive with coal for nearly a century.  The hydrocarbon-fueled steam engine was the key to the Industrial Revolution, in which the energy of ancient sunlight was exploited to generate previously unimaginable power.  A steam locomotive of 1850 roaring through the English countryside would have been inconceivable to an English peasant of 1500.  From a half-million years to fifty thousand, to ten thousand, to less than five hundred, the time between Epochal Events kept shrinking as the level of energy use rose dramatically, and eventually nearly geometrically, with each event.
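
To make that shrinking timescale concrete, the sketch below takes the essay’s own round figures and computes how much shorter each interval was than the one before it:

    # The essay's round figures for the time between Epochal Events.
    intervals_years = [500_000, 50_000, 10_000, 500]
    for earlier, later in zip(intervals_years, intervals_years[1:]):
        print(f"{earlier:,} -> {later:,} years: {earlier / later:.0f}x shorter")
    # Prints shrink factors of 10x, 5x, and 20x: each interval is a
    # small fraction of the one before it.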

As with the previous Epochal Events (1, 2, 3), imagine an English peasant of 1500 being placed in the midst of London in 2014, or visiting today's average American home.  The only metal in an English home of 1500 might have been some tools and eating utensils.  Wood and wool were the primary materials in an English home.  Some metals of the modern world would be vaguely familiar, but plastic would be unrecognizable.  English peasants’ homes had thatched roofs, dirt floors, no plumbing, and people rarely bathed.  Glass windows only existed in rich homes, which were built like fortresses.  London was a walled city in 1500, and nobody went outside after dark if they valued their lives.  Sewers did not yet exist, and violence and capital punishment were common.  In England in 1500, only 1% of women were literate and only 10% of men.  About half of all people died before adulthood, and if they survived that long, they could expect to live into their early 60s if they were lucky.  Few made it to 70.  Only rich people were fat.  Strangers roaming the countryside could legally be enslaved.  Modern appliances and machines would all be incomprehensible, and all electronic devices would seem magical.  How much of a modern TV show would be understandable?  Cars, trains, airplanes, and rockets would be mind-boggling.  By 1500, news would have filtered into learned circles that Spain discovered some Atlantic islands, but nobody yet suspected that new continents had been discovered.  The telescope would not be invented for another century, Earth was seen as the center of the universe, and the concept of a galaxy did not really exist.  Imagine trying to explain the Apollo moon landings to that peasant, if the peasant did not regard it as some fairy tale (many people today regard it as a fairy tale).  Could an English peasant from 1500 dropped into 2014 London have ever adapted?

As with previous Epochal Events, the advances in mental achievement were as dramatic as the material changes.  However, except for the First Epochal Event, humans largely possessed the same cognitive equipment throughout.  If an infant girl from the founder group that left Africa could be placed in a home in an industrialized nation today, there is little reason to believe that she would not live a normal life.  The changes in mental achievement during the journeys of Homo sapiens have had little to do with biological changes; in fact, human brains have shrunk by about 10% in the past 30,000 years.  Humanity’s material and mental changes were thoroughly interrelated.  The human world became vastly more complex with the rise of industrialization, so much so that most people today have very little understanding of how their world actually works.  It usually takes systems thinkers with scientific training to begin to understand the modern world’s complexities.  For instance, about 95% of Americans are scientifically illiterate and have little idea where their energy comes from or how the myriad moving parts of their civilization operate and interact.  Americans are effective consumers and are history’s fattest people, and the rest of the industrialized world is close behind, but they have little idea where any of it comes from or how it was produced and delivered to them.

Several interacting trends created the phenomenon called the Industrial Revolution, but as with the previous Epochal Events, it all rode atop the energy practices.  Cognitive and social changes were predicated on the economic situation, which was always based on the level of energy consumption.  Without that foundation of increased energy generation, the rest could not have happened.  Since the beginnings of civilization, the level of energy surplus (the energy produced beyond what agriculture, including feeding its workforce, consumed) has always been the primary determinant of how a civilization could develop and whether it survived.
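
The idea of energy surplus can be made concrete with the EROI concept defined earlier in this essay; the sketch below is purely illustrative, and its EROI values are invented examples, not historical estimates:

    # Illustrative only: how EROI relates to energy surplus.  The EROI
    # values below are invented examples, not historical estimates.
    def surplus_fraction(eroi):
        # Fraction of gross energy left after paying the energy cost
        # of obtaining it: 1 - 1/EROI.
        return 1 - 1 / eroi

    for eroi in (1.5, 5.0, 20.0):
        print(f"EROI {eroi}: {surplus_fraction(eroi):.0%} of energy is surplus")
    # An EROI of 1.5 leaves only a third of the energy as surplus; an
    # EROI of 20 leaves 95%, the kind of margin that can support cities.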

When Greek teachings were reintroduced to Europe, Europe was already benefitting greatly from that banned culture’s technologies, and the rise of science in Europe began, but it was a fitful journey.  Powerful interests direct mainstream science’s development even today, and have made it largely irrelevant for solving humanity’s greatest problems.  Early on, the greatest enemy of Europe’s rise of science was the Catholic Church, which ironically was the same institution that initially translated those Greek works.  Although Greek teachings began the ferment that led to the Renaissance and humanism, the Inquisition formed not long after those teachings were introduced, to wipe out a side effect of the Crusades: the “heretical” Christian teachings that returning soldiers brought to Europe.  After annihilating the Cathars and concocting an ersatz version of their “product” with the mendicant orders, the Church maintained its religious monopoly for a few more centuries, until another strategy backfired: embracing the printing press, which Johannes Gutenberg invented in Germany around 1439.  Instead of expanding the Church’s influence by allowing literate subjects to study the Bible, the printing press helped ignite the Reformation, which led to Europe’s bloodiest period up to that time, with perhaps the exception of Rome’s wars.  Martin Luther’s seemingly innocuous declaration in 1517 led to a series of wars that engulfed Europe and climaxed with the Thirty Years’ War, which killed several million people.  Late in that series of conflicts, England began its religious wars, which ultimately ended its royalty’s absolute rule.  In northern Europe, the Church never recovered.

In 1543, two works widely considered modern science’s first were published.  One pertained to astronomy: Nicolaus Copernicus, a devout Catholic, independently revived the Greek teaching that Earth orbited the Sun.  The other was the first great work on anatomy, by Andreas Vesalius, which overturned more than a millennium of Galenic dogma.  In a preview of how the West’s practice of science would progress, the dogmatists that Vesalius offended were not Church officials but his peers, who attacked him so viciously that