Wednesday, December 5, 2007

Foreword

This is a book that, after several years in the making, was completed just before midnight on December 31, 1999, just before the start of the new millennium. The timing was not due to any deliberate planning, nor was it anticipated that the completion of the book would so remarkably coincide with such a date; it just so happened. It could not have been done earlier due to the author's workload as an industrial consultant.

The book reflects the observations and analysis of an individual who was trained as a physicist, with a formal scientific background. It is not intended to proclaim any dogma, nor does it intend to shatter the convictions other people may have formed for themselves about the origins of the universe. The book tries to be as factual as possible. It is the belief of this author that certain issues raised by science itself have long been overlooked. Whether done deliberately or unintentionally, overlooking these issues is tantamount to sweeping the dust of the living room under the rug; we can choose to ignore them, but they will certainly not go away. Not any time soon. The facts are there, waiting for us to take a close look at them. And after taking a close look at them, it becomes extremely hard to deceive oneself into believing that these issues have a simplistic resolution.

This book is being released over the Internet in the hope that sharing these observations and points of view with others will stimulate fresh points of view and, at the very least, some serious discussion of the issues. The publication of the book on the Internet entailed some adjustments, because current browsers, while capable of displaying text and images, cannot yet duplicate some of the features available to the printed book. All of the footnotes in the book had to be incorporated into the main body of text, since web-based publishing does not yet allow for footnotes; this was done by placing the footnotes between square brackets, setting the font to a smaller size, and using italics to distinguish these footnotes clearly. Thus, whenever the reader sees something like [this], he will know that it used to be a footnote in the book, and the text so cited should still be considered a footnote. The index that was placed at the end of the printed version of the book had to be discarded altogether, since each “page” of a printed book becomes a meaningless concept when all the text is displayed on a monitor; there is no "page" to which an index can send the reader. The lack of an index can be overcome, however, with the help of the “Search” or “Find in This Page...” option that most Web browsers include in their menus, so what one hand takes away the other hand gives back in a different fashion.

One thing that was added to the online edition of the book Initial Conditions is the citing of Wikipedia references. Wikipedia is fast becoming the dominant online encyclopedia around the world, displacing the expensive hard-copy encyclopedias, most of which can only be consulted by making a special trip to the local library.

As a species, we have travelled a long road in our understanding of ourselves. It certainly seems that we still have a long way to go. But whatever surprises the future may bring upon us, it is hoped that some of those surprises will come to us as new revelations, carved out by means of our own intellects in our never-ending voyage of discovery. For that is precisely what we have been doing ever since we were put on this planet: we are on a never-ending voyage of learning and discovery.

The book is being released and published on the Internet almost exactly seven years after it was completed back in December 1999. Again, this was not a deliberate act of planning, nor was there any original intention of making dates coincide. It just so happened. And again, it could not have been done earlier due to this author's heavy workload as an industrial consultant. Nevertheless, the author is still wondering whether there is something behind these rather funny coincidences.


Armando Martinez Téllez

Ciudad Juárez, Chihuahua, México

Prologue







INITIAL CONDITIONS

Armando Martínez




First Edition: December 31, 1999

Second Edition: December 31, 2005







(Book Cover)



Throughout the History of mankind, a never-ending quest as to the origins of Man and the Universe itself has consumed many of those who have preceded us, as well as many of us still around. The pivotal question, in the end, has always been the same: Is all we see and hear and experience the mere result of a natural phenomenon, which was just waiting to happen, or is there any evidence whatsoever that some actual design had to take place before the Universe itself came into being? Such a design, if traceable in some manner, would necessarily point to some type of cognitive force or advanced intelligence responsible for much, if not all, of what we see, hear and experience, as part of a plan laid out many eons ago. If such a design actually took place, do we have any hope of actually grasping some evidence or argument in favor of such a hypothesis, or are we doomed, in the midst of a materialistic society that has enthroned science as its ultimate source of truth, to remain forever blind about our true origins?

As it turns out, we may be able to address the above issue, provided we start with the right questions at the very beginning and hold steadfastly to what common sense and logic tell us. Although the use of reason in an attempt to explore something for which reason alone may not be enough might sound ridiculous, it is only through our powers of reasoning that we may have any chance of uncovering some important truths in our data or of arriving at conclusions which faith alone cannot provide us.

Throughout this book, some major issues will be reviewed, bringing into the big picture some of the most recent findings and observations put forth by the avant-garde thinkers and philosophers who at this very moment are trying to push the limits of human knowledge into what until now has been mostly uncharted territory. A lot of this knowledge has been hard to come by, and has consumed entire lives throughout the ages, especially in the latter part of the second millennium. Though there is always the possibility that our current state-of-the-art knowledge and findings could be rendered obsolete by new breakthroughs and discoveries yet to be made during the third millennium, there is some degree of confidence that such new knowledge will expand rather than demolish what we already know, and it is with this confidence that we should begin our voyage of discovery into what used to be uncharted territory.

This book was written for the layman. Every effort has been made to avoid hiding relevant material under mathematical theories and models that would require an advanced college degree in the sciences. If at some point the reader feels intimidated by what seems to be obscure terminology, the reader is encouraged to just glance quickly over such terminology and keep on moving forward. The references that have been cited throughout this book are for the most part also intended to be within the grasp of the general reader, and material which can otherwise be found in college textbooks has been set aside, so that the reader will not feel overwhelmed with needless technical data. If the effort to keep things simple has not been good enough throughout, please accept my most sincere apologies with the promise that those areas presenting some difficulty will be improved in future editions of this book.

Wherever possible, every effort has been made to quote some of the original sources verbatim (including their typos!) in order to convey the opinions expressed by each source as faithfully and as clearly as possible, even if we may disagree with some of their observations; we must respect the opinions of others just as we would like others to respect our own points of view.

Before we start, a caveat is in order: this book was not written with the intention of endorsing any particular type of religion or faith. However, as we move forward, certain conclusions will come into the panorama. It will be inevitable.

TABLE OF CONTENTS


Chapter I: An Elementary Physics Problem.

Chapter II: Probability, Large Numbers, and Something Else.

Chapter III: Saint Thomas Aquinas, Entropy, and the Big Bang.

Chapter IV: The Point of Origin.

Chapter V: For Us, Prediction Becomes Impossible.

Chapter VI: Is the Watchmaker Truly Blind?

Chapter VII: The Hidden Lineage of a Cellular Automaton.

Chapter VIII: A Miracle Waiting to Happen.

Chapter IX: The Anthropic Principle.

Chapter X: The Footprints of Intelligence.

Chapter XI: Order Zeta.

Chapter XII: Confronting Quantum Mechanics.

Chapter XIII: Playing the Creation Game, Part 1.

Chapter XIV: Playing the Creation Game, Part 2.

Chapter XV: Playing the Creation Game, Part 3.

Chapter XVI: A Heated Debate.

Chapter XVII: The Shakeup at the End of the Millennium.

Epilogue.

References and Bibliography.

I: An Elementary Physics Problem




We begin our voyage of discovery with an elementary physics problem (elementary, when studied as part of an introductory university course in physics) that might seem trivial at first sight. But please bear with me, for it will set the stage for something more profound than the reader might realize at this point in time.

Our problem was taken directly out of the book “Physics for Students of Science and Engineering”, coauthored by David Halliday and Robert Resnick. The statement of the problem, which appears in Chapter Four of the first of its two volumes, is simple enough:
Problem # 6: “Calculate the minimum speed with which a motorcycle rider must leave the 15º ramp at A in order to just clear the ditch.”
and is referenced to the following figure:

[Figure: the rider leaves a 15° ramp at point A; the ditch is 10 feet across, and the landing ledge lies 5 feet below the point of departure.]

In practice, if a would-be daredevil is considering an attempt to actually make such a jump, a problem such as this one must be solved exactly, for if the rider makes a small mistake in his calculation, there is a good chance that his speed will fall short of the minimum speed required to make the jump and he will fall down into the ditch.

Arriving at the right answer to this problem turns out to be anything but trivial. It can be shown that the solution to this problem is given by the following formula [v0 represents the initial velocity required at the point of departure from the ledge, θ represents the slope of the ledge from which the vehicle jumps, which is 15 degrees in this case, g represents the downward acceleration pull due to gravity, which is 32 feet per second per second, x represents the horizontal distance from the starting point to the end point, which is 10 feet, and y represents the height difference between the starting and end points of both ledges, which in this problem is 5 feet]:

v0 = ( x / cos θ ) • √[ g / ( 2 ( x • tan θ + y ) ) ]

Notice that the time required to make the jump from one end of the ditch to the other does not appear explicitly in the formula given above. It is irrelevant. We can, if we so desire, derive another formula that will give us the time of flight. But it is not required explicitly in order to determine the minimum speed required to make the jump successfully. And why is the passage of time completely irrelevant to the correct solution of this problem, except as a medium that allows the jump to take place? Because the shape of the trajectory is the critical issue at stake here. If we could slow down the passage of time and watch the jump in “slow motion”, the path traveled by the rider as he clears the ditch would be exactly the same as the path he would travel if we were to witness the jump through the lens of a high-speed camera with each frame accelerated. In the end both paths must be exactly the same. The path that the rider must travel to make the jump successfully is a curve commonly known as a parabola. And for a given angle of departure and the distances involved, it can be shown that there is one and only one parabola that connects the point of departure and the landing point. As long as the rider manages to travel along this parabola, the jump will be a successful jump. If we imagine our rider as a point object and visualize the parabola that describes his path, we may be able to see that for any other speed different from v0 the parabola traveled by the rider will also be different. Thus, the solution to this problem really requires us to determine, out of the many possible parabolas we can draw that pass through the given point of departure with the same slope of departure, which is the parabola that will also pass through the required landing point, and as we have already stated there is just one such parabola. This physics problem of motion in a plane is therefore at its very core just a problem in geometry. And being a problem in geometry, we can consider it to be essentially a problem in mathematics rather than a problem in physics. So where then does physics enter the panorama? It enters with the “natural, universal, physical gravitational constant” g of 32 feet per second per second, the downward acceleration pull due to gravity. This physical constant must be measured, and its value will be entirely dependent upon the system of units we are using to make the measurement (meters instead of centimeters, feet instead of meters, yards instead of feet, and so forth). Mathematics alone cannot predict this value and make it fit within one of our many systems of measurement. But we might take notice that since our systems of measurement are entirely arbitrary (after all, the actual downward pull of gravity remains the same regardless of how we measure it), it could be possible that the physical constant g being used in the statement of the problem could perhaps be derived mathematically from a combination of other truly universal mathematical constants (such as the number e, the foundation of natural logarithms, the square root of two, etc.) and referenced to some other universal invariable number yet to be discovered, in which case the problem would then be entirely a mathematical problem with no physics involved whatsoever, with no need to make any type of measurements.
Nevertheless, once the value of g has been fixed, the problem of the rider clearing the ditch becomes entirely a problem in geometry, making the passage of time an irrelevant issue except as the medium in which the geometry comes alive.

Substituting numerical values into the above equation, we arrive at the correct answer, which is v0 = 14.9 feet per second. [In the formulation and the solution of this problem, some simplifying assumptions have been made. First, the resistance presented by still air against the forward motion of the motorcycle -which from the perspective of the rider would be indistinguishable from a wind blowing directly against him- has been assumed to be negligible. In actual practice, for the distances considered above, this air drag will make a small difference, and would complicate the analytical solution of the problem requiring more sophisticated mathematical tools. Second, the jump is assumed to be carried out on a day with no winds in the forecast. On a very gusty day, even taking into account the maximum wind speeds that can be expected and adjusting the above formula accordingly, the rider would be wise to stay at home and leave the jump for another occasion.] If the rider is smart, and his motorcycle is not capable of attaining this speed under the best possible conditions, he will either procure another motorcycle capable of giving a higher speed than the one we have just calculated, or he will give up on his attempt to clear the ditch.
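
For readers who would like to verify the arithmetic with a computer, the few lines below evaluate the formula given above in the Python programming language (a minimal illustrative sketch; the function name and layout are not part of the original problem):

    import math

    def minimum_jump_speed(x, y, theta_deg, g=32.0):
        """Minimum takeoff speed (ft/s) needed to land x feet away and y feet
        below the point of departure, leaving a ramp inclined at theta_deg
        degrees above the horizontal, under a downward acceleration g."""
        theta = math.radians(theta_deg)
        return (x / math.cos(theta)) * math.sqrt(g / (2.0 * (x * math.tan(theta) + y)))

    # The ditch of our problem: a 15 degree ramp, 10 feet across, landing ledge 5 feet lower.
    print(round(minimum_jump_speed(x=10.0, y=5.0, theta_deg=15.0), 1))  # prints 14.9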

There is another central issue in the solution of this problem that we have not yet discussed. There can be no counterargument whatsoever that the successful and exact solution to a problem such as this one requires some form of intelligence. The solution to a problem like this one cannot possibly be the result of mere chance alone, and if this problem is to be confronted by the reader during a final examination at a university, any mistake in the derivation of the formula or any math miscalculation on his/her part will be punished with a lower grade (if the mistake is carried into actual practice, the rider may not be able to reach the other side, and will most likely fall helplessly into the deep ravine, perhaps killing himself).

As stated before, the exact solution to this problem requires some form of intelligence. Most readers can surmise by now that this problem cannot be solved on paper by most elementary school students (unless the student happens to be a “whiz kid”) or perhaps even by many high school students. Indeed, in order to solve this problem successfully, the person who solves it requires some mastery of the following tools:
  1. Algebra
  2. Basic kinematics
  3. Trigonometry
  4. Arithmetic (perhaps a hand-held calculator)
The above knowledge is something that does not come easily to most people during their lifetimes. Acquiring this knowledge necessarily implies that the person has enough information processing capabilities in his/her mind to fully understand the nature of the problem and to develop the tools that are required in order to solve it. Nobody in his right mind would accept the argument that a problem such as this one can be solved by mere chance alone on a consistent and repetitive basis. Arriving at the final equation required to solve the problem is something that just cannot be accomplished by mere chance alone. And this knowledge must precede any attempt at the correct solution of the problem. It cannot possibly be otherwise.

But there is yet another central issue to the physics problem just discussed. In the problem, the angle of jump is already fixed. And the distances from ledge to ledge are fixed also. The downward acceleration due to the gravitational pull of the Earth is also fixed. The only variable we have at our disposal to carry the rider safely to the other side is his velocity of jump. The minimum speed v0 required by him to make the jump safely is the initial condition required by the rider in order to make it to the other side. Anything less will just not do. And if the minimum speed required to make a safe jump is anything more than the maximum speed that the motorcycle is capable of giving, our would-be Evel Knievel will not even attempt to make the jump, unless he is intent on committing suicide. Thus, a safe jump necessarily requires knowing the right initial condition, and this in turn does not come by mere chance alone; it requires some form of intelligence. If the rider goes ahead with full confidence and makes the jump, and his effort succeeds, then there can be no doubt whatsoever in our minds as we witness the jump that, as far as we are concerned, the jump has been preceded by knowledge, and by intelligence in the manner in which that knowledge was used.

Pushing the above arguments a little bit further: if we arrive some time after the rider has made his/her jump, even if we did not see the jump take place from ledge to ledge, if we can determine by other means that such a jump has indeed occurred (for example, by noticing the tracks made on the starting ledge and the tracks made on the landing ledge), then we have more than enough information to determine that some kind of intelligence must have been at work prior to the actual jump, an intelligence which was of paramount importance in the successful execution of the experiment.

And this will be our key to unlocking the possibility of whether an event could have been the result of mere chance alone or whether the event was programmed beforehand by some form of intelligence to happen in a prescribed manner: the initial conditions themselves, if they can be traced backwards in time from the evidence they have left behind, may carry most of the information we need to know in order to determine whether something was due to random chance alone or whether some intelligence was at work when it all started. We do not have to be present and aware at the very beginning when something took place in order to determine what the initial conditions were. In our elementary physics problem, just knowing that the rider made it safely to the other side should be more than enough to convince us that in order to make the jump he must have chosen and used the right initial condition, even if the event happened many years ago. Thus, initial conditions may be capable of making quite a distinct mark on the present, even though the event may have taken place a long time ago.

Unfortunately, if we are not witnesses to the actual jump, then it is always possible that the jump could have taken place in a manner completely different from the one we have assumed. For example, it is possible that in the event of some minor miscalculation of the initial condition or in the event of some last-minute mechanical problem with the motorcycle, the rider could have cheated and could have retrofitted his machine with some rockets that would enable him to make the jump safely to the other side. We would like to be actual witnesses to the event, but if for some reason we are unable to actually be present, then we have no other choice than to make certain assumptions on what could have taken place when the jump was carried out. In the process of starting out with a limited amount of information and going from some very specific details to a bigger picture, we are actually carrying out a logical process of induction, trying to determine or induce something bigger by gathering together some (perhaps very few) pieces of a jigsaw puzzle. This is completely opposite to the process of deduction, the way in which science usually likes to work, whereby we start out from the very beginning with some generally accepted facts or axioms or postulates, and from those “self-evident” truths we arrive at certain conclusions by rigorous application of logic, as was done by Euclid himself in formally deriving and proving many theorems of geometry in his book The Elements. Critics of the process of induction point out (and their observation should be well taken) that the process of induction is fraught from the very beginning with the possibility of arriving at wrong conclusions, loaded with too many assumptions, and that historically many big mistakes and faux pas in science are due to the blind trust placed on those occasions in the process of induction. However, even deductive reasoning may be highly vulnerable if just one of the basic assumptions is later proven to be wrong or incomplete, perhaps bringing down an entire pyramid of knowledge because just one of the bricks in the edifice was later found to be faulty (the Earth was not flat, nor was it standing still with the heavens revolving around it; spontaneous generation of life, also known as abiogenesis, would never be proven in any laboratory; and the phlogiston never did exist).

Nevertheless, there are occasions in which we have to use inductive reasoning simply because we have no other choice. For example, even though many of us would like to jump into a time machine and travel backwards in time to witness the very moment of creation of the universe, we know this will not be possible, for even if with the discovery of some new physics principle we were able to travel backwards in time, the enormously high temperatures and the almost infinite densities of matter just a little bit after the universe came into being would not allow us to sustain our own lives for very long. But even if we are forced to use inductive reasoning, our efforts may not be in vain. Other productive fields of science such as archeology have been able to grow and prosper by working backwards in time with only the remnants of bygone eras, aided by powerful new techniques such as carbon dating and DNA typing (besides using the shovel to dig up old bones).

Keep in mind that we have only dealt with a very simple physics problem that does not come even close in difficulty to other problems that could be posed in such areas as quantum mechanics or general relativity. And there are many, many other problems which must be solved exactly before there is any hope of seeing a Universe evolve with any kind of primitive life, or, for that matter, of seeing a Universe (as we currently know it) actually evolve. The precise solution of any given problem has a footprint called intelligence, and as the problem gets tougher to solve it will demand increasing levels of wisdom before it surrenders its solution.

A counterargument that might be posed against the thesis that the correct determination of an initial condition, as in the case of the rider who tries to clear the ditch, is proof of intelligent abstract thinking would be the observation that all of our mathematical models are but symbolic idealizations of something we call reality, idealizations that in many cases can become much more complex than the reality they are supposed to represent, and that in the final analysis occurrences that take place in everyday life can do very well without such complications. After all, at a time when there were no computers or calculating machines and when the science of mechanics was non-existent, William Tell was able to shoot an arrow from a distance, targeting it so precisely that it would land right on the apple placed above his son’s head, and after this he was able to shoot another arrow, targeting it so precisely that it would land on the tyrant who ruled his country. All this without having to add up a single number, almost as if William Tell knew what to do without the need of any mathematical models and without any prior knowledge of physics. By the same token, in order to be able to place a “hole in one” in a golf match, a master golfer such as Tiger Woods would try to pull off this feat by first looking around and “sensing” his environment, getting a “feel” for the wind direction and speed, and watching how far the target appears to be from the place where he is standing. Logic alone tells us that in order to get that “hole in one” he must actually measure the wind speed and direction and the distance between him and the target, perhaps to an accuracy of at least four significant digits (considering that the target hole, which is many yards away from him, is less than three inches wide), and once this is done he must use a computer and very precise mathematical models to determine the speed with which he must strike the golf ball, as well as the angle of stroke and the precise point of impact, thereupon adjusting his swing accurately to match the parameters required to make that “hole in one”. Yet, no master golfer has ever required such elaborate methodology to get to the PGA championship tournaments. Carrying the above observations even further, if we look at a pond filled with fish and other living life forms, even though a snapshot of the activity taking place may reveal to us that life in the pond can be described by an enormously complex set of interrelated mathematical formulas, the fish in the pond appear to be quite content and oblivious to such abstractions, going about their daily business of survival without even having to go to elementary school or kindergarten, almost as if the fish knew exactly what to do in order to survive without the need of any of the tools of mathematics.

The above counterargument is seriously flawed because it fails to take into account that, in order to use his bow and arrow with such precision, William Tell first had to acquire a lot of experience by the time-tested method of trial and error. Without this, he would have done no better than an amateur, and History might have been written differently. The master golfer must also go through a similar process, regardless of how good his “gut instincts” may be. There are reasons to believe that in cases such as these, the experience gained through trial and error allows our brains to build up an increasingly refined database incorporating the recollection of past failures and successes, and when a show of mastery is about to be displayed both the arrow marksman and the golfer subconsciously compare their current situation with previous ones, and drawing upon their experience database they adjust themselves to emulate those similar conditions that in the past proved to be successful. The disadvantage of using a memory acquired through trial and error versus intelligent abstract thinking is that experience alone may not be enough to solve situations in which there is no past experience. Space exploration is filled with many examples of first-of-its-kind events (the Apollo space program, the Sputnik satellite, the Voyager missions) where the luxury of trial and error must be ruled completely out of the picture because of budgetary constraints. None of these multimillion-dollar programs would have gotten off the ground if the launch in each case had been entrusted to “gut instincts” alone. Considering the distance from the Earth to the Moon, even a very small miscalculation of just one or two degrees would have resulted in one of the manned Apollo missions missing the target by thousands of miles, and without the gravitational pull of the Moon required to send the team back to Earth by turning the spacecraft around, the mission would have been lost forever in the vast emptiness of space. Thus, long before the rocket is rolled out to the launch platform, a lot of careful planning and abstract intelligent thinking must precede the launch. All of the required initial conditions must be in place just before the boosters are fired up. Once the countdown goes down to zero, there is no turning back, and the launch will be either a success or a failure. It will be the initial conditions, or more precisely, the intelligence behind the precise determination and the gathering together of all the required initial conditions, that will either make or break the project. There can be absolutely nothing random about something like this; nothing can be left to chance.

Besides, there is a widespread belief that in the case of those two historical shots plucked out of his bow by William Tell, luck was also on his side. And in the case of a “hole-in-one”, it is widely accepted that whenever a master golfer is able to elicit such a miracle from his club then, besides his mastery, luck also had to be on his side; and this is confirmed to us by the fact that such a feat is an oddity rather than the rule in masters tournaments [Nevertheless, a good game of golf can always be improved with some knowledge of the physics involved, even without going through the entire detailed math before each strike. See, for example, the book "The Physics of Golf" by Theodore P. Jorgensen.]. In general, and this goes for many sports, luck is almost an essential ingredient in order to beat the odds. The same can be said for life itself. But for many other things, luck is no substitute for careful thinking and planning. The Eiffel Tower was not built based upon the hope that such a fanciful arrangement of metal beams and rivets would somehow not collapse; it had to be designed from the very outset as a giant structure that would withstand all the way down to its base the enormous tonnage that makes up such a structure, and it was designed to do so for many years to come. In the case of the Eiffel Tower, the initial conditions for such a monumental undertaking were all completely specified in the blueprint long before the first brick was laid down, long before its first iron beams had been manufactured. Likewise, any digital computer ever built by Man is no better than the software (computer programs) that is being run in the entrails of the machine. As any computer programmer will tell you, even the smallest mistake made when writing a computer program will most likely result in a crash sooner or later. In order to perform reliably 100% of the time, the computer demands software with no errors whatsoever; it demands perfection. No computer program was ever designed counting upon Lady Luck as a helpful aid. Interestingly enough, once a good computer program has been written in a so-called “high-level language” (examples of such languages are COBOL, C++, Fortran, Lisp and Pascal) it can be run on any machine equipped to handle such a language, regardless of the internal architecture of the machine, regardless of whether the computer was built using high-density integrated circuits, vacuum tubes, or pistons activated by a steam engine. The “hardware” will only follow blindly and faithfully the instructions it has been given by the “software”. MIT’s Raymond Kurzweil makes this point clearer in his book The Age of Intelligent Machines:

“So the message of science can be bleak indeed. It can be seen as a proclamation that human beings are nothing more than masses of particles collected by blind chance and governed by immutable physical law, that we have no meaning, that there is no purpose to existence, and that the universe just doesn’t care … And yet, the message doesn’t have to be bleak. Science has given us a universe of enormous extent filled with marvels far beyond anything Aquinas ever knew … Indeed, far from being threatening, the prospect is oddly comforting. Consider a computer program. It is undeniably a natural phenomenon, the product of physical forces pushing electrons here and there through a web of silicon and metal. And yet a computer program is more than just a surge of electrons. Take the program and run it on another kind of computer. Now the structure of silicon and metal is completely different. The way the electrons move is completely different. But the program itself is the same, because it still does the same thing. It is part of the computer. It needs the computer to exist. And yet it transcends the computer. In effect, the program occupies a different level of reality from the computer. Hence the power of the symbol-processing model: By describing the mind as a program running on a flesh-and-blood computer, it shows us how feeling, purpose, thought, and awareness can be part of the physical brain and yet transcend the brain.”

If the software is no good, the end results coming out of the computer will be no good either (in computerese, the saying goes “garbage in, garbage out”). Call it finicky behavior, if you like; the fact is that without a thinking mind overseeing the start-up parameters of many things around us, such things would not be possible today.

Since the performance of a digital computer depends much more critically upon the “software” than upon the “hardware”, the hardware being just the medium that allows the software to execute, what are the initial conditions here in order to be able to accomplish a major computing task? Must we consider the software and the hardware both part of the essential initial conditions? Or are the true initial conditions limited just to the software? Let us go back to the case of the rider who is trying to clear the ditch. Should we consider the motorcycle also an indispensable part of the initial conditions? At first, we might be inclined to say yes. But on second thought, we must come to realize that the problem to be solved remains exactly the same regardless of whether the motorcycle used to make the jump is a Harley-Davidson, a Suzuki, or a Yamaha. As a matter of fact, the vehicle in principle could even be an ordinary bicycle! The formula does not make any specific requirements upon the medium in which the jump will take place; the choice is up to the person making the jump. If the rider could run fast enough, then no vehicle would be necessary, since the jumper himself would be the vehicle. The minimum jump speed required at takeoff remains exactly the same. The only way in which we could justify drawing in the vehicle as part of the initial conditions would be regarding the design of the vehicle itself: Was the vehicle designed to be capable of attaining the required minimum jump speed? And, in the final analysis, any motorcycle we might try to use could not have sprung out of nowhere; long before it was actually built it had to be a concept in the mind of somebody, it had to exist as an idea. Yes, iron ore had to be mined from somewhere, foundries had to do the work of purifying the metals used and casting them into a suitable shape, gasoline had to be extracted and purified from raw petroleum, rubber had to be molded into a toroidal shape, but each and every one of these activities was also preceded by an idea whose time had yet to come. Without a smart mind hard at work from which the idea will arise, all else remains just a mere possibility that might never be realized, and all the materials will lie dormant with no useful purpose. The initial conditions contain all of the information necessary to accomplish a given goal. Indeed, even the setting of any given goal must be considered part of the initial conditions. Once the design or the solution to a problem exists in thought, then its full realization will depend solely upon the availability of materials and the willingness of the designer to make it real. But the materials themselves are incidental. Luck may have a part in the actual realization of an idea, especially if there are not enough resources to bring an idea or the solution of a problem to fruition, or in case something completely unexpected happens at the last minute. However, failure to anticipate unexpected events that will make everything end up as a big mess is generally considered to be poor planning, and poor planning seldom bears the mark of a bright mind (however, even bright minds do make mistakes, as in the case of Napoleon’s invasion of Russia in 1812, and in such cases the price that has to be paid for poor planning has generally been quite high).
Returning to the case of the digital computer, if we are to draw in the computer as a part of the initial conditions then we must draw in the true initial conditions that allowed the machine to be built in the first place, and those initial conditions are usually found scribbled in a notebook, drawn on a big blueprint, or carried along in the mind of someone who is watching an apple fall down from a tree.

There are many other types of problems that require an exact determination of at least one initial condition in order for a certain result to be achieved. If a rocket does not have the minimum escape velocity required to break off from the Earth’s gravitational pull, its mission will be doomed from the very moment it is launched and the rocket will come back to Earth to meet its fate (the minimum escape velocity can be shown to be about 11,200 meters per second, with the weight of the rocket itself making little or no difference). And by exact determination we mean precisely that: something determined with knowledge and intelligence after a close study of the problem at hand, especially if a solution picked out entirely at random will almost certainly result in failure.
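
As a quick check of that figure, the short sketch below computes the escape velocity from the standard textbook values for the Earth's mass and radius (those two constants are quoted here only for illustration; they are not given in the text above):

    import math

    G = 6.674e-11        # universal gravitational constant, m^3 kg^-1 s^-2
    M_EARTH = 5.972e24   # mass of the Earth, kg
    R_EARTH = 6.371e6    # mean radius of the Earth, m

    # Escape velocity: v = sqrt(2*G*M/R). The mass of the rocket cancels out
    # of the energy balance, which is why it never appears in the result.
    v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)
    print(round(v_escape))  # roughly 11,200 meters per second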

II: Probability, Large Numbers, and Something Else




Many times throughout our lives we may have heard a statement telling us that any event, however improbable, may eventually happen if we give it enough time to occur, or if the places in which such an event can take place are numerous enough (or both.) This is the same argument that has been used to defend the possibility of extraterrestrial life somewhere out there in the cosmos, and which has been used to justify enormous expenditures in the search for signs of life outside planet Earth, such as the SETI (Search for Extra Terrestrial Intelligence) project or the unmanned missions to Mars (this hope for some kind of intelligent life to exist or to have existed in other parts of the galaxy or the Universe besides planet Earth may be subconsciously motivated by a justified fear that we may be alone in the Universe and that there may be no other forms of intelligent life in the cosmos besides ourselves, a possibility which may be too hard for many people to swallow.)

Regardless of whether we are completely alone in the Universe or whether we actually have a lot of company out there of which we are unaware, our existence on this planet has already proven that, as far as this Universe is concerned, at some point in its evolution the Universe was capable of coming up with the conditions which made possible the appearance of Homo sapiens, intelligent life.

Before going any further, let us carry out a simple practical experiment in order to get an idea of how easy or how difficult it would be to obtain, by random chance alone, something that we know beforehand was produced by intelligent life. The reader may wish to take part in this experiment, which was designed for his/her entertainment.

To begin with, we must write on separate (but identical) pieces of paper the individual letters of a simple word such as “HOUSE” (the reader may wish to use his/her first name, such as “DANIEL” or “SYLVIA”). The next step is to throw the pieces of paper into an empty box or container and shuffle the box around in such a way that we have no idea where each individual letter is located. Once this has been done, the reader will blindly take one of the pieces of paper out of the box, and since the pieces of paper have the same size and shape, there is no way of telling from the outside, without looking directly at them, which one of them we are actually pulling out. Using more precise terminology, we state that each piece of paper has the same chance of being taken out of the box as the other ones. In the case of the word “HOUSE”, each letter has a probability of one fifth (or 20%) of being taken out individually from the box. Once the first piece of paper has been taken out of the box, the reader will place the letter on some flat surface to start forming the word “HOUSE” and will start a count of one. If the letter was the letter “H”, the reader will proceed to take out the next piece of paper, which in case of being the letter “O” will be used to continue forming the word “HOUSE”, and the count will increase to two. But if the letter was not the letter “O”, the reader will place both letters back into the box and will start all over again. However, the count, which is now two, will keep increasing with the next trial. The purpose of the game is to continue drawing letters from the box and putting all of them back into the box unless we start getting the right order to compose the word “HOUSE”, and all the while the count will keep on increasing. The game ends when the word “HOUSE” is finally completed, at which point we will have a tally of the total number of times we have tried in vain to obtain such a word. We may get extremely lucky and start picking out each letter in the correct order during our first attempt, in which case when we hit the right combination we will have a tally of one. However, if the experiment is repeated, it becomes unlikely that we will again get a tally of one, and it is very possible that we may have to try dozens of times before we hit again on the right combination. If the experiment is repeated a third time, the odds of hitting the right combination three times in a row become increasingly small. These experimental observations can actually be put into a mathematical framework, and we can estimate, in the long run, what the probability is of getting the right combination in a large number of trials. The line of thought is as follows:

In the word “HOUSE”, there are five different letters. Thus, there are five different ways in which we can pick out the first letter of the word (and only one of them will be the right one, the letter “H”). Once we have picked out the first letter, there are only four letters remaining in the box, and we can pick out the second letter in any of four different ways. Thus, we can pick out the first two letters in 5 • 4 = 20 different ways (and only one combination will be the right one, the pair of letters “H” and “O” in that order). Extending this argument, it is obvious that we can pick out the five letters from the box in a total of

5 • 4 • 3 • 2 • 1 = 120 different ways

But there is only one way of picking each letter after the other which will be the “right one”, in the correct order to form the word “HOUSE”. By the very definition of probability, if an event can happen n times out of a total of N possibilities, then the probability of that event taking place in the long run will be n/N. Thus, the probability of getting correctly the word “HOUSE” will be 1/120 or 0.0083, or 0.83%. In other words, if we repeat our experiment several times, then as the odds even out we will find that, on the average, we will be picking out the right combination less than one percent of the time.
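
Readers who would rather let a computer do the shuffling can convince themselves with a short simulation along the following lines (a sketch only; for simplicity each attempt draws all five letters blindly in one pass, which carries the same 1-in-120 chance of success as the game described above):

    import random

    def attempts_until_house(word="HOUSE"):
        """Keep emptying the box blindly (one random permutation per attempt)
        until the letters happen to spell the word; return the final tally."""
        letters = list(word)
        attempts = 0
        while True:
            attempts += 1
            if "".join(random.sample(letters, len(letters))) == word:
                return attempts

    # Play the whole game many times; the average tally settles near 120,
    # confirming the 1/120 (about 0.83%) probability worked out above.
    games = [attempts_until_house() for _ in range(10_000)]
    print(sum(games) / len(games))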

If the word has more than six letters, the average reader may quickly become dismayed and may begin to realize that it is not an easy task to come up with the right combination of letters to form such a word if the letters are picked out entirely at random.

If instead of trying to form a simple word we attempt to compose a phrase (for which we would require at least one blank piece of paper inside the box representing the space used between two different words), the reader can easily imagine that getting such a phrase out of a random combination of letters would be a very difficult and time-consuming task.

We are now ready to pose a more interesting question: If we have a box that is big enough to contain a large number of letters from the alphabet, say some ten million letters, what then is the probability of getting, just by drawing the letters out from the box on an entirely random basis, a work such as Shakespeare’s Hamlet?

Most readers, including many skeptics, will agree that the odds of such an event taking place are, for all practical purposes, nearly zero, and we would not expect to be witnesses to such a miracle within our lifetimes. After reading a work such as Hamlet, the average reader will come to the unavoidable conclusion that such a work could not have been the result of a combination of thousands upon thousands of letters being poured out entirely at random, and that a being possessing a fair amount of intelligence (and knowledge) had to be the creator of such a work. However, and it is important to emphasize this, according to the rules of probability there is some chance, however small, of such an event taking place. Can a work such as Hamlet eventually be produced on an entirely random basis by a computer churning out billions upon billions of combinations of letters per second, given enough time to accomplish such a task? Absolutely. It is indeed possible. However, it is highly improbable. It is imperative not to confuse what is possible with what is probable. An event that is possible may have a probability of taking place that is so small that even an ultra-fast computer working on it since the creation of the Universe would still be waiting for it to happen. And we know for a fact that Shakespeare’s mind was not an ultra-fast computer and that his life span was certainly much, much shorter than the known age of the Universe.

In a random universe, the only way in which there can be any hope of making a very small probability have any meaning is by resorting to the sheer brute power of large numbers. For example, if there is a black marble lost inside a big box containing about one thousand white marbles, with all of the marbles thoroughly mixed, then the probability of blindly grabbing at random the black marble from the box the first time we get our hand inside the box will be precisely one in a thousand. After putting the black marble back in the box and mixing the marbles thoroughly, if we get a second chance to draw out the black marble we still have a very small probability of drawing it out. However, if we repeat the experiment one million times, then our common sense tells us that on at least one occasion, probably more, we should have drawn out the black marble. After all, the box only contains one thousand white marbles. If after one million repetitions we still have not drawn any black marble from the box, then we may begin to suspect that either we have been among the unluckiest inhabitants there have ever been on this side of the galaxy, or most likely the box never did contain any black marble. Thus, even though the probability of drawing the black marble may be small, the brute force approach of resorting to large numbers (in this case, by increasing the number of trials) makes that probability increase enough to turn the event into an anticipated event that is due to happen sooner or later, instead of an event unlikely to happen. In order to make the case even stronger, we will analyze the problem of the black marble lost inside a box from a mathematical perspective. We will assume that the box has exactly 999 white marbles and one black marble. We will also assume that after each trial the marble we may have gotten will be put back into the box so as not to deplete the number of marbles inside the box (besides making the mathematics easier to handle). As we already said, if we carry out the experiment only once, then the probability of getting the black marble will be one in one thousand, or 0.001, or 0.1%. A probability of one-tenth of one percent is hardly something any wily gambler would bet on. What if we deposit the first marble back into the box and try our luck a second time? If we call P1 the probability that we may get the black marble during the first trial, P2 the probability that we may get the black marble during the second trial, and P1P2 the probability that we may get the black marble on both occasions, then it can be proven that the probability P that we may get the black marble at least once in those two trials is given by the expression:

P = P1 + P2 - P1P2

P = 0.001 + 0.001 – (0.001)(0.001)

P = 0.001999

And with two trials, our odds of getting that black marble have doubled to two-tenths of one percent. This is still a very small probability. What if we make three attempts? In that case, it can be proven that, if we call P1 the probability that we may get the black marble during the first trial, P2 the probability that we may get the black marble during the second trial, P3 the probability that we may get the black marble during the third trial, P1P2 the probability that we may get the black marble on the first two trials, P2P3 the probability that we may get the black marble on the last two trials, P1P3 the probability that we may get the black marble on the first trial and the last trial, and P1P2P3 the probability that we may get the black marble on all three occasions, then it can also be proven that the probability P that we may get the black marble at least once in those three trials is given by the expression:

P = P1 + P2 + P3 - P1P2 – P2P3 – P1P3 + P1P2P3

P = 0.001 + 0.001 + 0.001 – (0.001)(0.001) – (0.001)(0.001) – (0.001)(0.001) + (0.001)(0.001)(0.001)

P = 0.002997001

With three trials, the odds of getting our hands on that black marble have almost tripled to about three-tenths of one percent. Enticed by our rising expectations, we would like to make a bigger number of trials; say, ten trials. But an extension of the above formulas is out of the question, since the resulting formulae become extremely cumbersome to handle, and we must resort to heavier mathematical artillery. It can be formally proven that, if we call q the probability that on any given trial we will not pick out the black marble, then the probability P that we will pick out the black marble at least once in a number N of trials will be given by the expression:

P = 1 - q^N

Thus, if we carry out three different attempts to draw out the black marble, the odds of getting it will be:

P = 1 - (0.999)^3

P = 0.002997001

and, as expected, the answer checks out with the result we got before.
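
Those inclined to double-check the arithmetic can do so in a few lines of code (a minimal sketch; the variable names are merely illustrative):

    # The 1-in-1000 marble: compare the three-trial inclusion-exclusion sum
    # with the general formula P = 1 - q^N.
    p = 0.001          # probability of drawing the black marble on any one trial
    q = 1 - p          # probability of missing it on any one trial

    inclusion_exclusion = 3 * p - 3 * p * p + p ** 3
    general_formula = 1 - q ** 3

    print(round(inclusion_exclusion, 9))  # 0.002997001
    print(round(general_formula, 9))      # 0.002997001, so the two expressions agree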

For ten trials, the probability of drawing out at least once the black marble is:

P = 1 - (0.999)^10

P = 0.009955119

The odds are now almost one percent. And if we keep increasing the number of trials, the odds get even better, as can be seen from the following table:

[Table: the probability P = 1 - (0.999)^N of drawing the black marble at least once, for increasing numbers of trials N.]

After five hundred trials, we have more than one chance in three that we will have picked up that black marble. After one thousand trials, we have a chance greater than fifty-fifty that we will have found the black marble. Notice that the odds begin to turn in our favor when the number of trials approaches or exceeds the total number of different ways in which the black marble may be obtained from the box (one thousand different ways in this case, since there are one thousand marbles inside the box). And after ten thousand trials, the odds that we will have found the black marble are so close to one hundred percent that any gambler, no matter how timid or how dumb, would gladly bet on the chances of the black marble being found before the ten thousand trials are over. Thus, even if that small black marble is lost in the box among a thousand marbles, by resorting to a large number of trials we have increased our chances of finding it almost to mathematical certainty.
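
The kind of table referred to above is easy to reproduce with a few lines of code (a sketch; the particular trial counts shown are merely illustrative):

    # Probability of finding the black marble at least once in n trials,
    # when a single trial succeeds with probability 1 in 1,000.
    q = 1 - 1 / 1000   # probability of missing the black marble on any one trial

    for n in (10, 100, 500, 1000, 5000, 10000):
        print(f"{n:>6} trials: {1 - q ** n:.4f}")

    # Changing q to 1 - 1/1000000 reproduces the behavior of the second table
    # discussed below, where the odds become favorable only after the number
    # of trials climbs into the millions.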

Perhaps the problem was just too easy. What if we throw the black marble into a box containing one million marbles? Then our chances of drawing out a black marble on a single trial would be only one in a million. Wouldn’t this make it impossible for us to find the black marble, regardless of the number of different attempts? Not if we throw in large numbers. After the first one thousand trials, the probability of finding that single black marble lost in a box containing one million marbles is:

P = 1 - (0.999999)^1000

P = 0.0009995

This is a rather small probability, in spite of the fact that we have thrown in one thousand trials. However, not to be discouraged, let us increase without bound the number of trials. Then we can see our chances of finding that black marble improving dramatically as the number of trials goes up:

[Table: the probability P = 1 - (0.999999)^N of finding the black marble at least once, as the number of trials N climbs into the millions.]

Thus, even though the black marble is now lost in a box containing one million marbles, by the mere trick of resorting to large numbers we have again raised the chances of finding it almost to mathematical certainty. Notice again that the odds definitely begin to turn in our favor when the number of trials approaches or exceeds the total number of possible ways in which the black marble may be found inside the box (one million in this case, since there are one million marbles inside the box). No matter how small the probability may be, the sheer power of large numbers has the capacity to make many very improbable events not just very probable, but almost certain to occur. Similar arguments have been resorted to when trying to justify the possibility of extraterrestrial life on the grounds that any event that has an infinitesimally small probability of occurring will indeed happen provided that the number of trials is sufficiently large, and on a laboratory as large as the Universe itself operating throughout with the same physical laws as the ones that gave rise to life on Earth, the sheer power of the many billions and billions of galaxies and stars and planets involved will make the appearance of life elsewhere almost a certainty.

The tactic of resorting to large numbers to make unlikely events happen can be traced all the way back to the Roman philosopher Lucretius, who wrote the poem On the Nature of the Universe, attempting to explain the universe in scientific terms with the purpose of freeing people from superstition and allaying fears of the unknown. Indeed, he went as far as using the argument to defend the possibility of extraterrestrial life. We can read in his poem the following excerpt:
“Granted, then, that empty space extends without limit in every direction and that seeds innumerable in number are rushing on countless courses through an unfathomable universe under the impulse of perpetual motion, it is in the highest degree unlikely that this earth and sky is the only one to have been created.”
Lucretius, who in this respect was way ahead of his time, perhaps would have been most successful in procuring funding for a SETI program if only, two thousand years ago, the Roman Empire had had the technology we have today. Anyway, the basic argument remains the same to this day: given an infinite amount of material ceaselessly combining and recombining during an infinite amount of time, all the possibilities and configurations that are allowed under the laws of Nature will occur and recur without end.

However, we now know that the Universe has not had an infinite amount of time available to try out every possible combination. In fact, the Universe is relatively young, and has not existed for more than some 15 billion years according to our most recent theoretical calculations and astronomical observations. Though 15 billion years may seem like a long time, much of that time has been consumed in the formation of galaxies and stars, and it is doubtful that any life form could have had any chance to arise in the middle of interstellar dust (at least life as we know it). The only large quantity available to “beat the odds” is the seemingly “infinite” amount of material that is dispersed across a very large space already compacted into usable planets or satellites, and it is there that we can find the large numbers and put them to use. But even then, resorting solely to the power of large numbers to make the emergence of any form of life possible is just not enough. A more detailed calculation of the odds for life appearing anywhere by resorting to large-number stratagems such as evolution still comes up extremely short, and recent discoveries from the modern science of chaos point to the fact that the only way the odds can be beaten is for the basic phenomena that allow life to evolve to be nonlinear (more about this will be said in a later chapter). Fortunately, many features of our Universe are nonlinear, and it is this extraordinarily lucky coincidence (?) that allows us to be here today communicating with each other. Indeed, if we were designing a Universe all of our own with the hopes of seeing life evolve, we would have to “build in” the capability for nonlinearity; otherwise life would most likely never evolve regardless of how long we might be willing to wait, since the odds become extremely dismal even with very large numbers thrown in.

Going back to our original experiment, where we tried to form the word “HOUSE” drawing each of the letters at random, let us now try a different approach. Instead of us picking out each of the five letters from a box, let us have a computer carry out the task for us. In so doing, we will allow each letter to be picked out at random from the English alphabet consisting of 26 letters. In addition, we will allow the computer to consider picking out the character at random also from a numeric keyboard consisting of the digits zero to nine. Thus, our alphanumeric “random soup” can give us at random one character out of 36 different possibilities. In addition, we will add two more characters available from the keyboard: the “space” character used to separate words and the “Enter” key (in old typewriters this key corresponds to the “carriage return”). Then, each time we let the computer pick out a character at random it can do so from among 38 different possibilities. In principle, the experiment could also be carried out by letting loose a monkey inside a cage containing an old mechanical typewriter, waiting for the monkey to push at random any of the 38 different keys of the typewriter, but the computer has the added advantage that it can compare automatically each combination of five words, and once a combination is found to be an exact match for the word “HOUSE”, the computer can be programmed to stop and print out the final tally with the total number of attempts before hitting correctly upon the word “HOUSE”. We will allow the computer to gather five characters in each trial, and if just one of the characters differs from the characters required to form the word “HOUSE”, the trial will be considered a failure, whereas once the correct match is obtained the trial will be considered a success.
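A minimal Python sketch of this experiment might look as follows (the function name, the attempt cap, and the use of Python's random module are choices of this online edition, not part of the original text; since the expected number of attempts is on the order of 79 million, a run can take several minutes):

import random
import string

# The 38-character "random soup": 26 letters, 10 digits, the space, and "Enter".
ALPHABET = string.ascii_uppercase + string.digits + " " + "\n"
TARGET = "HOUSE"

def random_search(target, max_attempts=500_000_000):
    """Draw len(target) characters at random per trial until one trial
    matches the target exactly; return the number of attempts used."""
    for attempt in range(1, max_attempts + 1):
        trial = "".join(random.choice(ALPHABET) for _ in range(len(target)))
        if trial == target:
            return attempt
    return None  # gave up before stumbling upon the target

if __name__ == "__main__":
    attempts = random_search(TARGET)
    if attempts is None:
        print("Gave up before hitting the target.")
    else:
        print(f"Hit '{TARGET}' after {attempts:,} attempts.")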

The odds of picking out correctly the first character are one in 38, since only one of the characters (the letter “H”) will be an exact match for the first letter required to form the word “HOUSE”. Likewise, there are 38 different ways in which the second letter can be picked out, and only one of them will be the correct one. Thus, there are a total of 1444 different ways in which the first two letters can be picked out, and only one way in which the correct sequence of the first two letters will be selected. The odds of picking out correctly the first two characters are thus one in 1444. It is not hard to see that there are a total of

38 • 38 • 38 • 38 • 38 = 79,235,168 different ways

in which a five-letter word can be formed (this is the reason why any respectable English dictionary will be huge, although many of the possible combinations are sheer undefined nonsense not in use, at least not yet). Therefore, the odds that the computer will churn out correctly the word “HOUSE” are one in 79,235,168, or about 0.00000126%, close to one-millionth of one percent. Can we increase the odds by the usual trick of resorting to large numbers? We certainly can, and there are two alternatives at our disposal:
  1. Repeat the experiment, with many other trials using the same computer.
  2. Carry out the experiment using many computers working at the same time.
In the first case, we may need a lot of time. In the second case, we will need a lot of computers. But either way, at least in principle we can raise the odds by pulling in the magic of large numbers. For the sake of argument, let us assume that we will repeat the experiment many times over using the same computer (this is perfectly feasible with the ultra fast personal computers of today). If we repeat the experiment ten thousand times, then the odds of hitting upon the correct combination of characters at least once will be:

P = 1 - (0.9999999874)^10000

P = 0.000126

Our odds are now much better (relatively speaking), for there is now about a 0.0126% chance that we will have gotten the word “HOUSE” after ten thousand trials, going from roughly one-millionth of one percent to about one-hundredth of one percent. And as we keep on increasing the number of trials the odds get even better and better, as can be seen from the following table:

[Table: probability of obtaining the word “HOUSE” at least once as the number of trials increases]

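The same formula used above for the marbles applies here; the short Python sketch below (the trial counts are illustrative choices of this online edition, not the rows of the original table) reproduces how the odds climb:

# Probability of having produced "HOUSE" at least once after n random
# five-character trials, each succeeding with probability 1/38**5.
p = 1.0 / 38**5          # one chance in 79,235,168 per trial

for n in (10_000, 1_000_000, 10_000_000, 100_000_000, 500_000_000, 1_000_000_000):
    prob = 1.0 - (1.0 - p) ** n
    print(f"{n:>13,} trials -> P = {prob:.6f}")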
Again, notice that the odds begin to turn in our favor once the number of trials approaches or exceeds the total number of possible combinations, about 80 million in this case. With about half a billion trials, the odds are definitely in our favor, and with one billion trials there is almost mathematical certainty that somewhere along the way we will have obtained the word “HOUSE”. The power of large numbers seems almost limitless. However, there are some practical limitations when resorting to this gimmick, and its shortcomings will now be brought out into the open.

Instead of just forming a simple five-letter word, let us go for something a little bit more ambitious. Let us try to obtain from the computer, by drawing out characters entirely at random, the following phrase:

THE HOUSE ON THE PRAIRIE

This phrase consists of 24 characters (including the spaces between each word). Let us now evaluate the total number of different possible combinations of character strings consisting of 24 characters drawn at random from an alphanumeric “random soup” of 38 characters. We can obtain this number by multiplying 38 by itself a total of 24 times. This yields the following number:

(38)^24 = 82,187,603,825,523,214,603,738,912,597,460,647,936

= 8.21876 • 10^37

where we have shortened the representation of such a huge number by means of scientific notation. [The number 10^37 means a one followed by 37 zeros, whereas a number such as 10^-25 means a decimal point followed by 24 zeros followed by a one.] Just by looking at this number, almost immediately we may begin to sense that something is not right, that things are no longer as simple as they appeared to be. Indeed, the task of using a computer to obtain on a purely random basis a phrase as simple as “THE HOUSE ON THE PRAIRIE” now appears to be way beyond reach. For if we are to use a digital computer, we must bear in mind that every computer that will ever be built in this Universe, no matter how fast, will always require a certain amount of time to carry out each one of its operations. Recall that in all of the previous cases we have studied so far, in order to turn the odds in our favor we need to carry out a number of trials that will either approach or exceed the total number of ways in which the random event can be carried out. Assuming that the computer requires about one second to build each “random” phrase of 24 characters, and that within this time frame it must also compare the phrase against the “standard” phrase to determine if it has struck gold, then we will have to wait about 8.21876 • 10^37 seconds before we have any reasonable hopes of having obtained the desired phrase, generated completely at random, from the computer. The brutal fact is that if that computer had started doing this task at the very moment our Universe was born, it would still be hard at work and would not have covered in all this time even a minute fraction of the number of trials required to turn the odds in our favor. Indeed, if we put an upper estimate on the age of our Universe at about 15 billion years (which is in agreement with the data we have at hand at this point in time), then our Universe is no older than about

4.6656 • 10^17 s = 466,560,000,000,000,000 seconds

Compare this number, the age of our Universe, with the time required for the trials carried out in the computer to approach or exceed the total number of possible combinations, at one second per trial:

82,187,603,825,523,214,603,738,912,597,460,647,936 seconds

The computer, working non-stop since the day our Universe was created until today, would not have covered even

0.000,000,000,000,000,000,568%

of the total number of trials required to turn the odds in our favor. For all of its hard work since the Universe was created, the computer would not have made even a tiny dent in turning the odds in our favor.
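These figures are easy to verify; the short sketch below (a check added for this online edition, using the same rough age figure quoted above) computes the total number of 24-character combinations and the fraction that a one-trial-per-second computer could have covered since the Universe was born:

combinations = 38 ** 24          # about 8.21876e37, as computed above
age_of_universe_s = 4.6656e17    # the rough 15-billion-year figure used above, in seconds

fraction = age_of_universe_s / combinations
print(f"Total combinations: {combinations:,}")
print(f"Coverage at one trial per second: {fraction * 100:.3e} % of the required trials")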

Purists might argue that a computer carrying out each “build-phrase-compare-phrase” operation at one second per operation would be much too slow by today’s standards. But even if we had used a faster computer carrying out each operation at the rate of one microsecond per operation instead of a full second (the faster speed would allow one million different strings of 24 characters each to be generated and compared against the target string each second), the computer would not have covered even

0.000,000,000,000,568%

of the total number of trials required to turn the odds in our favor. And a computer this fast already has a decent speed by today’s standards for personal computers. Even if we summoned all of the information processing power locked up in IBM's powerful Blue Gene computer:

[Image: IBM's Blue Gene supercomputer]

the computer would not even make a small dent in beating the insurmountable odds.

As a last resort, we could try using many computers instead of a single computer in order to raise the odds, all of them starting to work since the day the Universe was born. As soon as one of the computers had generated successfully (at random) the string “THE HOUSE ON THE PRAIRIE”, it would relay to the other computers a message telling them that the search was over, and then all the computers would come to a halt. This is a perfectly valid approach; we can either use a single computer to generate one thousand different random trials or one thousand different computers each generating a single random trial. But even then, we would still come up short, for if we use one billion computers, with every computer carrying out each operation at the rate of one microsecond per operation and with all the computers hard at work since the very moment the Universe was created, this vast array of computational power would not have covered even

0.000,568%

of the total number of trials required to turn the odds in our favor. Clearly, we need more computers for the task. If we use one trillion computers, all working since the day our Universe was born, the probability would not rise beyond 0.568%, or roughly one-half of one percent. [Strictly speaking (for those with a penchant for mathematical rigor), adding three, four, or n additional computers will not simply triple, quadruple, or increase by n times the probability of hitting upon the desired combination of characters, for there will be an unavoidable overlap of identical combinations being generated at different times among the different machines, resulting in a decreased efficiency due to the missed opportunities these repeated identical combinations of characters represent; thus the probability will not increase by a factor of n when n machines are added to the task, and the rise will actually be lower. However, we can cast aside these considerations, since a more precise (and much more complex) mathematical analysis will not alter the main conclusion we are trying to reach.] However, this is easier said than done, for this much computational power may be unattainable at least for the next five hundred years, even assuming spectacular breakthroughs in technology such as the proposed quantum computer. [Assuming we may someday be able to use each individual atom to store and process bits of information, there is an upper physical limit to the amount of storing and processing that can be carried out, set by the limitation on the number of quantum states in a bounded region that an individual atom can have. This upper limit is what is known today as the Bekenstein bound, an absolute upper limit on information density, assuming each quantum state of an atom is capable of encoding one bit of information.]
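The scaling argument can be checked with the short sketch below (the scenario labels and the script itself are additions of this online edition; the rates and machine counts are the ones discussed above), which computes what fraction of the required (38)^24 trials each scenario could cover within the lifetime of the Universe:

combinations = 38 ** 24
age_s = 4.6656e17                # rough age of the Universe in seconds, as above

# (label, trials per second per machine, number of machines)
scenarios = [
    ("one computer, one trial per second",          1.0, 1),
    ("one computer, one trial per microsecond",     1e6, 1),
    ("one billion computers, microsecond trials",   1e6, 1_000_000_000),
    ("one trillion computers, microsecond trials",  1e6, 1_000_000_000_000),
]

for label, rate, machines in scenarios:
    coverage = age_s * rate * machines / combinations
    print(f"{label}: {coverage * 100:.3e} % of the required trials")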

So far, we have only covered the problems involved in trying to get from a computer churning out hundreds of thousands of characters per second on an entirely random basis the simple phrase “THE HOUSE ON THE PRAIRIE”. What if we now try to get from the same computer, working still on an entirely random basis, a full printout of a work like Hamlet, consisting perhaps of something like 500,000 characters? As you might have guessed, the total number of combinations, given by the number (38)^500,000, is so huge that we might as well take it to be infinity, far beyond the reach of a fast digital computer even if a computer as big as the observable Universe had been built solely for that purpose. A task such as this one, of incredible exponential difficulty, marshalling numbers so large that no computer here on Earth could even print them in full, lays to ruin the magic that large numbers could have worked for us in trying to beat the odds and turn them in our favor. Indeed, this is the straw that breaks the back of the camel of large numbers. We are led to conclude, forced by the sheer weight of the overwhelming evidence, that if we come across a computer printout containing the full text of a work like Hamlet, then that work could not possibly have been produced on a random basis by any computer here on Earth, however big, however fast. This conclusion is so important that it will be emphasized again for the reader:
There is no natural random process in the entire Universe capable of producing a work like Hamlet.
In order for a work such as Hamlet to be produced, a prerequisite or initial condition is the existence of a powerful mind such as the mind of William Shakespeare. But this is not enough. Another prerequisite is the prior existence of an alphabet that can be craftily manipulated by a writer to produce a good piece of work. The alphabet is the other initial condition that will be manipulated by the creator to accomplish his/her purpose, and in this case it is the species Homo sapiens that has created the initial condition known as the Roman alphabet as a means of communication to convey the story and the ideas contained in Hamlet. Without the coming together of these initial conditions, there would be no Hamlet. And these are not the only initial conditions that must be fulfilled in order for Hamlet to come into being. Initial conditions that would allow a life form by the name of William Shakespeare to be produced and sustained on a planet inside our vast Universe are also a necessary prerequisite.

From this chapter we can draw two important conclusions: first, it may be possible for us to tell from the characteristics of an object whether the object was created by Nature as a result of completely random phenomena or whether it was the result of careful planning by an intelligent being; and second, the initial conditions needed to create something may themselves be considered part of the original act of creation by the planner who has decided to produce the target object, regardless of how long ago the object was produced. For our purposes, as we study the evidence or the footprints left behind, the passage of time is irrelevant, unless time has made it impossible for us to trace back and figure out the initial conditions from whatever remnants those initial conditions would have left behind.

III: Saint Thomas Aquinas, Entropy, and the Big Bang

In the traditions of many religions throughout the world (including Judeo-Christian beliefs), there has long been a sustained belief that the Universe as we know it today did not exist forever in the past, that there was a spontaneous act which gave birth to all that has been, all that is, and all that will be. In other words, the Universe itself has not been eternal as our senses might indicate at first glance, but has had a limited lifespan since its creation. Those beliefs in an act of creation were based solely upon faith, and civilizations of bygone eras had no means other than sheer faith to accept the happening of such a major event that no corporeal inhabitant of this Universe could have witnessed. In days of yore, only a few could muster enough courage, without having any experimental data at hand, to try to deduce a starting point, an act of creation, resorting not just to faith but also to the powers of the intellect. Interestingly enough, modern science, especially during the last century of the ending millennium, has provided confirmation that indeed the Universe did not exist forever and that there was indeed a moment of birth. But this is getting ahead of our story.

Thomas Aquinas, a Dominican friar born around 1225 as a nobleman of Lombardic descent, was a medieval doctor of the Catholic Church, considered by many to be the greatest. Among the most famous and influential of his works are the Summa contra Gentiles, a treatise on God and his creation, and the Summa theologica, an exposition of theology for which he was given the title of Doctor communis or “universal teacher”. The most important influence upon St. Thomas Aquinas came from his teacher, Albertus Magnus, considered to be “one of the most universal spirits of the Middle Ages”. Around that time, both had received a strong influence from the Latin translations of the works of Aristotle and the great Arabic and Jewish philosophers such as Averroës, Avicenna, and Maimonides, and their thoughts brought into being a new relationship between faith and reason, starting a movement known as scholasticism [Scholasticism is defined in The American Heritage Dictionary of Cultural Literacy, third edition, as follows: "The philosophy and theology, marked by careful argumentation, that flourished among Christian thinkers in Europe during the Middle Ages. Central to scholastic thought is the idea that reason and faith are compatible. Scholastic thinkers like Thomas Aquinas tried to show that ancient philosophy, especially that of Aristotle, supported and illuminated Christian faith"]. In essence, this school of thought upholds the belief that faith and reason, far from being confined to different realms, actually complement and support each other, and knowledge of God and his creation can be gained with faith and reason supporting one another. That this point of view created shock waves among traditionalist theologians goes without saying, and the scholastics were promptly accused by many of “having sacrificed religion to philosophy, and Christ to Aristotle”.

In trying to “prove” the existence of a supreme being to “disbelievers”, St. Thomas Aquinas resorted to carefully elaborated arguments, following a strict system of Aristotelian logic and using as building bricks some of the knowledge that was held to be true at the time. Unfortunately, with the passage of time, some of these arguments are now outdated; they remain interesting to look at, but they offer little guidance beyond their historical value.

However, among his arguments that have not yet been thoroughly demolished by time, there is one in particular on which we want to focus our attention very carefully: the Prime Mover. Essentially, the argument goes like this (excerpt taken from the book A History of Religious Ideas by Mircea Eliade):
“In one manner or another, this world is in movement; every movement must have a cause, but this cause results from another. This series, however, cannot be infinite, and one must admit the intervention of a Prime Mover, who is none other than God. This argument is the first of a group of five which are designated by Thomas as the ‘five ways’. The reasoning is always the same: taking the world of evident reality as the point of departure, one comes to God. (Every efficient cause presupposes another, and in tracing back through the series one comes to the first cause, God. And so on.) Being infinite and simple, the God thus discovered by reason is beyond human language. God is the pure act of being (ipsum esse); thus he is infinite, immutable, and eternal. In demonstrating his existence by the principle of causality, one arrives at the same time at the conclusion that God is the creator of the world. He has created all freely, without any necessity. But for Thomas, human reason cannot demonstrate whether the world has always existed or, on the contrary, the Creation took place in time. Faith, founded on the revelations of God, asks us to believe that the world began in time.”
More than seven hundred years later, St. Thomas Aquinas himself would have had no choice but to recognize that, through human reason (and meticulous astronomical observations), Man can find compelling arguments to demonstrate that an act of Creation actually took place in time. Human reason can find out many things about the origin of the Universe, provided enough clues have been left behind to follow the trail. However, the argument of the Prime Mover (and by Prime Mover we mean the causative agent responsible for setting up the initial conditions which allowed the Universe we live in to be born) has some solid truth to it in modern times if cast into a new light: the Second Law of Thermodynamics. From this perspective, St. Thomas Aquinas was right all along: indeed, we cannot go back indefinitely in time without encountering a starting point. Going back forever without encountering a point of origin is not just improbable, it is infinitely improbable. Such is the unavoidable conclusion we obtain if we adhere rigidly to the second law of thermodynamics, a law that has strenuously withstood the test of time all the way to the end of the second millennium and which not even the most skeptical of scientists and philosophers attempt to call into doubt. But even if the second law of thermodynamics were not enough to convince us of the absolute necessity of a point of origin, a “Prime Mover”, recent developments in modern physics have all but buried forever the concept of a static Universe existing forever in the past, requiring us instead to accept the prior existence of a singularity as the beginning of a very, very dynamic Universe. These two scientific arguments will be elaborated upon a little bit later.

So far, we have said nothing about the actual intelligence that could have been involved in the act of creating the Universe. Did the Universe come into being entirely on its own, or did some other (perhaps very intelligent) causative agent have something to do with the way the Universe started? These two widely opposing points of view have staunch supporters on both sides. Before trying to address such a thorny issue, we must study the possible alternatives under which this Universe could have begun.

In an article published in Scientific American in April 1980 titled “The Structure of the Early Universe”, John Barrow and Joseph Silk make the following observation:
“There are ‘chaotic’ cosmologists who like biologists maintain that the properties of the universe are the result of evolutionary process. If it could be demonstrated that the present structure would have arisen no matter what the initial conditions were, then the uniqueness of the universe would be established in theory as well as in actuality. On the other hand there are ‘quiescent’ cosmologists, who appeal largely to the initial conditions to explain the present structure of the universe. They hypothesize that when the universe was created at the singularity, it had certain definite and preferred structural features for reasons, say, of self-consistency, stability or uniqueness. This means that gravitational evolutionary processes played a role not in shaping the overall configuration of the universe but only in molding substructures such as galaxies, stars and planets.”
In the first case, the initial conditions are completely irrelevant. No prior planning is required, and everything happens entirely on its own without any design or purpose in mind. In the second case, if we assume that the initial conditions played an extremely important (perhaps crucial) role in the creation of the Universe, then the possibility that those initial conditions may be traced back to yet another causative agent ultimately responsible for the setup of those initial conditions will provide much food for thought. This possibility will be explored more fully in other chapters. In the meantime, let us elaborate and expand on what has already been said.

The science of thermodynamics got its start with French chemist Antoine Laurent Lavoisier (1743-1794), the father of modern chemistry, who besides isolating the major components of air and organizing the classification of compounds went on to disprove the phlogiston theory by determining the role of oxygen in combustion, thus laying the foundations for what is known today as the principle of conservation of energy, which states that the total energy of an isolated system remains constant regardless of changes within the system. In killing the phlogiston theory, he placed the first solid brick of what we know today as the First Law of Thermodynamics.

The Second Law of Thermodynamics, discovered by French physicist and engineer Nicolas Léonard Sadi Carnot (1796-1832), who is regarded by many to be the founder of thermodynamics, is not so obvious. It states that even though an energy supply might be available to produce some work, once the energy supply has been used its capability to produce work is diminished, even though the energy itself has not disappeared out of existence (first law of thermodynamics); it has simply become less useful. Were it not for the second law of thermodynamics, it would be possible to connect a refrigerator to a heater, and the heat being extracted from the refrigerator could be converted back into the electric power required to keep the refrigerator’s motor running forever, thus making a “perpetual motion” machine possible. There is absolutely nothing in the first law of thermodynamics that prevents this from taking place. However, the second law of thermodynamics completely rules out this possibility, and precludes any type of “perpetual motion” machine from ever being built, no matter how sophisticated the contraption might be (our own Sun will not last forever once it has burned through its nuclear fuel supply and swollen into a red giant, although it will take a very long time for this to happen). The quantity used in the application of the second law of thermodynamics is something we call entropy, which measures the degree of disorder in a system. The American Heritage Dictionary of the English Language lists the following definitions for entropy:
“Entropy. 1. For a closed thermodynamic system, a quantitative measure of the amount of thermal energy not available to do work. 2. A measure of the disorder or randomness of a system. 3. A measure of the loss of information in a transmitted message. 4. A hypothetical tendency for all matter and energy to evolve toward a state of inert uniformity. 5. Inevitable and steady deterioration of a system or society.”
Left on their own, most things tend to deteriorate as time goes by, and the only way to restore them to their original conditions is by the expenditure of work (which must come from an outside source whose entropy will increase), an expenditure that by itself will create even more disorder (since the source must pay the price tag for decreasing the entropy of the target, thereby making the combined entropy of the total source-target system always increase), making the total amount of energy even less useful. A tall building eventually begins to crumble and will have to be demolished in a pious act to hasten its inevitable demise. A car eventually turns into a pile of rust, but we never see a pile of rust reconstituting itself into a brand new car. Clothes wear out and have to be replaced periodically with brand new ones. Our own bodies, as we age, begin to deteriorate and develop wrinkles and white hair, and our bones become brittle. The landfills for trash around the world are always growing, and not a single trash dump anywhere on planet Earth is decreasing in size in spite of an ecologically conscious society and all the best recycling efforts being carried out today. Entropy can be shown to have a statistical basis, and in some cases it can actually be stated in terms of a formula which can be used to measure in numbers the total entropy increase of a closed system. But the major issue at stake here, which must be kept in mind by the reader, is that entropy actually marks the passage of time, since most things in nature are going from a state of lower entropy to a state of higher entropy (there is a very important exception to this law which will be covered later). A system with zero entropy would be a perfectly ordered system, and we may assume that this could have applied at some point in time to the entire Universe, most likely when the Universe first started. It cannot possibly be otherwise, because going from a less ordered Universe to a Universe with a higher degree of order would require vast amounts of physical work to be carried out continuously on a major scale, and we do not see such a thing happening today. Since it takes a measurable amount of time for a system to go from a state with higher order into a state with less order, then the only way we can go back to the original state with higher order is to run the clock backwards. But for the largest system of them all, the Universe itself, the clock cannot be run backwards indefinitely without reaching the point at which entropy was zero, since there cannot be a more highly ordered system than a system with perfect order, a system with zero entropy. And if entropy itself is the yardstick used to measure the passage of time, once we have gone backwards and we have reached the state of zero entropy we cannot go backwards any further, since the concept of time itself becomes irrelevant when there is absolutely nothing at hand with which to measure its passage. Thus, on thermodynamic considerations alone, the Universe had to have an origin, a starting point. The reader should take into consideration that when St. Thomas Aquinas started outlining his arguments in defense of a starting point, an act of creation of all that is seen and known today, the science of thermodynamics did not exist!
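The statistical basis mentioned here can be illustrated with a toy calculation (the sketch below is an illustration added for this online edition, not a formula taken from the book): using Boltzmann's idea that entropy grows with the number of microscopic arrangements W compatible with a given macroscopic state, we can count the arrangements of N coins showing exactly k heads and see that mixed, disordered states vastly outnumber perfectly ordered ones.

from math import comb, log

# Toy illustration of the statistical basis of entropy: for N coins, the
# number of microstates with exactly k heads is W = C(N, k), and a
# dimensionless Boltzmann-style entropy is S = ln W.  Perfectly ordered
# macrostates (all tails, k = 0) have far fewer arrangements than mixed ones.
N = 100
for k in (0, 10, 25, 50):
    W = comb(N, k)
    S = log(W)
    print(f"{k:>3} heads out of {N}: W = {W:.3e}   S = ln W = {S:.2f}")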

The common belief about the structure of the Universe that was held even by Albert Einstein himself at the start of the century was that of a static universe, a belief which made him introduce into his equations of general relativity a “cosmological constant” (which he later called his “biggest blunder”). The first blow against the belief in a steady-state universe was delivered by American astronomer Vesto Slipher, who made careful measurements of the nature of light emitted by nearby galaxies and found that the light from most galaxies was shifted towards the red. Since we know from the effect known as the “Doppler shift” that light emitted by an object moving away from us is shifted towards lower frequencies corresponding to the red, just as the horn of a train moving away from us sounds with a lower pitch than when it is standing still or moving towards us, the unavoidable conclusion is that most galaxies are moving away from us. Dutch astronomer Willem de Sitter, using Einstein’s theory of relativity, showed that the space of the Universe could expand, carrying galaxies along with it in such a manner that the galaxies would be moving away from each other. Edwin Hubble was later able to demonstrate through careful measurements that the velocity with which a galaxy is moving away from us is proportional to its distance from us, a discovery that until recently has withstood the test of time and continues to hold today for most of the observable Universe, and a discovery that implies that the Universe is indeed expanding.

But it was the discovery of the cosmic microwave background radiation in 1965 by Arno Penzias and Robert Wilson, the most important remnant of a fantastic explosion that had to occur before the Universe began expanding, which gave the final blow to the theory of a static steady-state Universe, and for the first time in the history of mankind there was undeniable scientific proof that there actually was a moment in which the Universe came into being. This monumental finding was first reported by them in volume 142 of the Astrophysical Journal under the title A Measurement of Excess Antenna Temperature at 4080 Mc/s, and at the time even Penzias and Wilson were unaware that they had actually discovered the relic radiation from the explosion that gave birth to the Universe itself. Their paper begins almost innocuously as follows:
“Measurements of the effective zenith noise temperature of the 20-foot horn reflector antenna at the Crawford Hill Laboratory, Holmdel, New Jersey, at 4080 Mc/sec. have yielded a value about 3.5 K. higher than expected. This excess temperature is, within the limits of our observations, isotropic, unpolarized, and free from seasonal variations…”
A microwave antenna has the shape of an old-fashioned horn trumpet, except much bigger. Pointing the antenna towards different parts of the sky enables us to compare how much microwave radiation flows into it from different directions, and this radiation turns out to be the same in every direction, to an accuracy of one part in ten thousand; thus we say that the radiation is isotropic. The isotropy of the cosmic microwave radiation provides dramatic confirmation of yet another fact that was also unknown at the time: the early Universe was extremely uniform. Indeed, since visible light and all other forms of electromagnetic energy from this primeval explosion, dubbed nowadays the Big Bang [the term "Big Bang" is somewhat misleading, since the expansion is not an expansion in the usual sense of the word where the debris is flying through preexisting space; at the moment the Big Bang took place, space itself did not exist, and at this very moment space is expanding at an enormous rate, separating galaxies further and further; furthermore, a center for such an expansion does not exist, in much the same way as the surface of a perfectly spherical balloon that is being inflated has all of the points over its surface separating away from each other at the same rate], have been traveling to us through space at finite speed, when we see the farthest galaxies we are not only seeing light from ten billion light-years away; we are seeing light emitted ten billion years ago, and by probing the farthest reaches of our Universe with ever more powerful telescopes we are actually coming close to witnessing how the Universe looked a short time after it was created. And the early Universe appears to have been far too uniform for us to explain how the wide variety of stars, planets, galaxies, clusters of galaxies, superclusters of galaxies, and everything else it contains came into being.

Further theoretical work by English scientists Stephen Hawking and Roger Penrose on Einstein’s equations of general relativity uncovered what is known today as the singularity theorem, which implies that the entire dynamical Universe must have evolved from or into a very dense state.

Because of the very limited tools at his disposal, even if St. Thomas Aquinas could have arrived at the conclusion that a “Prime Mover” was the causative agent for the “first impulse” or the “first kick” which started the creative explosion of the Universe, he could not have known during his time that a first “impulse” was simply not enough. There are many ways in which an impulse might be given, and not all of them will necessarily pave the way for the conditions required to bring life into being as we know it. A “first impulse” is just not enough. It also has to be smart enough; or at the very least it has to be able to defy and beat almost insurmountable odds long after it has taken place. It has to be carried out in the right manner. That first impulse must be done correctly; it must be done with the right initial conditions.