This is the fourth part in my series on science. Here are the other parts: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7, Part 8, Part 9, Part 10, and Bibliography.
We have seen that the secular establishment has had considerable difficulty with the goals, methods, and foundations of science.
What then does science deal with? Science is inextricably tied to induction: arguing from particular cases to a general conclusion. The issue that some, like Dr. Gordon Clark, noticed is that inductive logic seems to commit the fallacy of affirming the consequent, which is an invalid form of argumentation, in contrast to valid forms like modus ponens and modus tollens.
Modus ponens:
If P, then Q
P
Therefore, Q
Modus Tollens:
If P, then Q
not Q
Therefore, not P
Affirming the consequent looks like:
If P, then Q
Q
Therefore, P
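Why the first two forms are valid and the third is not can be checked mechanically by enumerating every truth assignment. A minimal Python sketch (the helper names `valid` and `implies` are mine, purely for illustration):

```python
from itertools import product

# An argument form is valid iff the conclusion is true in every row of the
# truth table where all the premises are true.
def valid(premises, conclusion):
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

implies = lambda p, q: (not p) or q  # the material conditional "if P, then Q"

# Modus ponens: If P then Q; P; therefore Q.
print(valid([implies, lambda p, q: p], lambda p, q: q))          # True
# Modus tollens: If P then Q; not Q; therefore not P.
print(valid([implies, lambda p, q: not q], lambda p, q: not p))  # True
# Affirming the consequent: If P then Q; Q; therefore P.
print(valid([implies, lambda p, q: q], lambda p, q: p))          # False
```

The failing row is exactly the wet-street case: Q (the street is wet) can be true while P (it rained) is false.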
Abductive reasoning is similar to induction:
Premise 1: If A occurs, then B would be expected as a matter of course.
Premise 2: The surprising fact B is observed.
C: Hence, there is reason to suspect that A has occurred.
Abductive and inductive reasoning are similar, and it is often debated whether they are actually distinct. In induction, we have particular experiences and generalize from them what we expect of future experiences. In abductive reasoning, we infer unseen facts, events, or causes in the past from clues or facts in the present. Both are set against deductive reasoning, in which you start with a general fact and move to a particular case.
The reason this is a fallacy is that while P is a sufficient condition for Q, it is not a necessary condition. An example usually given is:
“If it rains, then the street will be wet.
The street is wet.
Therefore, it has rained.”
You then go out to find it was actually the sprinklers, or that someone used a hose, or that a pipe broke. There are many more possible explanations for why the street is wet. The point is that rain is a sufficient condition for a wet street, but it isn’t a necessary condition.
The charge that science does this is understandable, given the constant dogmatic stances taken by scientists. But the issue is that scientists and many other people commit this fallacy; science itself doesn’t. As the reformed apologist says, “If A, then B; B therefore, A. This is, of course, fallacious. However, if A, then B; B therefore, A would appear to have more veracity…” The more we test these things, the greater the probability that the inductive inference or conclusion is true. This is what scientific experimentation does: it shows how probable a claim is to be true. Other philosophers, like Charles Sanders Peirce, have struggled with these issues:
For Peirce, this raised an important question: How is it that, despite the logical problem of affirming the consequent, we nevertheless frequently make conclusive inferences about the past? He noted, for example, that no one doubts the existence of Napoleon. Yet we use abductive reasoning to infer Napoleon’s existence. That is, we infer his past existence not by traveling back in time and observing him directly, but by inferring his existence from our study of present effects, namely, artifacts and records. But despite our dependence on abductive reasoning to make this inference, no sane and educated person would doubt that Napoleon Bonaparte actually lived. How could this be if the problem of affirming the consequent bedevils our attempts to reason abductively? Peirce’s answer was revealing: “Though we have not seen the man [Napoleon], yet we cannot explain what we have seen without the hypothesis of his existence.”17 Peirce suggested that a particular abductive hypothesis can be firmly established if it can be shown that it represents the best or only explanation of the “manifest effects” in question. As Peirce noted, the problem with abductive reasoning is that there is often more than one cause that can explain the same effect. To address this problem in geology, the late-nineteenth-century geologist Thomas Chamberlain delineated a method of reasoning he called the “method of multiple working hypotheses.”18 Geologists and other historical scientists use this method when there is more than one possible cause or hypothesis to explain the same evidence. In such cases, historical scientists carefully weigh the relevant evidence and what they know about various possible causes to determine which best explains it. 
Contemporary philosophers of science call this the method of “inference to the best explanation.” That is, when trying to explain the origin of an event or structure from the past, historical scientists compare various hypotheses to see which would, if true, best explain it. They then provisionally affirm the hypothesis that best explains the data as the most likely to be true.
~Stephen C. Meyer. Signature in the Cell (Kindle Locations 2367-2382). HarperCollins. Kindle Edition.
Two objections must be dealt with here.
1. Someone may maintain that induction does commit this fallacy and argue that it must therefore be an invalid enterprise. This objection fails to understand that induction and abduction are not deduction; rather, all of these forms of reasoning are employed in our thoughts.
2. Some, in light of the difficulties with inductive reasoning, will try to reduce the entire scientific enterprise to a merely deductive one.
J.P. Moreland says, “The deductive-nomological version of the covering-law model:
L1: All metal conducts electricity.
C1: This wire is a metal. Explanans
E: This metal wire conducts electricity. Explanandum
The deductive-statistical version of the covering-law model:
L1: 50% of radioactive substance x will decay in time t.
C1: This is z grams of substance x. Explanans
E: 50% of z will decay in time t. Explanandum
The inductive-statistical version of the covering-law model:
L1: 90% of people who get penicillin recover.
C1: Jones got penicillin. Explanans
E: Jones recovered. Explanandum
In each case, the thing to be explained (the explanandum) is explained or “covered” by inferring it from premises (the explanans), the first of which is a general law and the second of which is a statement of initial conditions. In the deductive-nomological version, the explanans contains only universal generalizations and the argument is deductive. In the deductive-statistical version, the explanation is a deductive argument that contains at least one statistical generalization in the explanans. In the inductive-statistical version, the explanation is an inductive argument (signified by the double line below C1) that includes at least one statistical generalization in the explanans. In each case, a good scientific explanation of some explanandum E will embody one of the three logical forms above.”
This framework comes historically from Carl Hempel, who introduced the deductive-nomological and inductive-statistical models of explanation.
Some may maintain that since these can be formed as deductive arguments, they’re not inductive. The problem with this view is that the premises are supported by inductive arguments, which are the very thing in question. Both these objections fail to get past the need for induction in scientific inquiry.
Now, after that, we can ask a question of the one who posits a theory: what is the probability of that theory being true? This goes especially for the one trying to keep his or her religious convictions from becoming blinding. Examples of such religious convictions would be the big bang theory, evolution, and abiogenesis. Whether they are good scientific theories is a question of whether they are empirically adequate and probable. So, how probable are these theories?
Universe:
The problem of the universe is the improbability of a life-permitting universe. Consider the example of the expansion rate of the universe. This is known as the cosmological constant. I’ll quote Dr. William Lane Craig:
A change in its value by a mere 1 part in 10^120 parts would cause the universe to expand too rapidly or too slowly. In either case, the universe would, again, be life-prohibiting… The force of gravity is determined by the gravitational constant. If this constant varied by one in 10^60 parts, none of us would exist.
This is usually compared with the number of seconds that have ticked by since time began (10^20) and with the number of cells in your body (10^14). There are around 50 of these very improbable phenomena in the universe. They are the constants and quantities of the universe, which are not based on the regularities of nature.
Evolution:
I call it the theory of evolution, but in reality it is the framework of evolution. It is a framework into which facts are interpreted and incorporated, a belief by which other beliefs are to be interpreted. But to move past that, I will continue onward. The theory of evolution isn’t just about the change in allele frequency over time. If that were what was in debate, all Christians would be theistic evolutionists. The problem is in common descent, and of particular note is the lack of transitional fossils. We seem to have just a few questionable fossils that are left to interpretation. The evidence should be in the fossils, but we lack the fossil evidence. Either we accept the lack of evidence and continue searching, or we adopt a theory like punctuated equilibria. The issue with the second option is that it undermines itself as a scientific theory, because it is a theory that denies that it could ever have evidence. Some may believe that evolutionary models give us predictability. That doesn’t seem distinctive to them, and I’m skeptical of the predictive abilities of any historical (forensic) science. By this I mean that any historical theory must explain the variety we see in the world. Such a theory must be able to provide opposites: the same process that brought us altruism must also bring us greed; the same process that brings us our immune system must also bring about why we have cancer. I think the claim of predictability is just an empty one. It can always be the case that one simply readjusts the theory to account for the observations. I may even mention the predictions evolutionists have made that did not occur, such as vestigial organs. This is not to claim theories can’t have predictive value (a theory that has the Sun at the center of the solar system gives different predictions than geocentric models; the ANE model of a metal dome predicts the stars would remain in place in the firmament).
That’s when an individual will defend his theory by giving ad hoc explanations. I just don’t find it persuasive when it comes to origins.
Can random mutations account for all the variety of life on Earth? What’s the probability?
Well, the question is answered differently depending on the person answering it. How close he is to naturalism will determine how he answers. The Bible lays out that biological life was a special creative fiat of God and cannot be the result of mere blind natural processes. The naturalist must maintain that it is all the result of long causal chains. The issue comes when you start discussing the numbers: the probability of these singular events is vanishingly small. Take Ashby Camp on the issue:
Mutations of any kind are believed to occur once in every 100,000 gene replications (though some estimate they occur far less frequently). Davis, 68; Wysong, 272. Assuming that the first single-celled organism had 10,000 genes, the same number as E. coli (Wysong, 113), one mutation would exist for every ten cells. Since only one mutation per 1,000 is non-harmful (Davis, 66), there would be only one non-harmful mutation in a population of 10,000 such cells. The odds that this one non-harmful mutation would affect a particular gene, however, is 1 in 10,000 (since there are 10,000 genes). Therefore, one would need a population of 100,000,000 cells before one of them would be expected to possess a non-harmful mutation of a specific gene. The odds of a single cell possessing non-harmful mutations of five specific (functionally related) genes is the product of their separate probabilities…. In other words, the probability is 1 in 10^8 × 10^8 × 10^8 × 10^8 × 10^8, or 1 in 10^40. If one hundred trillion (10^14) bacteria were produced every second for five billion years (10^17 seconds), the resulting population (10^31) would be only 1/1,000,000,000 of what was needed!
But even this is not the whole story. These are the odds of getting just any kind of non-harmful mutations of five related genes.
…mutated genes must integrate or function in concert with one another. According to Professor Ambrose, the difficulties of obtaining non-harmful mutations of five related genes ‘fade into insignificance when we recognize that there must be a close integration of functions between the individual genes of the cluster, which must also be integrated into the development of the entire organism.’
When one considers that a structure as “simple” as the wing on a fruit fly involves 30-40 genes (Bird, 1:88), it is mathematically absurd to think that random genetic mutations can account for the vast diversity of life on earth. Even Julian Huxley, a staunch evolutionist who made assumptions very favorable to the theory, computed the odds against the evolution of a horse to be 1 in 10^300,000.
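Camp’s arithmetic can be verified directly. A quick Python check of his figures (the variable names are mine; the numbers are his):

```python
# Checking the arithmetic in Camp's quote (his figures, not mine).
per_gene = 10**8          # cells needed per non-harmful mutation of a specific gene
five_genes = per_gene**5  # five functionally related genes compound multiplicatively
print(five_genes == 10**40)   # True

# 10^14 bacteria per second for 10^17 seconds (five billion years):
population = 10**14 * 10**17
print(population == 10**31)   # True
print(10**40 // population)   # 1000000000, i.e. the "1/1,000,000,000" shortfall
```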
This seems to be related to Haldane’s Dilemma: specifically, that beneficial mutations spread too slowly to explain large-scale biological transformation in the time available in Earth’s history. Dr. Don Batten said:
When a beneficial mutation occurs in a population, it has to increase in the number of copies for the population to progress evolutionarily (if the mutation remained in one individual, then evolution cannot proceed; this is fairly obvious). In other words, it has to substitute for the non-mutated genes in the population. But the rate at which this can happen is limited. A major factor limiting the rate of substitution is the reproduction rate of the species. For a human-like creature with a generation time of about 20 years and low reproduction rate per individual, the rate of growth in numbers of a mutation in a population will be exceedingly slow. This is basically the ‘cost of substitution.’
Imagine a population of 100,000 apes, the putative progenitors of humans. Suppose that a male and a female both received a mutation so beneficial that they out-survived everyone else; all the rest of the population died out—all 99,998 of them. And then the surviving pair had enough offspring to replenish the population in one generation. And this repeated every generation (every 20 years) for 10 million years, more than the supposed time since the last common ancestor of humans and apes. That would mean that 500,000 beneficial mutations could be added to the population (i.e., 10,000,000/20). Even with this completely unrealistic scenario, which maximizes evolutionary progress, only about 0.02% of the human genome could be generated. Considering that the difference between the DNA of a human and a chimp, our supposed closest living relative, is greater than 5%,2 evolution has an obvious problem in explaining the origin of the genetic information in a creature such as a human.
However, with more realistic rates of fitness/selection and population replenishment, the number of beneficial mutations that can be accounted for plummets. Haldane calculated that no more than 1,667 beneficial substitutions could have occurred in the supposed 10 million years since the last common ancestor of apes and humans. This is a mere one substitution per 300 generations, on average. The origin of all that makes us uniquely human has to be explained within this limit.
A substitution is a single mutational event; it can be a gene duplication or a chromosomal inversion, or a single nucleotide substitution. Biologists have found that the vast majority of substitutions are indeed single nucleotides, so Haldane’s limit puts a severe constraint on what is possible with evolution, because 1,667 single nucleotide substitutions amounts to less than one average-sized gene.
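Batten’s best-case arithmetic can be checked in a few lines. The roughly 3.2 billion base-pair figure for the human genome is my own assumed round number for the comparison; his article uses a similar figure:

```python
# Checking Dr. Batten's best-case scenario (his figures; the ~3.2e9 bp
# human genome size is an assumed round figure for the comparison).
years = 10_000_000        # supposed time since the last common ancestor
generation_time = 20      # years per generation for a human-like creature
max_substitutions = years // generation_time
print(max_substitutions)  # 500000 beneficial mutations, at one per generation

genome_bp = 3.2e9         # approximate human genome size in base pairs
print(round(max_substitutions / genome_bp * 100, 3))  # 0.016 (% of genome, ~0.02%)

# Haldane's more realistic limit of 1,667 substitutions in the same window
# works out to about one substitution every 300 generations.
print(round(max_substitutions / 1667))  # 300
```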
Lastly, Dr. Stephen Meyer asked similar questions in his book, building on the work of Dr. Murray Eden. Meyer has become a giant in the intelligent design community, and for good reason. This is from his book “Darwin’s Doubt”:
“Would such an exercise have a realistic chance of succeeding, even granting it billions of years? Eden thought not. The amino-acid chains are also subject to such inflation. A chain of two amino acids could display 20^2, or 20 × 20, or 400 possible combinations since each of the twenty protein-forming amino acids could combine with any one of that same group of twenty in the second position of a short peptide chain. With a three-amino-acid sequence, we’re looking at 20^3, or 8,000, possible sequences. With four amino acids, the number of combinations rises exponentially to 20^4, or 160,000, total combinations, and so on.
Now, the number of combinatorial possibilities corresponding to a chain with four amino acids only marginally outstrips the combinatorial possibilities associated with the five-dial lock in my first illustration (160,000 vs. 100,000). It turns out, however, that many necessary, functional proteins in cells require far, far more than just four amino acids linked in sequence, and necessary genes require far, far more than just a few bases. Most genes—sections of DNA that code for a specific protein—consist of at least one thousand nucleotide bases. That corresponds to 4^1000—an unimaginably large number—possible base sequences of that length.
Moreover, it takes three bases in a group called a codon to designate one of the twenty protein-forming amino acids in a growing chain during protein synthesis. If an average gene has about 1000 bases, then an average protein would have over 300 amino acids, each of which are called ‘residues’ by protein chemists. And indeed proteins typically require hundreds of amino acids in order to perform their functions. This means that an average-length protein represents just one possible sequence of an astronomically large number—20^300, or over 10^390—of possible amino-acid sequences of that length. Putting these numbers in perspective, there are only 10^65 atoms in our Milky Way galaxy and 10^80 elementary particles in the known universe.
That is what bothered Eden and other mathematically inclined scientists at Wistar. …
They understood the immensity of the combinatorial spaces associated with even single genes or proteins of average length. They realized that if the mutations themselves were truly random—that is, if they were neither directed by an intelligence nor influenced by the functional needs of the organism (as neo-Darwinism stipulates)—then the probability of the mutation and selection mechanism ever producing a new gene or protein could well be vanishingly small. Why? The mutations would have to generate, or ‘search’ by trial and error, an enormous number of possibilities—far more than were realistic in the time available to the evolutionary process. Eden pointed out in his Wistar presentation that the combinatorial space corresponding to an average-length protein (which he assumed to be about 250 amino acids long) is 20^250—or about 10^325—possible amino-acid arrangements. Did the mutation and selection mechanism have enough time—since the beginning of the universe itself—to generate even a small fraction of the total number of possible amino-acid sequences corresponding to a single functional protein of that length? For Eden, the answer was clearly no. For this reason, Eden thought mutations had virtually no chance of producing new genetic information. He likened the probability of producing the human genome by relying on random mutations to that of generating a library of a thousand volumes by making random changes or additions to a single phrase in accord with the following instructions: ‘Begin with a meaningful phrase, retype it with a few mistakes, make it longer by adding letters [at random], and rearrange subsequences in the string of letters; then examine the result to see if the new phrase is meaningful. Repeat this process until the library is complete.
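The combinatorial figures in the quoted passage follow from straightforward exponent arithmetic, which can be reproduced in a few lines of Python:

```python
from math import log10

# Reproducing the combinatorial figures from the quoted passage.
print(20**2, 20**3, 20**4)     # 400 8000 160000 possible short peptide chains

# A 1000-base gene: 4^1000 possible sequences, i.e. about 10^602.
print(round(1000 * log10(4)))  # 602
# An average 300-residue protein: 20^300, or about 10^390, sequences.
print(round(300 * log10(20)))  # 390
# Eden's assumed 250-residue protein: 20^250, or about 10^325.
print(round(250 * log10(20)))  # 325
```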
It should also be mentioned that while beneficial mutations do not occur often enough, harmful mutations build up every generation: around 100 nucleotide substitutions (genetic copying mistakes, i.e., single-letter typos) in every person in every generation. These cannot be eliminated by natural selection.
Dr. John Sanford (who helped invent the gene gun) puts it like this:
Additionally, very rarely a beneficial mutation arises that has enough effect to be selected for—resulting in some adaptive variation, or some degree of fine-tuning. This also helps slow degeneration. But selection only eliminates a very small fraction of the bad mutations. The overwhelming majority of bad mutations accumulate relentlessly, being much too subtle—of too small an effect—to significantly affect their persistence. On the flip side, almost all beneficials (to the extent they occur) are immune to the selective process—because they invariably cause only tiny increases in biological functionality.
‘So most beneficials drift out of the population and are lost—even in the presence of intense selection. This raises the question—since most information-bearing nucleotides [DNA ‘letters’] make an infinitesimally small contribution to the genome—how did they get there, and how do they stay there through ‘deep time’?
‘Selection slows mutational degeneration but does not even begin to actually stop it. So even with intense selection, evolution is going the wrong way—toward extinction!’
My recent book resulted from many years of intense study. This involved a complete re-evaluation of everything I thought I knew about evolutionary genetic theory. It systematically examines the problems underlying classic neo-Darwinian theory. The bottom line is that Darwinian theory fails on every level. It fails because 1) mutations arise faster than selection can eliminate them; 2) mutations are overwhelmingly too subtle to be ‘selectable’; 3) ‘biological noise’ and ‘survival of the luckiest’ overwhelm selection; 4) bad mutations are physically linked to good mutations, so that they cannot be separated in inheritance (to get rid of the bad and keep the good). The result is that all higher genomes must clearly degenerate.
