Introduction
A controversial matter in the recent discussion thread of the Noetic Noah discourse is the nature of science. Once that term is understood, along with its various ramifications, one will be better able to understand both laboratory studies and evolution, and even the source of confusion in many of the comments.

Still, a few other terms require some clarification. First are assumption and presupposition. By presupposition, I mean a belief or assumption which admits no contingency, which is always understood to be true, and which is foundational for all else. Naturalism and theism thus act as presuppositions, while evolution and creation act as assumptions and rest upon their respective presuppositions.
Two other terms are important here, and they come as couplets. A univocal argument is one that has only one clear and unambiguous meaning. As a nuance to this term, when dealing with the conclusion one might draw from evidence, we can treat univocal as we would unequivocal, referring to a situation where an argument can presumably reach only one conclusion. Evidential arguments tend to be univocal in that unequivocal sense. Likewise, analogical approaches often walk hand-in-hand with presuppositional arguments.

First Principles

Defining “science” is like trying to define “love”. One hundred years ago the tendency would have been toward testing the physical world, but today a myriad of other definitions take their respective places. If “science” is simply methodological study, then the scope of science includes all of research. If “science” is a test for measurable results, then we have removed theoretical studies from the definition. If “science” is the solution of problems, then there is yet another nuance that removes any (perceived) unproductive work from the definition. Finally, if “science” is a collection of several viable definitions, then it becomes imperative to collect and clarify the list.
Science has always had a close relationship to epistemology, the study (the science) of knowledge. Understanding the world around us, as well as understanding why and how people understand anything, takes the constraints and methods of inquiry to new levels. But the world today1 is one of relativism, and any knowledge, especially when it may be tainted by culture, assumptions, and presuppositions, often comes into question.

As any understanding of science relates to epistemology, one assumption I bring is that not all scientific study will produce certain or incorrigible conclusions. At this point in history, the fields in popular question (naturalism, evolution, intelligent design, irreducible complexity) have all produced a good quantity of inconclusive results along with their respective verified results. The goal here, though, is not to determine that one side is right and the other wrong but to examine how they produce results. The question is whether the scientific suitability of the theory structure affects the nature of the knowledge produced, that is, whether or not this knowledge is reliable. It does not change the “facts,” but it may seriously alter the assumptions behind, and carried along with, the interpretation of facts. The reliability of that knowledge is for the engineers to test, debate, and retest; this discussion is about the quality of the theory.

That a great deal of natural science demands physicalism is an assumption that I will allow because it is core to the historic definition of science. This is dealt with to a greater degree later in the paper, and the case for it will be clarified at that time. I will simply note here, and leave for later discussion, that physicalism is not the same as naturalism and does not carry the same demands.2

When we were in elementary or secondary school we were taught that science is empiricism – can I prove something by way of an experiment? A properly constructed empirical formula was all that was necessary for a proper scientific theory. Apart from the empirical formula, all else was relegated to faith because it was unprovable by the empirical method, and all else was unscientific.
But a great deal has been left out of the discussion of scientific theory-making and testing, with students receiving only part of the package. Algebra and plane geometry are other examples of science3 that presented proofs by other means. Remember all those proofs in plane geometry? Plane geometry presents proofs by employing the scientific approach of deductive reasoning. There was a mathematical experiment, a set of reportable results, and there was measurement. But there was no observation, at least not in the same manner. The rules of plane geometry are fixed – the sum of the three angles of any triangle equals 180 degrees.
Given that the top and bottom lines are parallel, angles “a” are necessarily of the same value and angles “b” are also necessarily of the same value. That leaves “a+b+c” equaling 180 degrees. Much of plane geometry is a logical process containing a set of pre-determined mathematical conclusions – there is no new experimentation which might lead to alternative or variant conclusions.
Likewise algebra presents similar logical processes where the two sides of an equation are resolved in order to arrive at a conclusion.
Problem 2x + 7 = x + 18
Solution x = 11


The science-like structure of algebra begins with a theory (the formula or problem) and through the effort of processing the formula one arrives at the conclusion.
Problem 2x + 7 = x + 18
Remove 7 from each side: 2x = x+11
Remove x from each side: 1x = 11
Therefore: x = 11


But this problem can be solved through another sequence.
Problem 2x + 7 = x + 18
Remove x from each side: 1x + 7 = 18
Remove 7 from each side: 1x = 11
Therefore: x = 11


The abstraction here is a simple one – there may be more than one distinct and correct path which leads to the correct conclusion.
These understood processes are applied to the arguments that comprise the formula. Then, after the conclusion is reached, the result is checked. This is often done by reversing the process and verifying each step. The science of algebra thus takes a step beyond the science of geometry by adding new levels of abstraction to the processes.
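The two solution paths above, and the reverse check that follows, can be sketched in code. This is a minimal illustration; the tuple representation of an equation ax + b = cx + d is my own device, not a standard notation.

```python
# Represent the linear equation ax + b = cx + d as the tuple (a, b, c, d).
# The worked problem 2x + 7 = x + 18 becomes (2, 7, 1, 18).

def remove_constant(eq, k):
    """Subtract the constant k from both sides."""
    a, b, c, d = eq
    return (a, b - k, c, d - k)

def remove_x(eq, k):
    """Subtract k*x from both sides."""
    a, b, c, d = eq
    return (a - k, b, c - k, d)

start = (2, 7, 1, 18)

# Path 1: remove the constant first, then x.
path1 = remove_x(remove_constant(start, 7), 1)
# Path 2: remove x first, then the constant.
path2 = remove_constant(remove_x(start, 1), 7)

# Both distinct paths reach the same reduced form, 1x = 11.
assert path1 == path2 == (1, 0, 0, 11)

# Checking the result by reversing the process: substitute x = 11
# back into the original formula and verify both sides agree.
assert 2 * 11 + 7 == 11 + 18
```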

These examples should suffice to show that there is a great deal more to scientific theory-making than empiricism, and the more advanced processes have changed over the last century, especially in the fields of molecular biology and quantum physics4. Even the basic presuppositions behind scientific theory-making have changed to a noticeable degree. Thus the system of theory making is far from monolithic and a survey of these changes is necessary to establish, if possible, any relationship between ID, naturalism, and accepted theory-making practices.

The human context in which theories occur is also critical in describing a scientific theory. Every scientist’s perspective carries a varying degree of objectivity, and with it a corresponding benefit or deficit. If we make the mistake of ignoring the context of each construct, we allow a potential corruption of objectivity that may easily lead a theory construct to false conclusions.

Yet many attempts to precisely define “science” remain obscure as multiple dictionaries present conflicting definitions. The American Heritage Science Dictionary5 takes the empirical and phenomenological approach:

The investigation of natural phenomena through observation, theoretical explanation, and experimentation, or the knowledge produced by such investigation. Science makes use of the scientific method, which includes the careful observation of natural phenomena, the formulation of a hypothesis, the conducting of one or more experiments to test the hypothesis, and the drawing of a conclusion that confirms or modifies the hypothesis.

Merriam-Webster does the same6:
knowledge or a system of knowledge covering general truths or the operation of general laws especially as obtained and tested through the scientific method and concerned with the physical world and its phenomena.

But the more general American Heritage Dictionary7 provides a broader scope for the term and allows for the methodology, discipline, and experience.
1.
a. The observation, identification, description, experimental investigation, and theoretical explanation of phenomena.
b. Such activities restricted to a class of natural phenomena.
c. Such activities applied to an object of inquiry or study.
2. Methodological activity, discipline, or study: I’ve got packing a suitcase down to a science.
3. An activity that appears to require study and method: the science of purchasing.
4. Knowledge, especially that gained through experience.

The majority of definitions tend toward the phenomenological and away from other potential sciences. This limitation on the definition of such an important term can mislead student and scientist alike into seeing the world through observation only, and that is a serious shortcoming which scientists of all sorts would almost certainly reject. Pearcey and Thaxton8 fall into this trap when they say, in an all-too-simple statement, that “Science is the study of nature.”
Likewise, VanTil makes the mistake of limiting the scope of science to the phenomenal and ignoring the theoretical. He says9 that
A solution of the problem of the relationship between theology and philosophy or science may be found, it will be argued, if theology limits its assertions to the realm or dimension of the supernatural and if philosophy or science limits its assertions to the realm or dimension of the natural. Good fences make good neighbors. A true science will want to limit itself in its pronouncements to the description of the facts that it meets. It is the essence of a true science that it make no pronouncements about origins and purposes.

While VanTil made no explicit rejection of the theoretical type of science, he did (good) limit it to a physicalism that could avoid theological and teleological issues but (not so good) left out the progress in science that is made by the process of estimation.
Though it is the physical sciences, and especially the laboratory experiment, which we normally think of as the true sciences, as we have seen there is much more to the picture. Not all of science produces measured results and not all of science is known to be true, as we see in the abstract mathematics of tachyon and string theory. Science is therefore defined by its context. Theoretical science, natural science, and the host of others are all generally science but specifically their own type of science. Thus generally, science is methodological study within a field of inquiry. Consistent with this, David Clark defines science in this manner10:
A science is a coherently ordered field of inquiry that is based on presuppositions and follows certain rules that are suited to its own method and object.

To some this definition is too broad: too much can be included and branded as “scientific,” such as astrology or UFOlogy. But this is precisely the core definition that we need, for this broader definition allows us to narrow and apply the term contextually and say that science is the methodological study of the natural world, whether visible, estimated, or postulated. This clarified definition avoids the constraints and baggage of naturalism and still allows for all the benefits that both physicalism and the abstract approaches might provide. How this works out is shown in the various theory-making methods used by scientists.
This definition is not held by consensus. The idea of estimated or postulated content, whether fruitful or not, can be considered controversial. Jerry Coyne, for instance, argues that a theory is only scientific if it is fruitful (providing verifiable results).11
According to the Oxford English Dictionary, a scientific theory is “a statement of what are held to be the general laws, principles, or causes of something known or observed.” Thus we can speak of the “theory of gravity” as the proposition that all objects with mass attract each other according to a strict relationship involving the distance between them. Or we talk of the “theory of relativity,” which makes specific claims about the speed of light and the curvature of space-time.
There are two points I want to emphasize here. First, in science, a theory is much more than just a speculation about how things are: it is a well-thought-out group of propositions meant to explain facts about the real world. “Atomic theory” isn’t just a statement that “atoms exist”: it’s a statement about how atoms interact with one another, form compounds, and behave chemically. Similarly, the theory of evolution is more than just the statement that “evolution happened”: it is an extensively documented set of principles — I’ve described six major ones — that explain how and why evolution happens.
This brings us to the second point. For a theory to be considered scientific it must be testable and make verifiable predictions. That is, we must be able to make observations about the real world that either support or disprove it.
Because a theory is accepted as “true” only when its assertions and predictions are tested over and over again, and confirmed repeatedly, there is no one moment when a scientific theory suddenly becomes a scientific fact. A theory becomes a fact (or a “truth”) when so much evidence has accumulated in its favor — and there is no decisive evidence against it — that virtually all reasonable people will accept it. This does not mean that a “true” theory will never be falsified. All scientific truth is provisional, subject to modification in light of new evidence.

Perhaps the one shortcoming of Dr. Coyne’s definition is its initial insistence on observation, which would remove the speculative from the venue of the scientific. Most of us, of course, would not want UFOlogy to be considered “scientific,” but we seem unable to restrict it without restricting other speculative or theoretical concerns. And how, after all, does one know that the predictions are verifiable except after the fact? Was atomic theory not properly scientific in the beginning, but only later? Because of this shortcoming, this sense of immediacy and immanence, such phenomenal approaches seem to fall short. The error is not in the concern (it is correct as far as it goes), but in the scope of the concern. It does not go far enough to provide a full definition for what is scientific.

Scientific Theory-Making Frameworks
Because the general definition of “science” is broad, so likewise is the notion of a scientific theory. Some scientific endeavors attempt to find empirical outcomes while others look at the world through different glasses. Three of these perspectives are the Received View, modeling, and mechanism/schema approaches to study.

The “Received View” and Its Implications
Science depends upon rules, and these rules are often rather strict. What has traditionally been called the “Received View” is a set of rules for designing a proper (fully-formed) theory, and compliance with these rules determines whether or not a theory is wholly “scientific.” This is the old orthodoxy of science, and the Received View, per Suppe, is as follows12:
(1) There is a first-order language L (possibly augmented by modal operators) in terms of which the theory is formulated and the logical calculus K defined in terms of L.
(2) The nonlogical or descriptive primitive constants (that is, the “terms”) of L are bifurcated into two disjoint classes:
VO which contains just the observation terms;
VT which contains the nonobservation or theoretical terms.
VO must contain at least one individual constant.
(3) The language L is divided into the following subcalculi:
(a) The observational sublanguage, LO, is a sublanguage of L which contains no quantifiers or modalities, and contains the terms of VO but none from VT. The associated calculus KO is the restriction of K to LO and must be such that any non-VO terms (that is, non-primitive terms) in LO are explicitly defined in KO; furthermore KO must admit of at least one finite model.
(b) The logically extended observational language, LO’, contains no VT terms and may be regarded as being formed from LO by adding the quantifiers, modalities, and so on, of L. Its associated calculus KO’ is the restriction of K to LO’.
(c) The theoretical language, LT, is that sublanguage of L which does not contain VO terms; its associated calculus, KT, is the restriction of K to LT.
These sublanguages together do not exhaust L, for L also contains mixed sentences – that is, those in which at least one VT and one VO term occur. In addition it is assumed that each of the sublanguages above has its own stock of predicate and/or functional variables, and that LO and LO’ have the same stock which is distinct from that of LT.
(4) LO and its associated calculi are given a semantic interpretation which meets the following conditions:
(a) The domain of interpretation consists of concrete observable events, things, or thing-moments; the relations and properties of the interpretation must be directly observable.
(b) Every value of any variable in LO must be designated by an expression in LO.
It follows that any such interpretation of LO and KO, when augmented by appropriate additional rules of truth, will become an interpretation of LO’ and KO’. We may construe interpretations of LO and KO as being partial semantic interpretations of L and K, and we require that L and K be given no observational semantic interpretation other than that provided by such partial semantic interpretations.
(5) A partial interpretation of the theoretical terms and of the sentences of L containing them is provided by the following two kinds of postulates: the theoretical postulates T (that is, the axioms of the theory) in which only terms of VT occur, and the correspondence rules or postulates C which are mixed sentences. The correspondence rules C must satisfy the following conditions:
(a) The set of rules C must be finite.
(b) The set of rules C must be logically compatible with T.
(c) C contains no extralogical term that does not belong to VO or VT.
(d) Each rule in C must contain at least one VO term and at least one VT term essentially or nonvacuously.

More simply, and without going into a lengthy discussion (Suppe does that quite nicely), these rules define the construction of the formula and the relationship of its content to the applicable subject matter. They are meant to certify that the theory is clear, stated with adequate precision to allow processing and testing, that the content of the theory is consistent with the content of the anticipated and measurable conclusions, and that it is narrow enough to allow the results to be verified. The terms control the language that comprises the theory, and they control the statement relationships within the theory. They do not define the outcome of the theory (whether it is successful at reaching its conclusion), and they do not determine whether fact or error is produced by the theory.

This is not the scientific method that we learned in school. It is a set of rules that determines how a test is set up by way of the scientific method. It says that a test for gray squirrels should not result in a conclusion that describes squirrelness, red squirrels, or gophers. The language of the test, the hypothesis, and the interpretive method should all be clear, specific, limited in scope, and inter-related.
What we learned as the scientific method goes something like this:

1. Postulate: Is this triangle 3-sided?
2. Theory: This triangle is 3-sided.
3. Test: Count the sides.
4. Measurement: Add up the numbers.
5. Evaluation: Do the numbers == 3?
   Yes: report results as successful.
   No: report results, revise postulate, and retest.
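The five steps above can be sketched as a short program. The shape names and side lists below are illustrative stand-ins, not part of any standard method.

```python
def run_test(shape_name, sides, postulated_sides=3):
    """Walk the five steps: postulate, theory, test, measurement, evaluation."""
    # Steps 1-2. Postulate and theory: this shape has `postulated_sides` sides.
    # Steps 3-4. Test and measurement: count the sides and add up the numbers.
    measured = len(sides)
    # Step 5. Evaluation: do the numbers match the postulate?
    if measured == postulated_sides:
        return f"{shape_name}: success ({measured} sides)"
    return f"{shape_name}: failed ({measured} sides) - revise postulate and retest"

# A triangle passes the test; a square fails it, yet the test structure
# itself remains valid in both runs.
print(run_test("triangle", ["AB", "BC", "CA"]))
print(run_test("square", ["AB", "BC", "CD", "DA"]))
```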

The Received View says that the language throughout the process should be consistent. If one postulates three sides, then one theorizes regarding three sides, tests three sides, measures the number of sides, and reports regarding the number of sides. This constraint limits the language and the tests to a clear correspondence. The constraints placed on the test, as gathered from the Received View, are here useful and functional for arriving at the test results.

Important here is that to be scientific is to have a correct structure, not a certain type of product or outcome, or even a successful execution. If one were to modify the postulate (and the subsequent theory and test) to look for three sides on a square, the results would prove false, but it would still be a valid theory structure. The structure of the test was designed and followed consistently to lead to a conclusion. It is not required that a test lead to a true conclusion. It is equally legitimate to test to a false or erroneous conclusion if that is the goal established up front. A test that leads to inaccuracy is still useful for other purposes, such as clarifying what is not (showing what cannot work), as opposed to what is (showing what does work).

Because of this myriad of shortcomings, this view is no longer simply the Received View but is now the Once Received View13 (or ORV), and other frameworks now take its place in theory-making. It is no longer the standard for scientific orthodoxy, if there even is any orthodoxy within scientific methodologies.

While the Received View has its use in empirical testing of the physical world, there are some tasks it is incapable of addressing. It is a system concerned with particulars and not with broader operations and processes. It is not a modeling system whereby a process or system can be duplicated or emulated. And it is not capable of providing a complex schema. The next two systems provide solutions in those areas. To help explain them, each is presented with an everyday example or parallel of how the theory structure might be employed outside the world of science.

The Model Model
One way to deal with the precision required by the Received View was to substitute abstraction for precision. Instead of building a list of successive tests, the scientist builds a model and processes to run (to test) against this model.14 This approach has the advantage of relaxing the language constraints by lifting them to a level of abstraction. It allows a scientific theory to test against a principle instead of a precise outcome.

But the abstractions can become too abstract, too removed from producing a clear conclusion. Just as the Received View has potential problems with an unclear Theory, the Model Model must control the obscurity of its abstractions. A model which is too broadly stated is meaningless. Craver15 clarifies the need for some sort of fruitful outcome:
Each of these MM approaches to laws provides tools to grapple with issues of universality, scope, abstraction, and idealization. Suppes’ approach is prima facie more appealing because it sustains the reasonable claim that theories express empirical commitments. Neither approach clarifies the necessity of laws. Giere (1999, p. 96) suggests that the necessity of law statements should, like issues of scope, be considered external to theories. This suggestion is unattractive primarily because many uses of theories (including explanation, control, and experimental design) depend crucially upon notions of necessity; an account of theories cannot cavalierly dismiss problems with laws precisely because laws (or something else filling their role) are so crucial to the functions of theories in science.

This approach has found a home in computer science. Software construction is often done in terms of an abstract routine that processes data by way of a predicted format, with appropriate exception handling, using generalized routines, as opposed to the traditional and certainly more empirical top-down approach where information is simply taken in steps, piece by piece. As a result the algorithm for a routine has changed from

Routine
If A is true then do X else do X’
If B is true then do Y else do Y’
If C is true then do Z else do Z’
End Routine


to the more abstract

Routine
If A.noException() then A.Process()
If B.noException() then B.Process()
If C.noException() then C.Process()
End Routine


This new type of abstracted structure is implicit in object-oriented software design where the old data elements that might have represented keystrokes or numbers now represent both data and processes. This is especially true in Java and C++16.
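The abstract routine above can be made concrete in any object-oriented language. Here is a minimal Python sketch, keeping the `noException`/`Process` names from the pseudocode (rendered in Python naming style); the `Element` class and its validation rule are invented purely for illustration.

```python
class Element:
    """An element that carries both its data and its own process."""
    def __init__(self, value):
        self.value = value
        self.processed = False

    def no_exception(self):
        # Validation is bundled with the data rather than handled
        # piece by piece by the caller (an illustrative rule).
        return self.value is not None

    def process(self):
        self.processed = True

def routine(elements):
    # The routine no longer inspects each datum step by step; it only
    # asks each element to validate and process itself.
    for e in elements:
        if e.no_exception():
            e.process()

a, b, c = Element(1), Element(None), Element(3)
routine([a, b, c])
assert a.processed and c.processed and not b.processed
```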

Models are also used in the software development process itself. As a way to control that process, the software development model is prescriptive: its goal is to control and guarantee the quality of the software produced. Each of its steps has a review process that includes customer involvement with approval, correction, review, and go/no-go considerations.

Unlike the software development model, other models are descriptive rather than prescriptive. A familiar scientific model of this character has to do with disease and inoculation statistical studies.17 A disease like influenza may spread rapidly if few or none of the populace is inoculated against it. But there comes a percentage point where, if enough people receive their flu shots, the spread of the influenza virus is halted. Unlike the physicalism of the Received View, modeling provides a statistical measurement of the probability of a halt to disease transmission. It does not guarantee the halt but raises the probability to an acceptable level.
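The percentage point described in the inoculation example is usually called the herd-immunity threshold. Under one simple, standard assumption it equals 1 - 1/R0, where R0 is the number of new cases each infection would cause in a fully susceptible population. The R0 values below are illustrative, not measured figures for influenza.

```python
def herd_immunity_threshold(r0):
    """Fraction of the populace that must be inoculated to halt spread,
    assuming each case would otherwise cause r0 new cases."""
    return 1 - 1 / r0

# Illustrative values of r0 (not measured figures): the model yields a
# probability-based threshold, not a guarantee that spread halts.
for r0 in (1.5, 2.0, 3.0):
    threshold = herd_immunity_threshold(r0)
    print(f"r0 = {r0}: inoculate at least {threshold:.0%} of the populace")
```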

Models do not produce deductive proofs. A model might become a component in a larger empirical test and so provide a step toward a result, but that result comes from the empirical sequence and not the model. A model may also produce a result consistent with a theory, and it may exist only for the purpose of examination, without the benefit of a precisely stated thesis.

Models also do not produce these traditional proofs because a model is not a deductive test. Any conclusion drawn from a model is an inductive conclusion. It is a speculative conclusion (not necessarily in error, just not deductive) based on repeated and varied tests against the model, whereby a statistical conclusion is reached. An example of this will be provided in the exploration of some naturalistic scientific models.18
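A minimal illustration of such an inductive, statistical conclusion: repeated runs against a simple chance model converge on an estimate without ever constituting a deductive proof. The fair-coin model and trial counts here are purely illustrative.

```python
import random

def model_run(trials, seed=0):
    """Repeated tests against a fair-coin model yield a statistical,
    not deductive, estimate of the probability of heads."""
    rng = random.Random(seed)  # seeded for reproducibility
    heads = sum(rng.random() < 0.5 for _ in range(trials))
    return heads / trials

# More repetitions tighten the estimate, but no run proves p = 0.5.
for trials in (100, 10_000, 1_000_000):
    print(trials, model_run(trials))
```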

There are, as Craver mentioned, three ways in which models may be employed. (There is a fourth, but it follows in the next section because it has some unique characteristics which make it stand apart from these three.) The most common is conceptual replication and emulation. (This parallels what Craver called experimental design.) This approach is used commonly by engineers who construct soft models within a CAD/CAE environment and then operate these models entirely within the confines of a computer before anything physical is ever built. The approach has value not only in the creation of a device but also in diagnosing problems at a later time. In both cases the same model and method is employed though it may operate with a different set of constraints depending on the purpose.

The second use is explanatory. Almost everyone has seen molecular models, toy car models, and other similar items. These models serve a simpler function. Instead of leading to a functional conclusion they instead lead to understanding. Some of these, like a molecular model, may be employed toward functional ends, but that use is not required as they may stand alone.

The third use of models is information evaluation. (This is similar to Craver’s control use of models.) In this situation the model is used to evaluate information and pronounce on its nature and character. This is the approach that the evolutionary model takes: information is placed into the model in an arrangement suited to the assumed truth of the model. (Most theologies also take this approach.)

Of these three (evaluation, explanation, and emulation), two can produce nothing new. That is, their fruit tends to be abstract. Evaluation and explanation place data into a pre-determined framework, and that requires certain presuppositions which define the very fruit of the model. This leaves them “begging the question” in the broadest sense, and in doing so they also become unfalsifiable. That is not really a problem, because these models are not attempting to produce a true, false, or otherwise estimable product.

Only a model which emulates some function can be thought of as producing a fruit that parallels the original being emulated. This principle sits behind and drives the mechanism model.

Mechanism (Schema) Models
A close sibling to the aforementioned models, this next type of theory describes a mechanism19 and its processes. It shares with modeling the need for repetition and emulation but becomes clearer as the mechanism model attempts to produce a measurable output through a more concrete process description, or schema. The schema produced by a mechanism theory builds a structure of higher-order activities and their required lower-order activities. These are built as dependencies, or gates, with a variety of and/or conditions and other conditions attached in their necessary relationships.

More simply, the mechanism approach can be seen as an extension of some basic model approaches. The difference here is that we look first at the structure or pattern, the schema, of the material. These models also tend to be close, if not physical, implementations of something like the principle being assessed.
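A schema of this kind can be sketched as a small dependency evaluator: higher-order activities gated by and/or conditions over lower-order ones. The activity names and gates below are invented for illustration.

```python
# Each gate maps a higher-order activity to ("and" | "or", dependencies).
# Leaf activities are looked up directly in `state`.
schema = {
    "assemble": ("and", ["parts_ready", "tools_ready"]),
    "parts_ready": ("or", ["in_stock", "delivered"]),
}

def satisfied(activity, schema, state):
    """Evaluate an activity through its gated lower-order dependencies."""
    if activity not in schema:
        return state.get(activity, False)
    kind, deps = schema[activity]
    results = [satisfied(d, schema, state) for d in deps]
    return all(results) if kind == "and" else any(results)

state = {"in_stock": False, "delivered": True, "tools_ready": True}
# The OR gate lets delivery stand in for stock; the AND gate still
# requires the tools as well.
assert satisfied("assemble", schema, state)
```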

One example of this is electronic circuit design. Though generally used in design, circuit emulation is often used to reverse-engineer processes and so creates a new (virtual) item that does what the examined item does, even though the internals may prove to be quite different.

Another example of a mechanism theory is IDEF business analysis. The drawing of these dependencies produces a finite-state description of operations that lead to existing results. In the case of IDEF the results are used to improve processes. It is often employed in business to find bottlenecks in office procedures.

In the physical and biological sciences the mechanism model is evident in molecular biology. We have all seen drawings of atoms and molecules, especially representations of DNA.

Mechanistic representations allow for functional experimentation. Tests of chemical behaviors and the subsequent changes in molecular structure are very deductive tests. When you add an acid to a base, the result will be neutralized to some degree, and the chemical reaction is measurable and observable. But combine two parents’ DNA and the results are as various as all the creatures on earth, and more.
A mechanism can also attempt a physical reconstruction. An example of this might be an atomic collider, attempting to emulate, physically, something that is considered to have occurred aeons ago in the early universe, or in some subatomic structure.

Some Consequences

The idea that science is about facts and religion is about faith sets up a false dilemma. Logically, this is also a “black and white” fallacy, assuming that there are only two choices. As we have seen, not all science is about facts. A good deal of science is about theory, some of which, even if fruitful, might be completely false.

Likewise, all facts are subject to the interpretive process that their methodology establishes. All facts are placed under the constraint of the language of the ORV, the structure of the model into which they are inserted, or their function in the defined or discerned mechanism. Science, even the natural sciences, does not exist apart from the interpretive process. Likewise, as we saw with the Grants’ work, some scientists are satisfied to abuse the model and make data fit the assumptions of the scientist.

There is also no clear distinction between faith and science. The lack of certainty that accompanies a great deal of science, that is, its dependence upon presuppositions little different from religious faith, should not be missed. Given also that there are multiple methods in science, the conflict set up between the two is false, a simple black-and-white fallacy. There are more than two options available.

Science is about much more than univocal statements. It covers not only the material being observed but also the various types of theory structures wrapped around it. This bears on how we handle controversies. Is a person “anti-science” because of a disagreement with the framework being used to interpret the evidence? Is one anti-science because one has concluded that the misframing of evidence has led to erroneous conclusions? The answers are not so simple that one can easily slander another person over the many areas of disagreement, even outside of theological-scientific discourse.

Science that is about models is not the same as the empirical lab test. And a mechanism differs greatly from ORV test conditions. Even scientists disagree about what is scientific. It’s not so simple as some might make it appear.


Notes:

1 Laudan, Larry, Progress and Its Problems: Towards a Theory of Scientific Growth, 1977, University of California Press. Laudan summarizes many of the issues facing scientific inquiry over recent decades. Though he takes a pragmatic approach, he has an excellent understanding of the problem of knowledge and purpose. He states his position on multiple occasions in the book, that “science is essentially a problem-solving activity,” and pursues the theme rigorously. pp. 4, 11
2 Physicalism is a limitation placed on inquiry. It is the demand that the processes we employ deal only with the physical world. It is a restriction on the scope of the methodology and is not a statement regarding the assumptions brought to the inquiry. Naturalism comes in two segments. Methodological naturalism, sometimes confused with physicalism, includes the assumption that only the physical exists because it is “the only effective way to investigate reality.” (italics mine) Metaphysical naturalism carries this position a step further by declaring that there is nothing but the physical. http://en.wikipedia.org/wiki/Methodological_naturalism
3 Bell, E. T., Mathematics: Queen and Servant of Science (Spectrum), 1951, McGraw-Hill, p. 127ff. Mathematics is today the queen of science as it is the overarching science and so provides the ground rules for scientific study. At the same time it serves science because it provides the methodology and mechanism for measuring results.
4 Kauffman, Stuart, “Prolegomenon to a General Biology”, in Debating Design: From Darwin to DNA, 2006, Cambridge, pp. 151ff. Building on a foundation of quantum physics theory, Kauffman is seeking a new creative source, a new law of physics, that allows life to be self-advancing. He seeks “an autonomous agent” that “is a self-reproducing system able to perform at least one thermodynamic work cycle.”
5 science. Dictionary.com. The American Heritage® Science Dictionary. Houghton Mifflin Company. http://dictionary.reference.com/browse/science (accessed: June 30, 2008).
6 science. Dictionary.com. Merriam-Webster’s Medical Dictionary. Merriam-Webster, Inc. http://dictionary.reference.com/browse/science (accessed: June 30, 2008).
7 science. Dictionary.com. The American Heritage® Dictionary of the English Language, Fourth Edition. Houghton Mifflin Company, 2004. http://dictionary.reference.com/browse/science (accessed: June 30, 2008).
8 Pearcey, Nancy R., and Thaxton, Charles B., The Soul of Science: Christian Faith and Natural Philosophy (Turning Point Christian Worldview Series), 1994, Crossway, p. 22
9 VanTil, Cornelius, Christian Apologetics, Second Edition, 2003, R&R Publishing, p. 60
10 Clark, David K., To Know and Love God: Method for Theology (Foundations of Evangelical Theology), 2003, Crossway, p. 37
11 Coyne, Jerry A., Why Evolution Is True, Viking, 2009, pp. 16-17
12 Suppe, Frederick, The Structure of Scientific Theories, 1977, University of Illinois Press, pp. 50-51.
13 Machamer, Peter, and Silberstein, Michael, The Blackwell Guide to the Philosophy of Science (Blackwell Philosophy Guides), 2002, Blackwell Publishers, p. 55, Carl F. Craver’s essay “Structures of Scientific Theories.”
14 Ibid., p. 65
15 Ibid., p. 67
16 Abstract (object-oriented) languages such as Java and C++ are designed around classes. These classes are composites of both raw information and code. They may be cascaded so that Class A gains (inherits or implements) its data and code structures from parent classes. The abstractions in an application can become quite involved but would be otherwise too difficult or time-consuming to either develop or maintain without the availability of class structures.
17 Geier, David A., King, Paul G., and Geier, Mark R., Influenza Vaccine: Review of Effectiveness of the U.S. Immunization Program, and Policy Considerations, Journal of American Physicians and Surgeons, Volume 11, Number 3, Fall 2006. This study addresses concerns of both individual effectiveness and population-wide effectiveness.
18 Suppe, Frederick, The Structure of Scientific Theories, 1977, University of Illinois Press, pp. 64-152, quoting Kuhn: “Moreover, when one does look at how a scientist proposes or discovers these laws, theories, and hypotheses, one finds that he is not looking for anything like the physically interpreted deductive system of the Received View wherein his data are derivable consequences. Rather his initial search is for an explanation of the data – for a ‘conceptual pattern in terms of which his data will fit intelligibly along better-known data.’”
19 Ibid, p. 69
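The cascaded class structures described in note 16 can be sketched minimally; the class names and values here are hypothetical, chosen only to show data and code being inherited from a parent class:

```java
// Hypothetical example: Class B gains (inherits) both its data and its
// code structures from parent Class A, as note 16 describes.
class A {
    protected int data = 42;            // raw information held by the class
    int describe() { return data; }     // code (behavior) bundled with it
}

class B extends A {                     // B inherits A's data and code
    @Override
    int describe() { return data * 2; } // and may specialize the behavior
}

public class ClassDemo {
    public static void main(String[] args) {
        System.out.println(new A().describe()); // prints 42
        System.out.println(new B().describe()); // prints 84
    }
}
```

Because B reuses A’s field and merely overrides one method, the composite abstraction costs far less to develop and maintain than duplicating the structure, which is the point the note makes.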

Articles by Collin Brendemuehl
