(a). Scientific Method
Francis Bacon (1561-1626), an English philosopher, was the first individual to suggest a universal methodology for science. Bacon believed that the scientific method required an inductive process of inquiry. Karl Popper refuted this idea in the 20th century, suggesting that science could only be done using a deductive methodology. The next topic (3b) examines Popper's recommended methodology for doing science more closely.
Science is
simply a way of acquiring knowledge about nature and
the Universe. To practice science, one must follow a
specific universal methodology. The central theme of
this methodology is the testing of hypotheses. A hypothesis can be defined as a proposal, not yet formally tested, that is intended to explain certain facts or observations.
The overall goal of science is to better comprehend the
world around us. Various fields of study, like physics,
chemistry, biology, medicine and the earth sciences,
have used science exclusively to expand their knowledge
base. Science allows its practitioners to acquire knowledge
using techniques that are both neutral and unbiased.
The broadest, most inclusive
goal of science is to understand (see Figure
3a-1). Understanding involves two interconnected
processes: explanation and confirmation.
Explanation is perhaps the most important basic goal
of understanding. Explanation consists of explaining
reality with a system of hypotheses, theories, and
laws. Explanation may also relate observed phenomena
to a system of empirical formulas, or link them to
mechanisms that are hierarchically structured at both
higher and lower levels of function. A theory can
be defined as a collection of logical ideas that are
used to explain something. The process of testing,
refining, and re-testing hypotheses constructs theories.
The nature of this confirmation process suggests that
theories are rarely static.
Figure 3a-1: Relationship between reality, theory, and understanding in science. This model suggests that we develop scientific theories to explain phenomena found in reality. Once a theory is established, it must be confirmed by re-examining reality in search of contrary data. If contrary data are found, the theory is modified to incorporate this new information and the confirmation process begins again. The process of validating theories is endless, because we can never assume that we have considered all possibilities.
Figure 3a-2: Facilitating tools involved in explanation and confirmation.
Explanation has two important secondary components: idealization and unification (see Figure 3a-2). Idealization can be thought of as the condensation of a body of empirical fact into a simple statement. In the process of condensation, some detail must be omitted and the processes and phenomena abstracted. Idealization may also involve isolating the phenomenon from other aspects of the system of interest. A second aspect of explanation is the unification of apparently unrelated phenomena within the same abstract or ideal system of concepts.
The other goal of science associated with understanding is the confirmation of the constructed models or theories. Confirmation is accomplished through hypothesis testing, prediction, and experimentation. The next topic (3b) examines these aspects of science in greater detail.
(b). The Hypothetico-Deductive Method
Philosopher Karl
Popper suggested that it is impossible
to prove a scientific theory true by means of induction,
because no amount of evidence assures us that contrary
evidence will not be found. Instead, Karl Popper
proposed that proper science is accomplished by deduction.
Deduction involves the process of falsification.
Falsification is
a particular specialized aspect of hypothesis
testing. It involves stating some outcome predicted by theory in specific terms and then searching for contrary cases using experiments or observations. The methodology
proposed by Popper is commonly known as the hypothetico-deductive
method.
Popper's version of the scientific method begins with the postulation of a hypothesis. A hypothesis is an educated guess or a theory that explains some phenomenon. The researcher then tries to prove or test this hypothesis false through prediction or experimentation (see Figure 3a-2). A prediction is a forecast or extrapolation from the current state of the system of interest. Predictions are most useful if they can go beyond a simple forecast. An experiment is a controlled
investigation designed to evaluate the outcomes of causal
manipulations on some system of interest.
To get a better understanding of the hypothetico-deductive method, we can examine the following geographic phenomenon. In the brackish tidal marshes of the Pacific Coast of British Columbia and Washington, the plants in these communities spatially arrange themselves in zones defined by elevation. Near the shoreline, plant communities are dominated primarily by a single species known as Scirpus americanus. At higher elevations on the tidal marsh, Scirpus americanus disappears and a species called Carex lyngbyei becomes widespread. The following hypothesis has been postulated to explain this phenomenon:
The distribution of Scirpus americanus and Carex lyngbyei is
controlled by their tolerances to the frequency of
tidal flooding. Scirpus americanus is
more tolerant of tidal flooding than Carex lyngbyei and
as a result it occupies lower elevations on the tidal
marsh. However, Scirpus americanus cannot
survive in the zone occupied by Carex lyngbyei because
not enough flooding occurs. Likewise, Carex lyngbyei is
less tolerant of tidal flooding than Scirpus americanus and
as a result it occupies higher elevations on the tidal
marsh. Carex lyngbyei cannot survive
in the zone occupied by Scirpus americanus because
too much flooding occurs.
According to Popper, to test this theory a scientist would now have to try to prove it false. As discussed above, this can be done in two general ways: 1) predictive analysis; or 2) experimental manipulation.
Each of these methods has been applied to this problem
and the results are described below.
Predictive Analysis
If the theory is correct, we should find that in any tidal marsh plant community containing Scirpus americanus and Carex lyngbyei, the spatial distribution of these two species is similar. This is indeed true. However, there could be some other causal factor, besides flooding frequency, that is responsible for these spatial patterns.
Experimental Manipulation
If the two species are transplanted into each other's zones, they should not be able to survive. An actual transplant experiment found that Scirpus americanus can in fact grow in the zone occupied by Carex lyngbyei, while Carex lyngbyei can also grow at lower Scirpus americanus sites. However, the growth of Carex lyngbyei became less vigorous as the elevation became lower, and below a certain elevation it could not grow at all. These results falsify the postulated theory, so the theory must be modified based on the results and tested again.
The process of testing theories in science
is endless. Part of this problem is related to the complexity
of nature. Any one phenomenon in nature is influenced
by numerous factors each having its particular cause
and effect. For this reason, one positive test result
is not conclusive proof that the phenomenon under study
is explained. However, some tests are better than others
and provide us with stronger confirmation. These tests usually allow for the isolation of the phenomenon under study from the effects of other causal factors. Manipulative experiments
tend to be better than tests based on prediction in this
respect.
(c). Concepts of Time and Space in Physical Geography
The concepts of time and space are
very important for understanding the function of phenomena
in the natural world. Time is important to Physical Geographers
because the spatial patterns they study can often only
be explained in historic terms.
The measurement of time is not absolute. Time is perceived by humans in a relative fashion, using human-created units of measurement. Seconds, minutes, hours, and days are examples of such units.
Geographers generally conceptualize
two types of space. Concrete
space represents the real world or environment. Abstract
space models reality in a way that distills much
of the spatial information contained in the real world.
Maps are an excellent example of abstract space. Finally, like time, space is also perceived by humans in a relative fashion, using human-created units of measurement.
Both time and space are variable in terms
of scale.
As such, researchers of natural phenomena must investigate
their subjects in the appropriate temporal and/or spatial
scales. For example, an investigator studying a forest
ecosystem will have to deal with completely different
scales of time and space when compared to a researcher
examining soil bacteria. The trees that make up a forest
generally occupy large tracts of land. For example, the
boreal forest occupies millions of hectares in Northern
Canada and Eurasia. Temporally, these trees have life
spans that can be as long as several hundred years. On
the other hand, soil bacteria occupy much smaller spatial
areas and have life spans that can be measured in hours
and days.
(d). Study of Form or Process?
Physical Geography as a science is experiencing
a radical change in philosophy. It is changing from a
science that was highly descriptive to
one that is increasingly experimental and theoretical.
This transition represents a strong desire by Physical
Geographers to understand the processes that
cause the patterns or forms we
see in nature.
Before 1950, the main purpose of research
in Physical Geography was the description of the natural
phenomena. Much of this description involved measurement
for the purpose of gaining basic facts dealing with form or
spatial appearance. Out of this research Physical Geographers
determined such things as: the climatic characteristics
for specific locations and regions of the planet; flow
rates of rivers; soil characteristics for various locations
on the Earth's surface; distribution ranges of plant
and animal species; and calculations of the amount of
freshwater stored in lakes, glaciers, rivers and the
atmosphere. By the middle of the 20th century, Physical Geographers had begun to examine the descriptive data that had been collected and to ask questions related to why. Why is the climate of urban environments different from that of rural environments? Why does hail form only in thunderstorms? Why are soils of the world's tropical regions nutrient poor? Why do humid and arid regions
of the world experience different levels of erosion?
In Physical Geography, and all other sciences,
most questions that deal with why are usually queries
about process. Some
level of understanding about process can be derived from
basic descriptive data. Process is best studied, however,
through experimental manipulation and hypothesis testing.
By 1950, Physical Geographers were more interested in
figuring out process than just collecting descriptive
facts about the world. This attitude is even more prevalent
today because of our growing need to understand how humans
are changing the Earth and its environment.
Finally, as mentioned above, a deeper understanding
of process normally requires the use of hypothesis testing,
experimental methods, and statistics. As a result, the
standard undergraduate and graduate curriculum in Physical
Geography exposes students to this type of knowledge
so they can better ask the question why.
(e). Descriptive Statistics
Introduction
Physical Geographers
often collect quantitative information about natural
phenomena
to further knowledge in their field of interest.
The collected data is then often analyzed statistically to provide the researcher with an impartial and enlightening presentation, summary, and interpretation of the phenomena under study. The most common statistical
analysis performed on data involves the determination
of descriptive characteristics like measures of central
tendency and dispersion.
It usually is difficult to obtain measurements
of all the data available in a particular system
of interest. For example, it may be important to
determine the average atmospheric pressure found
in the center of hurricanes. However, to make a definitive
conclusion about a hurricane's central pressure with
100% confidence would require the measuring of all
the hurricanes that ever existed on this planet.
This type of measurement is called a population parameter. Under normal situations, the determination of population parameters is impossible, and we settle instead for a measure derived from a subset of the population, commonly called an estimator. Estimators are determined by taking a representative sample of the population being studied.
Samples are normally
taken at random.
Random sampling implies that each measurement in
the population has an equal chance of being selected
as part of the sample. It also ensures that the occurrence
of one measurement in a sample in no way influences
the selection of another. Sampling methods are biased if
the recording of some influences the recording of
others or if some members of the population are more
likely to be recorded than others.
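As a rough illustration of random sampling, the short Python sketch below draws a simple random sample from a hypothetical population of hurricane central pressures; the population values and the sample size are invented purely for demonstration.

import random

# Hypothetical population of hurricane central pressures in millibars.
# These numbers are illustrative only, not real observations.
population = [918, 927, 935, 940, 944, 950, 955, 960, 962, 968, 971, 975]

# Simple random sampling: every member of the population has an equal
# chance of being chosen, and selecting one value does not influence
# the selection of another.
sample = random.sample(population, k=5)

# The sample mean is an estimator of the (usually unknowable) population mean.
sample_mean = sum(sample) / len(sample)
print(sample, sample_mean)

Because random.sample gives every member of the population an equal chance of selection, the resulting sample mean is an unbiased estimator of the population mean in the sense described above.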
Measures of Central Tendency
Collecting data
to describe some phenomena of nature usually produces
large arrays
of numbers. Sometimes it is very useful to summarize
these large arrays with a single parameter. Researchers
often require a summary value that locates the center of a data sample's distribution; in other words, a measure of the central tendency of the data set. The most common of these measures are the mean,
the median,
and the mode.
Table 3e-1 describes a 15-year series of the number of days with precipitation in December for two fictitious locations. The following discussion
describes the calculation of the mean, median,
and mode for
this sample data
set.
Table 3e-1: Number of days with precipitation in December for Piney and Steinback, 1967-81.

Year      Piney    Steinback
1967       10        12
1968       12        12
1969        9        13
1970        7        15
1971       10        13
1972       11         9
1973        9        16
1974       10        11
1975        9        12
1976       13        13
1977        8        10
1978        9         9
1979       10        13
1980        8        14
1981        9        15
Σ(Xi)     144       187
N          15        15
The mean value of each of these two data sets is determined by summing the yearly values and dividing by the number of observations in the data set. In mathematical notation this calculation would be expressed as:

mean (X̄) = Σ(Xi) / N

where Xi represents the individual values, N is the number of values, and Σ (sigma) is a sign used to show summation. Thus, the calculated means for Piney and Steinback are:

Piney mean = 144/15 = 9.6
Steinback mean = 187/15 = 12.5
The mode of
a data series is that value that occurs with greatest
frequency. For Piney, the most frequent value
is 9, which occurs five times. The mode for Steinback
is 13.
The third measure of
central tendency is called the median.
The median is the middle value (or the average of the
two middle values in an even series) of the data set
when the observations are organized in ascending order.
For the two locations in question, the sorted observations and medians are:

Piney
7, 8, 8, 9, 9, 9, 9, 9, 10, 10, 10, 10, 11, 12, 13
median = 9

Steinback
9, 9, 10, 11, 12, 12, 12, 13, 13, 13, 13, 14, 15, 15, 16
median = 13
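These three measures can also be reproduced with a few lines of Python. The sketch below assumes the Table 3e-1 observations have been entered as plain lists and uses the standard statistics module to calculate the mean, median, and mode for each station.

import statistics

# December days with precipitation, 1967-81 (Table 3e-1).
piney     = [10, 12, 9, 7, 10, 11, 9, 10, 9, 13, 8, 9, 10, 8, 9]
steinback = [12, 12, 13, 15, 13, 9, 16, 11, 12, 13, 10, 9, 13, 14, 15]

for name, data in [("Piney", piney), ("Steinback", steinback)]:
    mean = statistics.mean(data)      # sum of the values divided by N
    median = statistics.median(data)  # middle value of the sorted series
    mode = statistics.mode(data)      # most frequently occurring value
    print(f"{name}: mean = {mean:.1f}, median = {median}, mode = {mode}")

Running the sketch prints a mean of 9.6, a median of 9, and a mode of 9 for Piney, and a mean of 12.5, a median of 13, and a mode of 13 for Steinback, in agreement with the hand calculations above.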
Measures of Dispersion
Measures of central
tendency provide no clue into how the observations
are dispersed within
the data set. Dispersion can be calculated by a variety
of descriptive statistics including the range, variance,
and standard
deviation. The simplest measure of dispersion
is the range.
The range is
calculated by subtracting the smallest individual
value from the largest. When presented together with
the mean,
this statistic provides a measure of data set variability.
The range,
however, does not provide any understanding of how the
data are distributed about the mean.
For this measurement, the standard
deviation is of value.
The following information describes the
calculation of the range, variance,
and standard deviation for the data set in Table 3e-2.
Table 3e-2: Dates of the first fall frost at Somewhere, USA, for an 11-year period.

Day of First Frost* (Xi)    Xi - X̄    (Xi - X̄)²
291                          -8          64
299                           0           0
279                         -20         400
302                           3           9
280                         -19         361
303                           4          16
299                           0           0
304                           5          25
307                           8          64
314                          15         225
313                          14         196

Σ(Xi) = 3291;  X̄ = 3291/11 = 299;  Σ(Xi - X̄)² = 1360

*The dates are given in year days, i.e., January 1st is day 1, January 2nd is day 2, and so on throughout the year.
The range for this data set is derived by subtracting 279 (the smallest value) from 314 (the largest value). The range is 35 days.
The first step in the calculation of the standard deviation is to determine the variance by obtaining the deviations of the individual values (Xi) from the mean (X̄). The formula for the variance (S²) is:

S² = Σ(Xi - X̄)² / (N - 1)

where Σ is the summation sign, (Xi - X̄)² is calculated in the third column of Table 3e-2, and N is the number of observations. The standard deviation (S) is merely the square root of the variance (S²).

S² = 1360 / 10 = 136
S = 11.7, or 12 (to the nearest day)
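A short Python sketch of these calculations is given below; it assumes the Table 3e-2 frost dates have been entered as a plain list and uses the same sample-variance formula as above (division by N - 1).

import statistics

# First fall frost at Somewhere, USA, in year days (Table 3e-2).
frost_days = [291, 299, 279, 302, 280, 303, 299, 304, 307, 314, 313]

data_range = max(frost_days) - min(frost_days)  # range = largest - smallest value
variance = statistics.variance(frost_days)      # S^2 = sum((Xi - mean)^2) / (N - 1)
std_dev = statistics.stdev(frost_days)          # S = square root of the variance

# Prints 35, 136.0 and 11.7, matching the worked example above; note that
# statistics.variance uses the unrounded mean (299.18) rather than 299.
print(data_range, round(variance, 1), round(std_dev, 1))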
This value provides significant information about the distribution of data around the mean. For data that are approximately normally distributed:
(a) The mean ± one
sample standard
deviation contains approximately 68% of the
measurements in the data series.
(b) The mean ± two
sample standard
deviations contains approximately 95% of the
measurements in the data series.
In Somewhere, the corresponding
dates for fall frosts ± one and two standard
deviations from the mean (day
299) are:
Minus two standard deviations: 299 -
24 = 275
Minus one standard deviation: 299 - 12
= 287
Plus one standard deviation: 299 + 12
= 311
Plus two standard
deviations: 299 + 24 = 323
The calculations above suggest that the chance of frost damage is only 2.5% by October 2nd (day 275), 16% by October 14th (day 287), 50% by October 26th (day 299), 84% by November 7th (day 311), and 97.5% by November 19th (day 323).
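The conversion between year days and calendar dates used above can be checked with the short Python sketch below. It is only a sketch: it assumes a non-leap year (1981 is used as an arbitrary example year) and the rounded mean and standard deviation from the calculations above.

from datetime import date, timedelta

mean_day = 299  # mean first-frost date, in year days (January 1st = day 1)
std_dev = 12    # sample standard deviation, rounded to the nearest day

def year_day_to_date(day, year=1981):
    # Convert a year day to a calendar date; 1981 is an arbitrary non-leap year.
    return date(year, 1, 1) + timedelta(days=day - 1)

# Print the dates corresponding to the mean and to one and two standard
# deviations on either side of it.
for k in (-2, -1, 0, 1, 2):
    d = mean_day + k * std_dev
    print(f"mean {k:+d} SD = day {d} = {year_day_to_date(d):%B %d}")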