Scientific Merit Paper

Joppe (2000) defines reliability as:

…The extent to which results are consistent over time and an accurate representation of the total population under study is referred to as reliability and if the results of a study can be reproduced under a similar methodology, then the research instrument is considered to be reliable. (p. 1)

Embodied in this citation is the idea of replicability or repeatability of results or observations.

Kirk and Miller (1986) identify three types of reliability in quantitative research: (1) the degree to which a measurement, given repeatedly, remains the same; (2) the stability of a measurement over time; and (3) the similarity of measurements within a given time period (pp. 41-42).

Charles (1995) holds that the consistency with which questionnaire [test] items are answered, or individuals' scores remain relatively the same, can be determined through the test-retest method: administering the same instrument at two different times. This attribute of the instrument is referred to as stability. If the measure is stable, the results of the two administrations should be similar; a high degree of stability indicates a high degree of reliability, meaning the results are repeatable.

Joppe (2000), however, detects a problem with the test-retest method that can make the instrument, to a certain degree, unreliable. She explains that the first administration may sensitize the respondent to the subject matter and hence influence the responses given on the second. Nor can we be sure that no extraneous influence, such as a change in attitude, has occurred between the two administrations; this too could lead to a difference in the responses provided.

Similarly, Crocker and Algina (1986) note that when a respondent answers a set of test items, the score obtained represents only a limited sample of behaviour. As a result, scores may change because of some characteristic of the respondent, leading to errors of measurement. Such errors reduce the accuracy and consistency of the instrument and the test scores; hence it is the researcher's responsibility to ensure high consistency and accuracy of tests and scores. Thus, Crocker and Algina (1986) say, "Test developers have a responsibility of demonstrating the reliability of scores from their tests" (p. 106).

Although a researcher may be able to demonstrate the instrument's repeatability and internal consistency, and therefore its reliability, the instrument itself may still not be valid.

The traditional criteria for validity find their roots in a positivist tradition, and to an extent, positivism has been defined by a systematic theory of validity. Within the positivist terminology, validity resided amongst, and was the result and culmination of, other empirical conceptions: universal laws, evidence, objectivity, truth, actuality, deduction, reason, fact, and mathematical data, to name just a few (Winter, 2000).

Joppe (2000) provides the following explanation of what validity is in quantitative research:

Validity determines whether the research truly measures that which it was intended to measure or how truthful the research results are. In other words, does the research instrument allow you ...