"Normal" Phenotype?

HOME :: CHAPTER 0  ::  :: WHAT IS A "NORMAL" PHENOTYPE?

What is a "Normal" Phenotype?

A paper written as background to discussion

Mike Cho, Mike Cohen, and Seeta Sistla

Edited by S. F. Gilbert and E. Zackin

What does it mean to be "normal"? The concept of normalcy based on phenotype is one that is widespread both in medicine and in everyday thought. However, what in turn defines a phenotype? The second question can be answered more readily than the first. A phenotype, according to prominent evolutionary biologist Dobzhansky, is "the total of everything that can be observed or inferred about an individual." This includes both external appearance and internal anatomy and physiology, as well as a person's thinking processes and their interaction with society. The phenotype will change over time, and depends largely on the environments that a person has encountered (Dobzhansky 1962, p. 41-42).

The term "normal" is much more difficult to define. Edmond A Murphy (1972, 1973, 1979) lists seven distinct uses of the word "normal" in medical literature. In one sense, it is used statistically (as in a normal distribution), as in "American men are normally between 5' and 7' in height." Murphy suggests using the word "Gaussian" rather than "normal in this context." Normal can also mean "most representative of its class" in which the terms "average," "median," or "modal," might be more accurate. Normal can also mean "that which is most commonly encountered" (such as "Humans normally have two eyes.") in which the term "habitual" might be used. In genetics, normal is often used to indicate "wild-type" and "that most suited to survival and reproduction." Here the term "fittest" might be used. Clinical medicine often defines normal as "carrying no penalty" which might be translated as "innocuous" or "harmless." This usually refers to function. In sociology and politics, "normal" is often used instead of "conventional," and in aesthetics, "normal can also mean "the most perfect of its class" or "ideal." Moreover, Murphy contends that in medicine, our understanding of "normal" is determined in part by economics. "The normal is pretty much what society can afford."

We will attempt to address the concept of normalcy in a philosophical context. One section will address historical and current conceptions of one condition commonly termed abnormal: disability. Another section will specifically address the questions of normalcy that arise from deafness and hermaphroditism as theoretically 'normal' phenotypes which some argue are merely socially constructed as otherwise.

The theoretical question of how to define a "normal phenotype" has been asked primarily in two contexts. In the first context, phenotypic "normalcy" is used to represent the achievement of either an adequate or an ideal level of performance. Here, the "normal" is seen as healthy and as intrinsically better than the "abnormal," although these concepts can still be defined in a multitude of ways. This may include statistics, where the statistical average or the most common trait can be justified theoretically as the ideal. It can also include some ideal standard of physiological functionality, or it can be some standard that meets the needs of the individual. The second commonly utilized concept of the "normal phenotype" is found in attempts to define normalcy in a more statistical sense. Here, the statistical average or most common trait is used as a way to define normality descriptively, without using value judgments. This is the way that many modern attempts to define normality, including attempts to define what is normal in medicine, approach the subject. The distinction between these two concepts is often blurry, and even attempts at finding what is normal in a purely descriptive sense often indirectly imply that one form is better than another (Nordenfelt 1995, p. 16).

Historical Conceptions of Norms Based on Absolute Characteristics

The concept of the statistical norm originated with the work of the Belgian statistician Adolphe Quetelet. Quetelet, considered the founder of statistical methods in the social sciences, introduced his notion of the "Average Man" in an 1835 work called A Treatise on Man (Vacha 1978, Cooper and Murphy 2000). To find this average man, whom he believed would "represent all which is grand, beautiful, and excellent" (Quetelet 1842/1969, p. 99), Quetelet analyzed data on many different human traits. These included physical characteristics like height and chest size as well as some data that he believed could quantitatively measure intelligence and morals (Hankins 1925). He found that the traits followed a normal (Gaussian) distribution, so the mean value of each trait was also its most common value and therefore, he argued, the true "normal." The composite average of all traits is then Quetelet's "average man."

Ideally, Quetelet would have liked to consider any trait that differs from these averages abnormal. However, he realized that it was impossible for a person to match the average, and therefore be completely normal, in all areas (Quetelet 1842/1969, p. 99). In fact, a relatively recent study showed that only about 6 out of 1000 people have even near-mean values for all important traits (Vacha 1978, p. 830). Because of this, Quetelet counted anybody who falls within a certain statistical range as acceptable while still considering his composite "average man" the ideal.
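The arithmetic behind that 6-in-1000 figure is easy to illustrate. The short sketch below is our own illustration, not Vacha's actual calculation: it assumes each trait is Gaussian and independent of the others, and takes "near average" to mean within one standard deviation of the mean. It shows how quickly the chance of being near-average on every trait collapses as more traits are combined:

```python
import math

def central_band_prob(z: float) -> float:
    """Probability that a Gaussian trait falls within +/- z standard
    deviations of its mean (the 'near-average' band)."""
    return math.erf(z / math.sqrt(2.0))

def all_traits_near_mean(z: float, k: int) -> float:
    """Probability of landing in that band on k independent traits at once."""
    return central_band_prob(z) ** k

# Even a generous band of +/- 1 SD (about 68% of people per trait)
# shrinks fast when traits are combined:
for k in (1, 5, 10, 20):
    print(k, round(all_traits_near_mean(1.0, k), 4))
```

Under these assumptions, with ten or twenty independent traits only a few percent, or a small fraction of a percent, of people remain "near average" on all of them, which is why Quetelet's fully average man is, in practice, nobody.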

Another important milestone in statistical normality is found in the work of Hermann Rautmann, a German doctor and philosopher of the 1920s who believed that "normal" by nature included the most typical, common traits, and that all traits beyond certain limits were abnormal (Vacha 1985). This is relatively similar to Quetelet's conception of normality, but it focused more on health and less on a general idealistic notion. In addition, Rautmann included a broader range that counts as fully normal, while Quetelet focused primarily on the average. Rautmann then saw a direct correlation between the frequency of a trait and degree of health, where beyond a certain point of abnormality, a trait is almost always pathological. In the end, according to Vacha, Rautmann defines health primarily by how often a characteristic is found, despite the fact that "he knows that [health is] not objectively definable in theory" (Vacha 1985, p. 345).

Other theorists on normality in Rautmann's time were less certain about precisely how normality was related to health. For example, Hans Günther suggested that while having an abnormal trait is not a disease itself, it may lead to disease (Vacha 1985, p. 348). So, there is a connection between being normal and being healthy, but only in some cases. The normality of a trait like eye color, for example, does not affect health status, while normality in a trait like the pancreas's ability to make insulin, which fails in Type 1 diabetes, is crucial for health.

Bauer, another German constitutional theorist, contributed the idea that because of natural selection, the most common trait must be the one best suited to the environment. This is based on the idea that the characteristics of a certain race are adaptations to the environment; this typical "race type" can then be seen as the biological optimum, with any deviation from it being biologically inferior (Vacha 1985, p. 348). A similar notion is central to Quetelet's idea that the most common traits are by nature superior, although Quetelet was writing before Darwin, and therefore saw the average traits as best suited to their environment without actually recognizing them as direct adaptations (Hankins 1908). In evolutionary biology, a similar idea was supported by Alfred Wallace and some other evolutionary biologists, but was strongly opposed by Darwin. That idea, which has been called adaptationism, suggests that the systems that result from evolution are the best of all possible systems. It assumes that the power of natural selection is unrestrained, making natural selection the major force driving all evolution (see Gould and Lewontin 1994). From this, we can reach Bauer's conclusion: if the most common traits are the ones that evolution has selected most often, then at least in their native environment, those traits must be inherently better.

However, according to one strong argument against this idea of adaptationism, there is no way to know whether a trait that is common in nature is really best evolutionarily or whether it just happens to be common for more random reasons (Vacha 1978). For example, genetic drift plays an important role in determining the character of many traits in a population. In addition, especially in human communities, where there is much less selective pressure than in natural communities, it seems unlikely that the most common traits really are best suited to the environment. In fact, traits that are not environmentally beneficial are often still common for other reasons. For example, we still have the vestigial instinct to eat rich foods even though, unlike our hunter-gatherer ancestors, we no longer have to do so to stay healthy. While in the past there would have been selective pressure favoring that trait, the healthier individuals today are those with the opposite tendency. Still, at least so far, selection has not made a love of broccoli and an aversion to cheeseburgers instinctive in people. This is both because our environments are changing much faster than selection can operate and because the environment we live in is not harsh enough to force fast adaptation to new situations. Still, this lack of adaptation can have serious consequences, like those observed in the Pima tribe of Native Americans, who have seen a significant increase in rates of obesity and diabetes as they have shifted to a more modern diet (Marx 2002).

Norms Determined by Functional Ability

The primary focus in defining what is normal changes somewhat in modern conceptions. In most of the attempts described previously, the search for normality examines the precise character of traits. Today, however, we usually focus more on function and less on that precise character. This idea originated with Plato and the Greek philosophers, who suggested that anything that realizes the body's natural potential is the ideal (Moravcsik 1976). As we have seen, thinkers like Quetelet later moved away from this concept, but it is now an important part of health theory once again. The strict functionalist idea of normality is clearly asserted by C. D. King, who writes that normal must be defined as "that which functions in accordance with its design." He goes on to say that using normal to mean the most common is a misuse of the term, and that merely knowing what is most common tells us nothing about the ideal functionality that normality should represent (King 1945).

So, Quetelet's "Average Man" is discarded. In its place, according to King and the view often advanced in medicine today, is the idea that a part of the body that functions according to some defined ideal physiological standard is "normal." A standard pathology textbook asserts that there is one normal standard for function, and anything that differs from that standard is abnormal (Hopps 1964, quoted in Nordenfelt 1995, p. 178). In one sense, this makes the concept of disease much more objective, since we can objectively define the biological function of a part of an organism much more easily than we can define, for example, whether a person is able to reach a desired quality of life (Reznek 1987).

According to some, this often still involves value judgments. For example, the ideal function of a physiological process depends on whether individual survival, species survival, some other end, or a simultaneous mixture of goals is the ultimate objective that biological processes are aiming to reach. The choices here often create very different physiological goals. For example, in environments susceptible to malarial outbreaks, the heterozygous genotype for sickle-cell anemia is beneficial to the population, as it confers resistance to malaria while causing few physiological symptoms in the heterozygote. But on an individual level, the homozygous sickle-cell genotype leads to a debilitating condition, and those individuals would be better off without this trait (Engelhardt 1976, quoted in Boorse 1997, p. 24).

According to philosopher Christopher Boorse, a trait such as this one, which is good for the species as a whole but is bad for the individual, is still a disease. This is because if disease is based on whatever the functional goal of a system is, and evolution only happens on an individual level, the functional goal of a system will necessarily be one that is in the best interests of the individual, not one that is in the best interests of the species (Boorse 1997). Still, the true ultimate functional goal of biological systems is an open philosophical question, and the standard used to determine functional normality in those systems will necessarily depend on what is seen as that ultimate goal (Brown 1985). Therefore, since no single interpretation of the functional ideal is conclusive, it is questionable whether "normal function" can be defined objectively, as Boorse is attempting to do.

The theory of health proposed by Boorse, known as the biostatistical theory (BST), combines functional normality with the strict statistical norms of the past. Boorse's theory judges health as normal function when compared statistically to other members of a reference group. Normal functioning, according to the BST, occurs when each internal part of an individual performs "all its statistically typical functions with at least statistically typical efficiency," with statistically typical efficiency defined as "levels within or above...[a] central region of the population distribution" for the individual's reference class (Boorse 1977, pp. 558-559).
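As a rough sketch of how the BST's statistical test could be operationalized (this is our reading of the quoted definition; the 5th-percentile cutoff and the nearest-rank percentile method are illustrative assumptions, not part of Boorse's account):

```python
def percentile(values, q):
    """q-th percentile of `values` by a simple nearest-rank method."""
    ordered = sorted(values)
    idx = round(q / 100 * (len(ordered) - 1))
    return ordered[max(0, min(len(ordered) - 1, idx))]

def is_statistically_normal(level, reference_class, lower_q=5):
    """True if `level` lies within or above the central region of the
    reference-class distribution, i.e. at or above its lower bound."""
    return level >= percentile(reference_class, lower_q)

# A toy reference class: some measured efficiency of one internal part.
reference = [52, 55, 58, 60, 61, 62, 63, 64, 65, 70]
print(is_statistically_normal(66, reference))  # within/above the region
print(is_statistically_normal(50, reference))  # below the lower cutoff
```

Note that the value of `lower_q` is a free choice here: the statistics supply the shape of the distribution, but not where "normal" should end.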

This theory of health and the other statistical conceptions of normality are designed so that, by using statistics, value judgments can be eliminated and the resulting "normal" will be completely objective. However, other philosophers remind us that it is difficult to define what is normal purely using objective scientific or mathematical methods. First, even when we have a distribution curve, how do we decide where on that curve "normal" ends and "abnormal" begins? This was a key question for many of the Germans who considered the problem of normality in the 1920s. The debate on this issue included questions as to whether it was necessary to include a transitional area between the normal and the abnormal, as well as questions about precisely where the boundary between normal and abnormal traits should lie (Vacha 1985). Another important issue, raised by philosopher Jiri Vacha, concerns which values are included on the statistical curve: it seems ridiculous to include data from people who are already identified as diseased when determining what is normal, so in most cases, only healthy people are used for finding statistical norms. However, what definition is being used to identify who is healthy? That is precisely what these methods are attempting to define in the first place. So, any preliminary classification by health status must be based on mostly subjective evaluation, and the results of the statistical analysis could be skewed by whatever method is used (Vacha 1978).

Questioning the Concept of a Single, Universally Applicable Norm

All of the conceptions of "normal" described previously attempt to define a single, universal standard of normality that applies to everybody, at least within a specific group. However, some recent articles question whether a true "normal" state that can be universally applied exists at all. This is based on the idea, supported by Theodosius Dobzhansky and other recent evolutionary biologists, that there is so much diversity among individuals that no one state can be considered best. In fact, as with the example of sickle cell anemia in malaria-prone environments, the optimal condition for the population is often a balance between traits, here leading to the heterozygote that is resistant to malaria but has none of the ill effects suffered by those with the disease (Vacha 1978). Because of this necessary balance, neither version of the trait can be considered ideal by itself. There are many environments, and what is "normal"—in the statistical, ideal, or representative sense—may differ between them.

In responding to this idea, Boorse says that although there is variation within species, there is also a significant amount of similarity within a species. This is supported by Ernst Mayr, who defined the concept of a biological species that we use today, and suggests that there really can be standard functional organization within species (Boorse 1997). Still, philosopher Ronald Amundson points out examples showing the success of abnormal forms, making even this idea questionable. The most striking example is a British student with hydrocephaly, a condition in which the ventricles of the brain fill with excess cerebrospinal fluid, often greatly reducing brain volume. While this condition often creates serious functional problems, in this person's case it had virtually no impact on his life. He had an IQ of 126 and a normal social life, but on examination was found to have "virtually no brain," about 10% of normal brain tissue; in fact, it is not uncommon for people with greatly reduced brain mass to still perform normally (Amundson 2000, pp. 40-41). Are these people, then, functionally normal? With such a small amount of functioning brain tissue, their brains clearly don't function the same way that most people's brains do, but they are still able to get similar results. This and the other examples that Amundson gives suggest that there are multiple, very different ways for natural systems to function successfully.

Whether variation truly makes a single conception of normal impossible or not, the idea does lead to some interesting conceptions of normality. These views all abandon the concept of a species norm as central and turn to an individual norm. For example, the view proposed by Jiri Vacha suggests that comparing to the species norm is only of "rough orientational value," and then only to see whether the functional relationship between an individual's traits can be considered "normal." For the primary determination of normality, an individual norm should be used, which compares the current state of an individual to that individual's variation range throughout their lifetime. As long as certain essential relationships between traits are observed in their overall function, a wide range of individual variation can still be completely normal (Vacha 1978).

Lennart Nordenfelt, supported by the work of Ron Amundson, takes this idea even farther, suggesting that any characteristic is normal and healthy as long as a person is physiologically able to reach their "vital goals" in life (Nordenfelt 1985). These vital goals are defined as the "states of affairs [which] are necessary...for minimal long term happiness." So, while all people have certain vital goals, and there are likely to be similarities in the goals of different people, the specific goals depend on a person's situation. Because of the immense variability between people, the level of function that one individual expects and requires can be very different from what another expects and requires, so there can be no universal standard for what is "normal" and what is "abnormal." Amundson thus defines disability simply as the "lack of species typical functioning at the basic personal level."

Mental Illness

One area where concepts of normality similar to these have been applied is in defining mental illness. While there have been attempts to define mental illness on a biological, functional basis (see Boorse 1976 for one), doing so is often more difficult than that. Thomas Szasz, who has written many influential accounts questioning our ability to define mental illness, has suggested that while disease is "conventionally defined" as a deviation from biological norms, mental illness is defined based on social norms (Szasz 2000). For example, one account of psychological normality suggests that the normal person is comfortable around other people, has set ambitions in life, and is able to conform to social expectations while still showing individuality (Tallent 1967). Determining whether a person meets criteria like these obviously must be based on much more than physiological abilities. The full debate on this subject will not be addressed here, but the articles referenced above, and the papers they cite, give a good introduction to it.

Disability...an Abnormal Characteristic?

What does it mean to be disabled? To be considered "disabled" is, in a logical sense, an absolute term: one cannot be slightly or mostly disabled; one either is or is not (Davis 1995). The Americans with Disabilities Act defines disability to encompass limitations in daily life activities, including speech and hearing, motion, learning, sight, and breathing, as well as less clearly observable conditions, such as epilepsy, multiple sclerosis, and certain mental illnesses. Although by this definition the range of who is considered disabled is quite broad, the majority of "normally able" people consider another person disabled when their phenotype prevents standard physical activity due to a "nonstandard or nonfunctioning body or body part" (Davis 1995, p. 1). By current standards, having a normal body is equated with a kind of functionality that does not depend on an outside agent (such as a hearing aid or wheelchair).

Does this modern perception mean that being physically "normal" equates to a type of functional determinism? Certain modern disability advocates contend that ideological associations of biological normalcy as a necessary standard for a "good" quality of life inadvertently undermine those considered disabled. What is biologically normal is considered functionally beneficial, while biological differences are seen as functionally disadvantageous or "bad" (Amundson, in press). As described in the earlier section, this idea that what is normal is inherently better has been fundamental to many theoretical conceptions of normality in the past, most notably those of Quetelet and Bauer.

These value judgments are often woven into both biomedical and social schemes. In the biomedical model, an individual's disadvantages are viewed as an inevitable outcome of biological fact(or)s. Societal discrimination against the disabled is described such that the "normal" population is offered advantages that are, for a variety of reasons, withheld from certain disabled people due to their "abnormal" conditions (Amundson, in press). Is functional uniformity the norm for humans (and other species)? What level of divergence from these "norms" is an acceptable deviation, such that a person is still considered socially and biomedically normal? Does abnormality equate with a lower quality of life than would be expected of a normal person, and if so, should preventative genetic methods be utilized to avoid the birth of abnormal people? Philosophical issues concerning normalcy have become increasingly discussed, especially with the growth of genetic knowledge and biotechnology during the past century. Within this section of our analysis of "what is normal?" the questions posed above will be addressed through a combination of scientific, philosophic, and social perspectives on the nature of human disability. If a person in a wheelchair or needing a seeing-eye dog has a good job, friends, and a loving family, that person may have a disability when it comes to driving or cooking; but it does not necessarily mean that they have a poor quality of life.

There is likely never to be an absolute definition of what constitutes a "good quality life." However, phenotypes typically characteristic of disability are often categorized as less than "good," which has "very strong negative impacts on the life of individuals that have" disabilities (Amundson, n.d., p. 2). Surveys suggest that this is a commonly held view, both in the general population and within academia and medicine. Amundson argues that the term "disability" is often inextricably linked with the notion that it must entail a limit on the opportunities open to the person having such a characteristic—such physical "abnormality" in turn leads to a lower quality of life. It is often presupposed in western culture that this is a biological fact, rather than a subjective interpretation of a specific state of being. Philosopher Dan Brock attempted to present empirical evidence for this assumed substandard life quality of the disabled based on the theoretical concept of health proposed by Boorse. As described in more detail in a previous section of this paper, Boorse argues that "species typical function" is founded on an empirically defined, biomedical concept of "normal and abnormal function." Boorse then proposes that because these characterizations have been empirically assessed, they are scientifically natural distinctions, unprejudiced by subjective biases (Amundson, in press).

Based on this analysis, Brock contends that it is the purpose of health care to maintain normal (non-impaired) functionality, which he asserts means protecting or restoring normal opportunity as "a necessary condition for a high quality of life" (Amundson, in press). Brock concluded that one's "quality of life must always be measured against normal, primary functional capacities for humans" (Francis et al. 2000, p. 105). Surveys have shown that many people who are considered biologically abnormal report, on average, a high quality of life, with people considered severely disabled rating their lives only slightly lower in quality than physically normal individuals rate theirs. Brock argues that this subjective value of the quality of one's own life is distinct from its objective value (Francis et al. 2000, p. 105). On this argument, a person with abnormal traits can consider their own life to be of high quality while it is objectively low, due to the objective limitations of their "abnormal" biological disposition. While this definition of quality of life sounds incongruous, it is broadly accepted in scientific and popular culture.

A Brief History of Disability

Where do the societal conceptions of physical disability as disadvantageous to quality of life, so omnipresent in modern culture, stem from? The conception of disability discussed here often relies on a biomedical model, where disabled people are dependent on others for help with basic needs, and are often perceived as making heroic strides when performing a task that may be considered mundane for a normal person, such as in athletics, the arts, or careers. This current standard contrasts with a historical conception of physical differences, in which the definition and treatment of disability were made on the basis of one's socially prescribed role, rather than on a clear distinction between what constitutes normal and abnormal biological traits (Edwards 2000). The ancient Greeks appear to have had no term for "disabled" with social and political connotations similar to those it carries in modern America. Several terms in ancient Greek do seem to refer in some way to a person who would currently be classified as disabled: unable (adunatos), maimed (peros), and formlessness (amorphia). A key difference between their conception of disability and ours, which this helps to demonstrate, is that while in modern times the body is often perceived as machinelike—the sum of its parts—the Greeks viewed the body in totality, as a whole, distinct entity. (Perhaps it is our technology that has created a disabled category distinct from the unabled. In cultures without wheelchairs, hearing aids, prostheses, and trained animal companions, people having defective functions might be more "unabled" than "disabled.") The Greek words for physical differences were not set medical classifications (like Down Syndrome or multiple sclerosis) but ways of identifying a set of conditions within a certain context, with no term referring specifically to physical impairment (Edwards 2000).

Does this mean that in classical times there were no people who would now be considered disabled? In fact, it was the opposite set of conditions that most likely explains the absence of specific labels for physical differences. People who would currently be seen as disabled were perceived as "integral to the society" rather than as a distinct social minority carrying the social stigmas inherent in such a classification (Edwards 2000). Although the Greeks' artistic representations of the human body were mathematically perfect, the results of disease, physical injuries, congenital defects, and so on were certainly common in this culture. However, physical impairment alone did not automatically classify a person as unable. There is documentation of a yearly assessment of those wishing to be classified as "unable" to determine whether they should receive a small pension for food. This classification was based on two criteria: poverty and a physical condition rendering the person otherwise unable to work. The process is recorded in the case ("On the Refusal of a Pension," Lysias 24, Edwards 2000) of a man who had formerly been classified as unable. Because he had recently been seen acting able-bodied—riding a horse, trading, and associating with the wealthy—the pensioner was accused of being ineligible for further subsidy. Although he clearly had some sort of impairment, as the text mentions that he walked on "two sticks," this alone did not merit the pension.

To be "disabled" in ancient Greece was not a distinction of the body alone; instead, it was based on how the physical state of the body affected a person's ability to function appropriately in a particular situation (with a specific focus on the ability to earn an income) (Edwards 2000). It was not an uncommon for people with physical impairments to be employed in various tasks, and physical abnormalities were seen more as a characteristic of a particular person than as a certain imposition on their overall life quality (Edwards 2000). This is somewhat similar to Boorse's conception of functional normality; however, while Boorse examines the inherent functionality of a trait, the ancient Greeks were concerned with a person's functional ability in a specific context. The Greek concept of normal is also evident in the modern conception of health advanced by Nordenfelt, as both are based on specific functionality as a necessary factor in defining what is physiologically acceptable, although they apply their conceptions in somewhat different ways.

The majority of evidence suggests that the disabled were not subjected to any distinct classification simply because of their physical differences, and likewise, there was no assumed prohibition from a task because of it. People with impairments were recorded in social classes ranging from artisans, to soldiers, to Alexander's father, who was recorded as severely mutilated by war (Edwards 2000). Did those who were physically disabled in ancient Greece utilize implements to help alleviate their impairments? There is historical evidence of tools to assist with walking, such as canes, crutches, staffs, and even corrective boots and shoes for "lame feet." (Edwards 2000). Prosthetic limbs have been found in the archeological record as well as being recorded in ancient texts, and were likely individually fashioned items. For people with disabilities that prevented them from walking even with the assistance of an implement, there is no evidence for any sort of wheelchair or pushcart—most likely the ancient equivalent was the donkey (Edwards 2000).

Although the period's documentation shows that physical impairments and their consequences did affect the person bearing them (generally as aesthetically displeasing within the individual's context), physical impairment was not perceived as the "institutionalized horror" portrayed in modern media. The disabled of the ancient world were not the disabled of today, although they may have been characterized by the same physical differences. Their bodies were an acceptable variation on the whole rather than a tragedy of biology (Edwards 2000). The stigmatization of the disabled, and what constitutes a disability, cannot be clearly traced into the ancient world; there is likely no clear historical turning point from the classical conception of physical impairment to the modern social stigmatization.

From Victorian sentimentality through twentieth-century conceptions of Darwinism and the biological nature of inheritance, physical "abnormality" slowly changed from an acceptable state of being, to a stigma to be gawked at or laughed at, to a break with the acceptable, normal design and function of the human machine that is both repulsive and pitiable. With a developing conception of genetic inheritability in the late nineteenth and early twentieth centuries, many believed that "less than desirable" physical characteristics could be erased from the population through applied eugenics. Why should eugenics concern a discussion of normality as it relates to physical disability? The eugenics of the first half of the twentieth century is a little-remembered segment of American history (Pernick 2000). Yet the influence of its theories on popular culture, and the equally strong public reception of and influence on the eugenics movement, is a striking example of the power of social value judgments in defining what is considered normal, and of how that opinion changes over time. To understand current perceptions and practices regarding the disabled, the clock should be turned back less than one hundred years, to an age when eugenic "therapy" was considered a popular, objectively valid science in America that would improve social quality (Pernick 2000).

Eugenics was a scientific attempt to improve the population in intelligence, fitness, and beauty. It is amazing in retrospect to see how much aesthetic values contributed to a scientifically approved construct of what counted as good versus defective, and to the justification for selecting against these "less than adequate" traits (Pernick 2000). The eugenics movement was neither an extremist movement nor a particularly conservative view of social improvement. Its effects ranged from the forced sterilization of the mentally "unfit" and criminals, to "better baby contests" (fairground competitions for the most "fit" child, modeled on livestock shows), to restrictions on immigration. Perhaps the most extreme example of the eugenics movement can be seen in the public debate over the moral legitimacy of the Chicago surgeon Harry Haiselden, who from 1915 to 1918 allowed the deaths of at least six infants he declared "unfit," going so far as to exhibit the dying infants to the media and to write about his exploits for Hearst newspapers (Pernick 2000).

While this may seem morally reprehensible by modern standards, a (now startling) majority of those who publicly responded to Haiselden's actions condoned his decision to allow the "defectives" to perish. The supporters of this social cleansing of the physically "unfit" included such notable social liberals as the civil rights lawyer Clarence Darrow and the blind and deaf education reformer Helen Keller, a woman who was, by the standards of eugenics, herself unfit due to her acquired disabilities (Pernick 2000). To many of the time, it seemed cruel to keep alive a child considered unfit, who would surely lead a less than desirable life. The issue was a raging media topic during the 1910s; a film of the events was even produced. Aptly called The Black Stork, the movie documented Haiselden's eugenics crusade (and starred the doctor himself). The Black Stork exemplified the eugenic linkage of beauty with health, fitness, and morality (Pernick 2000). In the film, Haiselden begs a fictional young man with an unspecified genetic disease to refrain from marrying, but his advice is ignored, and a "defective" child is produced. The child requires immediate surgery to save its life, but Haiselden refuses treatment based on the child's future of crime and misery, courtesy of a vision provided by God to the characters and the movie audience. The "defective" body's soul then enters the arms of Jesus, following the mother's agreement to withhold medical treatment from a baby deemed doomed morally as well as physically (Pernick 2000).

The link between health, aesthetic beauty, and morality is seen in other films of the period, such as the high school biology film reel The Science of Life, which pledged that "An attractive appearance goes hand in hand with health" (Pernick 2000). The educational film stressed classical Victorian notions of feminine beauty and hygiene, while highlighting the mechanical, streamlined body in motion as an ideal; Pernick argues that preferred physiological beauty became linked with an active form in part because of the motion pictures that could highlight it. A body that cannot freely move, bend, see, and so on, is less than ideal: less beautiful and less fit (in a genetic sense). It is probable that the rapidly changing media of the twentieth century played a crucial role in establishing the disabled of Western society as not only bodily defective, but incapable of living a good life (Pernick 2000). What a person will be capable of is predetermined under the eugenics schema; undesirable physical difference is a defectiveness, a factory failure.

It is clear that the American eugenics project had broad influence and strong social support in its heyday. Individuals' lives and bodies were altered against their will, for the good of a society captivated by the notion of Darwinism and the potential heritability of traits (Pernick 2000). The term "heredity" did not then carry the meaning it has today in any introductory biology textbook. Heredity was, to the layperson (as well as in certain scientific literature) of the period, what one received from one's parents: the genes as well as the environment in which one was raised. A respected British statistical geneticist of the time argued that "in the practical sense [through familial environment].... We are anxious to make a more perfect mankind and we are interested in the practical side" (Pernick 2000). Such logic was similarly applied to "mental defectives" and their families, leading to the much publicized forced sterilization of Carrie Buck (see http://www.eugenicsarchive.org/eugenics/).

To a eugenics supporter in early twentieth-century America, the goal was to maximize what was considered attractive while minimizing, and ultimately purging from the population, what was not. These standards were based on a specific segment of the American population, generally the middle class (Pernick 2000). Eugenicists, rather than denying that their selected traits were primarily aesthetic, celebrated this as a scientifically supported, objective method of discrimination. This does not seem so far from Brock's empirical account of "the good life," published decades after the end of the eugenics movement in America and the eugenic extremes of Nazi Germany in the 1940s. The power of societal expectations of what is normal and good, when combined with the "objective" stamp of science, is clearly evidenced by the widespread effects of the alluring and troubling promises made by supporters of the eugenics movement in America, and by its later manifestation in World War II Germany.

When faced with the choice of letting the unfit die, social opinion through the mid-twentieth century remained in favor of leaving the decision to physicians, while finding it morally abrasive to discuss such choices publicly (Pernick 2000). The influence of aesthetic considerations on the classification of disease remains an open question in biomedicine. Are people with disabilities treated or thought of negatively simply because of cultural values and subjective definitions of what is functional and normal, just as hindsight shows to be true of the "scientifically objective" eugenics movement? As technological and scientific advances allow increasing control over, and knowledge of, the inherited physical traits of humans, it seems possible that disability will become a rarity: screened for and selected against in the prenatal stages of development.

A Modern Approach to Disability

Within the past two decades, scientific advances have allowed us to leap beyond the pedigree analysis techniques of the eugenics age and the mid-twentieth century to more precise genetic screening for a number of heritable diseases. The Human Genome Project has created the potential for eventually understanding each gene's product and its role in the functioning of "normal" people (Wertz, n.d.). Currently, the biomedical approach to heritable conditions often relies on prenatal genetic screening for a select number of traits and diseases. For people who are born with or acquire a disabling condition, drug therapies and/or physical implements, such as wheelchairs or hearing aids, are often the prescribed courses of action to assist them in attaining "normality." The philosophers Ron Amundson and Anita Silvers have shown that the use of these implements is often stigmatized on the basis of their mode of function (unnatural) rather than the level of function achieved (Amundson, in press). For example, although the marathon world record for a wheelchair user is 45 minutes faster than that of the fastest runner, wheelchairs are often shunned socially as a less efficient, cumbersome means of movement to be avoided if possible. Another case of preferring form over functionality is the past practice of teaching oralism (lip reading and speech) while suppressing the more efficient and intuitive use of sign language in deaf schools (Amundson, in press). Many physical rehabilitation programs acknowledge that cosmetic normalcy is a standard to be sought, even at the sacrifice of greater functionality. Thus we see clear examples of a negative social judgment placing weight on phenotypic "normality," even when functionality at least equivalent to what is perceived as "normal" can be achieved.

So how does this question of form versus function relate to the advent of genetic knowledge and technology? As it becomes increasingly possible to assess a person's genotype, there is a parallel increase in the ability to assess their genetic "health": the individual's potential to develop certain conditions or to pass such traits on to their offspring. What such analyses may reveal ranges from a single base-pair difference in a particular gene, coding for a partially misfolded protein that causes cystic fibrosis, muscular dystrophy, or any number of other conditions, to broader genetic variations, each with its own distinct physiological effects (Wertz, n.d.). It is not a new issue for potential parents to consider the possibility of the inheritance of certain traits when choosing whether to have a child. But as accurate prediction of potential traits or differences becomes increasingly accessible to the population, how does one define which traits are acceptable to carry to term and which are not? Will disability become a class-based phenomenon, in which the wealthy and middle-class subsections of the population can afford screening while the lower class cannot?

Currently, there is little consensus among geneticists worldwide as to the ethics of prenatal selection, as surveyed by Dorothy Wertz and John Fletcher of The Shriver Center (Wertz 2002). This applies to a broad range of phenotypes, including (but not limited to) sex preference, mental and physical disabilities such as cystic fibrosis or mental retardation, and deafness. As we have seen, social support for eugenics in America during the first half of the twentieth century shaped the "science" that supported it, and was in turn reinforced by the stamp of scientific objectivity. The eugenics approach to social "improvement" is neither forgotten nor erased from modern social and scientific causes. The stated goal of China's human genetics program is the "improvement of the population quality and decrease of population quantity" (Wertz 2002). In Western Europe and Latin America, the Shriver Center research documented that surveyed geneticists separated the term "eugenics" from their own work. While eugenics was considered a "state-sponsored, coercive social program" (Wertz 2002), geneticists perceived their own work's preeminent goal as "prevention," determined by the individual or family concerned (Wertz 2002). Such convictions lead to the question: how individual can such a decision be?

It is evident that social pressures to conform to what is deemed phenotypically normal have been present throughout history. In the twentieth century alone, we have seen societal discrimination against physical difference ranging from the eugenics movement to the more subtle social scorn of disability expressed today in such cultural icons as the Muscular Dystrophy telethon (see Longmore 2000). Moreover, the condition of services for those with disabilities, and the extent of reproductive choices available to parents, vary greatly throughout the world and within individual countries, including America. As such, the stigma associated with certain conditions is also conditional upon the socioeconomic circumstances of an individual faced with such "preventative choices," or the lack thereof (Wertz 2002).

In recent years, disability activists have voiced opposition to the intrinsic devaluing of the lives of those with disabilities implied by selective abortion after prenatal testing. They argue that supporting the abortion of fetuses with certain heritable conditions creates a socially approved stance that such lives are not worth living because of their abnormal, or less than desirable, conditions (Wertz 2002). When geneticists were asked what advice they would provide to potential parents facing the possibility of a child with a genetically predisposed undesirable trait, there was no standard response to any of the 24 hypothetical fetal conditions presented. In general, however, geneticists within the U.S. tended to wish to be "as unbiased as possible," aside from sex selection and two conditions leading to certain infantile death (Wertz, in press). This contrasts with a general trend outside the U.S. for geneticists to give advice that, for the majority of the hypothetical cases, was slanted pessimistically toward encouraging the termination of the pregnancy (Wertz 2002).

Within the U.S., the Shriver Center research found that 85% of geneticists would support abortion for fetuses with Down syndrome, 92% for severe open spina bifida, and 56% for achondroplasia, illustrating how the "preventive" measures deemed ethically acceptable vary with the perception of what conditions permit a life worth living (Wertz 2002). The views of geneticists, varied as they were, also differed from those of the U.S. public in a parallel survey. Among the public, only one condition drew majority support (56%) for selection against a trait by abortion: a severely retarded child who could not comprehend (or therefore produce) spoken language. Thus what constitutes an acceptable life (one that should not be aborted on the basis of genetic test information) varies greatly, from specific populations down to the individual level (Wertz 2002).

Should prenatal genetic testing be limited to what is (potentially) deemed "serious" cases of disease and/or disability? An intriguing hypothetical situation posed by Wertz is left to the reader to consider: if it is ethical for the "normal" parents of a fetus diagnosed with a disability to abort it, is it also ethical for disabled parents to abort a non-disabled fetus? The Shriver research put this question to geneticists as follows: is it ethical to allow hearing parents to abort a deaf fetus? Likewise, is it ethical for deaf parents to abort a hearing fetus? While a "large majority" of geneticists supported the first case, they rejected the second as an exploitation of the power of genetic diagnosis to knowingly bring a disabled child into the world (at the theoretical expense of a "normal" one). Wertz asks whether such a sentiment is a fair judgment, or a slightly obscured form of eugenics.

Deafness

One of the major conflicts over normality concerns congenital deafness. During the past three centuries, deafness has been categorized in many different ways: as a disability, an illness, and even as a race, culture, or ethnic group. But how does deafness relate to the category "normal"? Throughout history, the hearing world has regarded the deaf as disabled and abnormal. How, though, do the deaf themselves define normal? We will look at past and present issues of deafness that have shaped the meaning of "normal" in society.

The start of the eighteenth century marked the beginning of a gradual change in the lives of deaf people. At the beginning of the century, deaf people were typically born into hearing families and were quite isolated from other deaf persons. There was no category to put deaf people in, and the term "disability" was not used. They were basically "isolated deviations from a norm, as we now might consider, for example, people who are missing an arm" (Davis 1995, 52). The deaf had no separate society and were not a subculture as they are today.

People in the eighteenth century focused not on the fact that some were deaf, but on how the deaf communicated with the hearing world. The language of the deaf started out with reading and writing. As literacy rates increased, people observed that the deaf could indeed process thought through reading and writing. In 1771, in Paris, the Abbé de l'Epée began to hold public displays of the abilities of deaf students (Davis 1995). Crowds came to see these performances, at which the deaf students were asked thought-provoking questions and replied in written English, French, Latin, German, Italian, and Spanish. Through these displays, the hearing world acknowledged that the deaf could process language visually, if not audibly.

Sign language grew out of written language around this time. It began with the deaf using a finger as a pen to trace the words or ideas they were trying to convey. As Davis puts it, "Writing is in effect sign language, a language of mute signs," which helps explain why the deaf can readily understand written language (Davis 1995). Eventually, as sign language developed into a full language, groups of deaf people started to form a society. However, the deaf were looked upon with both wonder and pity, and were not considered part of the "able-bodied" society.

The nineteenth century brought a social change to the deaf community: it was in this century that society began to look upon the deaf as their own ethnicity or race. This is a strange occurrence, especially when compared with other groups of disabled persons.

But unlike the other people with disabilities, also ostracized if not ghettoized, the Deaf have a community, a history, a culture; moreover the Deaf tend to intermarry, thus perpetuating that culture. (Davis 1995, 78)

Being born deaf was looked upon almost like being born Jewish, Irish, or into any other ethnic group. Socially, unless born into a particularly wealthy family, the deaf were lower-class citizens.

The eugenics movement began in this century, and with it a sense of anti-disability, as described earlier. Interestingly, the inventor of the telephone, Alexander Graham Bell, was a passionate eugenicist. He feared that the deaf would bring about a certain doom for humanity, leading to "the production of a defective race of human beings [which] would be a great calamity in the world" (Bell 1869). Bell believed that sign language should be forbidden, that education through sign language should be abolished, and that deaf people should not be allowed to teach other deaf students.

Despite all this, deaf schools began to spring up across Europe: over the course of the eighteenth century, "there were none... and close to sixty [schools] in the end" (Davis 1995, 82). Education of the deaf continued into the nineteenth century, and through it the deaf created their own community. The number of deaf people joining these communities increased dramatically.

Today, deafness is looked upon both as a disability and as a way of life. This ongoing conflict informs the play (and movie) Children of a Lesser God. If deafness is a way of life, then there is certainly a society and a culture behind it. National organizations such as the National Association of the Deaf (NAD) support deafness and its culture; the NAD's stated mission is the "promotion, protection, and preservation of the rights and quality of life of deaf and hard of hearing individuals in the United States of America" (Finn 1998, 6). The advent of modern technology has indeed improved the quality of life of deaf and hard of hearing persons (e.g., closed captioning, hearing aids, two-way radios, and the Internet).

However, not everyone sees deafness as a culture and a way of life. Some view deafness as a disease and a disability in need of a "cure." This view expresses the notion of the normal as the average, the biologically functional, and the ideal: the ideal person can hear and take part in the hearing world, with all senses functional and complete. The statistical normal also applies, since the majority of the world is hearing. To raise a deaf person up to this ideal or statistical normal, cochlear implants have been developed; their use in deaf children who do not yet have command of language has become a major issue. The cochlea is a spiral-shaped organ in the inner ear that transduces sound waves. High-pitched tones stimulate one end of the cochlea, and low-pitched tones stimulate the opposite end. The pressure waves displace the stereocilia on the tips of the hair cells; the cells depolarize, and vesicles inside them release neurotransmitters onto the auditory nerve, which transmits the signal to the brain. Damage to the hair cells is called sensorineural hearing loss, and it is here that cochlear implants can take over the role of the hair cells by stimulating the auditory nerve directly (Finn 1998, 7).

Cochlear implants convert speech into electrical pulses that the auditory nerve can process. A microphone worn outside the ear picks up speech sounds and passes them by cable to a speech processor. The speech processor converts the sound into electrical signals that are sent to a transmitter fastened to the head. The transmitter then relays the coded electrical pulses to a receiver-stimulator, which is connected to the cochlea through a bundle of wires that stimulate it to send signals along the auditory nerve to the brain (Finn 1998, 5).
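At the heart of this process is the cochlea's tonotopic layout: each frequency maps to a place along the cochlea, so the speech processor must assign each sound to an electrode according to its pitch. A minimal sketch of such a frequency-to-electrode mapping, in Python, may make the idea concrete. The channel count and frequency range here are invented for illustration; they are not taken from Finn or from any real device.

```python
import math

def frequency_to_channel(freq_hz, n_channels=12, f_low=200.0, f_high=8000.0):
    """Map a frequency to an electrode channel, mimicking tonotopy.

    Channel 0 stands for the low-frequency (apical) end of the array,
    channel n_channels - 1 for the high-frequency (basal) end. The
    12-channel count and 200 Hz - 8 kHz range are illustrative only.
    """
    freq_hz = min(max(freq_hz, f_low), f_high)  # clamp into the device range
    # Position of the frequency on a logarithmic scale between
    # f_low and f_high, normalized to [0, 1].
    pos = math.log(freq_hz / f_low) / math.log(f_high / f_low)
    return min(int(pos * n_channels), n_channels - 1)

# A low-pitched tone lands on a low (apical) channel,
# a high-pitched tone on a high (basal) channel.
print(frequency_to_channel(250))    # → 0
print(frequency_to_channel(6000))   # → 11
```

The logarithmic spacing reflects the roughly logarithmic arrangement of pitch along the cochlea; real processors add filtering, envelope extraction, and per-patient tuning on top of this bare mapping.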

These implants have caused significant discord in the deaf community. Many feel that the implants will shrink the community's population and destroy deaf culture. The NAD asserts that cochlear implants do not significantly help deaf students learn English more easily or achieve greater educational success (Finn 1998, 6). However, the National Institutes of Health found that implanting children under the age of two is far more valuable than implanting later, since after the age of two children have passed the critical period of auditory input for language acquisition. The benefits of implants include speech recognition, bilingualism, and the ability to move in both the deaf and hearing communities.

The issue rests more on personal preference and views of deafness than on the medical benefits or drawbacks of cochlear implants. Deaf people who receive these implants are often ostracized by their deaf friends and community. The NAD accordingly urges parents to learn more about the issues surrounding cochlear implants. Many deaf people believe that the implants will cause cultural genocide in their communities, though some believe the implants could relieve them of a burden. Whichever is the case, those on both sides of the issue are encouraged to make an informed decision.

The differences in views between the hearing and deaf worlds are great. In the past, the split between deaf and hearing led many people to categorize the deaf as disabled or even as an ethnic group. Most would locate the normal in the fully able body: fully functional human bodies are looked upon as normal both because they are idealized and because they are frequent. But within a separate deaf community, who is normal? A deaf person might look upon another deaf person and say that he or she is normal; how does a hearing person look upon a deaf person? The answer lies in human perspective. Through the eyes of a hearing person, the deaf could be placed into many categories (including normal), and vice versa.

Hermaphrodites

Hermaphroditism has also been looked upon as a disability and an abnormality. The birth of a child with ambiguous genitalia can lead to surgery and to gender roles forced on an unknowing child. Do the parents have the right to change the sex of their own child, or must the child grow up and then decide? Society has a concrete idea of male and female, and hermaphrodites either remain as they are or change their sex to fit the models written by society. The majority of the world is male or female, which leaves hermaphrodites in a statistical minority; this leads many to believe that hermaphrodites are abnormal because they do not fit a model sex. Functionally, hermaphrodites are not "normal" in that they lack reproductive ability; in every other way, however, they can function as "normally" as any other human beings (including those individuals who lack children by choice or chance). The ambiguity of their sex and gender roles has raised the question of normalcy in these people (Fausto-Sterling 2000).

The birth of a child of ambiguous sex has often been the cause of distress in a family. Questions arise: should we raise our child as a male or a female? What should we name our child? The most controversial question of all, however, is: should we have surgery done on our child? This question arises from the assumption that the child is not "normal" and that modifications must be made to make the child more "normal." For more information, see the websites of the Intersex Society of North America (http://www.isna.org/) and the United Kingdom Intersex Association (http://www.ukia.co.uk/).

In retrospect, it seems clear that the surgical refashioning of infants' genitalia must be assessed during the adulthoods of those patients, after the sexual organs take on their distinctive importance in intimate and procreative relationships. To judge success by genital appearance and psychosexual development prior to puberty is to fall victim to narrowed vision (UKIA, 2002).

The United Kingdom Intersex Association's compilation of statements from the medical literature shows that some find the "refashioning" of genitalia to be a decision that parents must not make. If a child is born with two X chromosomes but male genitalia, how do the parents react? In most cases the child is raised as male and, as an adult, is sterile. Yet by the chromosomal criterion, two X chromosomes define a female. How does one determine sex in such cases?

The written testimony of Mairi MacDonald attests to the physical and emotional pain of being intersexed. Though Mairi was sexually ambiguous at birth, Mairi's parents chose a male sex assignment, and Mairi was not satisfied with being male: with being forced to be a man and to inherit the gender roles associated with maleness. "However, given the choice of 'male,' 'female,' 'intersex,' I would unhesitatingly select 'intersex'—but society does not give me that option so I select 'female.' I do so with deep reservations, gritting my teeth at a society which will not accept my right to simply be who I am" (MacDonald 2000). Society is structured as if there were nothing other than male or female. The minority who fall between male and female are often looked upon as disfigured and abnormal, and many intersexed children have an excruciatingly difficult time relating to other children who are deemed "normal."

My years in school were a minefield of emotions and secrecy, even as a junior it must have been explained to the teachers by my parents that I could not stand to urinate, and that I was very different to other boys, as I suffered the humiliation of only being allowed to go to the lavatory when every other child in the class had been. I didn't understand what all the fuss was about as I didn't feel different to anyone else but was certainly made to feel that way. Why did they have to treat me like such a freak? (Anonymous, 2000, in UKIA website)

These experiences are due to the lack of acceptance of people of intersexed backgrounds and to the inner conflict that society creates for those born with ambiguous genitalia. Because of this ignorance, many have undergone surgery on reaching adulthood. Such experiences are not merely individual, but shared among many people of intersexual background. Normalcy in this case seems to be shaped by society, not by individuals: society's definition of normal describes the ideal, fully functional person, and statistically, since the majority of people are wholly male or female, the minority becomes the abnormal.

Coda

Ever-changing societal perception will always influence the definition of "normal" and "normalcy." Regardless of the phenotype, members of a given minority may view themselves as normal while society defines them as abnormal. The people in the "normal" majority will likely share common (if naïve) views on the definition of a normal phenotype, a definition that is then applied to all members of society. It is interesting, although perhaps not surprising, that a large majority of the texts written on what is "normal" are by those who, by their own admission, are themselves considered abnormal by society. Thus, those in the minority are often the first to question conceptions held without question by the majority. These philosophical and practical questions about the nature of normalcy, however, are left to the inherently dynamic nature of "normal" society to reconcile.

Works Cited

Amundson, Ron. (n.d.) Disability, Ideology, and Quality of Life: a Bias to Biomedical Ethics. [Unpublished Manuscript]

Anonymous. (2000). "From Birth to Realisation." <http://www.ukia.co.uk/voices/pais.htm>

Boorse, Christopher. (1976). "What a Theory of Mental Health Should Be." Journal for the Theory of Social Behavior 6: 61-84.

Boorse, Christopher. (1977). "Health as a Theoretical Concept." Philosophy of Science 44: 542-573.

Boorse, Christopher (1997). "A Rebuttal on Health." In: What is Disease? (James M. Humber and Robert F. Almeder, eds.). Totowa, NJ: Humana Press.

Brown, W. Miller. (1985). "On Defining 'Disease.'" Journal of Medicine and Philosophy 10: 311-328.

Cooper, Brian P. and Margueritte S. Murphy. (2000). "The Death of the Author At the Birth of Social Science: The Cases of Harriet Martineau and Adolphe Quetelet." Studies in the History and Philosophy of Science 31: 1-36.

Davis, Lennard. (1995). Enforcing Normalcy: Disability, Deafness, and the Body. London, England: Verso.

Dobzhansky, Theodosius. (1962). Mankind Evolving. New Haven, CT: Yale University Press.

Englehardt, H. Tristam, Jr. (1976). "Ideology and Etiology." Journal of Medicine and Philosophy 1: 256-268.

Edwards, Martha. (2000). "Constructions of Physical Disability in the Ancient Greek World: The Community Concept." In: The Body and Physical Difference: Discourses of Disability. David Mitchell and Sharon Snyder (eds.). Ann Arbor, MI: The University of Michigan Press.

Fausto-Sterling, Anne. (2000). Sexing the Body: Gender Politics and the Construction of Sexuality. New York: Basic Books.

Finn, Robert J. (1998). Sound from Silence: Development of Cochlear Implants. Washington D.C.: National Academy of Sciences.

Gould, Stephen Jay and Richard Lewontin. (1994). "The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme." In: Conceptual Issues in Evolutionary Biology (Elliott Sober, ed.). Cambridge, MA: MIT Press.

Hankins, Frank. (1908). "Adolphe Quetelet as Statistician." In: Studies In History, Economics, and Public Law, vol. XXXI. New York: Columbia University.

Hopps, Howard C. (1964). Principles of Pathology (2nd edition). New York: Appleton-Century-Crofts.

King, C. D. (1945). "The Meaning of Normal." Yale Journal of Biology and Medicine 17: 493-501.

Longmore, Paul. "Conspicuous Contribution and American Cultural Dilemmas: Telethon Rituals of Cleansing and Renewal." In: The Body and Physical Difference: Discourses of Disability. David Mitchell and Sharon Snyder (eds.). Ann Arbor, MI: The University of Michigan Press.

MacDonald, Mairi (2000). "Intersex and Gender Identity" <http://www.ukia.co.uk/voices/is_gi.htm>

Marx, Jean. (2002). "Unraveling the Causes of Diabetes." Science 296: 686-689

Moravcsik, Julius. (1976). "Ancient and Modern Conceptions of Health and Medicine." Journal of Medicine and Philosophy 1: 337-348.

Murphy, E. A. (1972). "The Normal and the Perils of the Sylleptic Argument." Perspectives in Biology and Medicine 15: 566-582.

Murphy, E. A. (1973). "The Normal." American Journal of Epidemiology 98: 403-411.

Murphy, E. A. (1979). "The Epistemology of Normality." Psychological Medicine 9: 409-415.

Nordenfelt, Lennart. (1985). "A Sketch for a Theory of Health." Acta Philosophica Fennica 38: 203-217.

Nordenfelt, Lennart. (1995). On the Nature of Health: An Action-Theoretic Approach (2nd edition). Dordrecht, Holland: Kluwer Academic Publishers.

Pernick, Martin. (2000). "Defining the Defective: Eugenics, Aesthetics, and Mass Culture in the Early Twentieth Century." In: The Body and Physical Difference: Discourses of Disability. David Mitchell and Sharon Snyder (eds.). Ann Arbor, MI: The University of Michigan Press.

Quetelet, Adolphe. (1969). A Treatise on Man. (Robert Knox, Trans.) Gainesville, FL: Scholars' Facsimiles & Reprints. (Reprint: Original work published 1842).

Reznek, Lawrie. (1987). The Nature of Disease. New York: Routledge & Kegan Paul.

Szasz, Thomas. (2000). "Second Commentary on 'Aristotle's Function Argument.'" Philosophy, Psychiatry, & Psychology 7: 3-16.

Tallent, Norman. (1967). Psychological Perspectives on the Person. Princeton, NJ: Van Nostrand.

United Kingdom Intersex Association (2002). <http://www.ukia.co.uk>

Vácha, Jiří. (1978). "Biology and the Problem of Normality." Scientia 72: 823-846.

Vácha, Jiří. (1985). "German Constitutional Doctrine in the 1920's and 1930's and Pitfalls of the Contemporary Conception of Normality in Biology and Medicine." The Journal of Medicine and Philosophy 10: 339-367.

Wertz, Dorothy. (n.d.). Society and the Not-so-New Genetics: What Are We Afraid of? Some Future Predictions from a Social Scientist. The Shriver Center: <http://www.umassmed.edu/shriver/>


