There is an equivocation in modern philosophy that makes ethics a problematic undertaking. It stems from our ability to give impersonal accounts of human behavior, the things people do, while at the same time experiencing ourselves as participants in history. So accustomed have we become to looking at ourselves from an imaginary point outside ourselves and outside our world that we often forget who we are; we forget that thread of history that makes us uniquely ourselves. We could never tolerate a complete dissociation from the circumstances of our existence and maintain our sanity, yet a selective surrender to the forces of impersonality can from time to time be a convenient escape from the awesome responsibilities that are a part of the human condition.
A theory of ethics must speak to this equivocation. One may look at ethics from the perspective of the self, the person, the agent who must decide and act in a social context (upwards perspective), or one may look at ethics from the perspective of the social order into which the individual must fit (downwards perspective). The questions that one formulates for ethics may be very different in these two perspectives. It is even possible to divert one's attention away from the self by abstractly formulating the questions for the ethical undertaking, yet the self never disappears no matter how impersonal the considerations become. "What is right or wrong?" is a very different question from "What ought I do?" Yet both are properly considered ethical questions. What is not proper is to consider one approach exclusive of the other. The distinction between absolutist and relativistic ethics is itself specious, suggesting that it may be possible and appropriate to ignore either the person or the context. This tension, which has plagued modern philosophy since the Enlightenment, leaves us believing that we must make a choice--adherence to ourselves or to an objective reality--when in fact we need to understand how to locate ourselves in a reality that is beyond ourselves.
The recognition of multiple tasks for ethical reflection, which are inclusive and not exclusive, raises questions about the methods of thought that may be appropriate and helpful in studying ethics. When is a question or issue understood to be an ethical question or issue? Does saying that it must be personal mean that it depends on one's perspective? If so, is this to suggest that different conclusions could be reached in different situations? If not, could one hope for an absolute guideline as a point of reference in making decisions?
But after all, what is goodness? Answer me that, Alexey. Goodness is one thing with me and another with a Chinaman, so it's a relative thing. Or isn't it? Is it not relative? A treacherous question! You won't laugh if I tell you it's kept me awake two nights. I only wonder now how people can live and think nothing of it. Vanity!1
The abstractly formulated question "What is goodness?" is not the way ethical dilemmas are usually experienced. They emerge from the context of the lives we lead. It is in an attempt to get away from the immediacy of the decisions we must make that we abstractly formulate our question, so that we may act with less turmoil than Ivan Karamazov experienced. People can live and think nothing of the question because action is often taken without the intense reflection that perplexes Dostoevsky's characters. At the one extreme are those impulsive characters who reflect not at all on their actions; at the other extreme are those obsessional Hamlets whose actions are almost completely inhibited by their thoughts. Descartes even based his existence on his thinking: Cogito ergo sum. At least he thought this was what he was doing.
This chapter attempts to get a bearing on the problem of doing ethics in our highly scientific and philosophically abstract culture by looking in some depth at the person who must make moral decisions and then act. By looking at the decision-making process, we realize that as important as abstract thought and cognition are, there are other ingredients as well. So completely have thought and action come to be separated in the modern imagination that we have become accustomed to asking the epistemological question "How do we know?" independent of the ethical question "What shall we do?" So thoroughly do we allow ourselves to maintain this split in our minds that often we do not even know who or what we are. We are not only confused but depersonalized by the impersonalization of philosophy and ethics. Thus when we say that ethics involves more than morality--that it is the process of reflecting on values as values are translated into action--we are identifying the person by the actions that person chooses. One is not what one thinks but what one does--or more precisely, one is known by what one does. This is what Hauerwas means when he claims that integrity, not obligation, is the hallmark of moral life. One's wholeness is constituted by one's consistency over a period of time, a lifetime, not in proclaiming what one considers to be virtuous. When Hauerwas faults the standard account of ethics for its attempt to be universal, impersonal, atemporal, and acultural, he is reminding us of what we already know, but are inclined to forget; namely, that ethics is contextual (arising from particular life situations), and it is personal, historical, and cultural. We are inclined to forget this because of the demand that such awareness places on us. We are obligated to follow the admonition "Know thyself" because our sense of identity depends on it, and our moral reputation rests on what we choose to do.
The critique of the standard account may be summarized and contrasted with its alternative by means of the following chart.
| Standard (Regnant) Account | Alternative (Post-Critical) Account |
| Based on obligation | Based on integrity |
| Enforced by control, suasion, or sanction | Enforced by willing assent, trust in a convivial order, or community |
Each of these features will be examined in some detail in the discussions that follow. We may start by looking at what at first may appear to be a rather mundane clinical situation. If ethics is to be understood as a process of reflecting on the value judgments one makes, that process is very much like clinical judgment for the physician. Does clinical judgment include ethical ingredients even if not recognized as such? Again consider the decision involved in choosing an analgesic:
A 54-year-old engineer is now in his eighth post-op day following abdominal surgery for ulcer repair. While a likable person, this engineer gives the impression of being a chronic complainer, a hypochondriac. And he always requests his Demerol, ordered once every four hours for pain, half an hour before it is due. His surgeon comes in and talks to him a little bit about it. The surgeon says, "Most of my patients are better by the eighth day. Demerol is a highly addictive medication." After some discussion along these lines, the surgeon says, "Well, I think what I will do is switch you to Talwin for pain. You'll still have your pain medication but with a less addictive drug."2
This example has the virtue of illustrating a real conflict that involves more than just technical criteria and facts and that, though of no apparent controversy, is of grave importance to the people involved. The physician might not recognize this as a matter for ethical reflection, thinking it to be merely a matter of clinical judgment. The philosopher/ethicist is quick to spot the value dimensions of this kind of decision and raise in an ethical context the kinds of questions the physician might well also be raising without identifying them as ethical questions per se. Is depriving the engineer of the narcotic he requests causing him undue suffering, or conversely, would acceding to his request/demand mean irresponsibility, causing him to become addicted? The matter might in fact become quite controversial if it is recognized that the surgeon is making a judgment about what might be called the engineer's "character," even if that character judgment were camouflaged as a "psychiatric assessment" of addictive potential.
Physicians make numerous such choices every day. When are these decisions to be identified as ethical choices? I submit that a mundane choice is considered an ethical matter when an element of conflict is involved. This conflict may be a conflict within oneself: A given individual faces a dilemma between two courses of action, each having some merit; we call this intrapsychic conflict. Or the conflict may be between two or more persons committed to different courses of action, which we may call interpersonal conflict.
We may hope that when such conflicts arise they can be resolved peacefully, but we can easily imagine that such conflicts might generate friction, heat, antagonism, even anger. Where there is conflict, there is likely to be affect. Ethical discussion often becomes difficult because of the unpleasantness of the affect it generates. Indeed it is such conflict that often occasions ethical reflection. It is a task of ethics to resolve or mediate such conflicts, and many approaches to ethics try to minimize the affect. Objectivity and equanimity are seen as desirable. Analysis of alternatives, it is sometimes suggested, should be rational, cool, dispassionate, and logically rigorous. While this may be desirable, abstract analysis can often miss the point of real conflict and be too remote from the genuine concerns. It is also a task of ethics to identify real concerns, which may have lain dormant because dealing with them would be uncomfortable.
Thus as a starting point and point of orientation when ethical reflection seems to be getting abstract and remote, I often find it useful to ask "Where is the affect?" as a way of locating ethical conflict.
At a panel discussion of the Karen Quinlan situation, the problem of when to turn off an artificial respirator was addressed by a physician, a lawyer, and a philosopher, from the perspectives of their varied disciplines. Each in turn gave a talk outlining the issues from his point of view. The talks were very abstract, with qualifications and requalifications about possible definitions of death and how a decision to turn off her respirator might be reached. The comments had a very detached quality, as if the speakers were indifferent to the situation, yet there was an undercurrent of tension and anxiety, perhaps as everyone present imagined being in such a situation and wondered how to deal with it. Where was the affect? Finally a point was raised that seemed initially to be tangential to the substance of the discussion: How much was it costing, and who was paying? At that time Karen Quinlan's hospitalization had cost some $130,000 at public expense. Perhaps that should not have mattered, but it did. It was mentioned that Mr. Wrigley, the Chicago chewing gum magnate, had kept his wife alive on a respirator for a number of years at personal expense. The lawyer argued that as long as it was done at personal expense, there was no matter of ethical concern. A political science professor objected very strenuously that who paid was an irrelevant issue and that there was a real ethical conflict involved. At this point the discussion became very heated and everyone got drawn in. It seemed that one of the key affective issues involved was money: who was spending it and how it was being spent was something about which people had very strong feelings. An issue that started out as a concern with definitions of death turned out to involve "allocation of scarce resources," distributive justice, and free economy as well.
These two examples in particular and any of a number of situations we might choose to examine, several of which we will later have occasion to consider, illustrate a problem in ethics, which John Macmurray terms "the crisis of the personal" in philosophy. In the example of choosing an analgesic, we see that it is possible for a person (the physician) to act (to choose an analgesic) without necessarily reflecting on that action. In the extreme of this impersonality the action could become almost reflexive: "I act this way because that's what I always do" or because "that's what is always done, standard procedure." In consideration of the Karen Quinlan panel discussion, we see reflection either with a paucity of affect or an almost uncontrollable excess of affect.* The equivocation in these situations comes from an uncertainty about where to locate the person or how to identify the person. Does one indwell the situation (mind and body, thought and affect)? Or is the person removed from the situation, reflecting on it from the Archimedean point outside the world? Our first instinct might say both or either, so accustomed have we become to moving back and forth by acts of mental distancing. Because of this acquired ability, we should take a closer look at what is meant by the personal.
*The task of the ethicist is in at least one very important regard similar to that of the psychotherapist or psychoanalyst: in the attempt to achieve the right titration of affect so that learning about human experience may take place. When confronted by a hyper-rational elaboration of detail and an isolation of affect, it is necessary to raise the titer of emotion. When confronted with hysterical over-emotionality, the task is to lower the level of affect, so that reason may have an opportunity.
What is a self, a person? We may easily be misled if we follow a philosophy based on understanding the physical universe; we are not merely atoms, which mean nothing more to one another than an occasional bump in a vast emptiness. The word "person" was in its earliest forms the Latin "persona," the mask worn by actors in a drama. The linguistic clue suggests an awareness of the equivocation in selfhood, for the mask that one presented publicly was not necessarily the private self. Indeed there is the inherent possibility for self-deception in assuming a public role to disclose one's conscious intentions, for if one holds up a mirror to oneself, it may be the mask that looks back.
But the mask cannot be worn at all times, so one must look again and again to know a self. A self is dynamic, not static; it changes and develops over time. This is what it means to have a history and to live and act in history. When one is identified by a variety of the predicates that one might choose, for example, I am a...carpenter, father, Christian, male, diabetic, Democrat, teenager, or whatever, these are but partial identities, chosen for the sake of expediency to attempt to convey something more complex than language can easily disclose. One's self must encompass all that one is and has been, a totality of life history and its memories, not all of which are accessible to conscious memory at any one time, many of which one might wish to delete and may do so by selective amnesia. Somerset Maugham recognized this when he said:
What makes old age hard to bear is not a failing of one's faculties, mental and physical, but the burden of one's memories.3
The view of history Maugham is expressing is not the one that is so often held, the chronicle of time or march of events, but a very personal view, one that Merleau-Ponty has called "temporal thickness."4 History is not just the public and recorded events, but each person has a unique history of his or her own. Furthermore, it must be emphasized that this history extends back to childhood and to birth, though the earliest memories may be obliterated. It is important to keep this point in mind as we reconsider the meaning of the personal, the historical, the temporal, and the cultural in relation to ethics, for the early antecedents of rationality have some forms that are very different from the adult thought processes that are usually meant when one refers to "rationality." History is not just the chronicle of events of successive generations of rational adults, as we might find in the textbooks, but each person starts as an irrational child and carries into adult life the childhood antecedents of prerational thought, what psychiatrists call "primary process" thinking.
According to Piaget's studies of cognitive development, there is a disjunction in cognitive processes that occurs around the onset of adolescence. At this time,
the child achieves the Cartesian cogito and reaches the truths of rationalism. At this stage...he discovers himself both as a point of view on the world and also as called upon to transcend that point of view and to construct an objectivity at the level of judgment.5
The cogito thus becomes the child's passport to adult society. As he learns the rules of his culture by living them, he develops a critical sense. Thus supported by the norms of his group, he acquires a sure standard for judgment. And as a self-assured judge, he of course is less vulnerable himself. Yet even while recognizing this developmental transition, Piaget brings to the child a mature outlook, as if the thoughts of the adult were self-sufficient and devoid of all contradiction. In reality, as Merleau-Ponty reminds us,
it must be the case that the child's outlook is in some way vindicated against the adult's and against Piaget, and that the unsophisticated thinking of our earliest years remains as an indispensable acquisition underlying that of maturity.6
The study of philosophy is an undertaking of the rational mind. As a branch of philosophy, ethics is a rational undertaking. Yet people do not always behave in rational ways, and ethics must account for this irrationality in human behavior. As a way of getting beyond the limitations imposed on ethics by an impersonal view of objectivity, I am proposing that a look at the childhood antecedents of rationality is useful. There are several avenues to this understanding, which converge to broaden the conception of what ethical reflection may be able to accomplish. One is psychoanalysis, which emerged historically at the time when critical thought was gaining its greatest incisiveness, indeed at a time when logical positivism was emerging in Vienna and elsewhere. Another useful approach is the ethical revisions offered by such philosophers as Edmund Pincoffs or Stanley Hauerwas, who suggest that narrative accounts of the life stories of those involved in making decisions offer clues to the ethical issues at stake. John Macmurray offers a critique of traditional philosophy, which suggests possible reorientations, and Michael Polanyi's epistemology similarly offers new possibilities for ethics. As we better appreciate the limitations objectivist epistemology poses for ethics, we are in a better position to consider alternatives that may be more productive.
John Macmurray in his Gifford Lectures of 1953, given under the title "The Form of the Personal" and published under the title The Self as Agent, identifies what he calls "the crisis of the personal." Macmurray is concerned that modern philosophy has led to a situation in which man is seen impersonally, thus impeding the possibility of accounting for man in relation either to other persons or to God ("always an 'I,' never a 'Thou'"). He levels two charges against philosophy: (1) that it is merely theoretical and (2) that it is egocentric. Of the first charge he notes:
Philosophy aims at a complete rationality. But the rationality of our conclusions does not depend alone upon the correctness of our thinking. It depends even more upon the propriety of the questions with which we concern ourselves. The primary and the critical task is the discovery of the problem. If we ask the wrong question the logical correctness of our answer is of little consequence.7
He further notes that "Common tradition conceives the philosopher as a man of a balanced temper, who meets fortune or disaster with equanimity."8 The theory with which philosophy deals must be rooted in the practical questions which man experiences or it risks being trivial or misleading.
Of the second charge, that modern philosophy is egocentric, he notes that "It takes the Self as its starting-point, and not God, or the world or the community; and that the Self is seen as an individual in isolation, withdrawn from active relations with other selves." About this solipsistic outlook much more is said in the following chapter, but for the time being we must note the distortion in the notion of a self, of a person.
Macmurray proposes to redress this imbalance by transferring the center of gravity in philosophy from thought to action. It is in this sense that he defines the self as an agent and attempts to substitute the "I do" for the "I think" as the defining characteristic of personhood.
Considerations of thought are static; they do not move. Considerations of action are dynamic, and we may inquire into the forces that cause motion. In so doing we rely metaphorically on the mechanical images of physical forces moving bodies through space and time in order to grasp something of mental life and experience, which so elude our understanding. Gilbert Ryle speaks of the mind as the "ghost in the machine"9 as a way of helping us through the familiar layers of philosophical thought to an understanding of mental life. This metaphor reflects the habitual ways we have come to understand ourselves.
Freud similarly relied on mechanical metaphors in his attempt to elucidate the unconscious realms of mental life. When he spoke of psychodynamics, he was casting in scientific language the mythologies of the ages and returning us to the histories of persons. To understand ethics--that is, to understand persons who act and make decisions in history--it is necessary to understand the forces that move people. We do not really want a passionless philosophy, but rather want to understand the emotions that underlie philosophical deliberation, even though we constantly fear that sanctioning the recognition of these forces will unleash uncivilized demons that might better remain contained.10
What forces underlie morality? What emotions or affects provide the dynamos that move people to action? By asking these questions, what are we able to accomplish that a merely theoretical and egocentric philosophy cannot accomplish? What is the self that acts in history and how does knowledge of this self, ourself, help us in making decisions in relation both to the situations in which we find ourselves and also to those principles that we are able abstractly to adduce?
This nest of questions is complex. They might correctly be answered by saying that all emotions in one way or another at some time or another influence the way people act. Therefore, we must quickly limit our focus to specific emotions and situations by way of example or remain forever on a theoretical level. The clues from the psychoanalytic study of problems in self-esteem and the devices by which a person regulates self-esteem through morality therefore are illuminating.
The psychoanalytic study of the self derives from Freud's early observations in "On Narcissism: An Introduction,"11 in which he introduced the concept of the ego ideal, also sometimes referred to as the ideal self, which later gave origin to the concept of the superego in Freud's topographic (id, ego, superego) theory of personality development.12 The term "narcissism" is derived from clinical description and was chosen by Paul Nacke in 1899 to denote the attitude of a person who treats his own body in the same way in which the body of a sexual object is ordinarily treated. Freud at that time was developing his libido theory, which derived support from the studies of children and from primitive peoples. "Totem and Taboo"13 preceded Freud's work on narcissism, and Jung's departure from psychoanalysis occurred about this time in protest of Freud's insistence on infantile sexuality. Autoeroticism was clearly evident in the normal development of children, a pleasurable "instinct" from the start, but the ego, that mental agency that mediates and regulates the impulses, had to be developed over time.
The study of primitive cultures' abhorrence of incest and the function of taboos against it demonstrated the dynamic forces that enable such impulses to be suppressed, and the unacceptable (hostile, murderous, even sexual) impulses of children in civilized Vienna followed the patterns of the primitives. Libidinal instinctual impulses undergo pathological repression if they come into conflict with the person's cultural and ethical ideas. The formation of an ego ideal and later a superego are the devices by which the young child can learn to live harmoniously in a family and in a culture. They are pleasurable deceptions, which the child can unconsciously maintain while the ego gradually learns how to cope with reality.
Freud declared the ego ideal to be the "heir of the original narcissism"14 and Hartmann and Loewenstein say, "the ego ideal can be considered a rescue operation for narcissism."15 In other words the ego ideal is born of an effort to restore the lost pleasures of the symbiosis with an all-giving mother. In this blissful state the human infant lies securely at the center of his universe. His every need is met by a mother (or sometimes another) who needs him almost as much as he needs her. He is omnipotent. If he is hungry or in some other way uncomfortable, he cries, and mother comes to meet the need. This state is referred to as primary narcissism, but it is a precarious condition.
The enemy of primary narcissism is the reality principle. The symbiosis is ruptured too soon after the infant is expelled from the womb. He is hungry and cries, but the mother does not immediately appear. The universe is not perfect as he had imagined, and it is not under his control. The result is rage, possibly despair if he is abandoned or if his mother is not dependable. The original sense of omnipotence has received the first of many "narcissistic injuries." Reality has entered in.
The infant copes with this intrusion by shifting focus to an idealized parent; it is not he but the parent that is perfect. As the infant's physical and mental abilities, particularly locomotion, develop, it is less necessary to be completely dependent on the mother, and the infant becomes aware of his separateness from her. The small, helpless infant forms an ideal of what he would like to be; that is, omnipotent like the parents. But even under the best of circumstances, the idealized parents prove fallible, incapable of providing the total gratification remembered from earliest infancy, and the infant develops a new and better possibility for the self, the ego ideal. The love that was originally invested in the self and then in the idealized parent is now invested in the ideal self that he desires to become. This state is called secondary narcissism. Freud observed that man creates ideals for himself to restore the lost narcissism of childhood, to restore that state of contentment in which one's needs are passively met. It is the ego ideal, Freud claims, "by which the ego measures itself, towards which it strives, and whose demands for ever-increasing perfection it is always striving to fulfill."16 But the grandiose aspirations for perfection in reality can never be fulfilled. A person's self-esteem is determined by the distance between his actual self--his strengths, talents, abilities, and accomplishments--and his ego ideal.
The superego, thought by many psychoanalysts to be distinct from the ego ideal, is a developmentally later acquisition. In contrast to the ego ideal, which lures the person on to higher and often impossible standards of perfection, the superego is restrictive. It is the superego that is the internalized representation of the parents and of the culture's standards for conduct. The superego is the unconscious conscience that criticizes the id impulses and keeps the ego in line. Instinctual gratification (whether sexual or aggressive) is renounced either out of fear of loss of love by the ego ideal or out of fear of punishment by the superego. This is why Freud suggests in "Civilization and Its Discontents"17 that neurosis is the price we must pay for the harmony of civilized culture, a verdict that is only reluctantly accepted.
Earlier I contrasted upward and downward perspectives on tasks of ethics. Considered not only in terms of the function of ethical theory but also of the development of an individual, it may now be said that ethics in the upward perspective functions by means of the ego ideal, and ethics in the downward perspective functions by means of the superego. One is beckoned upward by the highest standards of moral perfection, but restricted from falling below certain minimal standards by means of superego strictures, if internalized, and by the possibility of external punishment if necessary. Teleological ethical theories appeal to goals that all might be expected to hold. Deontological ethical theories set standards by which one may know what limitations are expected. Earlier I criticized the standard account of ethics as being inadequate because it attempts to be universal, impersonal, acultural, and ahistorical. I now add that teleological and deontological accounts are both inadequate. They fail us not only because they are too abstract but also because they rest on immature developmental forms, the ego ideal and the superego. With this said, we may now explore in greater depth the affects associated with the ego ideal and the superego and why they are inadequate as a basis for morality.
Morality in any given culture is enforced by the affects of shame and guilt. Shame is the primary affect that mediates the functioning of the ego ideal in a given person, and guilt is the affect by which the superego exerts its force. Shame consists of feelings of inferiority, humiliation, embarrassment, inadequacy, incompetence, weakness, dishonor, disgrace, "loss of face"; the feeling of being vulnerable to or actually experiencing ridicule, contempt, insult, derision, scorn, rejection, or other "narcissistic wounds"; and the feeling of not being able to take care of oneself and of being dependent on others. Jealousy and envy are members of this family of feelings. Guilt refers to the feeling of having committed a sin, a crime, an evil, or an injustice; the feeling of culpability; the feeling of obligation; the feeling of being dangerous or harmful to others; and the feeling of needing expiation and deserving punishment.
James Gilligan in a very provocative analysis, "Beyond Morality: Psychoanalytic Reflections on Shame, Guilt, and Love," sees morality as a "force antagonistic to love, a force causing illness and death--neurosis and psychosis, homicide and suicide." He views morality as a "necessary but immature stage of affective and cognitive development, so that fixation at the moral stage represents developmental retardation, or immaturity, and regression to it represents psychopathology, or neurosis."18 He claims that morality is dead, that it killed itself. Citing the self-criticism moral philosophy has subjected itself to over the past two centuries (for example, Hume, Kant, Nietzsche, and Wittgenstein), he believes that the only knowledge possible is of scientific facts, not moral value.
As evidence for this, he draws on demonstrations of the effects of shame and guilt in various cultures. An example of a pure shame culture is the Kwakiutl Indians of Vancouver Island, described by Ruth Benedict:
Behavior...was dominated at every point by the need to demonstrate the greatness of the individual and the inferiority of his rivals. It was carried out with uncensored self-glorification and with gibes and insults poured upon the opponents....The Kwakiutl stressed equally the fear of ridicule, and the interpretation of experience in terms of insults. They recognized only one gamut of emotion, that which swings between victory and shame.19
An example of a pure guilt culture is the Hutterites, a Protestant sect scattered through the northern Middle West and southern Canada on communal farms and colonies, where they consciously attempt to adhere strictly and literally to the ethic of the New Testament. They are described by Kaplan and Plaut:
Religion is the major cohesive force in this folk culture. The Hutterites consider themselves to...live the only true form of Christianity, one which entails communal sharing of property and cooperative production and distribution of goods. The values of brotherliness, self-renunciation and passivity in the face of aggression are emphasized. The Hutterites speak often of their past martyrs and of their willingness to suffer for their faith at the present time.20
In these extreme forms, shame cultures display a maximum of hostility toward others in order to maintain a maximum of love for the self; guilt cultures display a minimum of love for the self in order to check hostility toward others and maintain nonviolence and pacifism.
I am in fundamental agreement with Gilligan's contention that morality can be a destructive force if it excessively relies on shame and guilt. I disagree that morality must necessarily be destructive, however. Considered in historical perspective--that is, in developmental perspective--morality serves the growing person well by providing an orientation to one's culture and to those others with whom one lives. Morality also provides a sense of identity, that understanding of self by which one lives, and by which one is known to others. This sense of agency, however, is achieved gradually over time and emerges from the immature forms of the ego ideal and superego. The mature sense of agency carries with it the sense of competence, empowerment, potency, internal force, confidence in initiating change or control, and a realistic appraisal of what it may be possible and appropriate to accomplish.21 The mature agent has more or less successfully resolved his narcissism: His expectations for himself and his (ego) ideals are more or less in line with his abilities, and his goals, though perhaps ambitious, are not grandiose, nor do they represent a striving for omnipotent perfection. Given the vicissitudes of human development, this state of mature agency is not usually achieved. Even when it is achieved, it is in precarious balance with those stresses in life that would precipitate helplessness and dependency: the various life crises, loss of loved ones, aging, and especially the loss of one's health through unexpected illness. These stresses may bring forth a yearning for an ideal world and a return to a state of blissful dependency, and that yearning may include the idealized wish for the physician to be an omnipotent god, a wish the physician may be seduced into sharing. Such dependency, however, engenders much ambivalence, especially in a culture that so values independence, self-reliance, and autonomy.
When reflecting ethically on the possibilities for such an ideal world, we must be mindful of our own narcissistic temptations and keep reality firmly in view.
There is nothing wrong with maintaining perfectionistic ideals unless they are used destructively to clobber mortal humans who cannot live up to their standards. It is these immature and grandiose forms of morality that can be destructive.
We must now turn to a consideration of the destructive force of excesses of morality and the demand for moral perfection. In the appendix to this chapter, some of the positive, life-sustaining characteristics of morality are further considered. The thesis advanced here is that values form the core of the identity of a person and thus regulate self-esteem. If viewed from the perspective of the regnant epistemology, which defines a person in terms of thought instead of action, morality becomes a destructive force. In its abstract forms, morality insists on moral perfection, which no person can achieve. Gilligan is right insofar as he views morality in the context of the standard epistemological accounts of morality. Morality has killed itself, or rather, it has failed to give an adequate account of itself. However, we may carry Gilligan's use of the psychoanalytic approach and Polanyi's use of the post-critical approach beyond the limitations of the standard/regnant account to demonstrate that love or trust as the most fundamental and earliest developmental issue is the correct basis for morality, not shame or guilt, which represent pathological aberrations if not properly resolved. This stands in staunch opposition to the view that holds (solipsistically) that the autonomy of persons is the basis for ethics. Such a view cannot be true because persons are in no real way autonomous; they exist in relationship with others, whom they may or may not trust, but on whom they are unquestionably dependent and interdependent. Descartes's dualistic philosophy bases knowledge on doubt rather than belief (trust) because, I believe, of his own insecurity in trusting, an insecurity that touches all of us fundamentally because of the developmental traumas we must face in early childhood.
Polanyi corrects the Cartesian mistake by reminding us that we never have been able to base knowledge on doubt; rather, knowledge rests, now in modern science as it always has, on a fiduciary enterprise of a community of knowers rather than on the isolated efforts of individuals who bear no relationship to one another.
The epistemological work of Michael Polanyi is useful as a counterbalance to such excesses.22 I believe it may be shown that ethics follows quite legitimately from epistemology by rooting one's knowledge in the commitments one holds. Himself a scientist (initially trained as a physician), Polanyi took exception to the objectivist view of science by demonstrating the "tacit dimension" of knowing in the process of scientific discovery. His post-critical epistemology seeks unification of the sciences and the humanities by stressing the role of the knower in the knowing. Only in a culture admitting to a scientist-humanist dichotomy could there be such ambiguity about the role of the physician.
The moral inversion is a phenomenon in which moral passions are repudiated in the face of strict scientific objectivity. What happens in the repudiation is often a reversal or inversion in which quite immoral ends are justified scientifically. Polanyi sees the moral inversion as being derived from conflicting aims of our knowledge. He sees two conflicting ideals of our age (moral passions and intellectual skepticism) "locked in a curious struggle in which they may combine and reinforce each other."23
The moral inversion comes about when an individual feels unable to meet the absolute ideals of moral perfection. Such an individual imbued with the demands of critical objectivity sees such moral perfectionism as hypocritical (literally less than critical) and repudiates it in the name of something honest, authentic, or real. Often this may be some social cause that derives its merit from simplification of a complex situation, or it may be an utterly gratuitous act, which has nothing more to commend it than the fact that it is indeed chosen.
The example of the 23-year-old mother whose lawyers opposed involuntary commitment may be understood as a moral inversion. Rigid adherence to the principle of liberty, which is good for the preservation of a free society, is cruelly immoral for the woman who ended up dead.
Existentialist literature offers numerous examples in which gratuitous acts affirm the reality of the individual who would otherwise feel dehumanized in an objectivist world. Dostoevsky's Raskolnikov in Crime and Punishment is a prime example. By murdering the old woman for no reason, he can order his otherwise senseless existence. Andre Gide's Lafcadio similarly vindicates himself by pushing a man off the train. The moral inversion is so prevalent in American cinema as to be commonplace. The cinema extols its antiheroes; the good guys are the bad guys, who are admired for their audacious defiance of hypocritical social convention. Butch Cassidy and the Sundance Kid rob bad banks for the sheer fun of it. Bonnie and Clyde perform the same crimes to overcome impotence. In A Clockwork Orange, an honestly self-serving psychopath is attractive when compared to the behavior modifiers for whom behavioral ends justify cruel means.
Polanyi's key example is what he calls the "dynamo-objective coupling" of Soviet Marxism, which he uses to illustrate the proximity of morality to politics and the close reciprocal workings of moral passions with an objective view of reality.24 In the dynamo-objective coupling, alleged scientific assertions, which are accepted as such because they satisfy moral passions, will excite these passions still further and thus lend increased conviction to the scientific affirmations in question, and so on. Moreover, such a dynamo-objective coupling is also potent in its own defense; any criticism of its scientific part is rebutted by the moral passions behind it, while any moral objections are coldly brushed aside by invoking the inexorable verdict of its scientific findings.
In the Marxist example, one sees a coupling between the utopian ideals--liberty, justice, and brotherhood--and their translation into an objectivistic view of the social order, namely, dialectical materialism. By covering moral passions with a scientific disguise, moral sentiments are protected against depreciation as mere emotionalism. They acquire instead a sense of scientific certainty. On the other hand, material ends are impregnated with the fervor of moral passions.
In modern psychiatry and behavioral science, examples of the moral inversion are also quite common. The disciplines--to the extent that they remain bound by the tenets of positivism--disclaim any moral intentions. Yet the vast enterprises of psychiatry, psychoanalysis, behavior modification, and counseling proceed according to value judgments, with the metaphors health or illness or the measurements normal or abnormal surviving as surrogates for the moral terms good or bad (valued or disvalued).
Thus the antipsychiatrists have a point in their criticism of psychiatry for its failure to give attention to moral values. The antipsychiatry movement is an example of a critical approach to ethics--the criticism of an articulated or implied moral position. Psychiatry as a profession is vulnerable to this criticism because its ethics are implicit, tacit, sometimes unconscious, and usually tied up with a social consensus of how much deviant behavior a given community will tolerate. Civil rights, human rights, and personal rights are often the vehicles for calling attention to such value controversies. The psychiatric detention of political dissenters by the Soviets is an extreme example of the interconnections between shared social beliefs and psychiatric practice.
If one believes Marxist doctrine, that is, believes it to be a correct explanation of reality, then someone who dissents from that doctrine could legitimately be considered to be out of touch with reality. Conversely, a psychiatrist recognizing the realities of harsh political treatment could argue that psychiatric detention is a more humane compromise.
These arguments confuse concepts of rights (morality) with concepts of reality (facts). The Soviet system probably has more in common epistemologically with American psychiatry than advocates of human rights would care to acknowledge, even though the moral outlooks are very different. Both are thoroughly modern in their camouflaging of social beliefs with scientifically objective facts. Neither system attempts a systematic articulation of moral principles. The moral passions of the antipsychiatrists, however, become inverted in a passionate therapeutic nihilism that would destroy anything short of perfection and along with it the possibility of genuine moral enrichment.
The Marxist example is a fruitful one for Western medicine because one of the focal conflicts is that between the individualism of medicine and the socialism of the economic system that supports it. As high-technology medicine becomes more and more expensive, we are increasingly forced to look at the tension between the needs of the individual and the cost to society. We are forced to ask tough questions about allocation of resources and about individual and societal responsibility. Witness the millions of dollars spent on Medicare for renal dialysis and the scant funding for public health, which might shift some of the cost and responsibility from treatment of debilitating illness to its prevention.
Earlier the question of a methodology for ethics was raised, suggesting that the force of the critical tradition tends to depersonalize ethics by abstracting thought from action. Generally speaking, ethical issues are raised around points of conflict--either an interpersonal conflict in which two or more people are committed to different courses of action or an intrapsychic conflict in which a given person may see compelling reasons for conflicting decisions. Modern medicine seldom offers us the clear-cut alternatives that could neatly be dissected as either good or bad, right or wrong. Instead, we are offered complex situations in which any course of action compromises certain ethical principles, and few decisions can be made with more certainty than ambiguity.
We are a very narcissistic society. By this I mean not so much that we are self-indulgent, although this may be the case, but that we are ruthless in the pursuit of our ideals. Often, ethical norms serve as impossible ideals, which we should strive toward but can never live up to. If we recognize these ideals as just that, then I believe we have set the stage for a humanistic approach to medicine. If we try to translate these ideals into imperatives, we run the risk of further moral inversions as the already grandiose expectations of medicine become inflated even further, and we begin to think perfectionistically rather than realistically.
If we can succeed in establishing ideals that are sufficiently general as to be applicable in most situations, we can save wear and tear on our consciences, but this generalizing risks being rigid and unresponsive to human needs. Indeed, I suspect that much of the social criticism of medicine as mechanistic and dehumanizing actually stems ironically from medicine's unyielding idealism.
A humanistic approach to medicine must recognize and be responsive to the diverse and often contrary individuality of our human lot; few rigid principles are sufficiently flexible to account for such diversity. Humanistic medicine also must recognize and tolerate the humanity of physicians. Grandiose perfectionism often results in stereotyped idealism rather than mutual respect and further contributes to the unrealistic ideal of physician omniscience and omnipotence.