Evaluation of Social Work Services in Community Health and Medical Care Programs

BASED ON THE PROCEEDINGS OF THE 1973 ANNUAL INSTITUTE FOR PUBLIC HEALTH SOCIAL WORKERS

Supported by a grant from the Maternal and Child Health Service, U.S. Department of Health, Education and Welfare

Edited by
ROBERT C. JACKSON
JEAN MORTON

Program in Public Health Social Work
Earl Warren Hall
University of California
Berkeley, California 94720

1973

Preface

The papers in this document represent the proceedings of the 1973 Public Health Institute, held annually on the Berkeley campus for social workers in maternal and child health programs in the western United States. The Institute is an activity of the Public Health Social Work Program, a joint educational project of the Schools of Public Health and Social Welfare, University of California, Berkeley.

Participants in the Institute represented such program concerns as maternity and infant care, children and youth projects, university-affiliated training centers for mental retardation, state-level crippled children programs, family planning services and acute care hospitals. Although the participants are primarily concerned with health services to mothers and children, their specific job responsibilities, program goals, and target populations are sufficiently divergent that evaluation issues and methods were approached with a broad perspective. Consequently, it is hoped that these Proceedings will prove useful to a wider audience as well.

The papers contained in this publication are reproduced essentially as they were presented to the audience. The two exceptions are the summarized presentations by Doctors Shepherd and Loeb, who respectively led working sessions on patient care audits and management by objectives.

Gratitude is expressed to the faculty for their valuable contributions and to the Planning Committee for their helpful assistance in formulating the Institute program.
The Institute and these Proceedings were made possible through a grant from the U.S. Maternal and Child Health Service. The funding is gratefully acknowledged, as is the helpful support of Virginia Insley, Kathleen Johnson, Kathryn Koehler and Ruth Olson. Miss Koehler and Miss Olson have recently retired and their many contributions to the Institutes will be missed.

Finally, that the Institute was possible was due in large part to the dedicated and untiring efforts of Jean Morton, Field Program Supervisor for the Public Health Social Work Program, and Audrey Champion, Program Secretary. I hope the high standards that they bring to their work are reflected in this document.

Robert C. Jackson
Lecturer in Public Health and Social Welfare
September, 1973

Advisory Committee and Staff

VELMA ANDERSON, M.S.W.
Chief Project Social Worker
East Los Angeles Child and Youth Clinic
929 North Bonnie Beach Place
Los Angeles, California 90063

MARGARET BUTCHER, M.A.*
Field Work Supervisor and Field Work Consultant
School of Social Welfare
120 Haviland Hall
University of California
Berkeley, California 94720

AUDREY CHAMPION
Secretary
Program in Public Health Social Work
School of Public Health
416 Earl Warren Hall
University of California
Berkeley, California 94720

JANE COLLINS, M.S.W.
Director, Social Service Department
City and County of Denver Department of Health and Hospitals
West Sixth Avenue and Cherokee Street
Denver, Colorado 80204

HYMAN GOLDSTEIN, Ph.D.
Research Biostatistician, Program in Maternal and Child Health
Lecturer, Program in Biostatistics
School of Public Health
University of California
Berkeley, California 94720

*Deceased

JANE HESS, M.S.W.
Project Social Worker
Children and Youth Clinic
Kirksville College of Osteopathic Medicine
Jefferson Street
Kirksville, Missouri 63501

VIRGINIA INSLEY, M.S.W.
Chief, Medical Social Work Section
Maternal and Child Health Service
Department of Health, Education and Welfare
Parklawn Building, 5600 Fishers Lane
Rockville, Maryland 20852

ROBERT C. JACKSON, M.S.W., M.P.H.
Chairman, Program in Public Health Social Work
School of Public Health
416 Earl Warren Hall
University of California
Berkeley, California 94720

KATHLEEN JOHNSON, M.S.W.
Regional Medical Social Consultant
Maternal and Child Health Service
Department of Health, Education and Welfare, Regions IX and X
Federal Office Building, 50 Fulton Street
San Francisco, California 94102

ANDIE KNUTSON, Ph.D.
Professor and Chairman, Program in Behavioral Sciences
School of Public Health
University of California
Berkeley, California 94720

KATHRYN KOEHLER, M.S.W.
Regional Medical Social Consultant
Maternal and Child Health Service
Department of Health, Education and Welfare, Region VIII
9017 Federal Office Building
Denver, Colorado 80202

JEAN MORTON, M.S.W., M.P.H.
Lecturer and Field Program Supervisor
Program in Public Health Social Work
School of Public Health
416 Earl Warren Hall
University of California
Berkeley, California 94720

FRANCIS MURPHY, M.S.W., M.P.H.
Chief, Public Health Social Work Section
Colorado Department of Health
4210 E. 11th Avenue
Denver, Colorado 80220

RUTH OLSON, M.S.W.
Regional Medical Social Consultant
Maternal and Child Health Service
Department of Health, Education and Welfare, Region VII
601 E. 12th Street
Kansas City, Missouri 64112

STEVEN SEGAL, Ph.D.
Assistant Professor
School of Social Welfare
120 Haviland Hall
University of California
Berkeley, California 94720

DUANE THOMAS, Ph.D.
Director of Social Services
UAF - University of Kansas Medical Center
Rainbow and 39th Streets
Kansas City, Kansas 66103

HELEN M. WALLACE, M.D.
Professor and Chairman, Program in Maternal and Child Health
School of Public Health
University of California
Berkeley, California 94720

Institute Faculty

MILTON CHERNIN, Ph.D.
Dean, School of Social Welfare
University of California
Berkeley, California 94720

NANCY CLAPSADLE, R.N., M.P.H.
Medical Audit Consultant
RMP Quality Assurance - Medical Audit Project
745 Parnassus
San Francisco, California 94122

JANE COLLINS, M.S.W.
Director, Social Service Department
City and County of Denver Department of Health and Hospitals
West Sixth Avenue and Cherokee Street
Denver, Colorado 80204

DONALD DENNIS, Ph.D.
Coordinator of Medical Education
St. Joseph's Hospital
1100 Stewart Drive
Orange, California 92668

LILLI FAKLER, C.R.A., R.R.A.
Chief, Medical Record Services for Institutions
City and County of San Francisco
San Francisco General Hospital
22nd and Potrero
San Francisco, California 94110

VIRGINIA HARRICK, M.P.H.
Medical Audit Consultant
RMP Quality Assurance - Medical Audit Project
745 Parnassus
San Francisco, California 94122

CARLESSIA HUSSEIN, M.S.
Chairman, Program in Community Health Nursing Administration
Associate Dean of Student Affairs and Admissions
School of Public Health
University of California
Berkeley, California 94720

FRANCES KELLEY, M.S.W.
Public Health Social Work Consultant
M&IC and Family Planning Projects
City of Houston Health Department
720 Gillette
Houston, Texas 77019

AL M. LOEB, Ph.D.
Chief, Audits Division
State Department of Finance
1025 "P" Street
Sacramento, California 95814

JAMES LYON, M.S.W., M.P.H.
Assistant Professor and Assistant to the Associate Dean for Community and Clinical Programs
A101 East Fee Hall
College of Human Medicine
Michigan State University
East Lansing, Michigan 48864

JACK MANLEY, B.S.
Administrator
Vallejo General Hospital
300 Hospital Drive
Vallejo, California 94590

JOAN McCRACKEN, B.S.N.
Director
Planned Parenthood
2718 Montana Avenue
Billings, Montana 59101

EDWARD J. MULLEN, D.S.W.
Director of Research, Community Service Society
Professor, School of Social Service
Fordham University, Lincoln Center Campus
West 62nd Street
New York, New York 10023

BEATRICE PHILLIPS, M.S.
Director, Social Work Department
Beth Israel Hospital
Boston, Massachusetts 02216

THOMAS B. SCULLION, Ph.D.
Associate Professor, College of Human Medicine
Director, Division of Social Services
University Health Center
112 Fee Hall
Michigan State University
East Lansing, Michigan 48864

RODGER SHEPHERD, M.D.
Project Director
California Regional Medical Program, Area I
Quality Assurance - Medical Audit Project
Pacific Medical Center
P.O. Box 7999
San Francisco, California 94120

LUCIA SOMMERS, M.S.S.
Medical Audit Consultant
RMP Quality Assurance - Medical Audit Project
745 Parnassus
San Francisco, California 94122

ELIZABETH L. WATKINS, D.Sc.
Associate Professor
School of Applied Social Sciences
Case Western Reserve University
2035 Abington Road
Cleveland, Ohio 44106

Contents

PREFACE
ADVISORY COMMITTEE AND STAFF
INSTITUTE FACULTY
INTRODUCTION

PART ONE: EVALUATIVE RESEARCH

PERSONAL CURIOSITY AND PROFESSIONAL COMPLIANCE
  Thomas B. Scullion, Ph.D.

CONCEPTUAL FRAMEWORK FOR EVALUATION OF SOCIAL SERVICE PROGRAMS
  Elizabeth L. Watkins, D.Sc.

EVALUATIVE RESEARCH IN SOCIAL WORK
  Edward J. Mullen, D.S.W.

PART TWO: PERSPECTIVES FROM THE FIELD

THE DECISION TO EVALUATE: CONTRIBUTING FACTORS
  Frances Kelley, M.S.W.

ASSESSMENT OF SOCIAL SERVICES IN A LARGE HEALTH CARE ORGANIZATION: A SOCIAL WORK ADMINISTRATOR'S PERSPECTIVE
  Jane Collins, M.S.W.

PART THREE: EVALUATION OF QUALITY

THE PROBLEM-ORIENTED RECORD: UTILIZATION IN A COMMUNITY HEALTH PROGRAM
  Joan McCracken, R.N.

FACILITATING COMMUNICATION: SOCIAL WORK AND THE PROBLEM-ORIENTED RECORD SYSTEM
  Beatrice Phillips, M.S.

MEDICAL AUDIT AND MEDICAL SOCIAL SERVICES
  Rodger Shepherd, M.D.
PART FOUR: PROGRAM PLANNING FOR EVALUATION

PLANNING FOR PROGRAMS AND EVALUATION: MANAGEMENT BY OBJECTIVES
  Al M. Loeb, Ph.D.

PART FIVE: EVALUATION OF COSTS AND EFFECTS OF REIMBURSEMENT

SOCIAL SERVICES COVERAGE IN HEALTH SETTINGS: PROSPECTS AND ISSUES
  James Lyon, M.S.W., M.P.H.

INSTITUTE PARTICIPANTS
INSTITUTE BIBLIOGRAPHY

Introduction

We have chosen to exercise an editorial prerogative and to offer a brief introduction to this volume. The papers that follow are primarily concerned with the application of evaluation methodologies to social work services in the health field. It is our good fortune to have an Institute faculty who cogently address a variety of topics and require no further clarification from us. These remarks are intended to (1) describe the central themes around which the Institute on Evaluation was developed and (2) indicate our thoughts on the next steps that will be necessary if the challenges of evaluation are to continue to be addressed.

We have defined evaluation as the process of determining the worth of something for the purpose of taking action for change. Given this definition, evaluation is not a singular event but a cumulative process. Evaluation methodologies encompass a variety of approaches and techniques; the choice of the approach and subsequent methodology depends on the nature of the questions to be answered through evaluation and the kinds of actions that will likely occur in response to the findings. The successful use of evaluation technology for change ultimately depends on a shared set of definitions and a common sense of direction.

In these Proceedings, evaluation is discussed from several perspectives. The central theme that ties them together is the use of an explicit standard against which comparisons between what was intended and what occurred are made.
The approaches range from experimental research design to management by objectives, and from program evaluation to patient care audits. It is recognized that this constellation of methods represents a wide-ranging approach to evaluation. The issue is not whether these methods are precisely comparable; it is to recognize that they produce different kinds of findings in order to satisfy different information requirements. For example, evaluative research requires technical skill in research design, time intervals of several years, and substantial amounts of funding. The potential reward is new knowledge about effectiveness that stands the test of applicability to other settings. In contrast, the approach to auditing, as presented in this volume, is concerned with the process of service delivery and not the outcomes. The method is not intended to produce information with universal application. That is not to say, however, that reports of well-done process evaluations would not be very useful in clarifying the complexities of service delivery.

Our point is that several approaches to evaluation are available and potentially useful. Their applicability to the needs of social work warrants testing. Some of these methodologies are intended for use in the management of operating programs, and others are for the purposes of producing generalizations applicable to many programs. The critical concerns are choosing an approach that is appropriate to the question being asked and usable in the particular setting; handling the methodology well; and understanding the limitations of the evaluation data.

Preparation of this Institute required a review of the "art and science" of evaluation in several areas of the human services besides social work. We have yet to uncover evidence that there is a definitive evaluation technology. The question of effectiveness and the outcomes produced by various interventions are the central questions of all the disciplines.
To date, evaluation of social work interventions has relied largely on experimental research designs to test the effectiveness of services. Despite the disappointing or ambiguous findings of these efforts, social work can hardly be faulted for failing to ask the "right" question.

Asking the correct question cannot substitute for pursuit of the answer. Proof of effectiveness encompasses the control of a multiplicity of complex variables. It does not represent a retreat from the issue of effectiveness to suggest that the most important questions may be the last to surrender their answers.

A necessary step towards further accounting for effectiveness is the development of a conceptual framework that identifies the different levels or indicators of effectiveness. An example of this approach is the structure-process-outcome continuum developed to differentiate levels of variables in the evaluation of medical care (Donabedian, 1966). Another author has expanded that framework and suggested that, for medical care, the variables to be examined are: (1) resources made available, (2) attitudes of consumers, (3) quantity of service, (4) estimated quality of service, and (5) impact on health status (Roemer, 1971). In both of these formulations, the implication is apparent that effectiveness is the ultimate question, but that does not preclude the presence of other important questions.

In summary, the intent of these introductory remarks has been to alert the reader to the alternative approaches to evaluation that are available. We have differentiated between the need for research evidence about the efficacy of different service approaches and the requirements of practitioners in operating programs for evaluative feedback information. If the distance between the levels of these evaluation needs is to be lessened, we suggest that a more comprehensive and planned evaluation strategy will be required.
The central concern in this volume is services and their evaluation. This single concern is consistent with our view that there is no evaluation technology that yields global answers to sweeping questions. We recognize that serious problems remain in accessibility of care, in the organization and responsiveness of institutions, and in the nagging problems of how to accomplish income transfers.

Some years ago, "articles of faith" about the attributes of quality in medical care were formulated (Lee and Jones, 1933). Two of the "articles" are pertinent to this introduction:

No. 4 - Good medical care treats the individual as a whole.
No. 6 - Good medical care is coordinated with social welfare work.

As the health care system assesses its impact on health status and the social services address the improvement of the functional independence of individuals (Morris, 1973), the sense of the Lee and Jones "articles" becomes more compelling. It is, however, easier to keep the "faith" if one has convincing evidence that the deliberate juxtaposition of medical care and social work can achieve in collaboration that which could not be achieved independently.

An involvement in the evaluation process implies that "business as usual" is no longer an operating principle. The question of how best to assist individuals and families to cope with social health problems requires continuing study and testing. In the opening paper of the Institute, Doctor Scullion posed the choice: will future evaluation of social health services be generated by our collective curiosity and concern or by an enforced compliance?

Robert C. Jackson
Jean Morton

References

Donabedian, A. Evaluating the quality of medical care. Milbank Memorial Fund Quarterly, Part 2, 1966, 44, 166-206.

Lee, R. I. & Jones, L. W. The Fundamentals of Good Medical Care. Chicago: University of Chicago Press, 1933.

Morris, R. Welfare reform 1973: The social services dimension. Science, 1973, 181, 515-522.

Roemer, M. I. Evaluation of health service programs and levels of measurement. HSMHA Health Reports, 1971, 86, 839-848.

Part I

PERSONAL CURIOSITY AND PROFESSIONAL COMPLIANCE
  Thomas B. Scullion, Ph.D.

CONCEPTUAL FRAMEWORK FOR EVALUATION OF SOCIAL SERVICE PROGRAMS
  Elizabeth L. Watkins, D.Sc.

EVALUATIVE RESEARCH IN SOCIAL WORK
  Edward J. Mullen, D.S.W.

Personal Curiosity and Professional Compliance

THOMAS B. SCULLION
Michigan State University

In his introduction to Self-Renewal, John Gardner relates a story concerning a small boy who wanted to go outside and play with some older children. As the older children returned to the house, a friend suggested to the boy that they hide behind a curtain so that the older boys wouldn't know where they were. The boy looked at him sadly and said, "Suppose they don't care?"

I have been asked to present a perspective as to why those of us active in public health and social work should or must become familiar with the issues and problems associated with evaluation research. I would like to suggest that the only way in which we can meet these new requirements will be to decide to come out from behind the curtain and become better known to the outside world. When they see what we do, how we do it and the usefulness of many of our services, I think we will find that they care and are willing to help us to continue our work in a more rational, certainly more visible way.

I believe that it is necessary to define three terms and to keep these distinctions in mind throughout these proceedings. A major part of my effort will be directed towards clarification of these words and their applications. While it is possible to separate evaluation and accountability, there remains considerable confusion in their uses, first as words, and secondly, as tools of management. They are points along the way, and the activities encompassed by them include all the activities which take place within planning and service programs.
By this I mean not only issues relating to patient care, but also issues of financing, construction, staff development and the basic belief system which provides the rationale for the program's existence (Scullion, 1973).

Evaluation Research and Research

An important starting point is to distinguish between "evaluation research" and "research." Since the terms can become confusing, I found it helpful to use the terms originally specified by Suchman (1967) to introduce this discussion. Suchman has offered a series of statements aimed at clarifying these concepts:

Evaluative research is a specific form of applied research whose primary goal is not the discovery of knowledge, but rather a testing of the application of knowledge. In terms of objectives, evaluative research is more likely to be aimed at achieving some practical goal: its major emphasis is upon utility. The research, if successful, should provide helpful information for program planning, development or operation. In contrast, non-evaluative research, while it may have practical implications, is primarily aimed at increased understanding rather than manipulation or action. A basic research project has as its major objective the search for new knowledge regardless of the value of such knowledge for producing social change. The emphasis is upon studying the interrelationships of variables rather than upon the ability of Man to influence these relationships through controlled intervention [p. 75].

Accountability

In a monograph written for educators, Mortimer (1972) has provided an excellent review of administrative issues. He identifies the difficulty in arriving at a mutually understandable and acceptable meaning of the terms, and also indicates that many agencies and interests calling for accountability often are very unclear as to what they mean by accountability.
In the case of education, Mortimer identifies the general public, legislatures, governors, governmental agencies, courts, faculties, students and parents.

Types of Accountability

Mortimer suggests three contexts for differential meaning of the term accountability:

A. Managerial Accountability. The principal function is control: to compel events to occur.

B. Evaluation and Accountability: an area of overlap.
   1. Evaluation is internal to the organization with respect to the stimulus for evaluative effort and who participates in evaluation. Evaluation is concerned with effectiveness. In a more general sense, evaluators tend to have professional qualifications recognized by the service profession.
   2. Accountability carries with it the notion of external judgment and is concerned with both effectiveness and efficiency. In a general sense, those who assess accountability tend to have backgrounds or roles in management, business and finance.

C. Accountability and Responsibility. Within this context a variety of individual actions and effects can be considered.

In more global terms, Mortimer sees evaluation as related to input or process and aimed at understanding events related to the individual. Accountability is related to measures of effect or output and offers information on program levels.

Summary

This section has presented a series of definitions which distinguish among the concepts of research, evaluation research and accountability. These differences are offered because of my belief that they are useful in identifying the sources of influence and control associated with these activities. While it is clear that clinicians, program administrators and policy-makers have a general interest in supporting each of these activities, it seems reasonable that members of these groups would be more engaged in one work area than another. In this view of the issue: A.
Clinicians are most committed to patient care and are consumers of research and contributors to evaluation research.

B. Program managers are committed to evaluation research and are contributors to policy formulation.

C. Policy-makers are committed to the utilization of evaluation research. They use the information to respond to requests for accountability from the public.

Compliance and Inquiry

The distinctions between evaluation research and accountability offered by Mortimer emphasize that the location of the stimulus for evaluation is a significant factor in leading to the definition of the activity as either evaluation or accountability. The idea that the initiation of the evaluation effort is external to the profession in the case of accountability and internal to the profession in the case of evaluation reminds me of Riesman's powerful distinction between inner- and outer-directed men. I plan to use these concepts to separate the remainder of this presentation into two parts:

1. A section which will describe the requirements of parties external to the service professions. These are typically governmental or non-governmental funding agencies. They usually ask for reports of past effort and compare effort and outcome to statements of goals and objectives. The response to these demands by professionals is best characterized as compliance.

2. A section which will describe the environment and interests that are stimulated by interests within the profession. Briefly stated, these are most frequently addressed to broad issues of need definition, assessment of the quality of care and the general function of contributing to the knowledge base of the profession.
This distinction between issues of accountability and evaluation research is never as clear as one would like it to be, and the difficulty in keeping them conceptually separate reinforces the notion that evaluation research, research, and accountability are naturally related steps in a process which has been described as professional inquiry.

Demands for Evaluation from Outside the Profession

Many would argue that evaluation activities are essential if the service professions are to sustain their credibility with the public. There is little reason to quarrel with such a notion, and unfortunately we can already identify groups who have little confidence in professional services. While many professionals think that evaluation and accountability are new concepts, we should recall that we have had forms of these processes as part of our everyday business for a long time. My examples range from individual supervision and consultation to budgeting processes with United Funds or state bureaus. If our interest in evaluation should end with concern about professional survival, we need to recognize the self-serving connotations inherent in such a response. Much of what I have seen in my work during the past few years can be described in terms of reluctant compliance or active resistance to evaluation programs by the professionals within the programs.

Actions of Funding Sources

The lack of support for evaluation programs by the professional service staffs has led to the introduction of mandatory reporting systems by funding sources. Among the reasons offered by these bodies are:

1. Reduction in funds available for health and social services.

2. The transfer of funds from categorical programs to either general or special revenue sharing concepts.

3. The need for increased specification of services rendered or reimbursed by private and governmental insurance programs, which has required them to demand more suitable forms of accounting for the disbursement of funds.
In developing their own rate schedules, they have required more complete information from providers.

4. Public concern about the increasing cost of social services, as well as professional debates about the effectiveness of social work methodologies, have made visible questions of legitimate public interest which were previously the concern of a small group of researchers and administrators within the professions.

Taken together, these have led to a specific series of public laws, state statutes and administrative rules.

1. Public Law 92-603, the Social Security Amendments of 1972, represents a major revision of the sources of funding as well as a dramatic change in the location of power and influence. The revenue sharing alternative to categorical program funding alone has rapidly enhanced the power of state governments to establish program priorities and formulas for disbursing funds. In most jurisdictions the competition for funds has been heavily in favor of those who could provide information in support of their budget requests. Increasingly, the managers of state programs are seeking project justification in terms which report past outcomes as well as past efforts. Many of the programs encompassed under the concept of maternal and infant care programs are experiencing difficulty in rationalizing their past efforts (Health Impact Project, 1973).

2. The establishment of Professional Standards Review Organizations as part of Public Law 92-603 is aimed at the review of services rendered in health care programs. While initially emphasizing review of physician-related activities, this structure should eventually seek to review the contributions of all health professionals who participate in the delivery of personal care services. 3.
The legislation supporting the development of Health Maintenance Organizations provides opportunities for those interested in public health social work to develop rationales for the provision of preventive health services within these structures. Terms like health and maintenance are familiar to you, but less so to those whose professional life has been devoted exclusively to the care of the sick and severely disabled. Yet the low status of departments of community medicine, public health and social work within the university is carried over into the field of practice. I think that it is safe to say that we can anticipate an arduous series of tasks in developing and sustaining an orientation towards health within these new delivery structures. Early steps as basic as a review of the literature and a sorting out of successful from less successful programs are prerequisites to offering leadership in the search for service models which emphasize health.

4. The proposals for the provision of national health insurance provide another context in which to identify the need for evaluative studies. In many ways, the conditions for participation will identify the services and providers who will be reimbursed. Given the high cost of health services, competition for inclusion in reimbursement categories will be keen, and again it is reasonable that those who are able to describe the process and outcomes of their professional contributions will be more favorably considered.

Previous sections have described a series of events which will require professionals to more fully demonstrate the effectiveness and the efficiency of their programs of service. Taken together, however, these requirements represent forces external to the service professions, and the initiative for defining the manner in which policy decisions and resource allocations are determined appears to remain outside the control of the service professions.
On balance, this seems to be a desirable change and one which will continue to enhance the role of consumers in determining both the scope of care and the manner in which care is rendered. At the same time, professional responses to reasonable questions need not preclude the participation of the professions in shaping the questions and in describing the form in which they can be answered.

We should be careful to note that we are reviewing proposed or recently enacted legislation, and we lack experience in observing and documenting the effects of these changes. The principal effect upon those providers who have been participating in these events has been to reluctantly comply with the new requirements. One exception to this is the early experience of the Experimental Medical Care Review Organizations (National Center for Health Services Research and Development, 1973).

This section ends my review of external demands for evaluation, and leads to what I hope will be a more optimistic presentation describing ways in which we may proceed to relate evaluation requirements to the conduct of the everyday interests of health professions in direct-service, administrative and policy roles.

Curiosity and Inquiry

I believe that most of us entered into our professional careers with a sense of curiosity, if not wonder, about ourselves and the world around us. We manifested this interest in our questions and demonstrated our commitment by entering into professional training programs. Our detractors in the academic departments may have classified our curiosity as voyeurism, but soon we also took that course and learned to appreciate the power of the mantle of the professions. But curiosity needs direction and purpose, and we have failed to develop curiosity to the level of scientific inquiry.
We have been trained in universities that appear to have one set of expectations for the development of students in the natural, biological, and social sciences and an alternative set of expectations concerning the knowledge and behavior of students in professional training schools. The research and statistics requirements are often taught by professional school faculty who attempt to tailor the educational experiences to the needs of the students. This needs assessment is more frequently addressed to the entry skills of the student than to the attitudinal or performance skills required to enter into professional life. In addition, time spent in research methodology courses is viewed competitively by other segments of the faculty, who often feel that these exercises are merely a necessary accommodation to accrediting commissions. Finally, many academic faculty members are not active practitioners and experience difficulty in understanding current demands on practitioners to examine and report on the progress and effect of their work. The result of these forces is that students learn that evaluation skills are not highly valued by the faculty. When they leave the training institution, they may be concerned about their treatment skills, but they are infrequently anxious about their ability to understand the effects of their efforts on patients.

Activity and Evaluation Projects

In a curious way, evaluation measures which focus on effort reporting systems reinforce attempts at increasing productivity without concern for a qualitative assessment of the professional's service. For example, if the measure describes cost per unit of service, then the service system is encouraged to move more clients through the system so as to achieve a lower unit cost. Many professionals recognize the uncertain value of such an approach but are helpless to influence a policy change, as they cannot articulate their concerns in either scientific or systematic terms.
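The arithmetic behind the unit-cost incentive can be made concrete with a minimal modern sketch (no such calculation appears in the original text; the budget and caseload figures are invented for illustration). It shows why a cost-per-unit measure alone rewards volume regardless of the quality of each contact.

```python
# Hypothetical sketch: an effort-only measure of cost per unit of service.
# All figures are invented; the point is that the same budget looks
# "more efficient" simply by moving more clients through the system.

def cost_per_unit(total_cost, units_of_service):
    """Effort-only measure: program cost divided by units of service."""
    return total_cost / units_of_service

budget = 120_000.0                      # invented annual program cost
fuller_contacts = cost_per_unit(budget, 1_000)  # longer, fuller contacts
brief_contacts = cost_per_unit(budget, 3_000)   # brief contacts

print(fuller_contacts)  # 120.0 per unit of service
print(brief_contacts)   # 40.0 per unit: scores "better" with no quality gain
```

Nothing in the measure distinguishes the two staffing patterns qualitatively, which is the weakness the passage above describes.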
The vocabulary and strategy of evaluation methodologies are essential if professionals are seeking a capability to redirect the focus of the professional evaluator.

Conclusion

It is in the interest of those served by professionals to insure that the professionals are engaged in disciplined search aimed at identifying effective and efficient ways of meeting the health maintenance and medical care needs of the population. It is difficult to be curious when one is overwhelmed with work. Simultaneously, it is easier to be overwhelmed when one lacks the skills to monitor and evaluate the process and effect of one's work. Programs of evaluation can be very useful to professionals in terms of identifying problems, describing alternative techniques for problem resolution, and predicting outcomes associated with specific approaches. In addition to assisting the professional to understand his work with patients or populations, evaluation efforts can chart changes over time in both consumers and providers and assist in the reformulation of old goals and the generation of new ones. Evaluation need not be a constraint upon professional behavior, nor must it be punitively exploited as a device for exposing the unsuccessful efforts of the professionals. It is only a method or device; its effectiveness is determined by how it is used.

Throughout this presentation I have tried to describe why evaluation is important for those in the service professions. I do not believe that it means that we must become unwilling participants totally subject to an impersonal set of management tasks. In fact, evaluation tasks are a small portion of the responsibilities of professionals. The essence of a helping profession is to be useful to those it serves. To that end we must engage the challenges of evaluation and accountability so that we can monitor how well we are doing our job of service to others.

References

Health Impact Project.
Publicly Funded Family Planning Programs. Lansing, Michigan: Executive Office of the Governor, 1973.

Mortimer, M. Accountability in Higher Education. Washington: American Association for Higher Education, 1972.

Public Law 92-603. The Social Security Amendments of 1972.

Scullion, T. On coming to terms with research evaluation and accountability. Paper presented to the Tri-County Community Mental Health Board, Lansing, Michigan, June 1973.

Suchman, E. Evaluative Research. New York: Russell Sage Foundation, 1967.

SUMMARY OF DISCUSSION

PARTICIPANT: It always seemed to me that one of the problems in evaluating the effects of your professional work is that it is so complex. Most projects that I see resort to a counting of the number of persons they have seen. To try to evaluate the results of work in a service program is very difficult. I would like to know if you have any ideas about how we can begin doing it.

DR. SCULLION: I think that your comment reflects a change in expectation in terms of the knowledge that professionals are supposed to have, and the failure of continuing education to have anything to do with either describing or, even better, anticipating those changes and then developing programs to help with them.

PARTICIPANT: What I mean is that if you are really going to evaluate, you have to think about such things as control groups, which are really very difficult to build into a service program.

DR. SCULLION: I don't think that you necessarily need control groups. There are activities in the course of our daily professional lives that we need to know more about. Not all evaluations require control groups and a high level of technical expertise.

PARTICIPANT: Another difficulty is specifying and measuring the intangibles. You can count numbers of patients and sometimes results, but how people feel is intangible and can't be counted.

DR. SCULLION: I find that if one would just sit down and write out an intangible, it becomes much more tangible.
And counting is just one way of understanding something; there are other ways.

PARTICIPANT: My question relates to what I think is the general omission to set goals at the outset in dealing with clients. How can you evaluate what you've done unless you set goals and objectives?

DR. SCULLION: You can't. Many social work professionals are vulnerable on this point, but so are other groups. I brought with me a 1973 study undertaken in the Executive Office of the Governor in the State of Michigan; it concerns publicly funded family planning programs. The evaluators said exactly what you are saying, but they also held the legislature remiss for not specifying what they wanted out of the program. They describe spending $5,000,000 in State agencies and publicly funded family planning programs and state the following:

In large measure, elected and appointed officials have not provided a much needed definition of the problem that is to be solved by the expenditure of these tax dollars. Further, State agencies can show little in the way of cause and effect relationship between spending any money and achieving the objectives. Where the problem has been defined and the objectives have been established:

1. Available measures of impact are too gross to determine evidence of effectiveness, and

2. Insufficient effort has been made to structure into the programs methods of determining impact on relatively small target population groups.

Conceptual Framework for Evaluation of Social Service Programs

ELIZABETH L. WATKINS
Case-Western Reserve University

It seems ironic to be addressing an assembly of social workers on the theme, "What Is Evaluation," when they probably have been the subjects of individual evaluation more frequently than any other profession. Perhaps because we have had the very personal, subjective, and occasionally traumatic, evaluation of our work performance, we may have developed reactions to the word.
It could be helpful, then, to begin this meeting with a clear understanding of the objective, scientific process of program evaluation.

Definitions

The American Public Health Association (1970) has defined evaluation as "ascertaining the value or amount of something, or comparing accomplishment with some standard [p. 1546]." There are various ways of evaluating a program. There can be an administrative review by the staff of the agency, or a team of outside experts can be employed to survey the program using a standardized evaluation guide. This latter approach was commonly applied by the American Public Health Association in their use of the "Community Health Evaluation Guide" in assessment of local or state public health departments. The standards used in these evaluation processes are often arbitrarily formulated, and judgments of whether programs meet the standards are often based on the fragmented gathering of facts and subjective opinions. This form of evaluation has met with less favor as a result of the development of more sophisticated evaluative research techniques and advances in computer science.

Because there are several approaches to evaluation, Dr. Edward Suchman (1967) believes it is necessary to make a distinction between evaluation and evaluative research. He states:

Based on this awareness that an evaluation study may take several different forms and a recognition that the primary function of most evaluation studies is to aid in the planning, development and operation of service programs, we would like to propose a distinction between evaluation as the general process of judging the worthwhileness of some activity regardless of the method employed and evaluative research as the specific use of the scientific method for the purpose of making an evaluation [p. 31].

In this definition he is seeing evaluation as a goal and evaluative research as one of several methods which could be used in reaching that goal.
In order to see more clearly the relationship of evaluation to program planning, it may be helpful to review the six steps of program planning (Suchman, 1967, p. 34).

1. Value formation—the planners determine what behavior is good.

2. Goal setting—what are the desired good behaviors; what are the outcomes which the planners wish their program to achieve.

3. Goal measuring—the planners determine the methods they are going to use to measure whether they have met the goal, the criteria and indicators of success.

4. Identifying goal activity—the planners determine the means to reach the goal, the activities to be carried out in order to achieve the desired goal.

5. Putting goal activity in operation—the program is implemented.

6. Assessing the effect of this goal operation—program evaluation.

Evaluative research is a process to be used in achieving the sixth step of program planning, that is, program evaluation, but it must be initiated at the first step and be an integral part of the remaining four. Because evaluative research offers the most objective and valid way of achieving evaluation, the remainder of this paper will be concerned with evaluative research.

Types of Evaluative Research

Evaluative research is the application of the scientific method of investigation in assessing a program. There are two models of evaluative research. One is the goal-attainment model and the other is the systems model.

The goal-attainment model is basically an experimental research design. It "begins with the statement of a 'causal' relationship hypothesized between some program or activity (the independent variable) and some desired effect (the dependent variable) (Suchman, 1967, p. 84)." The desired effect is usually some change in the condition, behavior, or knowledge of the consumer or recipient of the services.
The verification of this evaluative hypothesis requires the design of a research project which would show that the desired effect was more likely to occur in the presence of the program being evaluated than in its absence. It is recommended that there be a control group in order to rule out the influence of extraneous variables. The focus of the goal-attainment model is on whether some predetermined goal is met. An advantage of the goal-attainment model is that a sub-unit of a program, for example, the social work program in a Maternity and Infant Care project, can be studied to assess whether the social work program achieved its goals without having to study whether the total program met its goals. A limitation of this model is that it often does not take into account the constraints which other systems may be placing on the unit under study.

In the systems model the evaluator is more concerned with the process by which the organization performs its functions, and achievement of goals is secondary to concern with the process. The evaluator's focus is on the program's utilization of its means and resources in fulfilling its purpose. In other words, his attention is devoted to the way in which the program utilizes its input to achieve its output. In the systems model the evaluator is required to determine what would be the best allocation of resources in order for the program to achieve its objectives, and he studies the organization's allocation of means to see whether it meets this optimal distribution (Schulberg, Sheldon & Baker, 1969, p. 11).

The systems model is based on organization and systems theory, and the evaluator must have a great deal of knowledge about the varied purposes and activities of many parts of the organization. It is a very demanding and time-consuming model of research. The goal-attainment model, on the other hand, can be made simple or complex, depending upon the problem to be studied or the time and money available.
It is for this reason that the goal-attainment model is more frequently used than the systems model.

The goal-attainment model is the one which has been used most frequently by social work in the past, and it has been used mainly to evaluate the effectiveness of therapeutic techniques. For example, in the January 1973 issue of Social Work, Joel Fischer analyzes 11 controlled studies of the effectiveness of casework. This review concludes that nine of the studies showed there was no statistically significant difference between social workers with M.S.W. degrees and social workers with B.A. degrees, or between social workers with M.S.W. degrees and personnel with no skills in social work intervention, in regard to the proportion of clients whose social functioning improved as a result of receiving their services. In the remaining two studies the research methods were deficient and no comment could be made. Limitations in the research methodology of all 11 studies will be commented on later. One of the major deficiencies of all the studies, however, is that they did not take into account the effect the system of the agency had on the ability of the social worker to be effective. This is particularly true of the studies conducted in welfare departments, where the trained social worker was supposed to completely rehabilitate multi-problem families at the same time the agency was limiting the amount of money available for rent, food and clothing.

The systems model provides a mechanism for taking into account the influence one part of the agency has on the other, or the influence other systems have on the delivery system under study. The other advantage the systems model holds for social work is that it provides a way to assess the structure and organization of social services.
The phrase "allocation of means" includes, among other things, manpower utilization; that is, does the social work program have the most efficient pattern of levels of staff and deployment of their talents?

The goal-attainment model and the systems model are not necessarily mutually exclusive. Because one of the indices of the systems model is evaluation of unit productivity, procedures of the goal-attainment model may be used as a component of the systems model.

Levels of Evaluation

In order to determine which type of evaluative research to use in evaluating a program, one should first determine what level of evaluation is desired. The following is a description of a typology of levels of evaluation developed by Dr. George James (1969), with comments added by this author regarding the most suitable type of research for the specific level of evaluation.

Evaluation of effort. Evaluation of effort is concerned with the quantity and quality of activity that takes place. It answers the questions—"What did you do?" "How well did you do it?" This is usually a numerical count of activities measured against national standards; for example, the number of patients seen per staff member. If this was the only aspect of a program being evaluated, a quantitative-descriptive design would be sufficient.

Evaluation of performance. This level of evaluation is concerned with the outcomes which the program's efforts have produced. This level is often referred to as the evaluation of effectiveness. It asks the questions—"How much was accomplished relative to the immediate goal?" "Did any change occur and is it the one intended?" If evaluation of performance is the only level of evaluation desired, then the rigorous goal-attainment model is used. The study of performance, however, is often done in conjunction with the study of efficiency (described below), and in that case the systems model is recommended.

Evaluation of adequacy of the performance.
This level of evaluation attempts to answer the question—"To what degree has the program solved the extent of the problem in the community?" The answer would involve conducting community surveys.

Evaluation of the efficiency of the program. This level attempts to answer the question—"Is there any better way to achieve the same results?" Efficiency is concerned with the evaluation of alternative methods in terms of costs of money, use of personnel and public convenience. It is the ratio between effort and performance, that is, output divided by input. The systems model is recommended in conducting this level of evaluation.

General Conceptual Framework for Program Evaluation with Particular Reference to Government Sponsored Health Programs

Because the majority of the participants in this meeting are employed in a governmental system of delivery of health services, this author will first describe a general conceptual framework to be used in evaluating such programs, regardless of the level of evaluation or model of evaluative research selected. This will be followed by comments on each of the models.

In a paper entitled "A Behavioral Model for Assessing the Effectiveness of Social Services Programs," Dr. Robert Washington states that the governmental system of service delivery consists of three subsystems. They are:

1. The donor subsystem. It includes congressmen, the administration, and federal agencies which make laws, allocate funds, and develop guidelines and mandates for implementation of the federal program.

2. The consumer subsystem. It is the target population to be served by the service system.

3. The service delivery subsystem. It includes program staff, site, facilities, and activities related to the problem to be solved by the program. This third subsystem intersects and joins together the donor and consumer subsystems.
The program evaluator must be aware of the interaction of these three subsystems, the role each plays in the establishment of program objectives and activities, and the effect their interaction has on the achievement of objectives.

It is important to keep this breadth of perspective in mind when considering the following comments on the specific research models because, as previously stated, this is the advantage current evaluative research approaches in the field of social work have over earlier attempts.

Goal-Attainment Model

The method of conducting a goal-attainment research project is the same method used in conducting any project of scientific investigation. Included are the following five steps:

1. Statement of the research problem or hypothesis.

2. Statement of the operational definitions of the independent and dependent variables; establishment of criteria and methods of measurement.

3. Development of methods of collecting data.

4. Analysis of data.

5. Interpretation of findings and application to present program planning.

The hypothesis to be tested in evaluating a program is that the initiation of a program (independent variable) will cause some change in the condition, behavior, knowledge, or attitudes of the consumer (dependent variable). Because it is difficult to control for the influence of other events in the consumer's life which may result in changing his situation or behavior, the program can be considered an attributable and not a necessary cause; it is for this reason that Suchman (1967) states the hypothesis as the desired change being more likely to occur in the presence of the program than in its absence.

The preferred research design is the Pretest-Posttest Control Group Design. This is the design in which the evaluator selects two population groups or cohorts which are similar in characteristics. We shall refer to them as Group A and Group B.
Prior to the initiation of the service, the evaluator measures the groups in relation to the variable; that is, the condition, knowledge, behavior or attitude which is to be changed. This is known as the pretest. Then Group A is enrolled in the service program but Group B is not. Both groups are followed forward in time, and after a specified interval they are measured in regard to the same variables as in the pretest. This second measurement is called the posttest. If there are observed changes in the designated behaviors in Group A (those exposed to the services) but not in Group B (those not exposed), then it may be assumed that the changes in Group A are due to program services, depending upon the degree to which extraneous variables are controlled.

This design is preferred over the one- or two-sample posttest-only design because it has more potential for validity. Validity is the degree to which the study really finds out what the evaluator says it does. In the posttest-only design the evaluator has no opportunity to control the variables which may be affecting the impact of the services; and he is also dependent upon retrospective analysis of records which were not kept for research purposes.

Even in the Pretest-Posttest Control Group Design the evaluator is not able to control all the variables affecting the consumer, nor is he able to randomly assign people to services or control groups. Therefore, it is difficult to conduct such a study according to the rigorous demands of the classical experimental form. Instead, evaluators use procedures more similar to the quantitative-descriptive, or as MacMahon, Pugh and Ipsen (1960) refer to it, a cohort design of the analytical classification. The Quasi-Experimental Design (Campbell and Stanley, 1963) alleviates the pressure for rigorous matching of control and experimental populations and points to ways of using similar populations while retaining the validity of the study.
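The comparison at the heart of this design can be sketched in a few lines of modern code (the original contains no such computation; group sizes and social-functioning scores below are invented for illustration). The sketch computes the change in the served group minus the change in the control group, which is the quantity the design asks the evaluator to examine.

```python
# A minimal sketch of the Pretest-Posttest Control Group Design described
# above. Scores are invented; a real study would use measured indices,
# matched cohorts, and a test of statistical significance.

def mean(xs):
    return sum(xs) / len(xs)

def net_change(pre_a, post_a, pre_b, post_b):
    """Change in Group A (enrolled in services) minus change in Group B
    (not enrolled). The excess is attributed to the program only to the
    degree that extraneous variables are controlled."""
    change_a = mean(post_a) - mean(pre_a)   # experimental group
    change_b = mean(post_b) - mean(pre_b)   # control group
    return change_a - change_b

# Invented social-functioning scores (higher = better functioning):
pre_a, post_a = [40, 50, 60], [55, 65, 75]   # Group A: received services
pre_b, post_b = [42, 48, 60], [47, 53, 65]   # Group B: did not

print(net_change(pre_a, post_a, pre_b, post_b))  # 10.0
```

Because Group B also changed between pretest and posttest, looking at Group A's change alone would overstate the program's effect; subtracting the control group's change is what rules out that share of the extraneous influence.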
The tasks which an evaluator has to perform in order to design and implement the Pretest-Posttest Control Group Design are as follows (Suchman, 1967, p. 31):

1. Identification of the goals to be evaluated.

2. Analysis of the problems with which the activity (that is, the program or services) must cope.

3. Description and standardization of the activity.

4. Measurement of the degree of change that takes place.

5. Determination of whether the observed change is due to the activity or to some other cause.

6. Some indication of the durability of the effects.

Because the goal-attainment model is concerned with the measurement of whether a program achieves its objectives, it is recommended that the evaluator be involved in the early planning and designing of the program. The reasons for this are several:

1. Program planners rarely define goals and objectives in specific enough terms so that they can be measured. Therefore, it is valuable for the evaluator to work closely with the planners and staff of the program in its early stages so that there is consensus between donors, providers, consumers and evaluator on what they think the objectives of the program (the dependent variable) are to be. Similarly, there should be consensus on the independent variable (the services).

2. It is recommended that a goal-attainment study have a control group which has not received the services of the program in order to rule out the influence of extraneous variables as the cause of the observed change in behavior. The evaluator should be able to apply a pretest to the experimental and control groups before the program's services are initiated.

3. The evaluator should be able to design the data collection tools before the study is initiated so that there can be uniformity and consistency of records in the collection of the data.

Let us look more closely at the specific steps of the research design.
As Elizabeth Herzog (1959) stated, the research process is to measure the change "from what" "to what" "by means of what."

Specification of Evaluation Objectives

The first step is to determine the "to what," that is, the evaluation objectives. The evaluation objectives are based on the program objectives, and perhaps it would be helpful to distinguish between the two. The program objective states the result or outcome of program activities. A program objective is usually stated in terms of reduction of consumer need; that is, reduction in the discrepancy between the present condition of the consumer and some ideal state. The evaluation objective may be one or more of the program objectives which have been selected for study. It is stated in terms of a criterion or condition accepted as a standard of program success (Washington, 1973).

An evaluator may have to help a staff define program objectives before he can define his evaluation objectives. The stating of program objectives may be difficult because such objectives are subject to the influence of the value system and wishes of the three previously mentioned subsystems—donors, providers and consumers. A social worker attempting to establish objectives of a social work program in a multidisciplinary health program may experience conflicting points of view when she tries to be responsive to the value system of her own profession, to the perception of the role of social work in the total program by other disciplines, and to the donor's perception of the program. For example, a social worker in a Maternity and Infant Care project, following the precepts of her own profession, may see her objective as improving the social functioning of families served by the project.
Other disciplines involved in determining objectives may disagree, seeing "improved social functioning" as too global an objective for a social work program in a health service delivery system and more appropriate for a welfare department or family service agency. They may consider this to be a subobjective but not the prime objective, which they see as that of reducing fetal loss and prematurity. On the other hand, if the project is affiliated with a local health department, the director may see the prime objective of the project as expansion of local maternal and child health programs with federal funds. The legislation authorizing Maternity & Infant Care projects states the objectives as helping to reduce the incidence of mental retardation and other handicapping conditions caused by complications of child bearing and helping to reduce infant and maternal mortality. The social worker must negotiate the congruency of her objective of "improved social functioning" with these other objectives.

In order to prove that they are effective contributors to the delivery of health services, social workers must acknowledge the value systems and expectations of people in the three subsystems. Social work objectives will have to be stated in terms which are in keeping with total program objectives and not refer only to an isolated aspect of social work treatment.

Another element of the task of defining program objectives is establishing a hierarchy of goals and objectives. Suchman (1967, Ch. IV) uses the hierarchy of ultimate goals, intermediate goals, and immediate goals. Westinghouse Health Systems (1971), whose work in Maternal and Child Health Services will be discussed later, uses the hierarchy of Goal, Prime Objective and Subobjective. Whatever labels are used, they connote descending order of time from long-range goals to immediate goals.
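The descending hierarchy just described can be sketched as a simple modern data structure (a hypothetical illustration; the example objectives below are drawn loosely from the Maternity and Infant Care discussion above, not from the Westinghouse report itself).

```python
# Hypothetical sketch of a goal hierarchy in descending time order,
# using the Goal / Prime Objective / Subobjective labels attributed
# above to Westinghouse Health Systems. Example objectives are invented.
hierarchy = {
    "goal": "Reduce infant and maternal mortality",        # ultimate, long range
    "prime_objective": (
        "Reduce fetal loss and prematurity among project registrants"
    ),                                                     # intermediate
    "subobjectives": [                                     # immediate, measurable
        "Enroll high-risk women in prenatal care early in pregnancy",
        "Improve social functioning of families served by the project",
    ],
}

for level in ("goal", "prime_objective", "subobjectives"):
    print(level, "->", hierarchy[level])
```

Writing the hierarchy out this way makes the paper's point tangible: only the immediate subobjectives are concrete enough to serve directly as evaluation criteria, while the goal and prime objective supply the longer-range context.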
In addition, it is necessary to specify the tasks and activities to be performed in order to achieve the goals or objectives. Performance of the tasks and activities then becomes the criterion for achieving the goal.

Suchman (1967, Ch. III) states that the evaluator needs to answer the following questions during the process of defining the evaluation objectives in operational terms.

1. What is the nature of the content of the objective?

It is necessary to know whether you are interested in changing knowledge, attitudes, or behavior, or in changing the rate of occurrence of an event, such as fetal loss. This decision is probably the most difficult one for social workers to make. The objective has to be stated in terms of phenomena which can be objectively observed and measured so that the amount of change from pretest to posttest can be measured. Change in behavior, therefore, is far easier to set as the desired goal because it is easier to get a reliable measurement of change of behavior than change of attitude or feeling. Social workers sometimes find it difficult to specify behavior change because they deal so much with attitudes and feelings. The trick is to designate a behavior which is evidence of a change in feeling. The previous example of the social worker specifying that the program objective was to improve the social functioning of the families served by the project is an exaggerated illustration of the tendency of social workers to select objectives which are vague and global and hard to translate into behaviors which can be achieved within a limited time period.

2. What is the target population which is to demonstrate the desired behavior change?
Continuing in the example cited above, the social worker would have to decide whether the target population which was to demonstrate improved social functioning would be the population in the census tracts served by the project, or only those families in which a member received service from the project. The next question to answer would be whether all members in the family would have to demonstrate improved behavior or only the women served by the project. If the latter course is selected, does this mean only women who had direct contact with social work staff, or could it include women who have been indirectly affected through social work consultation with other staff members or through community organization efforts?

3. When is the desired change to take place?

Is the change to occur within a short time of receiving services, or is the change to occur after a long-term, developmental approach? Again, social workers sometimes feel omnipotent and believe they can effect immediate reversal of life-long habits. They have to be realistic about the changes which they can effect within the time limits of the study.

4. Are the objectives unitary or multiple?

Is the program aimed at a single change or a series of changes? If the evaluator picks only a single change to measure, the intention of a series of changes may bias the evaluator's findings.

5. What is the desired magnitude of effect?

One cannot always effect positive change in 100 percent of the population receiving service. The evaluator has to specify what proportion of patients experiencing a positive change he will accept as evidence of success of the program. Some social workers who read reviews such as Mr. Fischer's believe the author minimizes the proportions who were helped by social work services, even though they were not a statistically significant proportion.

6. How is the objective to be attained?
Will the objective be attained by the social worker operating alone or in concert with others? This will affect whether the change can be attributed to the social worker or whether there were other influencing forces.

As stated above, once the evaluation objectives have been defined, the evaluator has to determine the criteria of success or failure in achieving the objectives and develop indices of measurement. The methods of measurement and the indices must have reliability as well as validity.

Defining the evaluation objectives is the most important part of the study. Some social workers find it difficult to define objectives before they initiate services because they are accustomed to remaining flexible in order to change diagnoses and treatment goals. This resistance, however, is being overcome as the problem-oriented record, short-term therapies, task-oriented casework, crisis intervention, and other developments require social workers to be more specific in defining the problem to be solved and stating its proposed solution.

Once the "to what" has been defined, that is, the phenomena which will be measured in the posttest period, it is comparatively easy to determine the "from what." The same elements of knowledge, attitude, and behavior should be measured in the pretest period as in the posttest period, and many aspects of the measurement tools should be the same, since the goal of the study is to measure the degree of change.

Specification of the Independent Variable

The next step in the goal-attainment research model is to operationally define the independent variable; that is, the program or the services to be delivered. In the case of evaluation of the social work program, this means specifying the social work methods of intervention which will be acting upon the consumer.
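The pretest-to-posttest comparison and the magnitude-of-effect criterion described above can be illustrated with a small modern sketch. This is not part of the 1973 proceedings: the client ratings and the 60 percent success threshold are entirely hypothetical, chosen only to show the logic of prespecifying a criterion and then measuring against it.

```python
# Illustrative sketch (hypothetical data): measuring pretest-to-posttest
# change against a magnitude-of-effect criterion fixed before the study.

def proportion_improved(pretest, posttest):
    """Fraction of clients whose posttest score exceeds their pretest score."""
    assert len(pretest) == len(posttest)
    improved = sum(1 for pre, post in zip(pretest, posttest) if post > pre)
    return improved / len(pretest)

# Hypothetical social-functioning ratings for eight clients (higher = better).
pretest  = [2, 3, 1, 4, 2, 3, 2, 1]
posttest = [3, 3, 2, 5, 2, 4, 3, 2]

SUCCESS_CRITERION = 0.60  # the evaluator specifies this before services begin

p = proportion_improved(pretest, posttest)
print(f"{p:.0%} of clients improved")  # 75% of clients improved
print("program criterion met" if p >= SUCCESS_CRITERION else "criterion not met")
```

The point of the sketch is only that both the observable phenomenon (the rating) and the acceptable proportion of positive change are fixed in advance, so success or failure is decided by the data rather than after the fact.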
In the studies described by Fischer (1973), the prime method of intervention under study was casework, although group work was also used. As Fischer stated (p. 6), he was never able to find a satisfactory definition of the tasks and functions which were supposed to constitute professional casework services; consequently he had to accept the global definition of any services provided by a social worker with a Master's degree in social work.

Social workers in health programs use a variety of methods, not only in direct work with patients and their families but indirectly through consultation to other professions and through advocating changes in larger organizational systems. They also arrange for services to be provided by other agencies. Staffs of social work departments in health units are composed of a range of personnel with varied professional preparation, and they often work in teams. The questions which would have to be answered in relation to the independent variable are therefore:

1. What are the tasks and functions which constitute the services of the social work program?

2. Are indirect services to be included as well as direct?

3. Does the definition of social services cover only those provided by the health program under study, or does it include services the family receives from other agencies as a result of arrangements made by social workers in the health program?

4. Should there be specification of the number and professional qualifications of the social work staff?

Specification of Other Intervening Variables

The evaluator also has to take into account other variables which intervene in the interaction between the social work program and the consumer. These variables include such factors as the services of other disciplines, the services of other agencies, and the occurrence of natural events, such as deaths or disasters, over which no one could have control.
All of these could lessen or enhance the effect of the social work program and lessen the validity of measurements of these effects. The only way they can be "controlled" is through efforts in data collection or through statistical analysis. Suchman (1967, p. 87) stresses the need to prove that Program A really was the cause of Change B in the consumers and that Change B was not caused by a third variable C to which they are both related. In such instances C would be considered the real cause and A a spurious one.

Data Collection

The next step in the evaluative research process is designing the research tools to collect the data which will be needed to measure the rate of success or failure. The specification of tasks and activities which will be used as criteria of achievement, and the indices of measurement, will serve as guides to the data to be collected. Again, the traditional record-keeping methods used by social work programs are primarily statistical counts of services performed and are not adequate to the task. The problem-oriented record, and attempts such as those made by Miss Mary Jean Clark (Consultant in Social Work Practice in Hospitals, Maternal and Child Health Services, Department of Health, Education and Welfare) to develop a problem classification system based on health and psychosocial phenomena, are advances in this area. The challenge is to translate social work data into computer form so that it can be analyzed quantitatively.

Analysis of Data

Data analysis, which is performed after the posttest, is the stage at which the degree of change which has taken place is actually measured and at which it is determined whether the change is attributable to the program's services. In the past, social work has relied on descriptive reports of activities. Ordinal scales and non-parametric tests have been the most frequently used in the measurement of data, or data have been artificially converted to interval scales.
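As a minimal illustration of the kind of non-parametric analysis mentioned above, the sketch below applies a two-sided sign test to paired ordinal pretest and posttest ratings. It is a modern aside, not part of the 1973 text; the ratings are hypothetical, and the sign test stands in here for the broader family of non-parametric techniques (Wilcoxon, chi-square, and others) that such data invite.

```python
# Illustrative sketch (hypothetical data): a sign test, one of the simplest
# non-parametric tests for paired ordinal ratings such as pre/post scores.
from math import comb

def sign_test_p(pretest, posttest):
    """Two-sided sign test p-value under H0 that improvement and decline
    are equally likely. Ties (no change) are discarded, as is conventional."""
    diffs = [post - pre for pre, post in zip(pretest, posttest) if post != pre]
    n = len(diffs)
    k = sum(1 for d in diffs if d > 0)  # number of improvements
    k = max(k, n - k)                   # count of the more frequent sign
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)           # two-sided p-value, capped at 1

# Hypothetical ordinal ratings for ten clients (higher = better functioning).
pretest  = [2, 1, 3, 2, 1, 2, 3, 1, 2, 2]
posttest = [3, 2, 4, 3, 2, 3, 3, 2, 3, 3]
print(f"p = {sign_test_p(pretest, posttest):.4f}")  # p = 0.0039
```

Because the test uses only the direction of each change, it makes no pretense that the ratings form an interval scale, which is exactly the objection to artificial conversion raised in the text.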
We need to expand our knowledge of the statistical analyses which can be applied to social work data.

Interpretation of Findings

On the basis of the findings, the evaluator determines whether the desired change was achieved and considers the factors which enhanced or constrained the attainment of the objectives. Implications for change in the present program are formulated into recommendations, as are implications for the planning of future programs.

Systems Model

As previously stated, the systems model is based on the concept that organizations constantly pursue multiple objectives, and the effectiveness with which any goal is attained must be studied in the context of its influence upon the attainment of other goals. There are also subsystems of the organization, such as secretarial staff and certain administrative personnel, whose function may be to maintain the system and who are not directly related to goal achievement. What is important in an organization is a balanced distribution of resources among the various organizational needs, not maximal satisfaction of any one activity, even of goal activity. The central question to be answered in systems research is: "Under the given conditions, how close does the organizational allocation of resources approach an optimum distribution?" An experimental design is not required to answer this question, and therefore a systems model can be used to perform an organizational analysis of a single program.

The systems model is concerned with the process by which the organization converts its inputs (resources of time, money, and staff) into outputs (services to the consumer). It is concerned with linkages between subsystems and the various components of the program. Georgopoulos and Tannenbaum propose three indices of organizational effectiveness:

1. Station productivity, that is, the output of each unit in the organization.

2.
Intraorganizational strain, as indicated by the incidence of tension and conflict among organizational subgroups.

3. Organizational flexibility, defined as the ability to adjust to external or internal change (Etzioni, 1969, p. 110).

The evaluation is concerned with proving the program's effectiveness (the degree to which perceived goals are met) and efficiency (productivity). The program is being tested against itself rather than against national standards or another program, so control groups are not needed.

The systems model can be illustrated by a description of the Westinghouse Health Systems study being conducted in conjunction with the Cleveland Maternity and Infant Care Project. The Cleveland Maternity and Infant Care Project (hereafter referred to as the Cleveland M&I Project) is funded through the Ohio State Health Department, but it is a semi-autonomous project, affiliated with the Medical School of Case-Western Reserve University, based at the Cleveland Metropolitan General Hospital (part of the Cuyahoga County Hospital System), and also related to the Cleveland City Health Department. It conducts clinics and provides delivery services at the Hospital but has five satellite clinics in Health Department and other neighborhood centers. The social work staff, including the Director, is composed of eight workers with Master's degrees in social work, two workers with Bachelor's degrees, and ten community aides.

The Cleveland M&I has a contract with Westinghouse Health Systems whereby systems engineers from Westinghouse work together with the M&I staff on a self-analysis. The study has two parts. In one, the M&I staff performs a self-analysis in order to measure the efficiency and effectiveness of its health-care delivery and to identify potential solutions for the weaknesses which may be found.
The second part involves the development of a procedural manual for such a self-analysis study, which can be distributed to other M&I projects. The objective of the study is to improve the organization and administration of the M&I projects. The self-analysis is based on the concept of the staff determining program objectives together; the procedures in the manual therefore allow for diversity among projects in objectives and in the conditions affecting individual projects. Because of this diversity, the manual probably will not provide for comparison between projects.

The procedures of the study are categorized into eleven tasks assigned ten numbers (Westinghouse, 1972):

1. To identify ultimate goals and intermediate objectives of the M&I Project. This was first done for the program as a whole by a committee with representatives of all disciplines and categories of staff. Then each unit of the project, including administration, developed its own statement of ultimate goals and intermediate objectives.

2. To examine modes of delivery. This involved preparation of organizational charts and flow charts covering both the administrative process and patient care.

3. To develop and test efficiency improvement techniques. The study of the administrative structure of the program and the modes of service delivery revealed weaknesses in productivity and strain between units. Consequently, the Westinghouse staff, together with the project staff, worked out improved methods of operation, which led to changes in the utilization of staff in the satellite clinics and changes in obstetrical services at Cleveland Metropolitan General Hospital.

3a. After the desired procedures are established, to prepare a policies and procedures manual for use in providing services in the program.

4. To evaluate the efficiency of the improved techniques.

5. To devise effectiveness criteria and standards in order to determine whether the objectives outlined in Step 1 would be met.

6.
To test the effectiveness of the evaluation and methodology.

7. During this process, to develop a lexicon of standard M&I terms.

8. To prepare a preliminary manual for use in carrying out a self-analysis study.

9. To evaluate the manual.

10. To produce a final manual which will be a standardized procedure to be used by other M&I projects.

This study started in July of 1971. The staff have completed step six, the testing of the effectiveness of the evaluation and methodology (Westinghouse, 1973).

The social work staff in the Cleveland M&I participated in the development and statement of the overall objectives of the program. For example, one of the Prime Objectives is to improve the health care delivery system. Under it is the criterion of the degree to which the M&I Project receives and implements recommendations of the Patient Advisory Board. The social work staff has given leadership to the establishment of the Patient Advisory Board and offers staff services to it. They will be involved in determining the degree to which the Board's recommendations are received and implemented.

The social work staff have also developed the criteria specifically related to evaluation of the program of the Division of Neighborhood and Social Services. In order to test the effectiveness of the social work program, they selected the Prime Objective of insuring high quality social service to patients enrolled in the M&I Project. The subobjective was to insure that the care provided meets the accepted standard of social service health professionals. The criterion for meeting this subobjective was the accurate identification of the socially high-risk mother and infant. Records were to be reviewed and answers obtained to the following questions:

1. Are crisis situations clearly identified?

2. How effective is early problem identification?

3. Does the properly qualified person handle the crisis situation?

4. How promptly is service provided?

5.
Were objectives for the management of specific problems adequately defined?

6. Were follow-up and proper referral to other services adequate?

7. Could this crisis have been anticipated and thereby prevented or minimized?

Application of these criteria to social work was integrated with a multidisciplinary effort in which each profession (medicine, nursing, social work, and nutrition) examined the records of 30 patients to see whether the patients' problems had been properly identified, whether they were referred to the appropriate person for solution, what action was taken, and what change in the direction of the problem resulted. Each profession had to develop a protocol, including a detailed description of how records were to be analyzed in order to obtain the answers to the above questions. This protocol will serve as the basis of the manual for self-analysis which will be distributed to other M&I projects.

The above procedure was completed during the Spring of 1973. A verbal report from one of the social work supervisors who participated in the study indicates that it did involve considerable time on the part of all professionals. The fact that it was a retrospective procedure brought to light areas in recording which needed to be changed if the procedure was to be adopted on a larger scale.

The positive aspects of the study have included increased multidisciplinary interaction in the determination of program objectives and procedures. There has also been study of the utilization of staff, and improvements in this area. The systems engineers commented in one of the progress reports (Westinghouse, 1972, pp. 33-36) that the project staff were relatively powerless with regard to some of the constraints and required procedures because of the complex administrative structure of the project.
As a result of some of the meetings held regarding the findings of the study, particularly with key administrative people in the various agencies involved, change in some of these constraints was effected.

In conclusion, we see the value of both the goal-attainment model and the systems model of evaluative research. The objective evidence which they produce regarding the effectiveness and efficiency of a program can provide support not only for continuation of the program but also for measures to improve and develop it.

References

American Public Health Association, Committee on Evaluation and Standards. Glossary of evaluative terms in public health. American Journal of Public Health, 1970, 60, 1546-1552.

Campbell, D. T., & Stanley, J. C. Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally, 1963.

Etzioni, A. Two approaches to organizational analysis: A critique and a suggestion. In H. C. Schulberg, A. Sheldon & F. Baker (Eds.), Program Evaluation in the Health Fields. New York: Behavioral Publications, 1969. Pp. 101-120.

Fischer, J. Is casework effective? A review. Social Work, 1973, 18(1), 5-20.

Herzog, E. Some Guide Lines for Evaluative Research. Washington: Children's Bureau Publication No. 375, 1959.

James, G. Evaluation in public health practice. In H. C. Schulberg, A. Sheldon & F. Baker (Eds.), Program Evaluation in the Health Fields. New York: Behavioral Publications, 1969. Pp. 29-41.

MacMahon, B., Pugh, T. F., & Ipsen, J. Epidemiologic Methods. Boston: Little, Brown, & Company, 1960.

Schulberg, H. C., Sheldon, A., & Baker, F. (Eds.) Program Evaluation in the Health Fields. New York: Behavioral Publications, 1969.

Suchman, E. A. Evaluative Research: Principles and Practice in Public Service and Social Action Programs. New York: Russell Sage Foundation, 1967.

Washington, R. A behavioral model for assessing the effectiveness of social service programs. Unpublished manuscript.
Case-Western Reserve University, School of Applied Social Sciences, 1973.

Westinghouse Health Systems. Development of an M&I Project Self-Analysis Manual. Baltimore: Westinghouse Electric Corporation, Health Systems, 1971. Study conducted under USPHS Grant No. MC-R390129-020.

Westinghouse Health Systems. Development of an M&I Self-Analysis Manual: Progress Report for the Period July, 1971-June, 1972. Baltimore: Westinghouse Electric Corporation, Health Systems, 1972.

Westinghouse Health Systems. Development of an M&I Self-Analysis Manual: Progress Report for the Period July, 1972-June, 1973. Baltimore: Westinghouse Electric Corporation, Health Systems, 1973.

SUMMARY OF DISCUSSION

PARTICIPANT: Earlier you recommended the use of a control group. Where do you stand on using other statistical measures for taking care of confounding variables? I've been in discussions with social psychologists who are concerned about withholding services from a control group, and they feel that some of the statistical measures we now have can take into account many of the uncontrolled variables.

DR. WATKINS: If you can't have the ideal situation, statistical measures can be used. I would also suggest that you read Campbell's material on the selection of a comparison group. In lieu of selecting a group and withholding treatment, he suggests examining a group in an area of the country which has not had similar services; this can be done with contiguous states, for example. It's not the same as taking a census tract and deciding to give service to people on one side of the street and not on the other. The Cleveland M&I covers only certain census tracts. The Campbell approach would be to look at the people in the census tracts to which these services have not been available.

PARTICIPANT: There is increasing interest in the participation of consumers in the evaluation of programs. Could you speak to that?

DR.
WATKINS: Consumer evaluation of program effectiveness is very valuable. We need to devise measurements to see what the consumer thought of the service and whether it met his objectives. Too often providers decide what benefits are accruing to consumers without really looking to see whether they and the consumer agree on what the benefit should be and whether the service really met the consumer's criteria.

PARTICIPANT: Who determines how much of an agency's resources should be committed to evaluative research? Are there criteria for such a decision?

DR. WATKINS: It does take a consensus of the donors of funds and the administration of the project. How much time should be spent on research and how much on service? Whatever procedure is established, it should be one that can be carried out in the normal routine of work, and service should not be sacrificed in order to carry out the evaluative research. When I was talking to the supervisor of social service in the M&I Project, she said that the chart review of just those thirty charts, while the staff were still trying to carry a heavy service load, was very demanding on the whole staff. I think the decision of how much time is going to be invested in research must be made program-wide so that there is a positive attitude toward carrying out the procedure.

Evaluative Research in Social Work

EDWARD J. MULLEN
Fordham University

The effectiveness of social welfare programs and the usefulness of evaluative research were of little concern to social work professionals a decade ago. In the early 1960s most of us were convinced of the value of social work interventions and optimistic about the opportunities for expanded application of those skills in programs created during the Kennedy and Johnson Administrations.
The average social worker in those days had developed his practice knowledge and skills in educational programs that focused on communicating practice wisdom and rested on personal experiential learning in closely supervised practice. The concern was with learning to apply in a skillful manner what was already known about helping people in need, and with adding gradually to that body of knowledge through practice experience.

A decade ago, social work spokesmen seemed so convinced of the potency of the profession's skills that significant numbers of policy makers came to entertain the possibility that if only sufficient numbers of social work professionals could be made available and programs funded, many of society's major social problems could be significantly reduced. The theme a decade ago was, "Give us the opportunity to demonstrate the potency of our skills." The emphasis was on action and expanded resources. The implicit assumption seems to have been that we already knew how to help people, and that where gaps in knowledge and skill existed, they would quickly be filled through experience.

Opportunities and funds were forthcoming, and a rather impressive number of demonstration projects was launched. The results of many of these demonstrations were reported in the latter half of the 1960s, and the findings are now being widely discussed and debated. This debate within social work began on a large scale with the publication of Girls at Vocational High (Meyer, Borgatta, & Jones, 1965) and "The Chemung County Study" (Wallace, 1967). Judging by recent professional conferences and publications, this debate has yet to peak.

What has happened is that the cumulative results of the nearly two dozen experimental demonstration projects reported have unquestionably failed to support the expectations of the average professional.
These results and the debate that surrounds them have called into question more than the effectiveness of the particular programs evaluated; they have called into question assumptions that are basic to social work's existence.

This "effectiveness crisis" obviously is not solely a consequence of the results of evaluative research in these relatively few demonstration projects, nor is the crisis restricted to social work. Rossi and Williams (1972) have recently reviewed the frustrations and disappointments accompanying most of the national social programs launched during the mid and late 1960s in the areas of compensatory education, manpower training, and income maintenance. While conventional social work interventions were not directly examined in these programs, it seems safe to assume that the failure of these other programs to achieve the hopes set forth by their planners has added to the current crisis among social work professionals. The results of these programs clearly suggest that our society has yet to develop effective methods for dealing with its social problems and needs.

In the fields of psychiatry and psychology, Bergin (1963, 1966), Carkhuff (1964, 1967, 1969, 1971), Eysenck (1952, 1960, 1966), Levitt (1957, 1963), Meltzoff and Kornreich (1970), Shaffer (1956), Shoben (1949, 1956), Truax (1964, 1967) and many others have examined this issue of effectiveness, and these debates have unquestionably added to social work's current concern. The question of the effectiveness of psychotherapy and counseling has been of active concern since the early 1950s, and there does seem to be reason to believe that it has been dealt with more thoroughly and more constructively there, that is, in the fields of psychology, psychotherapy and counseling, than in social work.
Of course, the effectiveness crisis in social work has been considerably aggravated in recent years by the funding cutbacks and the social policies of the current Administration. This needs no elaboration other than to say that the increased competition for funds by expanding interest groups and the growing skepticism about the effectiveness of conventional ways of doing things place social work in a highly defensive position.

All that I have said to this point can be summarized in a single phrase: social work, a profession until recently assumed by society to be valuable, has exposed itself and its effectiveness, and in so doing is being called into question. Questions are now being posed that demand answers. Few serious critics would conclude that the experience of this past decade has proven social work to be ineffective. Few, however, would disagree that its effectiveness is now more than ever a question that needs to be answered. An even greater concern is that social work seems ill-prepared to provide or even develop answers to these essential questions. I make this observation because, in my opinion, three pre-conditions are necessary for the development of answers, and none of these pre-conditions currently characterizes the average professional.

First, it is essential that professionals recognize that most of the principles that guide our interventions are unvalidated assumptions, open to question. This is not to say that these guiding principles are invalid, but rather that their validity is questionable. (Parenthetically, given this fact, Briar, 1973, has suggested that professional ethics may require that clients and students be made aware of this state of affairs before engaging them in training or service.) Adopting such a skeptical stance may be asking the near impossible of most experienced social workers, who have learned to believe firmly in the validity of their interventions.
How does one move from genuinely believing to questioning?

A second pre-condition essential for developing answers to these questions of effectiveness is motivation. There must be, to use French's analogy, a productive balance between the discomfort of not knowing and the hope that knowing is possible and worthwhile. The disappointing results of previous evaluative research studies and the frustrations accompanying the social programs of the 1960s could well lead to a loss of motivation and, therefore, to a turning away from the questions of effectiveness.

The third pre-condition is the use of a scientific approach to social work practice. This indeed would be a revolution in social work, for it would require the profession to move from its highly subjective, non-systematic, metaphysical, authoritarian orientation to one in which guiding principles and theoretical frameworks are developed through the interaction of empirical evidence and intellect in a systematic and controlled fashion. As Reid and Epstein (1972) and Briar (1973) have suggested, such a stance to practice would require that social interventions have their objectives stated in observable terms, and that programs be structured so that controlled, systematic feedback would be possible. This may well be the most utopian of the three pre-conditions, for the first two require changing the attitudes of social work professionals, while this last requires in addition changing the social welfare systems.

This was recently brought home to me while I was Director of Research and Evaluation at the Community Service Society of New York. As a result of a large self-study in 1969-1970, that agency established the policy that all of its direct service programs would be evaluated. It was found that this mandate could not be realized unless programs were structured in such a way that scientific inquiry was possible.
The practice of attaching an evaluation component to a program previously designed solely for service was soon found to be inefficient, frustrating, and unproductive. The organizational arrangements required to realize both a service and a research mandate are qualitatively different from those to which social agencies have been accustomed.

Adopting a scientific stance to practice also requires that programs be initiated within an explicit theoretical framework that can be assumed to present a reasonable rationale for attainment of the stated objectives. Programs initiated without such rationales, such as those launched simply because of political expediency, funding availability, or organizational or personal gain, have little chance of adding to the knowledge of program effectiveness. Such a scientific stance requires competent planning, program development, and administration. Fairweather (1967) has outlined an approach to service delivery having many of these elements, and it could serve as a guide to those interested in experimental social innovation and service delivery.

What I have indicated, then, with these opening remarks is that in a very short period, the past decade, the social work profession has experienced and continues to experience a deep shock concerning many of its fundamental beliefs, and that its adaptability is now challenged. I have suggested three ways in which I believe social work must change if the profession is to adequately cope with these questions of effectiveness.

I have been asked to discuss the results of evaluative research studies in social work. As I considered this, I was again reminded of how rapidly the results of evaluative research have been communicated within the profession.
While to my knowledge only one major experimental evaluation of social work effectiveness was published prior to 1960 (Powers & Witmer, 1951), I now count nearly two dozen that have used either an experimental or a quasi-experimental design. Many of these reports not only present and discuss the findings of the particular study but also contain discussions of other related efforts. In addition, the last several years, and in particular this past year, have produced several excellent reviews of groups of evaluative research studies in social work. The 14 papers prepared for the Fordham Symposium (Mullen, Dumpson & Associates, 1972) each discuss the findings of recent evaluations and consider their implications for social work education and practice at various levels. The September 1972 issue of the Social Service Review contains reports of four studies and a critical editorial. The article by Joel Fischer and Scott Briar's editorial in the January 1973 issue of Social Work, as well as the reactions of readers contained in subsequent issues, are most informative. Ludwig Geismar's 1970 National Conference on Social Welfare paper and the 1973 Council on Social Work Education papers by Scott Briar and Rosemary Sarri also provide excellent summaries and discussions of evaluative research findings. The recent work by Rossi and Williams (1972) discusses the results of national social programs, and while not directly addressed to social workers, their analysis is clearly relevant. Since a number of these sources are included in the symposium bibliography, I will assume that there is no value in restating what you have already reviewed. Rather, I would like to share with you my general reactions to these findings.

First, it seems to me that very little has been learned from these studies that would tell social workers how to improve their effectiveness. There are a few noteworthy exceptions to this statement, which I will come back to later.
What these assessments have said as a group is that the interventions studied failed to make major differences in the lives of the people experiencing the services, that is, differences that could be documented by the research. This is not a criticism of the studies, for it is not appropriate to expect experiments which are set up to test hypotheses to do anything but test. They are conducted within what Kaplan (1964) refers to as the context of justification rather than the context of discovery. We can expect experiments to be instructive and to offer direct suggestions for intervention only when hypotheses are confirmed. Experiments are useful to sort out false beliefs, and it appears this is the major service performed by evaluative research to date. What has happened, it appears, is that as a profession we have skipped over developmental research, research conducted within the context of discovery, and have gone on to test poorly founded interventions. We are left then with questions and unverified skills. Put positively, what has been learned from these studies is that before we put our programs to the test of experimentation, we would do well to spend time and resources in exploratory research whose aim would be to discover promising methods of intervention that could be subsequently tested through experimentation.

As I have pointed out elsewhere, some of these evaluations have produced findings that do have relevance for guiding interventions. Task-structured intervention, which is time-limited, has resulted in somewhat more positive effects than conventional open-ended, broadly focused services. Also, programs dealing with the conditions of poverty seem to have more promising results when they were characterized by multi-level, locally based services, and when increased financial assistance was included.
However, what these evaluations fail to clarify, even when positive results were evident, is the reasons for the results, that is, the nature of the cause and effect relationships. There is some indication in the three studies reported by Geismar and his associates ("The Family Life Improvement Project," 1970; "The Neighborhood Improvement Project," 1967; and "The Area Development Project," 1968, 1969) that an important variable in social work effectiveness is the worker rather than the program or the level of training.

This finding dovetails with the results reported by Carkhuff and others concerning counseling. These investigators present evidence suggesting that important causative variables are personality traits of the counselor, such as genuineness, rather than level or type of academic training. Carkhuff (1969) further argues that some current non-academic training programs prepare more effective counselors than do academic professional programs because these personality variables are their focus.

At this stage of research, all of these results must be interpreted as leads for further research rather than confirmed scientific facts. Nevertheless, the implication from all of this is that what is needed in evaluative research, if it is to contribute to practice knowledge, is variable specification—examination of cause and effect relationships between variables rather than general assessments of broad programs.

Up to this point I have been critical of social work practice. I cannot resist taking an equally critical look at its research skill as well. Not only has this past decade been a period in which social work practitioners were abruptly confronted with the ineffectiveness of many of their programs, it has also been a time when evaluative researchers experienced as never before the inadequacy and at times the inappropriateness of research design and methodology.
The standard approach to evaluation characterizing most studies follows the Fisherian model of experimentation, in which an investigator interferes with a process so that random sub-sets of the units processed are differently treated, and measurements are collected in such a way that the variability among units which were treated in the same way can be estimated. This approach to evaluation is well presented by Suchman (1967) and is closely associated with what has come to be known as a goal model. It is an elegant approach, familiar to most social researchers, and does seem to make good sense from a rational point of view.

However, experience in the application of this model to social work program evaluation has resulted in identification of numerous difficulties. First of all, it requires that the researcher assume control, have the power to assign people to various groups, withhold or give services, limit the number of variables to be examined, and hold the intervention stable over a period of time. It also requires relatively exact specification of objectives and interventions and usually does not allow for feedback until the end of the program. It has often been criticized as being overly mechanistic. In spite of these difficulties, it does appear to be the one model most researchers believe has the power to test cause and effect relationships.

Others, notably Etzioni (1969), have proposed an open systems approach to evaluation. Advocates of this approach believe that a systems framework is more realistic and useful. It assumes the existence of multiple goals, even conflicting goals, and a complex set of interactions. It assumes the existence of manifest and latent program goals, and seeks to describe the relationships among elements within the system as the system interacts with its environment.
Koleski and his associates at Case-Western Reserve University (1971) have used this approach to evaluate the effects of a multi-service system in East Cleveland. What potential this approach holds for evaluative research is yet to be demonstrated. Its strengths are that it seems to deal with the complexities of reality, it provides on-going feedback, it does not require that the researcher control the program, and it allows for program or system change. Its weaknesses are its complexity, cost, and highly descriptive nature. Its power to test cause and effect relationships also appears to be limited. Few researchers are currently skilled in this approach to evaluation.

Weiss and Rein (1970) have suggested that experimental design is inherently unsuitable for evaluation of broad-aimed social programs, and propose instead a qualitative study of program development and change. Their approach seems to assume a systems framework, but relies on qualitative, historical, descriptive data rather than quantitative analysis of elements and interactions within the framework of a previously defined model (as is characteristic of the systems approach). Their approach seems to have been formulated in an attempt to evaluate and respond to the ill-defined programs of the 1960s.

Levine (1973) has proposed an adversary model for evaluation.
This approach parallels legal argumentation and requires, first, that field and clinical studies include among the staff an adversary who would be delegated the function of cross-examining all evidence from the point of view of developing the rebuttal to whatever evidence is gathered; second, that studies which use subjective reactions on the part of the investigator also would utilize some system analogous to clinical supervision, in which another person becomes familiar with the characteristic emotions, distortions, fantasies, defenses and values of the investigator and would use this information as the investigator reports the results of his research; and third, that rules of evidence would be established and used in considering the findings of studies. It appears that Levine's approach has been developed out of a reaction to the highly emotional and value-laden context characterizing human service evaluation. To my knowledge Levine's approach has not been applied.

A variety of other approaches have been discussed and used during the 1960s, including Bayesian analysis, cost-benefit analysis, cost-effectiveness analysis, social area analysis, and differential evaluation. As an aside, I would suggest that cost-benefit analysis, which we hear so much about, is not a method of program evaluation but rather a method of policy analysis which uses the results from evaluative research as data. This may be part of the reason why social work fares so poorly when this method is used, since so few of our interventions have known cost-benefit ratios.

What is evident, then, concerning research design, is that just as practitioners have been confronted during the 1960s with the inadequacies of their interventions, so too have evaluators been confronted with the inadequacies of their designs and methods. The current lack of fit between social programs and research design is evident and serious.
To meet both service and evaluation requirements, innovative compromises in the delivery and evaluation of services must occur, and, I submit, are inevitable. I expect that as we progress, three general classes of evaluation will evolve. First, exploratory-developmental research efforts will be undertaken so that promising approaches to social work intervention can be developed; second, experimental evaluations will be undertaken to test the effectiveness of these newly developed interventions; and third, on-going service programs will be constantly evaluated, using some form of systems research, so that administrative and planning feedback is available.

A note on social work education. An ability to engage in evaluative research, a concern for effectiveness, and a scientific approach to practice seem to me to be essential qualities social work educational programs should strive to develop in students. Rather than being shielded from the results of these studies, social work students must learn to understand and use them critically. Ideally, the feedback from evaluative research should be taught in practice courses by educators who can use these results in a critical and productive way. It seems to me that scientific social work practice demands such an integration. The current practice of teaching research methodology and research results in one area of the curriculum and practice in another seems only to further the problems and inhibit integration.

The lessons learned from evaluative research to date require that the next generation of social workers be educated to be as concerned about the effects of what they and the programs they are part of are producing as the current generation has been about intervention process.
Perhaps the all too frequent lack of concern about outcome has partially been a consequence of the assumption, too often made, that the profession's skills were already validated and that all that was necessary was to insure the application of those skills! It is this posture that may explain why so often the negative results of evaluative studies are challenged by saying that the quality of the service was not up to standard—thus, the verdict in the "Chemung County Study" that the level of casework offered was above that of the average public assistance worker but below that of the average MSW worker.

In summary, what is needed in social work education as a response to the challenge of evaluative research is a curriculum that seeks to instill in its students a scientific, experimental approach to practice. This may take the form of a curriculum focused on preparing students to intervene in some form of target area (whether at a system level, a problem area, a need, a field, or whatever) and orienting them to continually experiment with and evaluate various ways of achieving results. This indeed would be a long way from teaching methods as if they were established and validated skills universally applicable.

It seems to me that there are currently clear signs that social work education is preparing for such a qualitatively different approach. Hollis' revised text (1972) is remarkable for its integration of research data and references; Reid and Epstein's recent text (1972) is clearly pointing in the direction of scientific practice; Briar's writings (1973) also move toward such an approach. The Council on Social Work Education's new curriculum policy statement (1969) not only clearly makes possible but seems to encourage scientifically oriented experimental curriculum development.

Many of my comments may seem to some of you to be over-generalizations and not clearly related to the topic I was asked to address.
You may say evaluative research findings to date do not warrant the conclusions that I have drawn, nor the level of generalization presented. I would have to agree with that point of view if I were to have restricted myself only to the reliable and valid data produced by this limited number of studies.

A review of the nearly two dozen experimental evaluations now published clearly reveals that the client populations involved were relatively restricted, given the broad range of populations dealt with by social workers. We see that these studies dealt primarily with the problems of juvenile delinquency, poverty and aging. A few dealt with the family and personal problems of non-poverty populations.

We also see that the broad range of social work intervention was only partially represented. The interventions were primarily individual and group services.

The level of training among the workers in these studies is also found to be wide and often so heterogeneous that it is impossible to draw conclusions about any one level of worker with confidence.

It is also true that aside from the St. Paul Scale of Family Functioning, the CSS Movement Scale and the various standardized psychological tests represented, the measurements used in these studies were of questionable validity.

How then, you may ask, with all of these limitations, have I been so bold as to draw the conclusions presented in this paper? First, let me restate that given these and the many other limitations present in currently available evaluations, I do not think that very much can be concluded about promising new effective approaches to social work practice. Nor would I conclude that social work interventions have been demonstrated to be ineffective. However, what is clear to me is that the results of evaluative research to date have raised serious questions about the effectiveness of social work services, questions that demand answers.
It is further apparent that to deal with these questions, the profession must recognize their validity and move toward a scientifically based practice so that satisfactory answers can eventually be developed.

This past decade has given social workers a vision, and it has witnessed their great ability to take public risks. While, at this point in time, it may appear to many that such risk taking was unwise and destructive, it seems to me to have been a necessary phase in our professional development. True, the profession appears in 1973 to be defenseless in the face of politically based criticisms of its relevance and effectiveness, but hopefully, because of these experiences, when in 1983 an Ehrlichman challenges our effectiveness, we will be in the advantageous position of being able to point to social work programs of demonstrated potency. Hopefully in 1983, when the cost-benefit analysts insert effectiveness values for social work programs in their formulas, they will not find an absence of effectiveness data, but rather evidence clearly indicating benefits to our clients.

References

Bergin, A. E. The effects of psychotherapy: Negative results revisited. Journal of Counseling Psychology, 1963, 10, 244-255.

Bergin, A. E. Some implications of psychotherapy research for therapeutic practice. Journal of Abnormal Psychology, 1966, 71, 235-246.

Berleman, W. C., Seaburg, J. R., & Steinburn, T. W. The delinquency prevention experiment of the Seattle Atlantic Street Center: A final evaluation. Social Service Review, 1972, 46, 323-346.

Briar, S. Effective social work intervention: Direct practice roles. Facing the Challenge: Plenary Session Papers from the 19th Annual Program Meeting. New York: Council on Social Work Education, 1973.

Carkhuff, R. R. Helping and Human Relations. New York: Holt, Rinehart & Winston, 1969. 2 vols.

Carkhuff, R. R. The Development of Human Resources. New York: Holt, Rinehart & Winston, 1971.
City of East Cleveland, Cuyahoga County Welfare Department, School of Applied Social Science (Case-Western Reserve University), & Ohio Department of Public Welfare. A Proposal for the Development of a Comprehensive Social Service Delivery System in East Cleveland, Ohio. May, 1971.

Council on Social Work Education. Curriculum Policy Statement. New York: Council on Social Work Education, 1969. Publication #69-380-20.

Etzioni, A. The Semi-Professions and Their Organizations. New York: The Free Press, 1969.

Eysenck, H. J. The effects of psychotherapy: An evaluation. Journal of Consulting Psychology, 1952, 16, 319-324.

Eysenck, H. J. The effects of psychotherapy. In The Handbook of Abnormal Psychology. New York: Basic Books, 1960.

Eysenck, H. J. The Effects of Psychotherapy. New York: International Science Press, 1966.

Fairweather, G. Methods of Experimental Social Innovation. New York: John Wiley & Sons, 1968.

Fischer, J. Is casework effective? A review. Social Work, 1973, 18(1), 5-20.

Geismar, L. The Rutgers Family Life Improvement Project and other outcome studies: Some findings and their implications for social policy and practice. Paper presented at the 97th National Conference on Social Welfare, Chicago, June 3, 1970.

Geismar, L. Implications of a Family Life Improvement Project. Social Casework, 1971, 52, 455-465.

Geismar, L., Gerhart, U., & Lagay, B. The Family Life Improvement Project. Unpublished. Rutgers University, New Brunswick, N.J., 1970.

Geismar, L. & Krisberg, J. The Forgotten Neighborhood: Site of an Early Skirmish in the War on Poverty. Metuchen, N.J.: The Scarecrow Press, 1967.

Hollis, F. Casework: A Psychosocial Therapy. (2nd ed.) New York: Random House, 1972.

Kaplan, A. The Conduct of Inquiry. San Francisco: The Chandler Publishing Company, 1964.

Levine, M. Scientific method and the adversary model: Some preliminary thoughts. AERA Cooperative Research Monographs.
Department of Psychology, State University of New York at Buffalo, 1973, in press.

Levitt, E. E. The results of psychotherapy with children. Journal of Consulting Psychology, 1957, 21, 189-196.

Levitt, E. E. Psychotherapy with children: A further evaluation. Behaviour Research and Therapy, 1963, 1, 45-51.

Meltzoff, J. & Kornreich, M. Research in Psychotherapy. New York: Atherton Press, 1970.

Meyer, H. J., Borgatta, E. F., & Jones, W. C. Girls at Vocational High. New York: Russell Sage Foundation, 1965.

Mullen, E. J., Chazin, R. M., & Feldstein, D. M. Services for the newly dependent: An assessment. Social Service Review, 1972, 46, 309-322.

Mullen, E. J., Dumpson, J. R., & Associates. Evaluation of Social Intervention. San Francisco: Jossey-Bass, Inc., 1972.

Powers, E. & Witmer, H. L. An Experiment in the Prevention of Delinquency—The Cambridge-Somerville Youth Study. New York: Columbia University Press, 1951.

Reid, W. J. & Epstein, L. Task-Centered Casework. New York: Columbia University Press, 1972.

Reid, W. J. & Smith, A. D. AFDC mothers view the work incentive program. Social Service Review, 1972, 46, 347-362.

Rossi, P. H. & Williams, W. (Eds.) Evaluating Social Programs. New York: Seminar Press, 1972.

Sarri, Rosemary. Effective social work intervention: Administrative and planning roles. Facing the Challenge: Plenary Session Papers from the 19th Annual Program Meeting. New York: Council on Social Work Education, 1973.

Shaffer, L. F. & Shoben, E. J. Psychotherapy: Learning new adjustments. In The Psychology of Adjustment. Boston: Houghton Mifflin, 1956. Pp. 522-529.

Shoben, E. J. Psychotherapy as a problem in learning theory. Psychological Bulletin, 1949, 46, 366-392.

Suchman, E. A. Evaluative Research: Principles & Practice in Public Service & Social Action Programs. New York: Russell Sage Foundation, 1967.

Truax, C. B. & Carkhuff, R. R. Significant development in psychotherapy research. In L. E.
Abt & B. F. Reiss (Eds.), Progress in Clinical Psychology. Vol. 6. New York: Grune & Stratton, 1964. Pp. 124-155.

Truax, C. B. & Carkhuff, R. R. Toward Effective Counseling and Psychotherapy. Chicago: Aldine Publishing Co., 1967.

Wallace, D. The Chemung County evaluation of casework service to dependent multi-problem families: Another problem outcome. Social Service Review, 1967, 41, 379-389.

Weiss, R. S. & Rein, M. The evaluation of broad-aim programs: Experimental design, its difficulties, and an alternative. Administrative Science Quarterly, 1970, 15(1).

Wilkinson, K. P. & Ross, P. J. Evaluation of the Mississippi AFDC experiment. Social Service Review, 1972, 46, 363-377.

United Community Services of the Greater Vancouver Area. The Area Development Project. Monographs I, II, & III. Vancouver, British Columbia: United Community Services of the Greater Vancouver Area, 1968, 1969.

SUMMARY OF DISCUSSION

PARTICIPANT: I think that the crucial question for social work is "Shall we as a profession commit ourselves to the point of view that the scientific method is the method or an acceptable method by which this profession can be evaluated; or shall we acknowledge alternative, or at least supplementary, methods of evaluation?"

DR. MULLEN: To what extent do we want to orient ourselves on the basis of a scientific approach as contrasted with some other approach? This was particularly brought home to me when I was Director of Research at the Community Service Society. We constantly were battling and running into or missing each other as researchers related to the program staff because, lo and behold, we found that in such cases the program staff were launching programs not with specific goals nor with any thought-out interrelationship between an intervention and an outcome, but rather a feeling of positively relating to a community or to an individual or to a group.
When we tried to apply the scientific evaluative frame of reference to that, we just missed each other.

PARTICIPANT: Has Fordham integrated their research and methods courses? If so, I'd like to know a little bit about it.

DR. MULLEN: Fordham went through, as some of you may know, an intensive experience with evaluative research about two and a half years ago. We brought together representatives from all the graduate schools of social work in the country and a number of major social agencies to consider in a two-day conference the results of social work evaluative research and the implications for social work education. It's curious that the symposium ended up with a frame of reference that assumed one implication from all of this was to move to a systems approach.

The Fordham curriculum has been revised so that now it is possible for students to elect a microsystem, mezzosystem or macrosystem level of social intervention. No longer are we offering concentrations in particular methods. Rather, students identify at the beginning of their second year a particular system level in which they're interested, such as work with human systems, individuals, families and small groups; or neighborhood social systems; or broad social systems that relate to widely scattered populations. What we're trying to develop is a curriculum that would transmit to the students what is currently known about intervening at various system levels.

We have set up a research concentration where students can elect a joint major in one of the system levels and research. To that extent we have integrated research. We still have a separate research methodology course during the first semester, which I think is essential. But we're moving more and more toward electives that would bring together system intervention and the research findings or methodology appropriate to that level. It's nowhere near where some of us would like to see it.
PARTICIPANT: I think one of the areas of concern is the extent to which practice and research can meet. How can we get researchers and practitioners together in a productive relationship, one in which the researcher would be interested in and knowledgeable about program and practice?

DR. MULLEN: And I assume, implicitly, vice versa: the practitioner would be interested in research and the scientific method. That same question keeps recurring in the literature. I guess it hasn't been satisfactorily answered or we wouldn't still be dealing with the problems in our agencies. I have a hunch that one of the reasons has to do with the point made earlier about your orienting philosophical stance to intervention. Man has conventionally used at least four different approaches to fixing his belief: tradition, authority, intuition and science. I have a hunch that researchers come to this primarily from the stance of fixing belief by the scientific method, and that practitioners come to intervention from a qualitatively different approach—a combination of tradition, intuition and authority.

PARTICIPANT: Is it your hope to generate social workers who can combine within themselves quality practice plus the ability to evaluate their own interventions and the impact of the total agency in this kind of systems approach; or would it be your hope to subdivide even further and generate some new social workers who are more skilled in evaluative research to help the traditional practitioners who are out there trying to figure out if what they are doing is correct?

DR. MULLEN: I don't see the first; I do see the second. The first, if I understood you correctly, was that we would produce or educate students who would be able to practice and conduct research on their own interventions in the programs of which they are a part. I think that's asking too much and perhaps would be an unwise division of labor.
Rather, on that first point I would hope that we would educate students and practitioners who would have a point of view on practice that would make this systematic and controlled type of intervention the way they approach their work; that they would approach their work realizing that it's important to specify observable outcomes ahead of time, before they intervene; and that it's important to spell out a frame of reference or some sort of theoretical backdrop to what is being done.

Secondly, I do think it is important that social work take responsibility for educating and funding social workers who are specialists in evaluative research and program evaluation. That to me is critical. We have been doing it at Fordham for three years. We've had very happy experiences with it, and many of them are going on into administration.

Part II

THE DECISION TO EVALUATE: CONTRIBUTING FACTORS
Frances Kelley, M.S.W.

ASSESSMENT OF SOCIAL SERVICES IN A LARGE HEALTH CARE ORGANIZATION: A SOCIAL WORK ADMINISTRATOR'S PERSPECTIVE
Jane Collins, M.S.W.

The Decision to Evaluate: Contributing Factors

FRANCES KELLEY
City of Houston Health Department

Two federally funded projects, Maternity and Infant Care and Family Planning, have been combined by the City of Houston Health Department into a comprehensive network of services for mothers and infants. There are twelve health centers located throughout the city. Each center provides prenatal care, well-child supervision and family planning services. Maternity care is available at Jefferson Davis Hospital, a city-county teaching institution affiliated with Baylor College of Medicine.

The population of Houston is approaching two million. Our city-wide project registered 27,625 women for services during 1972. There were approximately 10,000 deliveries at Jefferson Davis Hospital during that same year. Eighty percent of the women had received prenatal care in our clinics prior to delivery.
Normal infants and their mothers are seen in the health center well-baby clinics for follow-up care. Those infants who are high risk or borderline high risk are evaluated one month after delivery in the High Risk Clinic at Jefferson Davis Hospital. The babies who are no longer borderline high risk are then referred to the health centers nearest their homes, and infants who continue to evidence developmental complications are cared for at the Hospital clinics. In all cases, infants and their mothers receive follow-up care at regular intervals, and infants with the most serious problems are seen more often. Sixty-two percent of the infants delivered at Jefferson Davis Hospital, whether normal or high risk, received follow-up care during 1972.

PROVISION OF SOCIAL SERVICES

Social work staff are assigned to the health centers and are responsible for continuity of social services in prenatal, well-child and family planning clinics. In addition, one social worker is assigned full-time to the Hospital's High Risk Clinic. During 1972, social work staff provided services to 6,648 individuals, or about 24 percent of the total project population.

Our approach to services is family-oriented, and we strive to provide continuing support and assistance, as most women utilize our various clinics and services for about two years. Most of the women are quite young and present multiple health, social, and economic needs. A few years ago, the problems presented by clients in social work interviews were tabulated. They clustered in the following areas: 1) finances, 2) job-training and employment, 3) family relationships, 4) non-completion of desired education, 5) emotional or psychiatric problems, and 6) fears of becoming pregnant.

EVALUATION OF SERVICES

The decision to move toward evaluation reflects numerous pressures and concerns. The reductions in federal grants and the possibility of being asked to defend one's own job have become increasing concerns of the staff.
Another stimulus toward evaluation has been the staff's questioning of the methodology and impact of some poverty programs. Since we also serve an economically deprived group, the question of whether our approaches and efforts, by contrast, produce results that are beneficial cannot be dismissed.

A third reason was related to our administrative and fiscal position in the Houston City Health Department. We are funded entirely by project grants. Thus, the agency has no fiscal commitment to our future as staff members.

The primary motivation to study our effectiveness came from within the staff itself. For a long time staff members have questioned and discussed what our service goals should be. We wonder what goals are realistic. Many of our clients are caught in a web of poor housing, low income and minimal education. Such conditions are sometimes compounded by the health problems in the families. We raised questions about the remedial nature of many of our services, yet the opportunities for prevention are not always clear. Is it possible to solve individual problems in a family-centered approach? How do we know we are seeing the right families, given our limited staff resources? Are identified psycho-social problems being resolved? If so, can we identify the elements which promoted a solution? If not, can we use the data to help us develop strategies and/or resources more appropriate to the problems presented by clients?

The entire social work staff of the Houston Maternity and Infant Care and Family Planning Projects is currently working together to study the effectiveness of their professional activities. We are in the process of developing research questions and identifying a sample for study. Because our basic concerns are shared by many others, we hope to report our data and findings at a later date.
Assessment of Social Services in a Large Health Care Organization: A Social Work Administrator's Perspective

JANE COLLINS
Denver Department of Health and Hospitals

The Denver Department of Health and Hospitals is a dual agency formed by Denver General Hospital, which is the city-county public hospital, and the Department of Health. Denver General Hospital is similar to other public general hospitals, with the usual in- and out-patient services. The Neighborhood Health Program was developed through the Public Health Department with back-up hospitalization and specialty clinic services at the Hospital. There are two Health Centers and eight Health Stations in the Neighborhood Health Program. The Community Mental Health Program is an integral part of the agency's services, with one quadrant of the city being Health and Hospital's responsibility under Community Mental Health funding and other services, such as alcoholism programs and emergency hospitalization, being available to all residents of the city.

The Social Service Department is made up of staff at the master's, bachelor's, and neighborhood worker level. Staff is assigned throughout the system, including such specialized programs as day care inspection and licensing. Literally, we are spread out all over Denver, giving some form of service in relation to either physical or mental health care.

The agency is funded from a variety of sources: city-county tax monies and grants from state and federal programs. Among the special grants the agency has received is one to study the feasibility of at least partial conversion to some form of a Health Maintenance Organization (HMO). An HMO presents special problems in regard to our agency, where 35 percent of the patients are medically indigent; that is, not receiving public assistance and ineligible for Medicaid coverage, or without private insurance and within the zero pay range because of low income.
Another 50 percent are on some form of public assistance and are eligible for Medicaid benefits.

ASSESSMENT OF SOCIAL SERVICE ACTIVITIES

Approximately two years ago the Social Service Department started defining its services in relation to planning for some form of national health insurance and/or a new health delivery system, such as a Health Maintenance Organization. These changes in health care financing appeared to be on the horizon. We recognized that we would have to be delineating our services in a way which would make them reimbursable under a prepayment system.

Small groups of staff met over a period of months in order to look at services currently given and to attempt to identify those which we believed were most important. In those early meetings we identified traditional social work service areas; for example, premarital counseling, maintaining persons with psychosomatic complaints, suicide prevention counseling, assessment and referral to appropriate specialized agencies, sex education, and peer relationship problems of adolescence. All of these are appropriate areas for social service intervention. The problem which became evident as time went on was that a number of these services are not specifically related to medical diagnoses. They could easily be seen as services which are helpful, and interactive with various problems of health and disease. But are they absolutely essential and, thus, reimbursable under a prepayment form of funding?

One of the small groups attempted to look at services as they relate to the age groups of patients and the continuum of patient care from prenatal through pediatrics to adolescence and then frequently back into prenatal care. One reason we considered this method of defining services was that many staff members had worked with the same families over a period of several years and had followed members through the aforementioned phases.
This type of analysis became too complex, however, to be useful in defining services.

It became obvious as we worked with the project that we had in the past done relatively little thinking about how to demonstrate that our services influence the patient's use of health care; and health care is presumably the patient's primary reason for coming to the agency. Social workers were not operating in a vacuum. Individual social workers were conscientious about communicating with the physicians or nurses who had made the referrals and about contacting other staff when the patient was self-referred. Our goals with the patients, however, were stated in terms of our social work treatment plan. Except for some of the very specific health problems presented by the patients, to an outsider not familiar with the agency, many of our services could have been provided by any other competent social agency.

Two examples of health-related social work counseling are with women who have had mastectomies and with parents of a child born with a handicap, when the handicap is such that it will require medical attention over a period of time. Both these physical conditions may have a high emotional component which can cause psychological crippling if not appropriately dealt with. A social worker who is knowledgeable about the medical and psychological aspects of specific physical conditions can help patients face the realities of the situation together with helping patients work out their feelings about the medical problem. This type of counseling is, as a rule, not as successfully accomplished by a social worker who is not familiar with the health field. In contrast, physically well patients who have used a health facility may request marital counseling or assistance with adolescent adjustment problems.
If a social service staff is limited in the health agency and if cost effectiveness is a factor in the services offered, this type of patient probably should be referred to another agency.

Because we realized that most forms of national health insurance being discussed and Health Maintenance Organizations were not considering multidisciplinary staffing in the way we have known it in recent years, we felt that it was imperative that we define social services in a way which would make their relatedness to health care readily apparent. We also felt it was important to be able to demonstrate the fiscal savings which would accrue to an HMO if social services were present.

Patient utilization of health services, specifically over-utilization, under-utilization or improper utilization, appeared to us to be the most obvious area in which to demonstrate the fiscal savings that a health agency could derive from social services. The reasons for improper utilization frequently relate to emotional or social problems which are hindering patients from using health care more effectively. It seems more sensible to offer such patients the kind of help they need rather than constantly reappointing them to medical clinics where they receive some emotional gratification from the attention they get but where their psycho-social problems may alienate the staff who must deal with them. Therefore, about 18 months ago we began making a strong effort to have other disciplines refer improper utilizers to the social worker in all clinics and facilities. Social workers have also been aggressive in making initial contacts with patients who are shown by case conferences or chart reviews to have been making poor use of the care offered to them.

In the spring of 1972 a pilot project was undertaken by one of the Emergency Room workers at Denver General Hospital. The three of us involved in this study were eager to have it carried out for somewhat different reasons.
The worker and her supervisor had been concerned for some time with the large number of patients who would come into the Emergency Room or Receiving Unit two or three times a week and whose needs were primarily psychological rather than physical. In a busy Emergency Room, these patients impede the delivery of optimum care to other individuals who require immediate treatment. There is really no way they can be confronted with their psychological problems and ongoing help given to them by the medical or nursing staffs. From an administrative perspective, I was interested in the fiscal savings which might accrue to the agency following changes made in the patient's use of the Emergency Room after social work intervention.

The population of the study consisted of 30 patients who the Emergency Room staff, mainly nurses, thought were misusing emergency services and whom they referred to social service for intervention. Each of these patients had been known to have excessively high utilization of services in the six months prior to the social service contact. On a random, alternate selection basis patients were placed in either the study group or the control group.

Twenty-six of the patients were over age forty, with the mean age of the test group being 53.8 and the control group, 53.9. Alcohol was a frequent presenting problem in both groups. Most had led marginal existences, and social service goals worked out with them related principally to living or working arrangements and non-involvement with alcohol. All patients in the control group were seen at least once by the social worker and were given any brief service deemed advisable. Patients in the test group, however, were involved in continuing casework treatment over a period of five months from the date of referral; contacts per patient averaged one interview every two weeks. The results of social service intervention with the study group were dramatic in terms of money and time savings to the agency.
Changes in the number of ambulance runs and emergency service visits utilized by the study group are presented in the tables below, along with the associated cost factors and savings. While the reduction in utilization and costs is more striking in the test group than in the control group, the total savings to the agency for ambulance runs and emergency service visits amounted to $2,784.84. The cost of the social worker's time was $988.72 ($879.78 with the test group and $108.94 with the control group). Thus, social service intervention, taking into account salary costs, saved the agency $1,796.12 in relation to the 30 patients involved in the study.

Another area of improvement for the test group was in follow-up medical appointments made for them in any of the Hospital or Neighborhood Health Clinics. The test group of patients was able to break their pattern of not keeping clinic appointments and used services in a more appropriate fashion. Prior to intervention the test group of patients missed 50 percent of follow-up clinic visits; after intervention 28 percent of such visits were missed. The positive change in keeping clinic appointments on the part of the test group represents a fiscal saving to the agency. Missed clinic appointments are expensive in terms of the time lost to staff and to other patients who could have been scheduled. They can also be expensive in terms of illness for which the patient is needing, but not receiving, treatment.
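The net-savings arithmetic above can be verified by simple addition and subtraction. The following sketch (in Python, not part of the original study) merely recombines the dollar figures reported in the text and in Tables 1 and 2:

```python
# Reconstruction of the reported cost figures; every dollar amount below
# is taken directly from the text and from Tables 1 and 2.
ambulance_savings = 1130.08     # Table 1, total cost reduction
emergency_savings = 1654.76     # Table 2, total cost reduction
total_savings = ambulance_savings + emergency_savings

# Social worker's time: test group plus control group.
worker_cost = 879.78 + 108.94

# Net saving to the agency for the 30 patients in the study.
net_savings = total_savings - worker_cost

print(f"Total savings to agency: ${total_savings:,.2f}")  # $2,784.84
print(f"Social worker cost:      ${worker_cost:,.2f}")    # $988.72
print(f"Net savings:             ${net_savings:,.2f}")    # $1,796.12
```

The three printed figures match those reported in the paragraph above, confirming that the published totals are internally consistent.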
TABLE 1
Ambulance Service Runs

            Before Intervention   After Intervention    Percent      Cost
Group        Number     Cost       Number     Cost      Reduction    Reduction
Test           70    $1,412.60       22    $  443.96       68%       $  968.64
Control        31       625.58       23       464.14       26%          161.44
Total         101    $2,038.18       45    $  908.10       55%       $1,130.08

TABLE 2
Emergency Service Visits

            Before Intervention   After Intervention    Percent      Cost
Group        Number     Cost       Number     Cost      Reduction    Reduction
Test          131    $2,643.58       60    $1,210.80       54%       $1,432.78
Control        86     1,735.48       75     1,513.50       13%          221.98
Total         217    $4,379.06      135    $2,724.30       38%       $1,654.76

In addition to demonstrating fiscal savings, the purpose of the study project was to show that the patients' social functioning could improve through social service intervention. Using a modification of the Hunt-McVickers Client Movement Scale, the social worker rated the progress made by the test group patients in the areas of emotional status and social milieu during the treatment period. The possible ratings ranged from -2 to +4, with minus results indicating no improvement. The actual ratings of the 15 patients in the test group, as designated by the social worker, ranged from -1 to +4; the average score was 1.7. It is interesting to note that in this study reducing over-utilization of medical services through social service intervention was not incompatible with improved client functioning.

The social worker involved in the study has found that, with the exception of two or three alcoholics who are brought in by the police, the patients are appropriately using clinic services rather than relying on the Emergency Room, and that they have sustained the gains made in their personal lives.

A group of social workers in the Neighborhood Health Program reviewed 26 of their cases of patients who had been over-utilizers in the six months prior to referral. This group of 26 patients involved people with more family ties, who were not as isolated as those seen in the emergency room and not as involved with alcohol.
The patients had had from 3 to 22 physician or treatment room visits, for a total of 255 visits. The visits following social service intervention dropped to 53 for the entire group. Services in response to urgent needs and casework were provided for these patients, with intervention being three months or less for all patients. The number of social service encounters ranged from 2 to 22 per individual or family.

Our focus on utilization made us aware of a problem which professionals must recognize. Social workers and/or other staff may gain satisfaction in having patients depend on them and, as a result, may encourage them in subtle ways to return either for regular check-ups or on a PRN basis. Such behavior does not help a patient to become more responsible in his use of health care. (Nor is the patient helped when the professional fails to provide sufficient information to guide him in his use of medical care services.)

Another group of social workers evaluated their services in approximately 20 areas which relate to health care or the psychosocial needs of patients. The self-evaluation showed that the largest number of encounters focused on referral for public assistance. Ranking close behind were such services as helping patients cope with the implications of their illness, working with other family members, helping patients remain in treatment when this was necessary, and providing consultation to members of other health disciplines. Because the number of encounters spent upon welfare referrals seemed disproportionately high to the supervisor of this group of workers, a series of meetings was set up with appropriate people in the Department of Public Welfare and a new system of welfare referrals was worked out jointly between our two agencies. Now it is only in an exceptional situation that the social worker becomes involved in a welfare referral for an outpatient.
This system has not been in operation long enough yet to determine from our statistics the difference it has made in the amount of time which can be given to other services.

The Cost and Function Charts, examples of which appear at the end of this paper, are a requirement of every department within Health and Hospitals. As you can see, these charts require a fairly simple definition of tasks in words which are intelligible to anyone who might be reviewing them. They are used primarily for budgeting purposes in working with the city budget office and the city council. They are set up so that each of the cost centers can show not only what staffing costs are for that unit but also so that the number of patient contacts or hours spent on behalf of patients may be totalled and then the cost per patient be taken from these figures. We have the number of encounters and the hourly figures documenting time spent with or on behalf of patients and in community activities. These figures could be in each cost center. They are not there because we have lost 26 positions since late last year and any cost breakdown based on last year would be inaccurate. These charts are particularly helpful in getting down specifically what services we are providing and not what we fantasize is going on. Because they are specific, they are useful in dealing with administrators.

PLANNING FOR THE FUTURE

The Department of Health and Hospitals, along with other health agencies, has recently had to face serious cutbacks because of reductions in federal funding. This has made it imperative for the agency to look at its over-all priorities in terms of what can be provided to patients for whom we have both legal and moral responsibility. The other development is that some of the new monies in the health field are focused on emergency services and will be given to agencies which have programs set up to utilize them properly.
As we have developed new priorities in line with the agency's focus, the description of our services is in marked contrast to those which we developed in our early planning and which I mentioned at the beginning of the paper. Some examples are:

1. Intervention in an emergency situation to provide the psychological and environmental support services which are necessary to prevent chronicity; for example, suicide attempts, the diabetic patient in coma, the obese patient in congestive heart failure.

2. Intervention with a patient who has an acute medical problem and whose fears, lack of judgment, misunderstanding or psychological problems suggest a risk of inability to benefit from medical care.

3. Intervention with the physically or mentally ill patient who, given psychological or environmental counseling, can function in the community and remain out of the hospital.

To plan financially for social services under some form of prepayment it is important to have a fairly accurate idea of the number of interviews which may be required for specific kinds of social service diagnoses. Some studies which have been done in the field have shown that close to 75% of patients seen have six or fewer interviews (Epstein, 1972). Other studies have documented that extended social service affords no greater positive changes to the patient than shorter term, more goal-focused social work, and in fact may be less beneficial (Reid & Epstein, 1972, p. 275). HEW utilization studies have shown that nationally the average number of physician visits per person per year is between 4.2 and 4.3 (U.S. National Center for Health Statistics, 1972, p. 19). A study within our agency resulted in the same figure. At present, several of our social workers are attempting with the patients they are seeing to keep useful data on the number of interviews held and on how successfully this number can be set at the time of the initial contract with the patient.
If we had documentation as to specific groups of patients who require longer or shorter periods of contact in order to resolve various problems, this would be very useful in allocating staff resources and also in planning with administrators for the amounts of social services which could be available under some form of a prepayment plan.

Measurement of outcome of social service intervention is essential. Our experience is that records contain adequate recording of the goals set with the patient, the treatment plan and ongoing casework, but that a final statement as to outcome as related to the initial problems is frequently missing or not clear. When we go to a new encounter form, outcome will be one of its designated areas; final recording should then become more explicit. The development of a peer review system probably would help by bringing about more focus on specifying the outcome of a series of encounters.

Many of our staff members have developed and carried out small studies. We have learned that with no research staff and with the demands of large and difficult caseloads it is more feasible to carry out a study on a service which is specialized in one way or another. For example, it is more difficult to design and implement a workable study in a health station, where social workers are responsible for patients who present a variety of physical and emotional ailments in a continuum from prenatal care through geriatrics, than in a hospital where a social worker may have a specialized pediatrics caseload.

In closing, I would like to emphasize that the inference that we must do utilization and cost studies implies a radical philosophical change for many of us. Many health workers have been attuned to outreach programs and to attempting to have patients utilize services of any kind within the health system to the maximum.
Now we are being asked to critically assess utilization, including factors of timing, appropriateness and benefit. Another painful issue which must be faced is that many of these studies may make us feel that our only interest is in saving money for the agency and in being able to show the appropriate people in power that we are worth the money which our salaries are costing the agency. I must point out, however, that if we do not think in practical terms about money and focus on those aspects of health care where the effects of social service intervention can be documented, not only will social workers be out of jobs but patients will not be receiving services which have been demonstrated to be helpful to them when given through an adequate social service program and by competent staff.

References

Epstein, L. Paper presented at the Medical Section Meeting of the National Association of Social Workers, Denver, September, 1972.

Reid, W. J. & Epstein, L. Task-Centered Casework. New York: Columbia University Press, 1972.

U.S. National Center for Health Statistics. Physician visits, volume and interval since last visit, U.S., 1969. Vital and Health Statistics, Series 10, No. 75. July, 1972.

DENVER DEPARTMENT OF HEALTH AND HOSPITALS
SOCIAL SERVICE DEPARTMENT
SELECTED COST AND FUNCTION CHARTS*
1973-74

Administrative Staff

Functions:
1. Program planning for Social Service Dept.
2. General program planning for Health and Hospitals
3. Payroll and central records and statistics
4. Development of in-service programs for Social Service and other staff
5. General administrative tasks
6. Coordination with other agencies
7. Develop and distribute memos and other material for entire Social Service Dept.
8. Supervise and evaluate performance of staff. Set standards for Social Service on an agency-wide basis.
Personnel:
Director #0628 150
Social Worker III #0637 150
Social Worker III #1637 150
Social Worker II #9573 132-011(006)
Clerk-Steno III #1017 150
Clerk Typist II #9871 150
Clerk Typist II #0630 150
$83,844

Surgery & Orthopedics: In- and Out-Patient

Functions:
1. Working with patients re disabilities
2. Placement of patients
3. Securing medical equipment
4. Consultation to other disciplines
5. Follow-up after discharge of selected patients

Personnel:
Social Worker II #9339 - 150
Social Worker I #0923 - 105
Caseworker I #1614 - 150
Program Aide I #9159 132-011(006)
$37,608

Medical Out-Patient

Functions:
1. Counsel patients re employment, feelings about illness, personal problems which are hampering patient's receiving full benefit from medical care
2. Screening psychiatric referrals
3. Follow-up of selected clinic patients
4. Working jointly with psychiatrists, particularly in relation to psychosomatic patients
5. Working with groups
6. Collaborative work with other agencies

Personnel:
Social Worker I #1344 - 150
$12,204

Emergency Room and Receiving Unit

Functions:
1. Work with patients who over-utilize services
2. Refer patients for housing, welfare, or other emergency services
3. Assist in nursing home placements
4. Work with alcoholic patients
5. Work with relatives of patients
6. Follow-up on patients, especially elderly or confused

Personnel:
Social Worker I #1748 133-002
Caseworker I #0924 - 150
Program Aide I #9085 132-011(006)
Social Worker I NEW
$28,800

Medical In-Patient

Functions:
1. Placement of patients
2. Working with patients re disabilities
3. Working with relatives
4. Referral to appropriate follow-up agencies
5. Work with alcoholic patients

Personnel:
Caseworker II #0665 - 150
Caseworker I #0638 - 150
Program Aide II #9392 132-011(006)
$24,264

*Position number and budget follow position classification. Total cost for center in each box.

SOCIAL SERVICE DEPARTMENT 1973-74 (See Administrative Box above)
Westside Neighborhood Health Center
(Adult Service, Pediatrics, OB-Family Planning: similar basic casework services are provided in each section)

Functions:
1. Evaluation and casework with patients about social or emotional problems as they relate to the health care plan.
2. Act as liaison between Health Center and other community agencies.
3. Evaluation and casework with parents of children who, for social or emotional reasons, are having difficulty following medical regime.
4. Consultation to other members of the health team.
5. Evaluation and counseling, when indicated, of pregnant girls and women.
6. Evaluation and follow-up of TAB patients.
7. Direct casework with children who are having difficulty with social or emotional development.
8. Involvement in cases of suspected non-accidental trauma or failure to thrive.
9. Social workers supervise family health counselors.
10. Primarily family health counselors: following up of missed appointments.
11. Primarily family health counselors: assist patients to obtain needed assistance from other social agencies.

Personnel:
CI II #9374 132-011
SWII #9599 132-011
SWII #1985 132-011
SWII #9699 132-011 (floating position based at Center)
SWI #1137 132-011
PAII #9436 132-011
PAII #8931 132-011
PAI #9398 132-011
NAI #9118 132-011
$76,608

Westwood Health Station

Functions:
1. Relate to all sections, i.e., Adult, Pediatrics, OB and Family Planning.
2. Evaluation and counseling with patients with social or emotional problems which relate to health care plan.
3. Perform community organization as it pertains to health care needs.
4. Social worker supervises family health counselors.
5. Assist patients in obtaining needed services from appropriate agencies.
6. Follow-up missed appointments.
7. Conduct groups.

Personnel:
SWII #0611 132-011
PAII #8928 132-011
PAII #8927 132-011
$26,040

Mariposa Health Station

Functions: See Westwood Health Station above.

Personnel:
SWII #9881 132-011
PAII #1677 132-011
PAI #9450 132-011
$24,165
Part III

THE PROBLEM-ORIENTED RECORD: UTILIZATION IN A COMMUNITY HEALTH PROGRAM
Joan McCracken, R.N.

FACILITATING COMMUNICATION: SOCIAL WORK AND THE PROBLEM ORIENTED RECORD SYSTEM
Beatrice Phillips, M.S.

MEDICAL AUDIT AND MEDICAL SOCIAL SERVICES
Rodger Shepherd, M.D.

The Problem-Oriented Record: Utilization in a Community Health Program

JOAN McCRACKEN

I am sure that many of you upon reading a list of the faculty may have been a little surprised to see a director of a family planning clinic from Billings, Montana listed right along with professors from big universities and consultants and directors from large programs in big cities. And you may even now wonder "how come?" . . . for we have no medical schools, no school of social work, no teaching hospital, and as far as I know, no articles of any significance appear in any professional journals authored by any Montanan.

Montana is a very large state, fourth in size right behind California. The population is around 700,000; Billings is the largest city with 75,000 people. Despite producing a fine statesman like Mike Mansfield, Montana is a very conservative state. People are very independent and self-sufficient; change occurs very slowly if at all. Public health and social work have a very low priority; we have no public health department, no public health nurses, fewer than 15 MSW degree social workers . . . and it is in this atmosphere that we have implemented a problem oriented record system to take care of the medical and social needs of our family planning clinic patients.

In September, 1969, through a very strange set of circumstances, I found myself, a cardiac care unit nurse and the mother of five children, setting up a family planning program. At first it was to be only a part-time endeavor, an educational program . . . as I was assured by the entire medical community that they took care of all the medical needs in Billings and there was absolutely no need for medical services.
Within three weeks after talking to many women I lost a lot of my naivete. Many families had no physician, many women lacked the basic information to even ask the right questions, many women had no money . . . and surprise of all surprises, only three of the Ob-Gyn men in Billings, because of personal religious beliefs, were willing to discuss family planning with their patients . . . and then only if they were married. It was very obvious that we needed a clinic.

And so on January 6, 1970, we opened our doors and held our first clinic in borrowed space, manned with volunteers, stocked with drug samples and discarded equipment, staffed with an orthopedist who had done some ob-gyn in World War II and had a lot of interest in the population problem. That night six patients came and received a little education in contraceptive methods, a blood pressure check, a pap smear, a pelvic exam and a method of contraception. And so we started . . . having clinics twice a month, hoping to serve at least 150 women a year.

And it was fun. Our patients, being few in number, received a lot of support . . . we had time to spend, hours if needed, going over and over again basic instructions. I knew my patients well; I even knew when all of my patients were expecting their next period. At that time we used a very simple chart . . . borrowed from Planned Parenthood of Colorado . . . and it seemed to meet all of our needs. A volunteer nurse who had plenty of time took the history, checked the appropriate boxes and tried to remember other items of significance to tell the doctor. Why did we ask these particular questions? Because Planned Parenthood of Denver asked them.

And so we grew . . . from six patients, a volunteer nurse, an orthopedist who gave of his time, two clinics a month, and borrowed charts to a sense of satisfaction that we were doing a good job.

Somewhere along the way, problems began cropping up.
We had more patients than we had thought we would have. We needed more clinics. The patients started mentioning more problems than we had space on the chart to check: marital problems, social problems, complaints of being tired and of aches and pains, lumps in their breasts, and suicide attempts. By this time we had several doctors with various specialties volunteering their time . . . and one of my major activities was scheduling the patients to match individual needs with the doctor's interests. The surgeon saw the patients with lumps and pains, the internist the ones with headaches and fatigue, the pediatrician the ones who had problems with their children, and so it went. Sometimes notes were made on the chart, either in the appropriate box or under other comments, and we tried hard to remember what happened to the patient who did have a lump biopsied, or who had been evicted, or whose husband had left her.

And then we ran out of the first 150 charts we had printed. So we decided to make some changes. We consulted the physicians working in our clinics and asked them for their comments. Each added something . . . more space, some deletions, some added tests, and on and on. We then came up with our second generation chart and ordered 1,000.

We got busier and busier. We trained more volunteers, we got a small grant, we hired some staff, we had clinics once a week. We used our new charts, and we were satisfied that we were doing a better job than ever . . . because now we knew who had protein or glucose in her urine, and who was anemic. Our physicians seemed happy, and even the medical community decided we were filling a need and were happy with the referrals we sent. And as everyone became happier and happier with the small segment of our program with which they were dealing, I became more and more frustrated. I couldn't keep track of what was happening to my patients and their problems, let alone their menstrual periods.
My nurses seemed to emphasize different things in their history taking, some tests weren't getting done, some were getting done twice, and I had no idea if the physician who was treating the anemic patient with iron pills was getting any better results than the one who treated anemia with diet . . . or if the patient could afford the food he advised her to eat. I got all sorts of calls about all sorts of problems . . . I would try to jot things down on the charts or at least report it to the physician. Sometimes there didn't seem to be an appropriate space. We were not dictating to the chart . . . it was dictating to us.

And then a very wonderful coincidence occurred. We were starting to run low on charts again and we had another chance to revise them . . . and the name of Laurence Weed cropped up again. I say again, because my husband took a pediatric residency in Cleveland many years ago and every Friday afternoon they had a tea party with Dr. Weed and we wives tried to outdo each other in the refreshment area. I remember that everyone looked forward to those tea parties, but since it was during my housekeeping and cooking era, I chalked it off to my good recipes. And then periodically I heard my husband speak of the Problem-Oriented Record (POR) with a great deal of enthusiasm but, like my patients in the beginning, I didn't have enough information to ask the right questions. But my husband sensed my growing frustrations and said from time to time, "You really should look into a whole new way of thinking, a whole new system of record keeping." I was shocked and my hackles went way up. Didn't he realize that the doctors liked the present records? Didn't he know that what worked well in a hospital setting didn't work in a clinic, a specialty clinic at that . . . and besides, every other family planning clinic in the nation used records similar to the ones we were using. I had read Cross and Bjorn's book and loved it . . . every word of it . . .
I would go to them for my personal medical care if I lived in the East . . . but I saw little correlation between what they were doing and what I was doing. They had a computer . . . they took care of their patients' every need . . . there were only two of them and obviously they thought exactly alike . . . and they were physicians who set the pace and hired people to follow their instructions.

But my husband persisted . . . and so began a long process. We talked of little else. We spent our time developing flow sheets. I read Weed instead of recipes. The house was strewn with papers. I dreamt POR, we talked POR, we argued POR, and always the question, will it work? I really didn't know if it would work, but last November, I made my one and only autocratic executive decision. Without asking their opinions or thoughts, I announced to my staff that we were going to use a problem-oriented record beginning December 1, that we would all be writing on the same chart, and that we had a lot of work to do, a lot of reading, thinking, planning. Little did they know when they agreeably nodded their heads what lay ahead. The only comment at that time was from my social worker, who said that social workers have their own records and they were always kept separate.

I started out by going to a very fine obstetrician and a good internist and asked them to list those conditions we really could not afford to miss in prescribing an oral contraceptive or inserting an IUD. At the same time, I worked with a psychiatrist and a social worker in the area of social concerns. And this area proved a little more difficult. Our staff, for the first time, sat down together and started evaluating what we were doing . . . how important was education, exactly what should our patients be taught, by what means, and when? We talked about end results; we talked about parameters we needed in monitoring the course of our patients. How did these meetings go? Terribly.
We found we all had very different ideas and we were in consensus about only one item. Each person felt she was doing the best job of interviewing and could learn more about the patient and take better care of her if she did it herself. How egocentric we all are! Gradually, though, we made some decisions about our data base, our flow sheet, our patient education. We spent several hours every week learning how to use our new tool. We printed up 100 copies and decided to try it out.

The first week in December was glum. Even the Christmas music and holiday spirit could not dull the sharp gripes and complaints of the staff. The new system took too long, there was too much writing, there was too much thinking, we were finding too many problems. My staff tried to cheat and would sneak out the old charts. I hid them. My social worker kept a double set of records . . . her own detailed ones plus a curt note on the chart. I asked her to write more lengthy notes and she finally decided it was too much to write everything twice. I heard one of the staff ask a patient, "I bet you don't like filling that out." I held my breath. The patient replied, "I like it better . . . I can think . . . I always get flustered when a doctor or nurse asks me a bunch of questions." The patients did like it. They loved participating in their own care and planning.

What about the physicians . . . Like all new things, it was viewed in many ways . . . as a new toy, as a pain in the neck, as a refreshing change. I had given Cross and Bjorn's book to the physicians to read and asked them to try the new charting for at least three months. It has been accepted by all . . . and I am happy to tell you that just last week I participated in a community-wide POR workshop, as both hospitals are starting to problem-orient their charts.