Southern Appalachian Digital Collections


The Reporter, July 1977

The Reporter is a publication produced by Western Carolina University featuring news, events, and campus community updates for faculty and staff. The publication began in August of 1970 and continues digitally today.
A Weekly Newsletter for the Faculty and Staff of Western Carolina University
Cullowhee, North Carolina
July 20, 1977

THE COLLEGE BOARD
From both sides of the fence
by Robert E. Stoltz

Dr. Robert E. Stoltz was for many years on the staff of the College Entrance Examination Board. He served as director of their Southern Regional Office, Vice-President, South, and Vice-President, Special Field Services. Prior to joining the Board he was chairman of the Psychology Department at Southern Methodist University. In 1975, he came to Western Carolina University as Vice-Chancellor for Academic Affairs. An active industrial consultant, his clients have included law enforcement agencies, advertising firms, the Dallas Cowboys, and members of the National Hockey League.

For about ten years, I was a staff member of the College Entrance Examination Board. I grew to have a very high regard for the staff and the services of the Board. It was quite clear to me that what we were about was sound, humane, and much needed. Regrettably, I learned this beneficent view is not shared by everyone outside the Board. Whether I was at a conference or a cocktail party, the passenger in a cab, or the holder of a tourist ticket, I generally had to field the same set of not always friendly questions about the Board and its works. I began to have doubts. Were the folks on the campuses really using those fine tools the way we said they would — or should? Perhaps the answer to why we were not always respected, if not loved, lies there.

A couple of years ago, this morbid concern with the real world got the best of me. I began to yearn for the quiet, ordered world of the campus. After setting this yearning aside from time to time, I found a situation which a friend of mine described as "just the right combination of challenge and opportunity" to be fascinating to an academic administrator.
Shortly thereafter, I was offered the position of Vice-Chancellor for Academic Affairs at Western Carolina University, and I accepted. For two years I have been Vice-Chancelloring, if that is a legitimate word. The switch was very real, and I am not perfectly adjusted yet. The campuses today are not perfectly tranquil; everything isn't orderly, but there is no doubt that the academic world as I find it is far more rapidly changing and exciting than the academic world as I left it. But, to my disappointment, some of the same questions have followed me — questions about the Board and its primary product, the Scholastic Aptitude Test (SAT). The principal blessing now is that my answers have the ring of solidity to them. When I say someone is using the SAT a particular way, I know intimately at least one place that is doing just that. As a change for me, if not for you, I'd like to field some of those old SAT questions but back them up with what one institution is doing — admittedly, an institution whose chief academic officer knows this particular tool of the trade intimately.

One of my old favorites was the question, "...but is the SAT any good?" Years ago I would have ducked this one, or shuffled to one side, and given one of several technical responses. Today I can simply say, unequivocally, "Yes, next question." Academic administrators frequently have to cut through the smoke and get to the fire; Board staff have to be accurate and detailed and catch the next plane. The SAT is a good test. It has a long and sound history, with some of the very best psychometric talent brought to bear on its development and validation. It isn't perfect, but no test instrument is. What we do know is that it is extremely carefully developed, checked, and dissected each year. We not only know where it is weak, we know a good bit about just how weak it is in those areas.
On a given day, if I could spare the time, I would wish that half the measures I work with were as carefully developed and understood as the SAT scores.

Another dandy used to be, "...but what does the test measure?" As staff of the Board, I might have given the standard Student Information Book description. This is always safe. It exists in print with a sanctity second only to Holy Writ. Or, if your ego needed a trip, you gave them the graduate-school factor-analysis buck and wing. In either case, the net result was that internally you felt adequate and the inquirer was sorry he had brought it up. Today's answer, hammered out on the anvil of admissions committee meetings, is that we aren't quite sure. We can specify rather well the kinds of items that work. As one tries to explain them, they seem to have a lot to do with knowing the meaning of words and reasoning in one instance, and with being able to think and reason with numbers in the other case. For ease in referring to them, one can call them Verbal and Mathematical aptitude. What is most important to know is that scores derived from these collections of items tend to be highly related to the ability to learn things in a college situation, particularly things that have to do with verbal expressions or mathematical relationships. As a corollary, the scores seem to have little to do with being voted homecoming queen, catching the long bomb, or getting the biggest job offer in the spring. Surprisingly, the scores are only modestly related to running away with some of the top fellowships to graduate school — here, as elsewhere, aptitude just provides the base; it's what you build on that base that really counts.

"What does it predict?" Grades in college is one of the best and still neatest answers. To some this begs the next question, "but grades aren't everything." Let me start with the latter one.
Grades may not be everything, but if you want to do some of the other college-type activities, you had better have good grades if you want to be around to do them. Grades tend to set floors for students, not ceilings. The people with the highest grades don't necessarily make the highest salaries, but they do tend to get into certain professional tracks, such as medicine, where grades will carry a lot of the weight in deciding who will see medical school from the inside. Sometimes I used to feel apologetic that the SAT didn't do more than predict grades, but not today. That is quite enough. It is really quite impressive that it does it as well as it does for most students. This is no small accomplishment given the variety of programs the typical university provides. In addition, that little characteristic, grade predicting, turns out to be extremely valuable in many respects. Obviously, it can help in deciding whether or not to admit a student. But that is not where the SAT ends for most colleges. It provides additional help in determining how a student enters and how he might best proceed in the institution. For every student the test result is important in determining where and how he begins with us.

For example, Western Carolina has a special program for applicants who fall outside the range in which we normally admit students. Programs of this sort are not rare; many campuses have them. Ours is called the PREP program. Each year a fixed number of students with weak or mixed records are admitted. Students in this program get special advising, counseling, and start in courses designed to permit a somewhat different entry path into college-level work. Admitting some students into this special program each summer makes sense to us. First, we don't believe completely in either the SAT scores or the High School Rank in Class alone. We always, always look at the two in combination along with anything else we can learn about the prospective student.
Sometimes the picture isn't very clear. One index may be favorable, another unfavorable. Or both may put him or her in the "we aren't sure" category. When the best information you have says that these are students for whom the regular route will be a risky business, then it makes sense to design a different route. Keep in mind, the standards for graduation haven't changed. It still takes a particular combination of hours and grade-points to get the university stamp of approval. What we can say is that not everyone has to get there by following the same route. Today more institutions are taking this point of view — don't judge us by the raw material, judge us by our products. So long as this is part of our institutional outlook, then test scores at entry are guides and aids and not blind gatekeepers.

Probably one of the tougher questions is the one that starts, "...how do you interpret the SAT?" Any test score is a joint function of at least the test, its administration, and the characteristics of the person taking the test. Rarely, in everyday situations, will we know about everything that makes up a score. So we resort to simple expedients, but expedients that experience has shown work and fit the basic facts. We start by requiring that the test be administered in a standard way, so that source of variability can be reduced. If the tests themselves are the same or equivalent for all, we are left with personal characteristics as the major source of uncontrolled variation. We explore that by looking at the scores in a relativistic way. The SAT score scale itself, from 200 to 800 on each scale, is a relative scale going back to a normative group of many years ago. Some people believe the score of 500 is "average," although I have yet to see any group of college-bound students in one year make the average score. The 500 is a theoretical point defined in such a way that the test construction pros know where it is.
In practical terms we, the university users, are interested in knowing whether a given student or group of students is above or below another reference point that makes sense to our institution. For example, a given score may be described as high or low relative not to the magic "500," but to what our typical freshman over the last several years might have made. Or, the year following our introduction of a new English sequence into the curriculum, a score might look high or low relative to the scores of those who completed this new sequence successfully.

In certain interpretation cases we are most interested in the score relative to the kinds of scores we typically get from that high school. A case in point occurred last year when a young lady applied to us with a score that barely made it to the middle of the SAT scale. But relative to three years' worth of students we had seen from that school in that locality, her score would have put her out the top of the scale. We took her, and she is doing quite well but having to fight for every inch in some subjects. She will make it; we will be proud of her, and society can expect much in the way of contributions from her. We would be delighted to have more like her. So don't ever think the score alone is the story... it isn't. It's the score along with the record of what else the person has done and against what odds.

The SAT doesn't stop being useful once a student has been admitted or even after being enrolled. Two more uses of the SAT are quite common on campus. Without doubt, more use is made of the SAT in advising students than is made in deciding who will be offered tickets of admission. A good advisor, aware of what the SAT can and cannot tell, will use this information to design a program that will give the student the optimum chance of becoming what he wishes to become. In some cases the SAT scores can help when the student needs to review an earlier career plan.
For example, one can plan on becoming an engineer, but if the grades in math are low and the SAT math scores are low as well, then some shift of goals, without suffering complete loss of time and credit, might be in order.

The second major use is when we have to decide whether or not to readmit a student who has gotten into academic difficulty. This particular type of case constitutes one of my major uses of the SAT. After a student has been academically suspended, and after a faculty committee may have chosen to deny readmission, some of these cases come directly to my office as appeals of these lower decisions. If there were no reason for the SAT on our campus other than this, this would be enough to argue for its continued use. In reviewing these cases I have to decide whether the potential is there, why it might not have been fully utilized, why it might not emerge under old conditions but could under new, or whether new applications of time and energy by the student are likely to make a difference. The SAT gives one a modest degree of confidence regarding a number of these issues. But that tiny amount of confidence is what often makes the difference in helping me decide who can come back, or when to do what. Maybe the SAT is not perfect, but it's good enough not to ignore.

"...but why do you need the SAT? Aren't the high school grades enough?" There are two levels at which you can field this. First, at the admissions level, the answer is, "We need it because all schools and the students in those schools aren't alike and their programs aren't alike." This is a perfectly good answer and hasn't lost any of its truth over the years. We need an externally administered, independently arrived at estimate of a prospective student's potential. We need it precisely because we have the high school grades. The SAT helps us make better use of these grades by adjusting them for differences in who the schools serve and how they have served them.
There is another usage level, that of the academic administrator, whose view of things we gave only limited attention to within the old Board. Administrators in offices such as mine need the College Board scores to know how the institution is doing and how its resources might best be allocated. As always, the SAT isn't a perfect single measure of how to do these things, but the information the SAT provides often helps formulate approaches to these matters. For example, I need to know how many staff members will be needed in Honors English, or how many tutors we might need in the fall term, or how much value to place on the outcomes of analyses of a particular method of teaching history, or the worth of a new approach with students from a different cultural background. The SAT can be anchored back to a population of students of several years ago, and records can tell me if, as a consequence of our plans and policies, we are drifting away from some reasonable standards of performance. This last year, for example, it was useful to know that while the SAT scores nationally continued to drop, ours at Western increased slightly, even though we were somewhat more restrictive in whom we offered regular admission. In short, we changed in an institutionally desirable direction without dropping in enrollment and with gains in regional service. Too often the administrator can't get this kind of feedback, not because he or she doesn't want it, but because his yardsticks keep changing the size of their inch. The relative stability of the SAT scores and scale makes it a very useful benchmark.

"Is the SAT biased?" Answers to this range all the way from "Yes, but not as biased as some others" to "No." One I particularly liked, because it was technically very good and always left the questioner headed for another conversation, was, "Well, you see, there are at least seven different kinds of test bias. Which did you have in mind?"
Today the practical-answer approach has more appeal. It still depends a great deal on what your particular notion of bias might be, but if one looks at the variety of criteria against which the test has been used and selects those most commonly employed, he can conclude that the test is sensitive to group differences, but still very useful. For example, it is true that blacks, farm children, southern whites, and children of some faculty do not do as well on the test as some other groups. But it is probably also true that exposure to some rich and varied educational experiences, plus the positive support and follow-through some parents provide, is also a little short for these same groups. What is important to establish is not the degree to which the SAT is or is not biased, but the extent to which the SAT scores, as they will probably be used in a given situation, are an aid in making educationally helpful decisions for students. If the instrument serves well under a definition of that sort, then the heuristic question of its general freedom from bias becomes increasingly less significant.

Let me illustrate. Perhaps you didn't know that during the same years when tests such as the SAT were used to keep black youngsters out of college, the same test was being used to identify black youngsters with promise who would be given scholarships to go north to enroll in colleges. Was the test biased? I rather suspect this question is secondary to what was done with the data from the instrument, biased or not.

Finally, there was the conversation that always seemed to end, "...but what can I do to change his score?" The best answer to this one is the same one that the little old man on Seventh Avenue in New York City gave to the young fellow with the violin case who asked how to get to Carnegie Hall. "Practice, my boy, practice," the old fellow replied. The same thing goes for the SAT.
Get all the practice you can reading widely, working mathematical problems, and improving your ability to deal with questions under timed conditions. Usually the question is asked at a time when little can be done, such as the spring of the senior year of high school. At certain stages the result of all this practice may be a change in score, but probably very little, when the best educational answer for the youngster might involve doing something other than trying to get the score up. If time is short, but not too short, say the beginning of the junior year of high school, then look for some other options. Look for the collegiate-level institution which is ready to begin where your son or daughter actually is, not where we parents want them to be. The crash course just to get admitted isn't likely to be the skill developer that will help on the long pull when away from the family. The best advice is to start early, or whenever you start, keep at it.

As I warned you, the questions aren't all that new. Perhaps the answers I give aren't that much different either, but they are dispensed now with more conviction. Colleges are probably more aware of the limitations of the College Board tools than we used to think. We have less awe about them, but more respect for them than the makers, I think — probably because we know how much it is to ask of one three-hour segment out of a lifetime. We know far better than I once believed just how difficult and complex is the task we are asking this paper instrument to predict. And last, we are cautious because we are probably more aware of the day-to-day decisions institutions and their students make that can make these predictions come true or collapse.

From both sides of the fence the old College Board looks fairly solid, a knothole here and there, needing some new paint and less whitewash, but basically solid and sound. It does a rather good job of measuring a rather singular aspect of behavior.
It doesn't pretend to tell the whole story — and we users know that. When your time comes to deal with it, step right up. You can be sure that those who ask you to take it treat it with caution and care.

Reprinted from: The Record for Durham Academy Alumni and Friends; June 1977; Volume 3, No. 1