Another facet of the “college isn’t really worth it” mindset has shown up in an op-ed by Molly Worthen in the New York Times. The op-ed itself complains that assessments of learning in colleges are, as implemented at present, a waste of time and effort. It is in the comments that you see a lot of people arguing quite strongly that some proof of learning is of value to those paying the bill. The irony is that these two sides are probably not disagreeing.
Years ago, the assessment was pretty obvious. Got an “A” in a course–you demonstrated mastery of that subject. Got an “F”? You didn’t. Note this didn’t necessarily measure learning in that class–if you waltzed in having already passed such a course elsewhere, you might have learned nothing, while the F student might have learned a lot relative to a poor base. While on occasion the instructor caught flak for doing a poor job, generally it was the student who discovered he or she was not up to the task.
One of the frustrations students sometimes have is a feeling that their perception of the quality of instruction is ignored. Some will complain that faculty were promoted or tenured, or weren’t fired, despite scathing reviews in some form of student evaluation of a course (here at CU these are faculty course questionnaires, or FCQs, a term we’ll use as a stand-in for all the variants out there).
There is some truth to this. Faculty at a tier 1 research university are almost never denied tenure because a course was poorly taught. And unless it becomes a tradition, it will rarely affect a faculty member’s salary. Why is this? After all, teaching is a significant part of the job. And so what impact, if any, do these surveys have?
The first problem is that things like FCQs are only one rather imperfect measure of quality of instruction. They are, for instance, easily manipulated by giving higher grades (the most sadistic trick is to give high grades on a midterm, then the FCQ is administered before the final, where the instructor lowers the boom). At CU these questionnaires are administered the last couple weeks of class, when students are most stressed about completing the course with a good grade, so how a course fits in with the general level of stress can color evaluations. Occasionally even the best instructor will get sideways with a class, perhaps for a joke that falls flat or because of some misbehavior from a student that leads to disharmony. Students’ self-perception of the fraction of material they have mastered fits into this. And for non-major courses, there is much less interest in mastering the material, so a poorly taught intro non-majors course might get high FCQs because it was easy (this is not as common for majors courses, where students tend to recognize that there is stuff they need to learn that didn’t get taught).
What FCQs don’t measure is how much students learned, and how capable they are of completing tasks taught in the course. It is possible for an ambitious class to get low FCQs despite students actually knowing more than those completing a less ambitious section of the same course. One approach to measuring what students learned is a concept inventory: a set of questions, usually given at the start and end of a class, that reflects understanding of the key concepts taught. If students don’t improve, poor teaching; if they do, better teaching. These work really well in courses with very fixed academic goals, like intro math and physics, but creating such inventories is difficult and time consuming; for courses like intro geology, whose goals might vary somewhat between instructors, such inventories can only give an incomplete picture of the success of instruction.
A more common attempt to gauge instruction quality is peer review–having other faculty come in and observe the class and, ideally, interview the students. This is most common for pre-tenure professors, where a lot of mentoring is possible. Yet teaching might seem quite good to peers but lousy to students, and observing one or two classes will often reveal only the most flamboyant of transgressions.
Ideally you’d like to see what students retain 4 or 5 years after completing a course. This isn’t ever done. GG’s one experience was encountering a student in a science museum who had taken his intro course. Asking him if the course helped him at all working in a science museum, the answer was “No, not at all.” Evidently for that student, that course was a disaster.
So FCQs maybe aren’t a great measure of teaching, but then what good are they?
No, not changes to the college, changes by it. For years and years now, higher education has been viewed as the perverter of young minds even as it is lauded as the gateway to upward mobility. Although this is usually portrayed as fine upstanding youth becoming leftist socialists, some of us remember the preppie phase where leftist parents lamented the materialistic impulses of their college offspring.
Does this have any meaning? Is it that the teachers at colleges and universities are brainwashing students? Does this reflect a cocoon where disagreement with the party line is squashed?
There are of course many divergences of current college students from GG’s time, but some continue to surprise a little. One such is that it seems nearly all college students don’t keep their textbooks.
We aren’t talking about the texts from breadth requirement courses (though GG kept his texts from those too), we are talking about the textbooks used in upper division major courses. One might think that these could prove helpful references in professional life, but many students sell their texts back to the bookstore. The odd thing is that many students want a textbook when taking a class (GG had dropped requiring a text in a 1000-level class years ago and students kept asking for a text, so he listed readings to go with the web materials provided). Publishers have noticed and now offer “rental” versions of e-book versions of texts that self-destruct after the course is over; similarly, campus bookstores are now aggressively seeking and marketing used books (and harassing faculty to let them know of future text requirements so they can try to only repurchase the texts that they might need).
This has to be more than simple economics–yes, books can be expensive, but they are still a fraction of the cost of the course and can still provide education long after the course is completed. Is this a case of penny-wise and pound-foolish? Or is it some feeling among modern students that the internet will always have what they need? Or that a textbook will go out of date in short order?
As a textbook coauthor (well, GG wrote software), GG is curious. Declining sales of texts will mean there will be fewer texts going forward and they will cost more, which presumably will result in fewer sales (the textbook death spiral).
Students often wonder about the cost of textbooks (well, so do authors and professors, to be honest). In some ways this parallels the arguments over the value of academic journal publishers. Texts usually are reviewed (first as a proposal, again when nearly completed); they have copy editors and graphic designers to try to make the book pleasant to read. And the potential financial reward for some flavors of text (mainly the intro texts–upper division geology texts are not huge money makers) encourages authors to work to make material more accessible and better illustrated than their own course notes. So there is real value added, though there is a question of that value versus the cost.
Maybe it is just that modern students are wiser. Of the texts GG has saved, probably half have never been reopened, and most of the rest only a couple more times. But there are a few that are now falling apart from repeated use. Maybe modern students aren’t simply cheap; maybe they are better at divining which books will be the go-to ones.
No, not the kind that anti-aircraft batteries try to shoot down. Why? A story in Boulder’s Daily Camera describes the current crop of university students and their parents, referring to the parents as “stealth-fighter parents.” It is hard to know who comes off worse in this story. Let’s start with the parents, described thusly:
These so-called stealth-fighter parents don’t just hover, they are directly involved in their child’s life and schoolwork, even in college.
They are expert researchers and will show up to meetings with loads of data. They’ll dig deep into a professor’s background.
“That parent will know what your dissertation was and they’re going to call you out if something is wrong in the classroom with their child and expect you to fix it,” Gonzales said.
They’re less likely to argue with a professor for hours, and instead will find a loophole or confront that person’s supervisor.
Some of these parents will skip the discussion phase and take action immediately, such as filing a lawsuit or pulling their student out of school.
Dear stealth-fighter parents: do not try to pull this crap with GG. Any issues your progeny have with professors belong between the student and the professors. You say you paid good money and so have a right to carp at professors? No, you made an investment in your kids–arguably you gave your kids the money. They are the ones who have a right to carp at professors. You can try to prompt them with arguments and data, but you know what? Remember that your goal as a parent is to have your kids become self-sufficient adults. Just to be clear, GG is paying those same sky-high tuition bills for his kids and has not even thought of contacting a professor at those schools.
The good news is that GG has not seen a shred of this behavior. The only time GG has spoken with parents of students has been at commencement, and parents have uniformly been gracious and complimentary.
OK, so that’s the parents. Just how does this article describe the students?
[M]illennials and Generation Z students believe they are special and that their feelings matter above all else. They have increasingly short attention spans.
They have high levels of anxiety and depression, in part because their parents are always looking over their shoulders. They grew up in an era of terrorism and intense economic pressure, and have likely never experienced personal failure.
They’re entrepreneurial and grew up surrounded by ethnic and racial diversity. They’re narcissistic and tend to have an inflated sense of self.
And then there are the selfies, Snapchat stories and other means of getting immediate validation from friends and peers.
Wow. This is how colleges and universities view their customers? And advice later is that we need to cater to them?
Well, maybe that isn’t quite the advice; maybe the message is simply that this is what you have to work with. It isn’t entirely clear.
How many college students without military service have ever faced “personal failure”? Is that so new? Are high levels of anxiety new? (GG has had students in his office pleading that they “just can’t fail this course or will lose their scholarship/be kicked out of school/lose their visa” pretty continuously his whole career.)
Some of this does resonate. Students are disinclined to memorize anything; it takes real work to (1) convince them that some things are worth memorizing (not a lot, just some things) and (2) get them to demonstrate that they have memorized anything. There has been a steady drumbeat of requests for “how am I doing in this class” that maybe has increased over the past decade; it is hard to say. Student writings are often haphazard and loosely linked thoughts that border on stream-of-consciousness. Higher reasoning (indeed, the ability to make or attack a logically-based argument) seems to have lost out to simpleminded “gotcha” logic. But none of these have been large game-changers; students have always been lacking in some ways and part of our job is to recognize deficiencies and develop strategies to correct them.
GG’s take is that these kinds of generational summaries are exaggerations–at least that is the hope. Making such broad-brush stereotypes might insult some and suggest to others that they could behave differently and get away with it. For traditional students just coming from high school, college is and has been a place of transition from dependency to independence. The old saw is that respect is not given, it is earned, and college is a place to both learn how to earn respect and to succeed at it. Whining and letting others fight your battles does not earn respect. Parents should be there for emotional and often financial support, but they should not be students’ advocates in the school.
And students? Hey, it isn’t as bad as you think. The unemployment rate for folks who get through college is under half that of those who never get a degree. Buck up, and work on clever retorts the next time somebody shatters your “special feelings.”
G. K. Gilbert was arguably the best geologist America has produced. He received a classical education rich in languages and math and physics but poorer in other sciences. Much later, in advocating a particular style of intellectual attack suitable for geology, he reflected on college courses. “Gilbert [said] in an address, 20 years [after he graduated], that the important thing is to train scientists rather than to teach science, and that the ‘practical questions for the teacher are, whether it is possible by training to improve the guessing faculty, and if so, how it is to be done;’ thus implying not so much that, in his own experience, accurate observation is easy, but that successful guessing is difficult. It must also have been not his professor’s idea but Gilbert’s, prompted perhaps by a remembrance of an over-insistence on the names of things, that the content of a subject is often presented so abundantly in college teaching as to obstruct the communication of its essence, and that the teacher ‘might do better to contract the phenomenal and to enlarge the logical side of his subject, so as to dwell on the philosophy of the science rather than on its material.'” (Wm. Morris Davis, National Academy Memoir on Gilbert, 1928).
This comes up in part because of a recent New York Times op-ed in favor of lecturing that rather unfortunately seems to pit student-centered learning as practiced in the sciences against lecture-based learning in the humanities. Molly Worthen, in this column, argues that “Lectures are essential for teaching the humanities’ most basic skills: comprehension and reasoning, skills whose value extends beyond the classroom to the essential demands of working life and citizenship.” One has to wonder, are these skills limited to the humanities? Shouldn’t scientists also be learning how to comprehend and reason thoroughly?
This is the time of year when tenure and promotion cases start getting under way, so this seems a timely moment to consider the question. The answer at the most basic level is twofold: (1) there is more to the job than teaching and (2) evaluating teaching is difficult. Many times this is simply misinterpreted as “teaching doesn’t matter,” which, while perhaps true at some institutions, is not really true in general.
Take the second part first. How do you measure success in classroom teaching? Most universities have relied on student surveys at the close of class. These are notoriously weak measures. For instance, it is trivial to skew the numbers by providing students higher midterm grades than final grades. These surveys usually coincide with the peak of student anxiety over their grades and so tend to accumulate a fair bit of vitriol.
A second measure that is reasonably common is peer review of teaching. To do this well would require a lot of effort on the part of the peer reviewer; frequently this is little more than examining a syllabus and watching a lecture or two. Even then, the perspective of a faculty member will often differ greatly from a student’s. The Feynman Lectures on Physics was once described as a wonderful textbook so long as you already understood physics.
Efforts are being made, particularly in core-type classes, to identify material students should have mastered and have a standard assessment of their success. This is, for instance, a key part of the Science Education Initiative’s program here at CU, and this can help separate successful from unsuccessful approaches for courses where the material is clearly documented and the assessment is well-vetted. Even here, though, one can ask what the goal should be: is it to assure that the greatest number meet the lowest bar? That the class average be as high as possible? That the most successful are engaged and succeed beyond their initial position? Toss in the reality that developing valid course assessments is hard, plus any variation in material covered between classes of the same course, and you find that this isn’t going to be a frequently useful tool.
But let’s return to that first part: classroom teaching is only part of the job.