The New York Times has swung its spotlight on Boulder once again, but this time with the somewhat implausible notion that CU is leading the way to end college football. The motivation for the piece is a pair of votes by two regents against approving the contract for a new football coach–not because of any objection to the coach himself, but to protest supporting a game that damages the brains of its players.
This arguably is the third strike against football here at CU, but don’t expect any changes. First there was a series of recruiting scandals that took out most of the university administration; then there is the continuing uproar over the amount of money collected and spent on football and how little of it goes to benefit players; and now we are recognizing the incongruity of higher education being the site of systematic brain damage leading to early death or suicide. Add them all up and you’d think this would be the death knell for the sport at CU. Don’t hold your breath (though it would probably end the college admissions scams we’ve heard so much about recently)…
We’re number one according to US News and World Report! We in this case being Geosciences at the University of Colorado Boulder. Woohoo! Time for a press release!
We’re tied for number 19 according to US News and World Report!? How did that happen? What does it all MEAN?
In light of the unfolding college admissions scandal, let’s be clear: these rankings mean nothing. To learn why, here is a brief reminder of what they really are.
CU is number one as a university because it publishes a lot of earth science papers that get cited a lot. This has nothing whatsoever to do with undergraduate education and arguably is not a very solid predictor of a great graduate program. It reflects a large and productive research program (the numbers are not normalized by numbers of scientists).
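The normalization point is worth making concrete. As a sketch with purely hypothetical numbers (these are not actual citation or faculty figures for any program), a big program can dominate on raw citation totals while being less productive per person:

```python
# Hypothetical figures only, to illustrate why un-normalized citation
# counts reward program size rather than per-scientist productivity.
big_dept = {"faculty": 120, "citations": 60000}
small_dept = {"faculty": 20, "citations": 15000}

def per_capita(dept):
    """Citations per faculty member: a size-normalized comparison."""
    return dept["citations"] / dept["faculty"]

# Raw totals: the big program "wins" 60000 to 15000.
# Per capita: 500 vs 750 -- the small program out-produces per person.
print(per_capita(big_dept), per_capita(small_dept))
```

A ranking built on the raw totals picks the first department; one built on the per-capita figure picks the second. Neither says anything about teaching.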
CU’s Geological Sciences Department is number 19 because the rankings are a pure beauty contest based on “surveys sent to academics.” These are always skewed toward programs that, ahem, have in the past produced academics now in a position to receive a survey (in the past, the surveys went to department chairs). How often do you think a successful academic will dis their alma mater? Even vaguer is precisely what the basis for the evaluation is. Research? Teaching? Groundskeeping? Collaborators? And CU suffers because a lot of earth science is not in this department.
How do either of these help you choose where to go to school? Simple: they don’t. For undergraduate work they are totally irrelevant; for graduate work, barely relevant. Probably the one evaluation of grad schools that would be most useful is now getting seriously out of date: an NRC report put out in 2012 that, among other things, actually asked questions about student environment and outcomes.
Given the rather transparent limitations of these prominent rankings, it stands to reason that the similar rankings of undergraduate schools are equally misleading. Schools bend rules to make things look better. US News, for instance, counts classes as small if there are fewer than 20 students in them. Magically, the enrollment limit for a bunch of courses descends from 25 or 30 to 19.
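To see how cheaply that metric is gamed, here is a toy sketch (the enrollment numbers are invented, not actual CU or US News data): simply capping a few courses at 19 students transforms the “small class” fraction without changing anything about teaching quality.

```python
# Toy illustration of gaming the US News "small class" metric,
# which counts a class as small if it has fewer than 20 students.
def fraction_small(class_sizes, threshold=20):
    """Fraction of classes that count as 'small' (size < threshold)."""
    small = sum(1 for n in class_sizes if n < threshold)
    return small / len(class_sizes)

before = [25, 30, 28, 18, 15, 40]  # caps at 25-30: two classes count as small
after  = [19, 19, 19, 18, 15, 40]  # caps lowered to 19: five count as small

print(fraction_small(before))  # 2 of 6
print(fraction_small(after))   # 5 of 6
```

Same instructors, same curriculum; the only thing that changed is where the cap sits relative to the threshold.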
There are differences between graduate programs, but it often comes down to the thesis advisor, access to tools for completing degree work, and the peer group of students one could interact with. None of these are in any rankings; you have to do it yourself.
A couple of news/opinion items from the past week kind of coalesce around a peculiar notion: higher educational institutions are slow learners. This may not be obvious when you learn that the two items are an op-ed about how college isn’t for everyone and a piece about how the use of student evaluations of teaching is potentially discriminatory.
Let’s take the last one first. The Boulder Faculty Assembly has now twice prompted the administration to revise how student evaluations are used in determining the teaching ability of professors. These assessments are made by students in the penultimate week of the term; in most cases only a fraction of the class actually completes the evaluation. At greatest issue are two questions on the questionnaire: rating the course, and rating the instructor–the two which are most commonly considered both by students considering which course to take and by promotion and tenure committees considering whether to promote a faculty member. For students, this is one of the few summary pieces of information available to them; for faculty committees though, this is a temptingly quantitative piece of information.
It has been patently obvious for decades (yes, literally decades) that these questionnaire results have little correlation with how much students learned.
In the movie Elf, the initial voiceover from “Papa Elf” (Bob Newhart) says that there are three main jobs for elves: baking cookies in an old hollow tree, making boots at night, and Santa’s workshop. When Will Ferrell’s human-adult-but-raised-an-elf character Buddy hits New York, his lack of useful skills outside the elf world becomes pretty apparent.
A report in Nature says that postdocs are kind of like elves, but without quite so many career options. The studies underlying this reporting basically find that employers are not so interested in the skills postdocs pick up, with the deadly quote from an employer being that postdocs “have all the academic science skills you don’t need, and none of the organizational skills that you do”. A solution mentioned is mentoring postdocs as entrepreneurs.
If not entrepreneurship, what are these postdocs doing? Writing grant applications, for one, supporting the science they wish to pursue (whether they get to be PI is a different matter). Plenty of businesses revolve around responding to proposal requests; isn’t this skill helpful?
Some postdocs are brought in to work on big projects, which often means overseeing work done by grad students and undergraduates. Does this administrative responsibility have no use in the private sector?
Other postdocs work independently, which means that to be successful they must be self-starters who persevere through challenges. Often, too, they have to write up reports on what they have done and what progress they are making. Does this too have no use in the outside world?
GG is stuck; one of two things is happening: either “real world” recruiters are oblivious to the skills postdocs pick up (and postdocs are at a loss to express those skills), or postdoc advisors are treating their postdocs like graduate students, not sharing the responsibilities and freedom such positions should include. Either way, tremendous intellectual capital is being squandered.
Once upon a time, having a “subscription” meant that things would come to you until either the term of the subscription ran out or you cancelled it. The stuff that had already come, whether issues of Teen Vogue, the record of the month, or volumes of an encyclopedia, was yours to keep. But in the world of the academic library, that model is vanishing, and with it, potentially, large parts of the academic literature.
In the paper past, an academic library’s subscription to a professional journal meant that the library got paper copies of the journal that they could then place on shelves and allow people to read. As budgets might tighten or interests wane, libraries would cancel subscriptions–but those journals they had purchased remained on the shelves unless purged to make room for other material. This model is essentially dead.
Instead, publishers have shifted to the software definition of “subscription”–which isn’t really a subscription at all. Just as using Adobe’s Creative Cloud software requires an active subscription, so does getting access to all the issues of Science you had subscribed to over the years. And if the journal decides to go to predatory pricing? Your options are nil. The money you poured into the journal all those years means nothing. In general, libraries are not allowed to make local copies of all the content they subscribe to.
Arguably this is one of the best facets of a true open access policy: the freedom to copy materials means that there can be multiple archives. University archives can legally maintain and share copies of work produced at their institutions. Research groups can maintain thematic collections of articles relevant to their focus. (Note that current open access policies do not necessarily allow this: much as you can view some movies online so long as you watch the ads, some open access materials could require you to access the original portal and, perhaps, see advertisements there). In a sense, this can return libraries to their original function: instead of mere portals for providers, they return to being actual repositories of knowledge. So while we may have permanently lost the meaning of “subscription,” we can recover the true meaning of “library.”
Rather inadvertently, GG has recognized a pattern in some recent grumpiness; oddly enough, it took an article about self-driving cars to really crystallize it. Of course the specific article GG saw has vanished, but this article covers the same ground. Basically, when something becomes easy, we don’t pay as much attention, and the ability to do the task atrophies. With cars, we look over our shoulders less if the car is watching the blind spots–which means a driver of a car equipped with such technology won’t look when renting a car that lacks it.
Earlier GG complained about hikers who don’t take maps and scientists who can’t use library tools–and these seem examples of the same issue. Basically, humans are slackers. Find the easy path and take it. This has GG wondering about the way we teach.
First, students will always complain about doing things the hard way. Why did I have to work through that problem when I could just look up the answer? So courses that train students by making them work are always at risk of earning negative reviews, which can lead administrators to decide that the course should change somehow. Allowing current students to set a curriculum is a disaster in the making.
But what of new learning approaches–the “guide on the side” and the flipped classroom? A blanket condemnation would be unwise: student engagement in solving problems should indeed be helpful. GG has not flipped a classroom but has spoken with those who have, and the word back is mixed. In some classes, many students find that they can skip the pre-class prep, walk in cold, and get by, either with assistance from classmates or by simply dragging the instructor into going over material they should have already examined. Those students would get a punishing homework grade in a traditional classroom but don’t in this environment.
There is a similar bar-lowering going on with content. Courses using group work, in-class exercises, and flipped classrooms simply cannot cover as much material. For advocates of these systems this is good news, as content retention in traditional classes can be awful. But what consistently gets downplayed is that less material is covered. For a survey course for non-majors this is hardly a calamity, but for major courses it can be serious trouble. As universities demand more core activities, time in major courses stays level at best, and material gets dropped from the major. Employers will start to notice (that new hire didn’t know about XYZ! Can you believe it?). Universities are not vo-tech schools, but employers need a certain core capability to build on.
Another article GG can’t find at the moment noted that research into popular “learning styles” shows them to be a fantasy. This business of catering to visual learners or aural learners or what not is, in the absence of a real disability, total BS. Catering to such perceived variability only kills time and keeps a student from developing a more robust ability to absorb information.
Here’s the deal. Learning is hard, failing can be good. You do a total face plant in class, you will work hard to avoid it in the future. Struggle is part of learning. The trick will be to get students to buy into that without hitting stratospheric levels of stress. It could be the dreaded firehose beats a tepid trickle.
Many of you no doubt have heard of the failure of studies in some scientific fields to reproduce. This has led to condemnation of publications that have rejected or discouraged papers attempting to reproduce some observation or effect.
Now this is not such a big deal in solid earth science (and probably not even climate science, where things are so contentious politically that redoing things is viewed in a positive way). Basically, for most geological observations we have the Earth, which remains accessible to pretty nearly all of us. Raw observations are increasingly stored in open databases (seismology has been at this for decades, for instance). Cultural biases that color some psychological or anthropological work don’t apply much in solid earth, and the tweaky issues of precise use of reagents and detailed, inaccessible lab procedures that have caused heartburn in the biological sciences are less prominent in earth science (but not absent! See discussions of how fission track ages are affected by etching procedures, or look at the failure of the USGS lab to use standards properly). We kind of have one experiment–Earth–and we aren’t capable of reproducing it (Hitchhiker’s Guide to the Galaxy notwithstanding, there is no Earth 2.0).
No, the problem isn’t failing to publish reproductions. It is failing to recognize when we are reproducing older work. And it is going to get worse.
As GG has noted before, citations to primary literature are becoming more and more scarce, despite tools that make access to that literature easier and easier. This indicates that less and less background work is being done before studies move forward: in essence, it is easier to do a study than to prepare for it. The end result is pretty apparent: new studies will fail to uncover the old studies that essentially did the same thing.
Reexamining an area or data point is fine so long as you recognize that is what you are doing, but inadvertently conducting a replication experiment is not so great. Combine this with the already sloppier-than-desired citation habits we are forming and we risk running in circles, rediscovering what was already discovered without gaining any insight.