Well, recently Elsevier added another chapter to their ongoing saga of how little they value the people who generate all their income: they added a new layer of prohibitions on “green” open access versions of journal articles. This has the Open Access community up in arms…but then, they have noticed that Elsevier is fundamentally the Evil Empire crossed with a cable company, no? I mean, they bundle a lot of crappy journals in with the few that anybody would pay for, both to avoid real à la carte pricing and to be able to claim high circulation for their crappy journals. (Why does anybody keep publishing with these guys? ANY Open Access advocate who publishes in Elsevier should question their own stance.)
Anyways, the thing that gets the Grumpy Geophysicist’s goat is that when you see the high dudgeon that Open Access advocates get into over this, you get the distinct impression that (1) all research is publicly supported, (2) all research should be free to publish and freely available and (3) publishers provide no value. Arguably all these are in error.
Just over a year ago this blog started off with a complaint about how students get so upset about numeric grades. (Hard to believe it was a year and a surprisingly large number of posts ago). Recently a column in the New York Times offered an explanation and a solution of sorts.
The column argues that there are lots of irrelevant things that color our perception of a fair deal, and so a lot of rational economics is based on a false premise. The example the author, Richard Thaler, provided, however, is what is of interest here. He had an exam with an average score of 72 and students were upset [he should see the response with an average of 60]. His solution was to simply make the next exam have a maximum score of 137: the average score of 96 represented a lower percentage average (70%) but the students were happy with the exam.
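The arithmetic behind Thaler’s trick is trivial, which is rather the point — the raw score changes but the percentage barely does. A minimal sketch (the function name here is invented for illustration):

```python
def percent(score, max_score):
    """Return the percentage of the maximum that a raw score represents."""
    return 100.0 * score / max_score

# Thaler's two exams: roughly the same ~70% performance,
# yet very different student reactions.
print(round(percent(72, 100)))   # 72
print(round(percent(96, 137)))   # 70
```

Same underlying performance, different framing — which is exactly what a rational-agent model says shouldn’t matter.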
You see, this is a strategy GG simply would not have come up with, as he thinks that it would have been the percentage in any case that would matter. It is a case of an instructor with a strong quantitative background and an expectation of rational thought encountering others lacking those qualities.
It might be a while before GG gets to test out this concept, but one wonders if that would hold for a whole semester or if the students would start to realize that a 90 was a low C. What this does remind us all of is that sometimes what we think is in a message is not what is heard at the other end. This is always an important point in giving and receiving criticism.
Sorry, but GG is in the midst of writing NSF proposals from the wrong side of the Atlantic, and some of this stuff is amazing.
Yes, you have to paginate a one page section of the proposal.
You have to count the number of thesis advisors you had. And list it. Next to the name(s) of said advisors.
You have to count the number of students you advised. And put that number somewhere near the list of those students.
Of course all of this is on top of previous demands for exactly how a cv should read, the proper point of view for project summaries, etc.
Look, GG gets the need to specify page limits and font sizes, but this stuff? Really?
The working hypothesis here is that the purpose of NSF is to support an army of administrative staff, at both the universities and NSF, who check on blindingly stupid requirements that have no relationship whatsoever to the merit of the science being proposed.
If you want to see the justification for the attack on earth science from the point of view of one of the main authors, you can find it in Lamar Smith’s op-ed in The Hill. He argues that priorities must be set and “taxpayers’ dollars should be focused on national priorities. The progress of science in the United States as well as our future economic and national security depends on making smart investments in science and prioritizing research.” All other research ought to be funded by charities.
We can hope that one day Smith goes back and rereads Vannevar Bush’s document describing the need and justification for the National Science Foundation. Here’s the rub: who knows best how to set the priorities?
Remember, the National Science Foundation (NSF) is not the R&D arm of industry, nor of the Defense Department. While there is a good argument for removing Engineering from NSF, within science, where will money yield the most good? Here is what Bush recognized: you have no idea. The researchers themselves are pursuing concepts they might have no idea how to apply. And yet out of all this oftentimes come advances that far more than pay back the initial investment. The problem is recognizing the point where the marginal long-term return no longer exceeds the long-term value of the current investment. So, is it wise for Congress to plop itself down in the middle of this and say, we think you should study this and this and this and not that other stuff?
One of the mantras drilled into the heads of graduate students as they prepare their oral meeting presentations is “tell them what you are going to tell them, then tell them, then tell them what you told them.” The point being to make sure that the audience knows what you think is important. And at a meeting, this can be pretty significant as folks wander in and out of a room or are distracted. That first part tells them what they should really look for (and it helps to remind the student what they are emphasizing), the last is to reaffirm that the desired goal was in fact met.
But this is probably a lousy format for a colloquium talk and even lousier for a public talk. Think of the storytellers out there and how their stories go. Does Hans Christian Andersen tell you what happens to the Little Mermaid at the start of the tale? Would Grimm’s Fairy Tales be the same if they started with the fate of the children lost in the woods? Or even a regular joke–is it better knowing the punchline at the start? [Occasionally yes; the best of Stephen Colbert’s The Wørd segments worked that way]. Basically, the farther you run from a specialist audience, the more you want your presentation to evolve like a story, one where there is some suspense and some reward. Instead, many science talks start with certainty, wander through observation, experimentation, and inference to come to a conclusion that all too frequently dissolves into a mist of “more research is needed.”
Basically you want some kind of plot line that makes sense, you want easily recognized key points and you want a climax. This is a far cry from normal science communication, yet most of the time such an arc is well known to the speaker, for the science was not found the way it is usually presented. Oftentimes there was some barrier that had to be breached, one whose breaching in the final paper is some “by the way” paragraph instead of the emotional milestone it was in the process of doing the work. A moment of epiphany is reduced to a logical result in a paper. These highlights of the process of doing science can be engaging elements of a talk even as their presence in a scientific paper would be a distraction.
Interestingly, one has to wonder if this might also be good advice for classroom lectures (which, despite the generally low opinion in which “sage on the stage” is held these days, is the most popular form of university teaching). GG has seen some of the best instructors work to have lectures that have an arc, that are stories, that are as much performances as lectures and not mere recitations of theories and facts, and admires the result (in contrast, GG is a failure at making such lectures). It is pretty hard work.
Anyways, something to chew on the next time you fall asleep in a science talk. What would have made that talk sing?
How should we reward the teaching component of university faculty’s work?
You might think this is obvious: you reward those who teach best. OK, great–more student learning is the goal. How do you measure that? Arguably the best approach is some kind of pre- and post-course concept inventory. This has been used to great effect in courses where the material is essentially static between instructors. Of course, you are then encouraging teaching to that test, but if the test is well-designed, that is a good thing. There are many courses, though, where developing a proper concept inventory is prohibitively costly or where the nature of the material shifts between instructors. Often then university faculty retreat to using peer evaluations and student feedback on standardized questionnaires, neither of which addresses learning.
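For what it’s worth, the usual way such pre/post inventories get scored is Hake’s normalized gain — the fraction of the possible improvement a class actually realizes — familiar from physics-education work with the Force Concept Inventory. A minimal sketch:

```python
def normalized_gain(pre, post):
    """Hake's normalized gain: (post - pre) / (100 - pre),
    where pre and post are percentage scores on the inventory."""
    if pre >= 100:
        return 0.0  # perfect pretest leaves no room to improve
    return (post - pre) / (100.0 - pre)

# A class averaging 40% before and 70% after realizes
# half of its possible improvement.
print(normalized_gain(40, 70))  # 0.5
```

The appeal is that the measure rewards improvement rather than incoming preparation, which is precisely what student questionnaires and peer visits do not capture.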
The funny thing about the question is that others would argue that it isn’t how well we teach, it is how well we retain our customers, the students. Student retention is now a big buzzword, and of course part of this is motivated by the noble desire to not have students drop out of school burdened by high debt. We’ll return to that in a moment. A lot of it, though, is motivated by simple economics: students who are not dropping out are continuing to pay tuition.
Retention is trivial to achieve: give high grades to everybody. Teach less challenging material. Hand out lollipops and puppies.