I joined the Caltech faculty back in August of 2009. During the first year I frequently had the following exchange in conversations:
Person: "So, what do you teach?"
Me: "Nothing right now."
Person: "Wha? I thought you were a professor."
Me: "I am. I'm just not teaching right now."
Person: "So...what do you do all day?!"
What do I do all day? Professor stuff! Advising students. Writing proposals. Reducing data. Writing papers. In astronomy, nominally we're hired to be instructors. But in reality, we gotta pay the bills.
My tenure decision in about 6 years will be based primarily on how many papers I publish, how important those papers are to my field of research, how much grant money I bring in (Caltech keeps a percentage of all the grant dollars I raise, to keep the lights on, pay salaries, etc.), how well I use Caltech's telescope facilities (my papers), and the quality of the work of the students I advise. Oh, and I'm sure they'll make a cursory check of my teaching evaluations. You know, to make sure the students don't hate me. 'Cause if they did, well, um...How much grant money did I raise?
I wanted to make sure I got off to a strong start at Caltech, so I negotiated a year off from teaching. Thus, my first year ended up being like a third year of a postdoctoral fellowship, with few responsibilities beyond my own research and some student mentoring, yet much higher pay!
This is not to say that I dislike teaching or that it's not important to me. First of all, I really enjoy doing it. I figured out that I spent half of my 14 semesters at UC Berkeley teaching, either as a TA or as the instructor of the IDL programming course I designed. That's just the Berkeley way.
Secondly, one of the most important things I've learned from one of my favorite scientist/educators, Bob Mathieu, is that teaching, advising and research need not be separate ideas. Your research can generate projects for your advisees, who quite often must learn from you as a teacher. That's one obvious route. A less obvious route is that your teaching in the classroom can generate research ideas and teach you, the advisor, new tricks. And as I learned last quarter, the learning often comes directly from the advisees and students!
As I mentioned in my previous post, I decided to step well outside of my comfort zone and teach a brand new course on a subject I am a long way from mastering: Statistics. The official course title was "Statistics and Data Analysis in Astronomy," but my friend Jason refers to it as "Practical Astrostats." I really like this title and I think I'll go with it in the future. Here's a link to the course syllabus.
Observational astronomers do a few things on a day-to-day basis. Two of them are programming and data analysis. In many ways programming and analysis go hand-in-hand: you need to code up your analysis method. However, these two topics are rarely found in the official course requirements of most astro programs. Indeed, the courses are rarely available at all, except through the computer science and statistics departments, respectively. And those courses rarely provide a direct link between the subject matter and astronomy (not to mention the tendency for most statisticians to be about as interesting as a tax form, yet somehow less engaging). Hence, Practical Astrostats.
I followed Sensei Mathieu's advice on several fronts. First, I blended the normally disparate concepts of instruction and evaluation. Usually the professor follows a rigid syllabus in a linear fashion, and evaluation occurs at set intervals during the semester/quarter. Students learn, learn, learn, and then they're tested. Then they learn, learn, learn. Then tested.
In my class I was constantly evaluating the students, and the feedback I received from the evaluation shaped what I was teaching. At the same time, the process of evaluating the students quite often helped me evaluate my own knowledge, and guided what I needed to learn better, so I could teach better, and then evaluate how well I did. As a result, the course ended up as only a shadow of my original plan as laid out in the syllabus.
I constructed this feedback loop by doing something somewhat crazy and unorthodox: I shut up and stopped lecturing. Lecturing is a one-way street. Even instructors with the best intentions can only get something like a 10% feedback rate. "Any questions? Anyone? Anyone?" This is because asking students to ask questions in front of a class of 10 to 100 other students is a very high-stakes proposition. We can insist that there are no bad questions, but let's face it: some people in the room get it, and if you're asking a question, you're not one of those smart, getting-it people. Yes, you might get clarity on what you don't understand, but only at the price of feeling like the dumb person in the room.
So instead of lecturing, I turned my statistics class into a lab. Students brought their laptops to class, I gave an intro mini-lecture, and I then distributed a worksheet (here's the first worksheet). This allowed me to wander around the room evaluating their progress, and evaluating how well my worksheet was conveying the subject of the day. When students got stuck, they got stuck with a partner, and the two or three of them could ask me a question off to the side of the rest of the class, which greatly reduced the stakes of interaction. These questions were never bad, and by the second week they were comfortable asking them. After all, if the student asking the question was dumb, then so was their partner, and the probability of finding two dumb Caltech students in the same two-student group is the product of two small numbers.
Another method of evaluation came in the form of "rolling oral quizzes." Every class period my TA and I would pull individuals out of the classroom and have an informal conversation about some aspect of the course material. "Okay, let's suppose you had a photometer and measured two flux levels F1 and F2. What's the probability that the levels are equal? What's the probability that the flux is rising with time?" These conversations were great because they allowed me to evaluate how well the student was keeping up with the reading/HW/classwork. They also helped me evaluate how well I was teaching that material, identify what I should emphasize more, and test how well I understood the material. The latter was often humbling, but extremely useful.
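To give a flavor of the second half of that quiz question, here's one way a student might sketch it, assuming independent Gaussian measurement errors (the error bars and function name here are my own illustrative choices, not anything from the actual quiz). The difference d = F2 − F1 is then itself Gaussian, so the probability that the flux is rising is just the probability that d > 0:

```python
import math

def prob_flux_rising(f1, sigma1, f2, sigma2):
    """Probability that the true flux behind measurement 2 exceeds
    that behind measurement 1, assuming independent Gaussian errors.

    The difference d = F2 - F1 is Gaussian with mean (f2 - f1) and
    variance (sigma1**2 + sigma2**2); we return P(d > 0).
    """
    z = (f2 - f1) / math.sqrt(sigma1**2 + sigma2**2)
    # Standard normal CDF, written in terms of the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical measurements: F1 = 10.0 +/- 0.5, F2 = 11.0 +/- 0.5
p_rising = prob_flux_rising(10.0, 0.5, 11.0, 0.5)
```

(The first half of the question is sneakier: for continuous measurements the probability that the two levels are *exactly* equal is zero, so the interesting answer involves comparing a constant-flux model against a varying one rather than computing a point probability.)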
The end result is that I taught a class of 14 (10 grads, 4 senior undergrads; huge by Caltech astro standards) in what may be the first statistics course in history with > 90% attendance and during which no one fell asleep. Not one sleeping student! My post-term student evaluation scores were consistently above the historical average for both astro courses in particular, and Caltech courses in general. Yes, I'm bragging :)
Oh, and that TA I mentioned? That was Tim, a fourth-year grad student working with me. He'll be taking his Ph.D. candidacy exam next month, and his thesis is focused on the statistics of exoplanets. Here's our first paper together. More to come soon!
Comments
The REAL evaluation will be when your students use this stuff in their papers and projects, completely understanding the concepts behind what they're producing. And I have a feeling all of you will pass with flying colours. Hurrah for practical professoring!!!!