
Professing for the first time

I joined the Caltech faculty back in August of 2009. During the first year I frequently had the following exchange in conversations:

Person: "So, what do you teach?"
Me: "Nothing right now."
Person: "Wha? I thought you were a professor."
Me: "I am. I'm just not teaching right now."
Person: "So...what do you do all day?!"

What do I do all day? Professor stuff! Advising students. Writing proposals. Reducing data. Writing papers. In astronomy, nominally we're hired to be instructors. But in reality, we gotta pay the bills.

My tenure decision in about 6 years will be based primarily on how many papers I publish, how important those papers are to my field of research, how much grant money I bring in (Caltech keeps a percentage of all the grant dollars I raise, to keep the lights on, pay salaries, etc), how well I use Caltech's telescope facilities (my papers), and the quality of the work of the students I advise. Oh, and I'm sure they'll make a cursory check of my teaching evaluations. You know, to make sure the students don't hate me. 'Cause if they did, well, um...How much grant money did I raise?

I wanted to make sure I got off to a strong start at Caltech, so I negotiated a year off from teaching. Thus, my first year ended up being like a third year of a postdoctoral fellowship, with few responsibilities beyond my own research and some student mentoring, yet much higher pay!

This is not to say that I dislike teaching or that it's not important to me. First of all, I really enjoy doing it. I figured out that I spent half of my 14 semesters at UC Berkeley teaching, either as a TA or as the instructor of the IDL programming course I designed. That's just the Berkeley way.

Secondly, one of the most important things I've learned from one of my favorite scientist/educators, Bob Mathieu, is that teaching, advising and research need not be separate ideas. Your research can generate projects for your advisees, who quite often must learn from you as a teacher. That's one obvious route. A less obvious route is that your teaching in the classroom can generate research ideas and teach you, the advisor, new tricks. And as I learned last quarter, the learning often comes directly from the advisees and students!

As I mentioned in my previous post, I decided to step well outside of my comfort zone and teach a brand new course on a subject I am a long way from mastering: Statistics. The official course title was "Statistics and Data Analysis in Astronomy," but my friend Jason refers to it as "Practical Astrostats." I really like this title and I think I'll go with it in the future. Here's a link to the course syllabus.

Observational astronomers do a few things on a day-to-day basis. Two of them are programming and data analysis. In many ways programming and analysis go hand-in-hand: you need to code up your analysis method. However, these two topics are rarely found in the official course requirements of most astro programs. Indeed, the courses are rarely available at all, except through the computer science and statistics departments, respectively. And those courses rarely provide a direct link between the subject matter and astronomy (not to mention the tendency for most statisticians to be about as interesting as a tax form, yet somehow less engaging). Hence, Practical Astrostats.

I followed Sensei Mathieu's advice on several fronts. First, I blended the normally disparate concepts of instruction and evaluation. Usually the professor follows a rigid syllabus in a linear fashion, and evaluation occurs at set intervals during the semester/quarter. Students learn, learn, learn, and then they're tested. Then they learn, learn, learn. Then tested.

In my class I was constantly evaluating the students, and the feedback I received from the evaluation shaped what I was teaching. At the same time, the process of evaluating the students quite often helped me evaluate my own knowledge, and guided what I needed to learn better, so I could teach better, and then evaluate how well I did. As a result, the course ended up as only a shadow of my original plan as laid out in the syllabus.

I constructed this feedback loop by doing something somewhat crazy and unorthodox: I shut up and stopped lecturing. Lecturing is a one-way street. Even instructors with the best intentions can only get something like a 10% feedback rate. "Any questions? Anyone? Anyone?" This is because asking students to ask questions in front of a class of 10 to 100 other students is a very high-stakes proposition. We can insist that there are no bad questions, but let's face it: some people in the room get it, and if you're asking a question, you're not one of those smart, getting-it people. Yes, you might get clarity on what you don't understand, but only at the price of feeling like the dumb person in the room.

So instead of lecturing, I turned my statistics class into a lab. Students brought their laptops to class, I gave an intro mini-lecture, and I then distributed a worksheet (here's the first worksheet). This allowed me to wander around the room evaluating their progress, and evaluating how well my worksheet was conveying the subject of the day. When students got stuck, they got stuck with a partner, and the two or three of them could ask me a question off to the side of the rest of the class, which greatly reduced the stakes of interaction. These questions were never bad, and by the second week they were comfortable asking them. After all, if the student asking the question was dumb, then so was their partner, and the probability of finding two dumb Caltech students in the same two-student group is the product of two small numbers.

Another method of evaluation came in the form of "rolling oral quizzes." Every class period my TA and I would pull individuals out of the classroom and have an informal conversation about some aspect of the course material. "Okay, let's suppose you had a photometer and measured two flux levels F1 and F2. What's the probability that the levels are equal? What's the probability that the flux is rising with time?" These conversations were great because they allowed me to evaluate how well the student was keeping up with the reading/HW/classwork. They also helped me evaluate how well I was teaching that material, identify what I should emphasize more, and test how well I understood the material. The latter was often humbling, but extremely useful.
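For readers curious about that quiz question, here's a minimal sketch of one way a student might answer the "is the flux rising?" part. This is my illustration, not the actual course material, and it assumes independent Gaussian measurement errors on the two flux levels:

```python
import math

def prob_rising(f1, sigma1, f2, sigma2):
    """Probability that the true flux behind F2 exceeds the true flux
    behind F1, assuming independent Gaussian measurement errors.

    The difference d = F2 - F1 is then Gaussian with mean (f2 - f1)
    and standard deviation sqrt(sigma1**2 + sigma2**2), so the answer
    is the Gaussian probability that d > 0.
    """
    mu = f2 - f1
    sigma = math.sqrt(sigma1**2 + sigma2**2)
    # Standard normal CDF evaluated at mu/sigma, via the error function.
    return 0.5 * (1.0 + math.erf(mu / (sigma * math.sqrt(2.0))))

# Hypothetical measurements: F1 = 10.0 +/- 0.5, F2 = 11.0 +/- 0.5
p = prob_rising(10.0, 0.5, 11.0, 0.5)  # about 0.92
```

(The "probability that the levels are equal" part is the trickier half of the quiz: for a continuous model that probability is strictly zero, and a sensible answer has to reframe the question, e.g. as a model comparison between "one true flux" and "two true fluxes.")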

The end result is that I taught a class of 14 (10 grads, 4 senior undergrads; huge by Caltech astro standards) in what may be the first statistics course in history with > 90% attendance and during which no one fell asleep. Not one sleeping student! My post-term student evaluation scores were consistently above the historical average for both astro courses in particular, and Caltech courses in general. Yes, I'm bragging :)

Oh, and that TA I mentioned? That was Tim, a fourth-year grad student working with me. He'll be taking his Ph.D. candidacy exam next month, and his thesis is focused on the statistics of exoplanets. Here's our first paper together. More to come soon!


mama mia said…
I like your lab approach, professor, and although statistics at this level is way beyond me, your self-evaluation in the role of guide/instructor will serve you well. I really am happy you post about your work, because it helps me to get to know you even better. So happy to do so. Sending you much love from the beginning end of the classroom spectrum, witnessing little humans as they work in their "kinder" lab.
blissful_e said…
Tying the course to something the students will use ad infinitum... priceless!! Well done, John. :)

The REAL evaluation will be when your students use this stuff in their papers and projects, completely understanding the concepts behind what they're producing. And I have a feeling all of you will pass with flying colours. Hurrah for practical professoring!!!!
JohnJohn said…
Thanks guys! And thanks for inspiring me to write this up. I've been itching to do so ever since the end of the term (before Xmas). It was fun looking back and I'm stoked to do it again in the terms to come.
jackie said…
this makes me miss teaching. i have often thought as a student that unorthodox approaches to teaching are the ones that actually work. bravo.
