The PhD Working Group at ESEC/FSE 2011 had an admirable goal: students would learn about current trends in software engineering research, summarize the results for the rest of the attendees, and interact with more senior researchers throughout the process. Unfortunately, it was organized in a way that benefited neither students nor researchers. On the plus side, none of my students took part in this fruitless exercise.
I'll summarize what I saw of the PhD Working Group and offer some
suggestions that might have produced better results.
Students were split into 7 working groups, each with its own topic such as
“Agile Development”. Each group was assigned to learn about this topic
from conference attendees. So far, so good.
The students presented a multiple-choice survey form and collected the
answers in person, or just asked attendees to take the survey online, which
attendees invariably forgot to do. So, the poor students would circle
like hungry sharks, asking for survey participants and even interrupting
conversations, and I saw some attendees trying to avoid anyone who looked
like they might ask survey questions. This made the relationship between
junior and senior researchers antagonistic, which prevented rather than
encouraged conversations (not to mention the time wasted with the surveys,
which could have been spent on meaningful communication instead).
A multiple-choice survey taken by both experts and people ignorant of
the topic (and the students emphasized that they wanted both types of
opinions) conveys nothing of value about current and future research
directions. Popularity polls may be favored by the evening news when they
do not want to do real reporting, but even there I don't see any value. I
would rather understand the justification for a particular conclusion than
just see that 27% of people agree with it. I hope none of the students
came away thinking that a public poll is a valid methodology for learning
about software engineering research. As was predictable, the student
presentations in a plenary session were a waste of time.
Another serious problem was ambiguous and nonsensical questions on the
survey form. I completed several surveys, but for one survey I gave up in
the middle. It was full of questions with answers that were non sequiturs
(they had nothing to do with the question), or that omitted choices that
would be preferred by any expert, or that I couldn't interpret at all. For
the most successful survey I took, the student interpreted the questions
and I dictated my answers, rather than my working alone, ticking off the
multiple-choice boxes. In fact, on several occasions the student changed
my answers when I remarked that the question didn't make sense; the
student said that my proposed alternative question was what they had meant
to say. So much for the answers meaning anything. I conclude that the
only redeeming result of the entire exercise is that a bright and
thoughtful student might have learned something about how not to do
questionnaire design; but proper questionnaire design should have been
taught from the beginning.
As I mentioned, the sentiment behind the PhD Working Group is a noble one.
Here is a different way it could have been run, one that would have
avoided some of the pitfalls that befell it this time around. The
organizers could have given each group a list of 5 or so researchers at the
conference who were expert in that area. The students would interview
those people for 30 minutes or so — no one would get interviewed on more
than one topic — and the group would evaluate and synthesize the
responses, including adding their own opinions or justifications. With
this design, the students have meaningful interactions with senior
researchers, the students learn something, they provide a summary from
which others might learn something, and everyone spends less time, and is
interrupted less, than with the present model. There may be flaws in this
approach, too — feel free to discuss them, and how to correct them, in the
comments to this blog posting.