On Research (Part 1: An Impostor Is Born)

My research career began in the year 2000, when I nearly failed out of grad school.

I was in the middle of my third year of the PhD program in computer science at Carnegie Mellon University (CMU), and I had taken all of my required coursework except for the “Software Systems” course, which I finally took that spring semester of 2000. I only barely passed it — and probably shouldn’t have. Most of the course involved reading papers about operating systems and distributed systems design, papers which to my mathematically oriented mind were often impossibly vague and informal. I couldn’t get much out of them. I also had no idea how to carry out the open-ended class project, because I had never done anything remotely resembling “systems programming”. I managed to squeak out a passing grade by pairing up with Kevin Watkins, a brilliant fellow grad student who at some point took over the project and did most of the work. (How ironic, then, that I am now a tenured faculty member at the Max Planck Institute for Software Systems, and that I recently received a big grant to study the “logical foundations of safe systems programming”! So it goes.)

But as far as research was concerned, I had at that point done essentially none at all. For my first two and a half years at CMU, my advisor had been Peter Lee. Peter was a generously warm and comforting advisor. However, in my second year, he was mostly focused on his startup, Cedilla. (Peter has since gone on to become a Corporate Vice President at Microsoft Research.) Moreover, it turned out that what I needed was not comfort and assurance, but close hands-on guidance and firm deadlines. Because, you see, I had no idea what I was doing. I had no clue what it meant to do research. I flitted back and forth between various projects, some of which were glorified homework assignments, some extremely vague and open-ended. I was getting nowhere.

Realizing that things were not as they should be, I decided at the beginning of 2000 to look around for a different advisor and a more clearly defined project. I talked to several other professors in PL (programming languages), and the problem that seemed most interesting to me was one suggested by a brand new faculty member, Karl Crary: how to extend the ML programming language with recursive modules. ML has a very expressive “module system” — a sub-language for structuring large programs — but for a variety of technical reasons it prevents modules from being defined recursively, thus leading programmers to use unpleasant workarounds. This was a problem that had actually interested me from my first year, when I saw a talk at PLDI’98 by Matthew Flatt on his “units” system (a recursive module extension of the Scheme language), but I had never thought to pursue it. In the meantime, Karl, together with Bob Harper, had taken a very preliminary stab at the problem in their PLDI’99 paper, “What is a Recursive Module?”, which is arguably one of the most theoretical papers ever to appear in PLDI (not generally a theory-friendly conference). It’s an interesting paper, and it did introduce the novel concept of a “recursively dependent signature”, but as even the title suggests, the paper is much more about asking questions than it is about answering them. I took as my goal to flesh out the inchoate ideas of that paper into a more fully thought-out design.
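For readers outside PL, here is a tiny illustration of the kind of recursion that ML forbids. This sketch is my own, in OCaml, whose “module rec” extension postdates the period described here and is not the design from Karl and Bob’s paper; it simply shows two modules whose definitions refer to each other, something plain SML rejects outright:

```ocaml
(* Two mutually recursive modules.  OCaml's "module rec" accepts this,
   at the cost of requiring an explicit signature annotation on each
   module; standard ML has no way to express it at all. *)
module rec Even : sig
  val is_even : int -> bool
end = struct
  let is_even n = n = 0 || Odd.is_odd (n - 1)
end

and Odd : sig
  val is_odd : int -> bool
end = struct
  let is_odd n = n <> 0 && Even.is_even (n - 1)
end

let () = assert (Even.is_even 10 && Odd.is_odd 7)
```

Without such an extension, programmers must break the recursion by hand, e.g. by threading one module’s functions into the other as extra parameters — the sort of unpleasant workaround mentioned above.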

Unfortunately, I still wasn’t getting (or even knowing that I needed to be seeking) much hands-on guidance from Bob and Karl, and by May 2000, I had made very little tangible progress. I received an e-mail from Peter while I was home for Passover vacation, not long before the end of the spring semester, informing me that, according to Bob and Karl, I was not in good standing. I would subsequently receive an “n-1” letter: a progress report from the CMU faculty after the semi-annual “Black Friday” meeting, indicating that I would have to produce a significant piece of research and write a paper about it by the end of the year, or else be ejected from the CMU grad program.

What on earth had just happened? One minute, I was embarking on a career in computer science research and just getting my feet wet (never mind that three years into the program was a bit late to be getting my feet wet). The next minute, I was essentially being told that I was a failure.

This was the lowest point of my entire existence. I felt like a worthless piece of shit. That is a feeling you don’t soon forget. I certainly haven’t forgotten it — I relive it frequently, even now as a successful academic, although I’ve learned to mostly refrain from throwing small objects across the room and thinking suicidal thoughts in despair at my general shittiness and uselessness. I view myself as a reasonably smart person, but I always feel I’m in way over my head, surrounded by people much more talented than me, who work much harder than me, who actually know how to program, who actually understand type theory (which I’m supposed to be an expert in but I’m not), who actually know how computers work, who actually care about how computers work — people who are qualified to do research in computer science. Whereas I am not qualified: I am…an impostor.

Yes, this is a well-known problem among academics (and other “high-achieving individuals” as Wikipedia puts it), known as “impostor syndrome”. What is so insidious about impostor syndrome is that, although it is a subjective distortion of reality, it is usually grounded in an accurate and honest perception of one’s own limitations. I am surrounded by people who are much more talented than me, who work much harder than me, who actually know how to program, who actually understand type theory, who actually know how computers work, who actually care about how computers work. It’s just that I have some other strengths that those people lack, which I fail to take account of because I am too focused on my failings, and I probably romanticize the talents of others out of proportion. In other words, the impostor syndrome is a natural consequence of the tendency of smart people to be self-critical, competitive, and a bit mad. These characteristics are correlated with creativity and accomplishment, and are thus admirable and beneficial in appropriate doses, but they can easily be abused like a drug, justifying inaction and hopelessness, and leading to a spiral of self-hatred and depression.

Fortunately, after receiving my “n-1” letter, I did not spiral into self-hatred and depression because I am blessed with a loving and supportive family. My mother came and stayed with me for several weeks that summer, and gave me the moral support I needed in order to focus on making steady, positive steps toward completing a piece of research. She also helped put me on a new exercise and physical training regimen, which enabled me to lose a lot of weight and feel better about myself in general. That was spectacularly good timing, given that I met my future wife Rose Hoberman later that fall.

By the end of the year, I had drafted my first paper: “Toward a Practical Type Theory for Recursive Modules”.

Looking back at this paper now, I am struck by a few things:

1. The paper is the very definition of preliminary work, an exploration of ideas without a clearly useful result. I can’t tell you how many papers I’ve reviewed for POPL this summer, written by big-name established researchers, that fall into the same category! But at least it is thoughtful, scientifically rigorous, and even, dare I say it, mildly interesting. Hiding within it are the seeds of ideas (in particular, a way of dealing with something I eventually termed the “double vision” problem) that I would later explore in a much more satisfactory way. It was worth writing, and the writing — though not up to my present standards — is not half bad. I guess I was always a decent writer, if I do say so myself. Nevertheless, the paper was far too preliminary to be worth publishing, and it would remain unpublished.

2. Some people are inventors; they invent brilliant new ideas and solutions that are radically different from anything that came before. Generally speaking, that is not my style. I have invented a few clever things, but I view myself primarily as a consolidator. I like to take other people’s good (but often somewhat rough) ideas, to explain them more clearly, to clean them up and pare them down to their essence, to rescue them from the ad-hoc, kludgy trappings of their birth and unify them into more general and elegant conceptual frameworks so that their latent potential can be revealed. On many of my papers, I let my collaborators do most of the inventing, and my role is primarily to extract the key high-level ideas that make their inventions work and explain those ideas so that others can appreciate and build on them. (Whether I succeed at that is a different story.)

This personal style of mine is already evident in my first paper. The paper is basically attempting a more refined analysis of two approaches to the semantics of recursive modules that were described in Karl and Bob’s PLDI’99 paper, which they called “opaque” vs. “transparent” recursive modules. In their PLDI paper, “opaque” and “transparent” were presented as different points in the design space, both of which had major limitations. In the first half of my paper, I showed how both could be seen as instances of a more general unifying rule for recursive module typechecking, which I called the Fundamental Property of Recursive Modules. This was a kind of neat and non-obvious observation. Unfortunately, it didn’t really go anywhere.  The second half of my paper proposed a hand-wavy design for recursive modules, supposedly combining the best aspects of opaque and transparent recursive modules, but it’s an ad-hoc mess, and it leaves some of the most interesting parts of the problem (in particular, dealing with the interaction of recursive modules and data abstraction) for future work.

3. The title of the paper commits two cardinal sins: one of excessive sincerity, the other of shoddy salesmanship.

First, it starts with the word “Toward”. This is an outright admission that the work is going somewhere but has not arrived there yet. Too honest, kid! It makes the reader think: “OK, well, call me back when you get there.”

Second, it claims it is aiming at a “practical theory”, which is a dead giveaway that the author is a pure theoretician yearning to have practical relevance. Saying your theory is “practical” is like saying your restaurant serves “world-famous pizza”. If your pizza is really world-famous, there’s no need to advertise that fact. Sadly, I would repeat the mistake a few years later when I tried and failed to publish a completely different paper called “Practical Type Theory for Recursive Modules”. As that title suggests, the later paper still wasn’t remotely practical, but at least I had dropped the “Toward”, thus staking a claim that I had arrived where I was going. (Spoiler alert: I hadn’t.) And even now, my web page says: “I am working toward a ‘realistic’ theory of modularity.” I’m not sure that “realistic theory” is any more convincing than “practical theory”, but at least I had the good sense to put ‘realistic’ in scare quotes.

The truth is that “practical theory” — with all the theoretician’s inadequate yearning that that implies — is what I do. I aim to build beautiful formal foundations for reasoning about real-world systems in all (or at least some) of their filthy complexity. I am not particularly interested in solving mathematical problems for their own sake, nor am I interested in practical impact for its own sake. This tends to put me at odds with almost everyone: to pure theoreticians, I am an unprincipled hacker working on short-term problems, while to unprincipled hackers I am a pie-in-the-sky pointy-headed theorist. Oh well — I can’t change who I am.  And obviously I think there’s some merit in trying to do “practical theory” or I wouldn’t be doing it.  More about that some other time.

In the end, my first paper was not a very successful effort, but it was enough of a non-failure to convince my advisors to let me keep my head. For that, I am eternally grateful. It would take another year, though, for me to actually prove my worth to them, and in a way no one could have predicted…

To be continued…

On Science as Art

What do you know, man?  A stereo’s a stereo.  Art is forever!

— Cheech Marin, last line of Scorsese’s After Hours

We were having dinner the other night with Anusha Gummadi, the wife of a colleague of mine, and she asked me if I hoped my soon-to-be-born child would become a scientist. My kneejerk reaction was: “Hell, no! Science is what you do when you can’t get a real job — you know, as an artist.”

Now, although I was (as usual) being half-facetious, the immediacy and vehemence of my retort took me a bit by surprise. After all, I am a scientist, and I (mostly) really enjoy what I do. There are many excellent reasons to become a scientist: intellectual stimulation, impact on society, useful skill set offering increased chance of employment, etc. There are also excellent reasons not to become an artist, the most important one being that it is an extremely low-percentage occupation. Most artists, even the ones who are actually talented, barely manage to eke out a living. Only a very lucky few make a lucrative career in the arts, and in my experience there is only a weak correlation between success and talent.

And yet…I wasn’t entirely joking.

When I was a kid, I enjoyed math, but my heart was in the arts — music and film, primarily. If you had asked me what I wanted to do for a living when I was a teenager, I believe “composer” and “filmmaker” would have been at the top of the list. Indeed, when I was a pre-teen, I sang in the now-defunct New York City Opera Children’s Chorus (with a few bigger solo parts), and I also spent a lot of time tap-dancing. Some of my fondest memories as a child were taking the train up from Long Island to Manhattan with my mother, and then the subway down to the bowels of the New York State Theater in Lincoln Center, where we rehearsed La Bohème, Carmen, Tosca, and more in a windowless room with Mildred Hohner, the matriarchal coach of the children’s chorus. Or over to an underground loft space in Soho, where I learned about rhythm and jazz from the members of the American Tap Dance Orchestra. (I had my bar mitzvah there!) That loft space was only a few minutes’ walk from a bunch of arthouse theaters showing movies from everywhere in the world. My favorites were Film Forum and Theatre 80 St. Mark’s.

And yet I ended up as a professor in computer science. How did that happen? Well, I was good at song and dance, but not that good. And I never demonstrated any particular aptitude for what I really wanted to do, which was composition. I took piano lessons, but was too lazy to ever become proficient at the instrument. I enjoyed watching films and listening to music — the weirder, the better — but I didn’t have any particularly interesting ideas about films or music that I wanted to create. I wrote poetry, mainly in a sort of Dadaist vein, but I spent a lot more time thinking about what would be a good title for a movie/poem/whatever than what it would actually be about. In short, I was all style and no substance, and what little style I had had been more or less done to death in the 1920s.

So when I went to college (at NYU), I decided pragmatically to study math and computer science. I always loved math, computer science less so. I enjoyed the CS classes I had taken as a child and in high school, but they were pretty basic (literally: BASIC and Pascal programming). And I was very far from a computer nerd: my idea of fun, as I said, was singing, dancing, listening to weird classical music, or watching offbeat films — definitely not hacking. Originally I just wanted to major in math, but my parents convinced me to study CS as well, mainly because it would improve my job opportunities.

Boy, was that a good call. I learned after a few years at NYU that (a) computer science was actually more interesting than I had thought, and (b) I was good at math, but not that good. I think what really hit point (b) home for me was when I ended up becoming friends with some Eastern European students (Yevgeniy Dodis and Ioana Dumitriu, among others), and they won the Putnam Competition (meaning they were top-5 finishers). For those who don’t know, the Putnam Competition is an annual North American undergraduate math competition. I don’t know how my friends prepared for it — all I know is that I got straight A’s in math, but when I took a look at past years’ Putnam questions, I had absolutely no clue where to begin. I’m sure if I looked at them now, I’d have the same reaction. Yevgeniy is now a professor in theoretical CS (crypto) at NYU, and Ioana is a professor in math at University of Washington. They are mathematicians, I am not.

As for point (a), I would credit two classes with getting me really excited about computer science. One was Alan Siegel’s graduate Algorithms course, which I took either in my first or second year of college. The other was Ben Goldberg’s graduate Programming Languages course, which I took in my third year. The Algorithms course was my first experience reasoning about programs mathematically (if not exactly formally). The PL course was my first experience with functional languages (SML and Scheme, and a little bit of Haskell). Both of these were truly eye-opening, and ultimately convinced me that I wanted to pursue graduate studies in computer science, studying PL or algorithms or both.

OK, fast-forward 18 years (I’ll discuss some of those years in a different post). I ended up doing research in programming language theory for a living. This has been incredibly rewarding, but clearly, judging from my answer to Anusha’s question, it’s not quite what I would choose to be doing if I could pick any career.

Well, join the club, Derek! Almost no one ends up living out their childhood fantasy. More to the point, I think there’s an extent to which I am living out my childhood fantasy.

What do I do? I get paid to think about whatever I want, so long as I eventually write my ideas up and convince a few of my peers that there is something exciting about them; and if I succeed at that, I then get to present those ideas to an even broader audience in a talk at a conference. And thinking all by myself is really very tiring, you know, so I get to hire extremely bright people to think along with me, do most of the actual work for me, and then I get to help them figure out how to explain their hard work and their cool ideas to other people.

Basically, in cinematic terms, I get to be a cross between a screenwriter and an executive producer. This is pretty incredible. What science has done is to fix that core problem I had as a child when I was writing poems — all style and no substance — by providing the thematic content around which to spin interesting stories. And as a result, I’ve managed, pretty much accidentally, to make science work for me as a means of artistic expression, as an art form, or at least as close to an art form as I’ll probably ever get.

Of course you could argue that relatively few people who go into science end up becoming professors. This is true. However, the percentage is still far greater than the percentage of aspiring artists who make a career out of their art. (I don’t actually have any evidence to back this up: that would require too much actual science.)

This view of science as art pretty accurately reflects my perspective on CS research. Art can be simple or complex, art can be easy or hard to understand, but there should be something about it that touches you, something that is elegant, something distinctive. Similarly, a research paper can present a really simple and retrospectively obvious idea that anyone can make sense of, or it can present a challenging extension to a well-studied proof technique that only three people in the world understand — I’ve written papers of both sorts — but it should strive in either case to illuminate and inspire, to demonstrate an original and memorable way of thinking about the problem at hand. I would not say that I succeed in doing this all (or even most) of the time, but it is always my goal.

And this is what I look for when I read other people’s papers, too. Maybe it’s why I have such high standards, and a reputation for being a harsh critic: poorly written papers really bum me out. (Btw, this reputation is totally undeserved — for example, on a recent program committee on which I served, my average review score was one of the highest of any of the PC members.  Just saying.)

On the flip side, well-written papers bring me great joy. One that comes immediately to mind is the paper “Ribbon Proofs for Separation Logic” by Wickerson, Dodds, and Parkinson from ESOP’13. It presents a graphical (yet formally defined!) language in which to write separation-logic proof outlines, and the tool implementing this language produces gorgeous “ribbon proofs” of program correctness. Aside from being lucidly written, the paper lends a distinctive visual representation to the core ideas of separation logic. It’s a pretty literal example of science as art, and what can I say, it made me very happy.


Anyway, getting back to my soon-to-be-born child and the important life lessons I hope to impart to them — umm, maybe I could misquote the Dread Pirate Roberts from The Princess Bride: “Have fun, kiddo. Create, explore, sleep well. You’ll most likely do science in the morning.”

Or even more simply: “As you wish.”

On the Four Children (Part 1: Afikomen vs. Teeth-Blunting)

My favorite part of doing research is explaining it to other people.  This is probably because it is something I think I am pretty good at.  Why?  Well, I often feel like I have a very small brain in comparison to the people I collaborate with, and I am in awe of the complex technical work they are capable of.  (Perhaps you think I am being disingenuously modest here, and I don’t mean to imply that I am not clever, but I can assure you that many of the people I work with are much cleverer than I am.)  So I have turned this limitation into an advantage: I figure that if I can get my collaborators to explain what they are doing to me, then maybe, just maybe, we have a shot at figuring out how to explain it to everyone else.

So how do you take complex technical work and explain it to a broader audience?  There is no simple answer to this question, but let me tell you what I think is the most important general concept.  It is the concept of the “four children” from the Passover Haggadah.  Jews will know already what I am talking about, but almost no one else will.  Bear with me.

Passover is arguably the most important Jewish holiday and the only one that my secular Jewish family has celebrated regularly since I was a child.  On this holiday we remember the exodus of the Jews from Egypt, but in fact we do more than that: we meta-remember it.  That is, we spend a lot of time talking about how our ancestors remembered it, and how best to keep remembering it.

Now, there is a section in the Passover Haggadah (the book that we read on Passover) about the “four sons”.  (There is of course no earthly reason in the modern world for this story to be so male-centric, so we will get with the program and generalize here to the “four children”, as some progressive Haggadahs do.)  Like many aspects of the Passover Haggadah, this section is indeed very “meta”: it’s not about the Passover story per se, but rather about the act of explaining the Passover story.  In other words, it’s about communication skills!

Specifically, the “four children” passage is about how you explain the story of Passover to young people.  The point of the story is that you have to tailor your explanation of Passover to the person you’re explaining it to, and take into account what they are capable of hearing.  The same principle will apply to writing a paper or giving a presentation: you have to think about your audience.

The Haggadah depicts your audience in the form of four children to whom you are explaining the Passover story:

  1. The Wise Child: This is the kid who is totally into Passover, is excited to learn, already knows a lot, and wants to know more.  Or at least that’s sort of implied.  Actually, the kid asks you a fairly boring technical question, something like: “What are the laws and statutes that God has commanded you to follow?”  You answer them in a correspondingly technical and obscure way, by randomly bringing up the fact that one is not supposed to eat any dessert after the Afikomen.  (What’s particularly hilarious and ironic is that this is a terribly depressing answer.  The Afikomen is not like one of those little farewell bon-bons you get at the end of a Michelin-starred meal — no, it’s a dry, cardboardy piece of unseasoned matzah, which represents the Passover sacrifice, but which you do not want to be the last food you put in your mouth unless you fancy choking to death while singing “Next Year in Jerusalem”.  At my house, we specifically ate dessert after the Afikomen: teiglach, honey cake, chocolate-covered matzah, macaroons, you name it.  We would eat anything, quite frankly, to get rid of the taste of the damn Afikomen.  But then again, we had other priorities.  End of rant.)
  2. The Wicked Child: This kid is not buying into the whole Passover thing at all, and doesn’t understand what meaning Passover could have for them.  The kid asks you, essentially: “Who gives a shit?”  You “blunt his teeth” (marvelous phrase, that) and tell him: “I do, you punk.  I’m remembering what God did for me.  If you had been there, you ungrateful bastard, God would not have done anything for you!”  So there.
  3. The Simple Child: As the name suggests, this kid is kind of simple and asks a pretty dumb question, literally: “What is this?”  You give them the less-than-30-second elevator pitch about Passover.
  4. The Child Who Is Unable to Ask Questions: This represents a very young child, perhaps one who is not yet able to even speak or pay attention.  With this kid, you have to start the conversation.

[Below is a beautiful illustration by Siegmund Forst from the 1959 Haggadah shel Pesach.  Here, the wicked child (upper-left) is depicted as Leon Trotsky, and the simple child (lower-right) is depicted as an everyman smoking a cigar and reading the sports pages.  I swear this picture is highly reminiscent of certain audiences I have spoken before at conferences.]


In case it’s not obvious, my above characterization of the “four children” is not 100% accurate, but it’s really not that far off.  There is a lot to be said about the four children, Talmudically speaking, such as the specific texts that are quoted in this passage of the Haggadah and why they are quoted, etc., which I am not going to go into here.  But from my perspective, the most relevant thing is that the four children are a wonderful way of thinking about your audience.  I know that when I write a paper or give a talk, I think quite a bit about how I’m going to target my presentation to the different segments of my audience that these children represent.

Broadly speaking, the wise child represents your expert reviewers, your insiders, the people who really know a lot about your topic and will care about the technical details of your work.  The wicked child represents your detractors, the folks who don’t really care for your style of work or think the problem you’re pursuing is pointless.  The simple child represents people who are familiar with the goals of your broad research area, but are not up to speed on the latest research or the specific technical problem you’re working on.  The child who is unable to ask questions represents the rest of the world.

In my opinion, the “four children” are a fantastic analogy for the primary segments of your audience.  That said, I would not recommend following the specific prescriptions the Haggadah gives about how to talk to them.  In particular, it would be wonderful if you could just shower your expert audience members with all the boring technical details of your work (aka the “afikomen”), and if you could “blunt the teeth” of your wicked detractors, but in practice I don’t think this is the most effective communication strategy.

Why?  Stay tuned.

To be continued…

On Rejection (Part 2: Valley of the Voodoo Dolls)

Previously in this series: On Rejection (Part 1: The Descent)

Research is fundamentally a human endeavor: it is about the communication of ideas between human beings, engaged in an eternal quest to understand the world around us and make it better. There is therefore nothing worse than putting all your heart and soul into writing a paper in an effort to communicate your latest and greatest ideas to others, only to receive the following message in your inbox:

On behalf of the TopCon 2015 program committee, I regret to inform you that your paper #123 has not been selected to appear in the conference.  Your reviews are included below.  We hope you find them useful in revising your paper for publication in another forum.

Yes, this message is signed by another human being.  Yes, that human being appears to be generous, offering you feedback in a sincere effort to help you improve your work.  But no, in fact, this is not a message from another human being, and there is nothing sincere about it.  It is a form letter, filled out by a machine, a machine that says coldly, “You are not worthy.”

Of course, we all know that the mechanical nature of that rejection is a façade, a charade, an ultimately futile attempt to conceal the identities of the true culprits who are responsible for the mistreatment of your work: the reviewers.  Those bastards have deliberately misunderstood your contribution for their own twisted ends, and the machine is covering for them.  But not for long!  It is time to go down to the cellar, switch on the lonely light bulb, and dust off…the voodoo dolls.

Ah, the voodoo dolls.  They have been with you, like the stuffed animals they are, from the very beginning, since your first rejection.  Some are old and tattered, with repeated wounds from your early years as a budding researcher.  Others are new and pristine — you may not even remember acquiring them.  But they are all there, one for each of your colleagues: no one is above suspicion.

You begin to read over the TopCon reviews again as if you were a Talmudic scholar scouring the Torah for obscure clues to God’s true intent.  Except here you are looking for anything — a turn of phrase, an absurd claim, an idiosyncratic style of capitalization — anything that might tip you off to who has done this terrible thing to you, so that you can mete out the Old Testament justice the evildoer so richly deserves.

Let’s see: Reviewer B’s obvious self-citation was a dead giveaway of their identity, but you have never actually met this person and don’t know much about their work, so it’s hard to get much satisfaction out of skewering them.  I mean, maybe their paper from <noname conference> <last year> was actually interesting.  Probably not, but who knows.  You certainly can’t be bothered to go read it now and find out.

As for Reviewer C: they did give you a weak accept after all, but at the same time they had the temerity to suggest that your paper might be better off as a journal article.  A journal article?!  Nobody, and I mean nobody, reads journal articles.  [Note: If you are not a computer scientist and this sounds like a bizarre comment to you, please stay tuned for a future post on conferences vs. journals in CS.]  The whole point of your submitting to TopCon was so someone would actually read your paper, and now Reviewer C is telling you that, because your paper is too deep and important to explain clearly in the space of a conference paper, you should go submit it to a venue where no one will read it?  What bullshit!  You start to rile yourself up into a lather of rage, until you realize that unfortunately the review was bland enough to have been written by absolutely anybody, so you’ll have to direct your bloodlust elsewhere.

Your glance returns to the first review.  There aren’t too many people Reviewer A could be.  First of all, this person seems to be actually familiar with the recent work in your area and has made intelligent comments in the “detailed” section of their review that suggest they know to some extent what they’re talking about.  Second, the reviewer is clearly on the PC of TopCon and is not conflicted with you.  That narrows it down to about two people, an American and a Brit.  And of those two, you know it has to be the Brit, since they complimented you on your “jolly good use of colour”.

The identity of the culprit is not surprising, but it’s rather disappointing.  This person is someone you actually respect.  They write good papers, they study important problems, they have high standards.  How could they have come to the conclusion that your work is too “complex”?  You clearly stated otherwise in the introduction to your paper!  “Our approach simplifies and unifies many existing approaches, as well as extending their expressive power.”  Did they miss that sentence?  Plus, haven’t they read their own papers?  In this field, everybody’s work is super-complicated.  It’s a hard topic, and simple solutions don’t scale.  What are they complaining about?

Anyway, it’s time for justice to be served.  You prepare the implements of torture, and after a brief moment of silence for your fallen paper, you grab hold of the effigy of Reviewer A with determination.  You are finally face-to-face with the murderer of your masterwork, and ready to exact vengeance!

Only, at the last second, there’s one thing that gives you pause: you’re a reviewer, too.  What if you rejected someone else’s paper recently and they figured out it was you?  What if they’re about to attack a voodoo doll of you in their cellar?  In fact, given what a hardass reviewer you are, and how shamelessly you cite your own papers when writing reviews, this is almost certainly the case.  Come to think of it, you have been feeling some sharp pains in your abdomen lately.

“Ah, that’s foolishness.  No one believes in this voodoo stuff anyway!” you think to yourself.  At this point, you notice that you are holding a creepy-looking doll in one hand and a power tool in the other.  You put them down, turn off the light, and go upstairs.

To be continued…

On Work-Life Balance (Part 1: Lamentations)

These poems date back to the early 2000s, in the first few years when Rose and I were living together.  I believe Rose wrote the first, I wrote the second (you can tell I was hot for Dorothy Parker at the time), and the third we wrote together.  But in honor of the sacrosanct rules of academic authorship, we’ll just maximize our publication counts and say we co-wrote all of them.

Waiting on Work:

I’ve been anxiously awaiting
a young working man named Derek.
The length of time he’s been away
is nothing if not barbaric.
But it’s slow-going, I understand,
for his research is most esoteric.

Lament IV:

My man is so brilliant,
his work so profound,
his mind so resilient,
he’s never around.

A Fine Head (I Hope):

At twenty his hair started thinning.
Before long there emerged a spot.
In the next years the baldness was winning.
Soon whatever hair had been was not.
On Rejection (Part 1: The Descent)

Two months ago you submitted a paper to TopCon, the top conference in your branch of computer science. This paper is some of the best work you’ve ever done. As soon as it was submitted, you posted it on your website, so that your peers — who undoubtedly check your website every five minutes to find out what you have been up to — can download and print out your latest masterwork and read it on the beach during their vacation.

Periodically, over the next few months, when you are in the middle of doing nothing, you visit your own website and download the masterwork. You want to get a sense of what it’s like to read it for the first time. You glance longingly at the title, you smell the abstract. That introduction which is so gentle, so economical in its summary dismissal of prior work, so clearly structured to guide the reader to the inevitable conclusion that what you are doing is important and groundbreaking — it’s guaranteed to make any reader swoon, even if you did write it at the very last minute before the submission deadline. Admittedly, the bulk of the paper is taken up with a technical section that gets a bit hairy, with a couple of pages in there that even you can’t quite follow — you let your student write them — but this is not kids’ stuff we’re talking about here, this is cutting-edge research, so hey, at some point you’ve got to lose the reader. And anyway, that related work section at the end of the paper, also written at the last minute and unintentionally omitting a few key citations, brings everything into perfect perspective. No doubt.

At some point, you realize you have been downloading your own paper and admiring it a bit too often. Even by your own obsessive standards, you are acting a bit obsessed. You start to wonder if this is maybe a subtle sign that you are secretly worried about one niggling detail: the paper has not yet been accepted.

No, no, you’re not really worried about this — your paper is a clear accept — but there’s that fly in the ointment: the paper must be reviewed by a jury of your peers. And have you checked out your peers lately? They write terrible papers. When you review their papers, you think that the vast majority of their authors don’t seem to know what they’re talking about, or maybe they know what they’re talking about, but you certainly don’t know what they’re talking about. You look at the program committee of TopCon this year, and your worst fears are confirmed. The only person on the PC who properly understands your work is a frequent collaborator of yours — conflicted out of reviewing your paper — and the rest have probably not read your last five papers, a detailed understanding of which is necessary in order to truly appreciate the brilliance of your latest masterwork.

But wait a sec! Aren’t these the same peers you were hoping would download and read your paper on their vacation? You respect these people, in theory at least. Calm down. Your introduction is so clear, your abstract so fragrant, your related work so related, and of course your technical section so damn impressive…that there is nothing to worry about. The brilliance of the paper will shine through, even to the most Neanderthal reviewer.

Then you receive the reviews.

Reviewer A:

Wow, this is a BIG paper. It is tackling a hard problem, and throwing every trick in the book at it. The pieces all seem to fit together, but it is not clear exactly how or what the reader is supposed to get out of it, or how this work is to be differentiated from the many other recent papers in this line of work. Offhand, the approach taken here seems awfully complex, suggesting that the authors have not yet hit upon the right solution to the problem.

Reviewer B:

The paper presents an interesting thesis.  However, it seems to me the problem it is attacking is already largely solved by <insert reviewer’s name here> in their <noname conference> <last year> paper.

Reviewer C:

I think this is a solid paper, but I am not very knowledgeable in the topic.  The results seem good but I cannot verify correctness. The appendix submitted with the paper is very large, which suggests that maybe the paper would be better suited for journal publication.

Reviewer C was your only (weak) “accept”, and they gave themselves low confidence. Your masterwork has been rejected from TopCon — there will be blood.

Next in this series: On Rejection (Part 2: Valley of the Voodoo Dolls)