You can’t wake up someone who’s just pretending to sleep

Switzerland is not known for ostentatious electoral campaigns and, bar a few exceptions, doesn’t practice personality politics. But every few years, as federal elections approach, long-standing social debates become more heated and more toxic. That’s how you know that it’s campaign season.

With parliamentary elections scheduled for October this year, politicians currently seem quite eager to publicly position themselves for or against particularly polarizing issues. One controversial issue – and frankly I cannot believe that I am writing these lines in 2023 – is the labor market participation of women or, more specifically, mothers. Because the thing is, mothers in Switzerland work much, much less than fathers. Sorry, let me rephrase that: mothers in Switzerland do, on average, work more hours per day than their male partners; however, they are paid for a much smaller fraction of these hours.

So far, Switzerland has shown a frightening lack of ambition when it comes to the labor market participation of mothers, viewing part-time work as the ultimate solution to reconcile work and family – as illustrated by this overly celebratory press release:

“In 2021, 82% of mothers in Switzerland were economically active. This high level of labour market participation goes hand in hand with a large proportion of part-time work. After the birth of their first child, one in nine economically active mothers leaves the labour market and the proportion of part-time work doubles. In Switzerland, the proportion of mothers participating in the labour market is higher than the European average.”

And when the problem is pointed out, the answer is invariably “But this is what women in Switzerland want, they freely choose to work part-time in order to spend more time with their kids!”. So once the election cycle is over, the issue is shelved and almost everyone goes back to sleep. Except for mothers.  


So what is actually going on? What do the data say?

Turns out the excerpt above lacks some of the details…

80% of working mothers in Switzerland work part-time, but only 40% of women without children do. Add to this the fact that around 20% of mothers do not work at all, and it follows that only around 16% of mothers in Switzerland work full-time or nearly full-time (90 to 100%), vs. around 60% of women without children (sixteen vs. sixty, you read that right).

These 16% include single mothers, who tend to work more on average than mothers who live with a partner or spouse, simply because working part-time is a privilege you have to be able to afford.

However, if we zoom in on what the statistics office very elegantly refers to as “couple households”, the picture becomes even clearer: in around 54% of couples without children, both partners work full time; this proportion decreases to just 14.4% in couples with children whose youngest child is under 4, and then declines even further to 13% in couples whose youngest child is between 4 and 12, before increasing slightly to 19.3% in couples whose youngest child is between 13 and 24.

These numbers show three important things:

1. Women who leave or partially leave the labor market once they become mothers do not come back, even when their children are way, way, way old enough to take care of themselves – this is because taking “a few years off” and then “picking up where you left off” is much easier said than done. Women’s skills might no longer be up to date, they might face discrimination when applying for positions that are too junior for their age, they might have become estranged or excluded from important professional networks that they had built prior to having children, or they might simply have lost any self-confidence to even go out and apply for a job.

2. It is (a bit) easier to reconcile work and family when a child is under 4, than after they start school. This is simply due to the fact that child care provision actually becomes worse once a child starts school. For one, daycares are open most of the year except for 4-5 weeks in most places, while schools close around 14 weeks a year and often don’t offer any childcare during the school holidays (you may be shocked to find out that workers in Switzerland do not enjoy 14 weeks of holidays a year). This means that parents sometimes take their vacation weeks separately to spread them out over 8 to 10 weeks instead of 4 or 5. In addition, daycares accept children full-time (i.e. morning to evening every day of the week), while kids aged 4 and 5 in Switzerland generally only have about 10h of actual school per week. The specific schedule changes from class to class, which means that if you have 2 kids in school in different years, they will have different mornings and afternoons off. Thus, once they age out of daycare, kids will either have to be registered for after-school care, for which places are limited, or someone will have to constantly pick them up and take care of them at basically random hours in the middle of the day (I was thinking of something witty to put in these brackets but I have nothing, absolutely nothing).

3. Having a child does not seem to affect the career of the majority of fathers in the slightest. And I think given point 2 above, this in itself is quite telling.

Situations where the male partner is working part-time or not working and the female partner is working full-time are basically non-existent in Switzerland: less than 3% of couples with kids fall into this category. Stay-at-home mothers still make up 13 to 16% of all mothers when the youngest child is under 12, i.e. they outnumber stay-at-home fathers 4 or 5 to 1 (which explains why I haven’t had the pleasure of meeting a SAHF in Switzerland so far).

And lastly, the statistics reveal that the most common model by far in Switzerland remains the “she works part-time, he works full-time” model. While this situation only applies to 21% of couples without children, the proportion increases sharply after the birth of a first child: it applies to 47% of couples whose youngest child is under 4, 53% of couples whose youngest child is between 4 and 12 and still 53% of couples whose youngest child is between 12 and 24 (because the women don’t go back to work).

Reasons and consequences

To be crystal clear: working less to spend time with your children is an absolutely legitimate ideal, even though it is currently unfortunately a privilege that not everyone can afford.

I don’t believe that everyone should try to maximize the amount of time they spend working, on the contrary, I believe that it is high time for a 4-day work week, that we need to explore UBI and that the fact that full-time in Switzerland means “42 hours a week” puts us out of sync with most of Western Europe for no good reason other than “that is how it has been forever”.

Indeed, women working around 80% in Switzerland (i.e. around 34h!) are closer to a French “35 heures” than to a Swiss 50% – calling it part-time is basically a misnomer. Part-time work has many faces, and in terms of how couples manage the logistics of child care, there is ultimately not much difference between a couple where both work 100% and a couple where he works 100% and she works 80%.

What is shocking, however, is how gendered the decision to partially or fully leave the labour market after having children still is in Switzerland today: while 80% of mothers work part-time, only around 10% of fathers do, and for good measure, the statistics office lumps these fathers in with the “economically inactive” so there is no way of knowing how many fathers actually make a conscious decision to reduce their workload after having a child, and how many are simply unemployed or not able to work.

Furthermore, if unsolicited anecdotal evidence that I have involuntarily been collecting ever since becoming a mother is anything to go by, then mothers who work full time or nearly full time are regularly and repeatedly encouraged by people in their environment to reduce the amount of work they do whenever their child shows any sign of “acting up” or “being sad”.

People just tell you to consider reducing your work percentage.
Just to be sure.
Just out of concern for your well-being and that of your child.
Because what if something really bad happens to your child because you worked too much?
What then?

Oh, and in case it wasn’t clear: this kind of “advice” is generally not given to fathers.

Working part-time also has very real negative consequences for women’s career progression, their salaries and their pensions. I’m too tired to add sources, just google it or ask ChatGPT to write you an essay about pensions written from the perspective of a part-time working mother in Switzerland.

Men, however, often get a boost in their career once they become fathers and their career progression not only doesn’t slow down but often accelerates after having kids. Even though both these tendencies are well known, the Swiss Federal Court recently decided to “modernize” its jurisprudence, arguing that – in the name of equality, of all things – women could just go back to work after their divorce, thus freeing the husband of any obligation to pay them part of his salary or grant them part of his pension.

In other words, in 2021, the federal court decided to abandon its existing practice in order to become more progressive, while fully ignoring the fact that society does not seem to live up to this imaginary standard and that decades-long holes in one’s pension contribution cannot easily be plugged “after the fact”.

Clearly, creating a precedent that will put the fear of old-age poverty into the minds of women who stay at home or work small percentages is the best approach to usher in this era of progress – I cannot think of anything else that would be more effective.


So frankly, when I read that Swiss public television invites three men and only one woman to a discussion about better parental leave for both parents, or that as recently as 2019 we still needed a scientific study to finally confirm that children who attend daycare do not have attachment problems compared to those who stay at home with mommy, or that people are surprised that simply reducing the cost of daycare would not magically lead to higher labour market participation for mothers (and seem to conclude that this means that “cost should stay as it is” and not that a single drop cannot turn the tides), it is hard not to feel discouraged.

Yes, women “freely choose” to work part-time, but they do so because society inexorably pushes them towards this decision: our tax system is optimized for one spouse working part-time, public childcare institutions in Switzerland lag far behind the demand, private ones are so expensive that working becomes “not worth it” for whoever has the lower salary in the couple (hint: the woman), there is basically no parental leave for fathers (yes, I know, we recently adopted 2 weeks of parental leave… amazing…), and society constantly sends the message to women that being a good mother means working part-time.

So, to whoever needs to hear this right now: whatever “category” you fall into, I know that it comes with sacrifices and that you are trying your best. You are a rock star. And if you think that maybe you could work towards turning the tide a little bit for all mothers in Switzerland, then more power to you.

And if you are planning to prevent the tide from turning, better grab a life vest.

You know, just to be sure.
I say this purely out of concern for your well-being.
Because what if something really bad happens to you when the tide finally turns?
What then?


The truth is that this was a bit of a rant. And that I wanted it to be more structured and better sourced. With pictures and memes and graphs.

But I’m also tired because I am part of that 16% which I didn’t know existed. And I have decided to cut myself some slack because we all know that adding a shiny graph isn’t going to wake up those who have been faking sleep for years just to avoid having to act.

Professionalizing Humanitarian Interpreters?

When I began training interpreters for the ICRC in 2010, I believed that the professionalization of humanitarian interpreting was merely a matter of training and resources. Twelve years later, my thinking on the issue has evolved quite a bit and I am no longer sure that “professionalization and training” is the right approach to humanitarian interpreting.

Why that is the case is explained in the video lecture below:

#MemorableMultilinguals: Africans*

I cannot count or recount the number of times that a European who is more or less closely involved with languages (translators, interpreters, sociolinguists, school teachers, …) and who has had an opportunity to visit “Africa” or interact with “Africans” (more on the scare quotes later), has told me in amazement that “Africans are naturally multilingual”.

I am deeply skeptical about any utterances that contain the word “naturally”, or “Africa/n” or “multilingual”, so imagine what a bummer it is for me to be confronted with these three words in one sentence, along with zero other redeeming content.

I suggest that we take it step by step and analyse this statement for what it is: a cliché which, like all clichés, also contains a kernel of truth. But that kernel is not necessarily where you think it is.

While the term “African” is sometimes used in a relevant way, it is most often a catch-all for a whole continent that is more diverse than this simplification suggests. So the first obvious problem with the above-mentioned statement is that it is unclear who these “Africans” are. Based on experience and precedent, I think it is quite safe to say that the people who start their sentence this way are not reminiscing about their last long week-end in Casablanca or their visit to the Pyramids in Giza. They are talking about “Sub-Saharan Africa”, i.e. “where black people come from”. This use of the term is of course widespread, including in African Studies, where people generally focus on only that part of the continent (because hey, we are not doing Islamic or Middle Eastern Studies, which is where North Africa fits in…). International organizations speak of “Africa” and the “MENA” (Middle East and North Africa) region as two different entities as well, so including only Sub-Saharan Africa is not a problem per se. However, conflating “Africans” with “black people” is much more problematic: not all Africans are black and not all black people are African. The myth of multilingualism is, however, often applied to black people and their descendants, and often used as a gate-keeping mechanism.

We all like to think of ourselves as “naturals” in one field or another. That is because we like to flatter ourselves and also (mainly!) because we lie to ourselves a lot. Most things that come “naturally” to us are the products of our socialization in a specific context, the result of a kind of learning that happens simply by virtue of existing in a given environment and often goes unnoticed by the learner herself. We internalize ideas about the world and our place in it and come to think of these as immovable features of the universe.

One of these ideas that each and every European in my generation (yes, myself included, absolutely!) has been exposed to simply by growing up in Europe and has internalized, whether or not they are able to be honest about it to themselves, is the inherent superiority of Europe, European culture and European civilization over all things “African”. And when a speaker who comes from that socialization tells me that Africans are “naturally” this, that or the other, then that word has a specific connotation that is problematic. Because on the one hand, “natural” means through no effort or higher processes of learning, through no structured quest or ambition, through nothing else than undeserved endowment from God or whatever else one worships. And on the other hand, “natural” also means that this is the way things are and that there is little one can do to change them, even if one wished to do so.

All in all, this is a lose-lose situation for the “naturally multilingual African” – not only is her multilingualism not recognized as the intellectual accomplishment that it is, it is also something that is taken as a default feature of Africanness to the point that the absence of this feature is akin to a birth defect. Europeans, on the other hand, are expected to be monolingual by default (a lie, as we will see below) and any sign of multilingualism is thus “naturally” (see what I did there?) worthy of praise and recognition.

But what exactly does “multilingualism” mean in this context? What Europeans mean to say when they speak of “Africans” as “naturally multilingual” is that they understand that the language they used in order to communicate with the Africans they met is unlikely to be these individuals’ mother tongue. Thus, these people must speak another language. And because it is Africa we are talking about, that other language must be very, very, very different and very, very, very exotic, and very, very, very hard to learn. It can therefore only be spoken by those for whom it is “natural”.

This thinking frees the European from any pressure to engage with the local language and spares her from making even the slightest effort to learn it – and we know that there is hardly a European who comes back from a longer stay in Latin America without proudly showing off their Spanish, however rudimentary it might be. Another thing that is implicit here is that there is a hierarchy between languages. I do not think that there is an inherent qualitative difference between languages or that there are languages that are inherently more or less suitable to encapsulate the modern human experience. However, it is a fact that the opportunities that come with a language differ hugely from one language to another. English opens doors that simply cannot be opened with Gikuyu, Zulu or even Finnish, no matter how much one would like the opposite to be true. That is the reality of things.

The myth of African multilingualism, however, obscures the fact that there are still millions of Africans who are, in fact, not multilingual in the common sense of the term: they speak only their mother tongue and barely a few words of the official language in their country. The politics, the education system and in many cases even courts and hospitals of their country remain out of reach for these individuals. The fact that the Africans Europeans interact with are often multilingual (because they have to speak the European language, duh) does not make this a universal “truth” about Africa.

Unless it does. I mentioned above that the term “multilingual” makes me queasy and that is because it implies that there is such a thing as a “monolingual” individual. I have never met one. Yes, there are people who master the elements of only one of the systems that we call “language” but even those individuals will speak very differently in different contexts, and leverage communicative resources that bear surprisingly little resemblance to each other. Is that not a form of multilingualism? Indeed, the statement about the multilingualism of Africans reveals the very problematic way in which many Europeans still look at language: as something with patterns and rules that must be learned, as different systems that co-exist with each other in a hierarchy and that are best kept apart and pure. And yet, the fact that we notice the most recent “anglicisms” when they crop up in German or French but consider yesterday’s Gallicisms in English as a normal part of the English language shows that purity is simply a matter of time. The time when languages used to be pure is roughly around the same time when America used to be great – and that time is not anywhere BC or AD but measured on a different scale: BS. So we can say that all Africans are multilingual, but only if we recognize that all human beings are actually multilingual and stop exoticizing and othering anything “African”.

And yet, the true reason Europeans cannot but notice the multilingualism of many Sub-Saharan African cities and towns is that people constantly switch and even mix (gasp!) languages and that this mixing is not generally frowned upon. So people are multilingual within one and the same sentence – and once again, like all things “African”, that surely cannot be the right way to be multilingual. It probably is the natural way (this is true), but then again, culture is specifically there to preserve us from nature.

It probably does not help that a surprising number of Europeans who travel to Africa are primary and secondary school teachers using their vacation time, which is much longer than for any other profession and thus allows for more extensive traveling, to do some volunteer teaching down South. After weeks of fighting an uphill battle against groups of rowdy school children who are unwilling to do anything other than repeat full sentences uttered by the teacher in English or French, and who invariably switch back to another language during breaks, the only thoroughly positive and uplifting thing these teachers find to say when they come back is: “Africans are naturally multilingual.”

Bless their hearts, they mean well, I know they do.

*The attentive reader may now complain and say that the title of this post is deeply misleading. I have not told you much about the multilingual Africans I was advertising, just about the monolingual Europeans that describe them. Point taken. But would you have read a post about European multilinguals? Were you not “naturally” curious to learn more about the exotic African multilingualism?

#MemorableMultilinguals: the bilingual on the podium

The name does not matter because, if you are an interpreter or regularly participate in multilingual meetings, you have probably met one version or another of this person in the course of your career. When I think about them, I think about a guy because the majority of people on podiums still tend to be men and maybe also because men are often more eager to venture outside their area of competence. So for the sake of readability, let’s call this multilingual individual Peter but don’t get attached to the name because what matters is ultimately not him but the context that allows for someone like Peter to emerge.

My last encounter with Peter occurred during a bilingual meeting, where I was tasked with interpreting between German and French. As tends to be the case in Switzerland, the overwhelming majority of attendees were German speakers, and French speakers a tiny minority. The language distribution on the podium was even more skewed, since the first language of basically everyone up there was German. The interactivity of the meeting was low, i.e. most participants were not planning or expecting to intervene and had made the journey merely to receive information from their board and vote on different issues by show of hands. From an interpreting perspective, the linguistic setup was thus extremely imbalanced: more than 90% of utterances would have to be translated from German into French, and it was unclear what the distribution for the remaining 10% would be. Peter and his colleagues were sitting on the podium, ready to present an annual report about their different areas of expertise. Our French-speaking clients were sitting in their seats, clutching their headphones in the understanding that they would have to follow the entire meeting through their interpreters. This is where things get interesting.

The minoritized French speakers were very much aware that this was a multilingual meeting with interpretation. That awareness comes with being a minority and losing your communicative independence. The majority German speakers were, however, getting ready to attend a monolingual meeting. Barely any of them carried headphones to their seats, they took part in the meeting with the certainty of those who know that they will understand everything because that is just how the world works. That certainty, however, was shattered when a French speaker unexpectedly decided to take the floor and ask a question. This question was, of course, interpreted simultaneously into German, since that is the job we were recruited to do that day. However, we quickly realized that we were interpreting into the void: none of the German speakers actually wore headphones, and they just exchanged blank stares in horror, realizing all of a sudden that this was actually not a monolingual meeting at all.

Fortunately, Peter came to the rescue, taking the floor from the podium to hastily improvise a summary of the French speaker’s question in German. From an interpreting standpoint, the summary was neither complete nor particularly accurate. The main point the speaker had been trying to make fell flat. But balance had been restored, the German speakers had once again regained control over the situation. Not a single German speaking delegate got up to pick up headphones at the entrance of the room after this incident. They simply had not understood that the interpreters had also translated that part of the meeting, since the whole point of the interpreting provision was to cater for the (special?) needs of the minority.

To take on this task, Peter had to have an understanding of both the minority and the majority language, although he did not necessarily have to be fluent in both. Peters exist everywhere. Peters are a product of power asymmetries between groups of speakers. They exist because implicitly or explicitly, many speakers of the dominant language, whether English in international conferences or German in Switzerland, see interpretation as necessary to get their own message across, but not to hear the messages of the minority. They are surprised when put in a situation where they do not understand another speaker, used to being understood and heard wherever they go.

Peter’s presence points us to the limits of interpretation, and reminds me of what Bourdieu wrote nearly 30 years ago about “legitimate” linguistic competence: being able to make oneself understood is not the same as being able to make oneself heard. A message presented in the “wrong” language might be understood, yet not treated with the same care and not met with the same respect as a message presented in the “right” language. Bourdieu’s argument relates to speakers of the same language whose speech patterns (vocabulary, accent, prosody) do not have the same level of legitimacy, however, his thinking can be applied to multilingual settings as well. By jumping in to provide a consecutive summary, the resident bilingual ensures that a message can potentially be understood (or at least noticed) but this approach also signals to the speaker that their intervention is disruptive and amidst the commotion thus created, very unlikely to be heard.

While solving a communication problem in the short run, these bilinguals ultimately allow for a much bigger communication gap to continue unchallenged and for the majority language speakers to participate in what for them is essentially a monolingual meeting.

Professional interpreters might convince themselves that they see their role as making sure that a message uttered in one language is “understood” in the other language because that is what the principle of impartiality seems to dictate. However, I suspect that just like me, many colleagues have felt frustration or even mild anger when a delegate speaking “their” language makes a highly relevant point that is completely ignored by the other people in the room. So I guess that what we really want is for these messages to be heard, and when this is not the case, we feel poorly about our own performance and the relevance of our contribution.

It’s not Peter’s fault, really. He means well.

But we can probably do a better job of making clients aware of the consequences of his approach, so that next time, he can use his platform to gently remind everyone in the room to just wear their bloody headphones and select the correct channel in advance. So that for once, the burden is on the speakers of the dominant language.


Bourdieu, Pierre. 1991. Language and Symbolic Power. Cambridge, UK: Polity Press.

Why “Publish or Perish” is bad advice

Publish or perish sounds snappy and rings true, which is why we really need to ask ourselves whether it actually is. It is a phrase used by journalists and commentators to describe the current state of academia, and also passed on as advice from senior academics to their younger colleagues, and from junior academics to their peers. The argument that I will develop in this post is that the phrase fails on both counts: publish or perish is not in any way an accurate description of academia, nor is it sound advice for academics.

In fact, publish or perish is a meme that keeps many researchers stuck in what is inherently an abusive relationship with a system that gives them an illusion of agency that is just good enough to make them hang on.


Let me get the obvious one out of the way first: publish or perish has nothing to say about publication quality and instead seems to emphasize quantity. After all, a high-quality publication takes time, sometimes years, and you are supposed to be publishing all the time. In many institutions there are formal or less formal publication targets and full time academic staff are expected to produce around 2 to 3 articles a year. Ultimately, this does not sound like much, but it generally boils down to writing one high-level and several lower-level papers, or artificially splitting data sets from a single project into several subsets that can be published in separate papers. In many fields this has also led to a proliferation of second- and third-tier journals and an abundance of frankly rather mediocre articles. It also rewards academics for publishing basically anything, and a publication strategy that is based on writing few but very good publications almost looks like an act of resistance.

The way in which academic CVs are usually evaluated frankly does not help. Any prospective employer or funding body will argue that they will above all look at the quality of publications, not their quantity. But let’s be honest, they will not actually read your papers to see whether they are of good quality, they will use the impact factor of the journals you published in as a proxy for quality and that is deeply problematic. Not because the first-tier journals don’t publish quality – most of the time they do – but given the abundance of papers they receive (some journals reject over 95% of submissions), some excellent papers necessarily end up in the rejection pile, simply because they don’t fit with the stated aims of the journal or the preferences and interests of its editors. In addition, there are ways to get into high-level journals that might otherwise reject your paper, for example by applying for a Special Issue that is guest edited and comes with a pre-selected set of papers on a given theme. These papers will be published in the same journal, many authors might not mention “Special Issue” on their CV, and unless someone really takes the time to dig deeper, the impact factor of the journal is now on that author’s CV.

In addition, there are countless other variables that publish or perish fails to account for: different disciplines have different sizes, impact factors vary widely from field to field, editors and reviewers are only human and their decisions not always entirely fair or objective, and let me not get started on the politics of co-authorship and the order of authors on a paper and what that paper will then be “worth” on each of their CVs.

This is not to say that publishing is not good advice. I am infinitely grateful to those who encouraged me to just start publishing, even when I did not feel I had a legitimate voice within the discipline. Waiting until you feel that you have something important to say is not good advice – no discipline will accept a fundamental theoretical insight from someone who is completely unknown among her peers because she has never published a line of text before.

The problem with “publish or perish” is that it reduces things to a false binary and glosses over the complexities that inhabit each of these three words. Speaking of binary systems…


Computers rely on different types of basic logic gates, each of which computes an output from two inputs, A and B. We can view these inputs as conditions that can each either be met (1 – true) or not met (0 – false), and that can be combined in four possible ways. These combinations are generally laid out as follows:

A      B
true   true
true   false
false  true
false  false

Each gate “opens”, i.e. outputs a 1 instead of a 0, when a specific relationship holds between A and B.

  1. AND gates: A and B have to be simultaneously true
  2. NAND gates: either one or both of the inputs have to be false for the gate to open – the NAND gate is the opposite of the AND gate
  3. OR gates: either one or both of the inputs are true, i.e. condition A or B is met, or conditions A and B are both met
  4. XOR gates: this gate is an exclusive “or”, i.e. it opens only when exactly one of A or B is true, not when both are true or both are false
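For readers who prefer running code to truth tables, the four gates above can be sketched in a few lines of Python (the function names and the printout are my own illustration, not anything standard):

```python
# Illustrative sketch of the four logic gates described above.
def AND(a, b):
    return a and b

def NAND(a, b):          # the opposite of AND
    return not (a and b)

def OR(a, b):
    return a or b

def XOR(a, b):           # true when exactly one input is true
    return a != b

# Print the full truth table for all four gates.
print("A      B      AND    NAND   OR     XOR")
for a in (True, False):
    for b in (True, False):
        print(f"{a!s:<6} {b!s:<6} {AND(a, b)!s:<6} {NAND(a, b)!s:<6} "
              f"{OR(a, b)!s:<6} {XOR(a, b)!s:<6}")
```

Running it prints one row per input combination, which makes it easy to see that NAND mirrors AND and that XOR differs from OR only in the first row.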

So what about publish or perish? The linking word parades as an “OR” but is actually an “XOR” gate, creating a binary opposition between two conditions that cannot simultaneously be true: you publish (A) or you perish (B). Several implications can be derived from this initial statement, and all of them are, to put it mildly, pretty much total bullshit:

If you don’t publish, you will perish.
If you publish, you will not perish.
If you did not perish, it is because you published.
If you perish, it is because you did not publish.

These statements arguably all sound much less snappy than “publish or perish”, which is exactly why it became a meme that is passed from person to person and effectively circumvents our critical reasoning. It sounds right and that’s about it. But the “if statements” above show publish or perish for what it is: a shortcut that establishes a direct correlation where none exists.

This is not to say that publishing is not a necessary requirement to attain legitimacy in the academic field. It very much is. There are people who have achieved tenure despite a poor publication record, but they are the exception rather than the rule, often owing their early tenure to largely arbitrary lucky circumstances: good timing, a good network within their institution or discipline, or the retirement of a professor in their field shortly after they obtained their PhD. These are not things one should ever count on or plan for, so publishing is still better than not publishing. For each of these success stories of early tenure, I have heard at least two stories of such early tenure being expected and then prevented by the arrival of a better-qualified candidate. So while entitlement is never good advice, it is healthy to keep in mind that luck and randomness also play a role in all of this, especially when professorships are awarded “for life” and retirements end up skipping several generations of academics altogether (who were academically too young when a post became vacant and will be biologically too old when it becomes vacant again). This means that many people will simply never be eligible to apply for certain posts because of bad timing.

But even if publishing is necessary, it is not a sufficient requirement to get promoted, tenured or even just extended – and this is one aspect we tend to regularly forget or conveniently deny. Many sharp minds have left academia despite a solid publication record, simply because the number of academics far outstrips the number of available posts, scholarships and stipends. That is the reality of things. The statistics are murky and hard to come by, but it is safe to say that only a minority of those currently trying to obtain their PhD will remain in academia upon graduating, and only a minority of those currently employed as post-doctoral researchers will get long-term contracts or tenure. In the US, there are now as many PhDs working in the private sector as in academia, and since that figure covers all generations currently in employment, the proportion for younger generations is likely much higher.

Many of those have left academia by choice, in pursuit of higher salaries, better working conditions and more stability. Others have left academia with a heavy heart, simply because they have reached the conclusion that the field has no place for them. Some of them probably did not have a good publication record. But I would bet that, on average, they probably had about as many publications as their peers when they left academia. They perished even though they published.

There really is no logic gate linking publishing to perishing. You can publish and perish, not publish and not perish, publish and not perish, and not publish and perish. Not perishing in academia is as much about competence as it is about luck, networking and randomness. Ask those who spent the year 2019 putting together funding applications about coronaviruses or pandemics, expecting rejection after rejection because their research was not considered topical enough…
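Those four combinations can even be checked mechanically. A toy sketch, entirely my own illustration: if the slogan really encoded an XOR gate, exactly one of “publish” and “perish” would have to be true for every academic – but all four combinations occur in real life:

```python
def XOR(a, b):
    # exclusive or: true when exactly one input is true
    return a != b

# (published, perished) – all four combinations exist in real academic life
observed = [
    (True,  True),   # published, perished anyway
    (True,  False),  # published, stayed
    (False, True),   # did not publish, perished
    (False, False),  # did not publish, did not perish
]

# The XOR framing only survives if every observed pair satisfies it.
slogan_holds = all(XOR(published, perished) for published, perished in observed)
print(slogan_holds)  # → False: no logic gate links publishing to perishing
```

The first pair alone – publishing and perishing anyway – is enough to make the check fail.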

It is not flattering to think of our professional successes as owed in large part to randomness, and so we don’t. We tell ourselves tales of competence and merit. But the truth is, for every person who holds a PhD, there exist thousands of other people with equally brilliant (or more brilliant) minds who never got the chance to engage in higher education. Our social positioning is the result of a complex web of factors, and we only have a limited amount of control over a limited number of them.

It is easy to think of a system that puts you on top as a meritocracy. That does not make it true.


While some ‘doctors’ are working in the private sector, others have decided to continue to hang on to highly precarious academic ‘posts’ that are often nothing more than exploitative makeshift arrangements where you are paid to teach and ‘allowed’ to use the institution’s name as an affiliation for the research you publish in your own time and without pay. They are still in the running to get tenure. They have not “perished”.

The third word in our little meme is by far the most toxic. It encourages academics to remain in a system that can be exploitative and abusive by depicting the alternative as inherently worse. It comes from the same brand of reasoning that encourages women to stay in abusive relationships, and justifies gender-based violence as inevitable. Yes, really. Hyperbolic, much? Probably a bit. On a meta-level. Just to drive home the point that leaving academia and dying are two very different things. Leaving academia is an individual decision, or sometimes simply the result of circumstances beyond our control. It is not a form of “giving up”. You are not leaving academia because of an inability to fight hard enough to stay, you are leaving because you decided that you now want to fight a different battle altogether. And that is fine.

Many professional fields that apply rigorous entrance requirements – both academia and conference interpreting come to mind here – end up exerting a cult-like pull on their members. Leaving the field is viewed by its members almost as an act of treason. The parallels between conference interpreting and academia are quite staggering here. In both cases, people who leave the field are seen as intellectually lazy or not hard-working enough, in line with the myth of meritocracy that members tell themselves to allow the field to self-perpetuate with all its inequalities. And in both cases, the people who occupy entry-level positions in the field (recent graduates in interpreting, doctoral and post-doctoral researchers in academia) are the ones most actively questioning the rules of the game, making everyone else extremely uncomfortable in the process. Especially, I shall add, those who still want to believe in the ideal of a meritocracy and have been shielded from the limitations of their own agency-centric world view by a hefty dose of privilege.

The myth of meritocracy, together with the mismatch between hopeful candidates and available positions, does after all give the field an aura of exclusivity, desirability and importance, which further enhances the symbolic capital of those occupying positions of power within it. Everyone else is expendable. Your struggle is not a bug, it is a feature.

Bottom line

If like me you enjoy research and writing: please continue publishing. Develop a publication strategy that suits your personality and your situation. But publish what you find relevant. Persevere to get your message out there, to be part of a discussion that you really care about. Enjoy the ride for its own sake.

However, do not publish merely to get promoted or tenured, to “not perish”. Because when that is the primary aim guiding your publication game, the time invested will not be time enjoyed but time stolen from yourself, your family and your friends. That time is not coming back.

The correlation between publishing and not perishing is spurious (and the internet offers a very entertaining rabbit hole of spurious correlations to go down on a rainy day), so the return on investment might be disappointing. The only reward you might get for a publication is the process itself and how it has contributed to your intellectual growth. It sounds cheesy and not at all snappy, but it is true, and in itself an enormous privilege in today’s troubled times. Don’t mess it up by writing about stuff that you only marginally care about, just because you think it will get you somewhere professionally. Or do – I am not judging you, really. I am just trying to be mindful of what I spend my own time on.

Then again, don’t take advice from someone who has just spent a lot of time writing a blog post that has zero value on her academic CV.