Rule 5: Get on board with tech

(This post is part of a bigger list of rules that I have found helpful for thinking about a career, and beyond. See this post for an explainer).

This is one of the simpler rules, but I still find it surprising that even young people don't seem to grasp the extent to which technology is central to every job of the future (and increasingly of the present). Not being able to read and write code, and to understand at a fairly good level how the web and computers work, will increasingly be the same as not being able to read and write.

Part of the reason, I suppose, has to do with the fact that it's currently very popular to take the contrarian view - you can find op-ed pieces saying "don't learn to code". The best advice I can give is to completely ignore these pieces. If you bother looking up some of these articles, you will almost invariably find that they are written by people who have made a great career based on their very ability to code. It's really simple: those who understand and shape technology will lead, the rest will follow.

Of course, not everyone who can program will be a programmer, just like not everyone who can write will become a writer.

A slight extension of this rule is to fully embrace technology. I am not saying that all technology is always good, nor would I say generally that the more technology, the better. We can argue about this forever, but there is a clear historical pattern you must be aware of: there has always been more technology at time t+1 than at time t. Fully embracing technology is the only way to be able to deal with it. Even if you come to the conclusion that a given technology is bad (for whatever reason), you will be much better equipped to criticize it if you fully understand it.

So, get on board with tech. It's not optional anymore.

Rule 4: Surround yourself with people who are better than you

(This post is part of a bigger list of rules that I have found helpful for thinking about a career, and beyond. See this post for an explainer).

There's a saying that I love: if you're the smartest person in the room, you're in the wrong room. 

As much as you will grow professionally on your own, I strongly believe that a large part - perhaps even the largest part - of your growth will come from the people you are surrounded by. 

One way to look at this is the following: imagine that you will become the average of the five people you spend the most time with. I don't think this is too far from the truth. As a consequence, if you are surrounded by people who are in some ways better than you, you will be able to learn a lot from them, and grow. The opposite is also true - hence the saying that if you are the smartest person in the room, you should really find another room.

This doesn't feel natural to most of us. It certainly doesn't feel natural to me. For most of us, being the smartest person in the room is comforting: it gives us a feeling of being in control of the situation, a feeling that there is nothing to worry about. But in reality, you should be worried, because it means you are not growing as much as you could.

On the flip side, being the least smart person in the room can be quite painful (note that I use "smart" somewhat liberally here, not necessarily to mean intelligent, but simply to mean very good at something). In my experience, the ability to withstand this pain is an extreme booster for anything you do. Whether it's personal development, scientific research, sports, or the arts: if you surround yourself with people who are better than you, you will grow.

When I was younger, I had a phase where I was ambitious enough to want to become a decent squash player. At some point, one of my work colleagues at the time invited me to play squash with him. Never in my life had I been so humiliated in sports - I did not stand a chance against this guy. Nevertheless, it became obvious very quickly that I had never learned faster, or more, than by playing against him. By playing against someone who was better than me, again and again, my own game improved dramatically. And ironically, my aspiration of becoming a decent squash player eventually came true (that was a long time ago ;-).

Another mantra that is relevant here, and that I am sure you have heard many times before, is to get out of your comfort zone. The idea here is exactly the same: by challenging yourself - truly challenging yourself so that it feels uncomfortable - you will build the resilience and strength that is important for growth. 

So don't be afraid to feel stupid. Feeling stupid is a sure sign that you are exposing yourself to things you don't know. Feeling stupid is an opportunity to learn. A great read in this regard is the timeless essay "The importance of stupidity in scientific research".

Rule 3: Enthusiasm makes up for a lot

(This post is part of a bigger list of rules that I have found helpful for thinking about a career, and beyond. See this post for an explainer).

As mentioned in the first rule - do novel, interesting things - eighty percent of success is showing up, according to Woody Allen. Another famous quote, attributed to Edison, is "genius is 1% inspiration, and 99% perspiration". Both of these quotes ring very true to me. And what you need in order to keep showing up, and to keep perspiring, is enthusiasm and drive.

Enthusiasm makes up for a lot. Not for everything, but for a lot. The best people I've worked with were deeply enthusiastic about the things they were working on. The vast majority of us are not born geniuses, but with enthusiasm, we can come remarkably close. Enthusiasm makes us continue in the face of difficulty, and failure. Enthusiasm keeps us going through the rough spots, which we will inevitably hit. Enthusiasm is contagious.

The advice here is not so much a simple "be enthusiastic", but rather, that if you don't feel deep enthusiasm for a particular thing, it's going to be very challenging. On the flip side, if you do feel deep enthusiasm for something, but don't feel you can compete with others in terms of brilliance, don't let that discourage you. By consistently showing up, and by continuing to work hard on it, you will eventually get farther than most.

Because enthusiasm is contagious, be sure to surround yourself with people who are truly enthusiastic about the things they're working on. Which brings us to the next rule: if you're the smartest person in the room, you're in the wrong room.

Rule 2: If you can't decide, choose change

(This post is part of a bigger list of rules that I have found helpful for thinking about a career, and beyond. See this post for an explainer).

It took me about 30 years to figure this out, but ever since I stumbled on it, I've found it applicable to any situation. 

We need to make decisions every single day, and it seems that much of the career angst that befalls all of us from time to time is based on the fear that we could make the wrong decisions. Decisions are easy when the advantages clearly outweigh the disadvantages (or the other way around). Things get tricky when the balance is not as clear, and the lists of potential positives and negatives roughly cancel out. The inability to make a decision is one of the most dreadful feelings.

Whenever I am in such a situation where I can't decide because all options seem roughly equal, I choose the one that represents the most change.

Here's why: on a path that is dotted with making decisions, you are inevitably going to have regrets down the line. There are two possible types of regrets; in the first one, you regret a path not taken; in the second, you regret having taken a path. My philosophy is to avoid the "path not taken" regret. It's the worst kind of regret. You will at times have regrets about having taken the wrong path - but at least you took the bloody path! It meant change, and it was probably exciting, at least for a while. Even if it turns out to have been the wrong decision, you've learned something new. You moved. You lived.

As far as we know, we only have this one life. Explore! Thus: when in doubt, choose change. 

Rule 1: Do novel, interesting things

(This post is part of a bigger list of rules that I have found helpful for thinking about a career, and beyond. See this post for an explainer).

This is possibly the most important rule, and thus I am putting it right at the start. The way this rule is phrased is partly inspired by Y Combinator's rule for startups: make something people want. If I were asked to condense career advice - specifically in academia, but I think also more broadly - into four words, it would be these: do novel, interesting things.

The rule is really composed of three sub-rules: First, do something (surprisingly underestimated, in my experience). Second, do something that is novel. And third, do something that is not just novel, but also interesting. Let's take a look at these, one by one.

Do something
I find it hard to overstate how important this is. I've met countless brilliant young people who were clearly very smart and creative but had nothing to show for it. In academia, this is often phrased as the more negative "publish or perish", which I think is slightly misleading and outdated. What it should really say is "show the work that demonstrates your thinking, creativity, brilliance, determination, etc.". It doesn't have to be papers - it could really be anything. A blog. A book. Essays. Software. Hardware. Events you organized. Whatever - as long as it has your stamp on it, as long as you can say "I did this", you'll be fine.

I need to repeat that it's hard to overstate how important that is. As Woody Allen famously said, "Eighty percent of life is showing up". This is why I urge anyone who wants a job in a creative field - and I consider science and engineering to be creative fields - to actually be creative and make things, and make them visible. The most toxic idea in a young person's mind is that they have nothing interesting to say, and so they shouldn't say it in the first place. It gets translated into not showing what you've done, or worse, into not even doing it. Don't fall into that trap (I've written a bit more about this in a post entitled The curse of self-contempt).

Do something novel
Novelty is highly underrated. This is a matter of personal taste, but I prefer something that is novel but still has rough edges over something that is a perfect copy of something existing. I suppose the reason most people shy away from novelty, especially early in their career, is that it takes guts: it's easy for others to ridicule novel things (in fact, most novel things initially seem a little silly). But early in your career is precisely when novelty matters the most, because that's when you are the most capable of doing novel things - your brain is not yet completely filled up with millions of other people's ideas.

Novelty shouldn't be misunderstood as "groundbreakingly novel from every possible angle". It is often sufficient to take something existing and reinvent only a small part of it, which in turn may make the entire thing much more interesting. 

Do something that is also interesting
While novelty is often interesting in itself, it's no guarantee. So make sure that what you do is interesting to you, and to at least a few other people. It's obvious that doing things that are interesting will be good for your career. This doesn't need a lot of explanation, but it's important to realize that what is interesting is for you to figure out. Most people don't think much about this, and go on doing what everybody else is doing, because that must mean it's interesting (when in reality it often isn't, at least not to you). The ability to articulate for yourself why something is interesting is extremely important. Practice it by pitching your ideas to an imaginary audience - you'll very quickly feel whether an idea excites you, or whether you're just reciting someone else's thinking.

10 rules for the career of your dreams ;-)

A few weeks ago, I was asked to give a short presentation at a career workshop for visiting international students at EPFL. The idea of the event was to have a few speakers who all shared an academic background to reflect on their (very different) career paths. As I started writing down the things that have guided me throughout the years, I began to realize that this would end up being one of those lists ("12 things to do before you die"), and naturally, I tried to come up with the tackiest title I could imagine, which is the title of this post.

Underneath the tacky title, however, is a serious list of rules that I've developed over the years. Some of these rules I've known to be helpful since I was a high school student. Others I've discovered much later, looking back on my path and trying to figure out, with hindsight, why I went one way, rather than the other.

Almost two years ago, I received an email from a student asking for career advice. I answered, and decided to post the answer on this blog. That post - advice to a student - has been viewed a few hundred times since then, and I figured I should also share the career workshop talk, as a few people, today and in the future, may find it helpful. There is little use in uploading the slides, since they were just the list of the rules. What I am going to do here instead is to expand on each of the rules a little more. This is a bit of an experiment for me, but hopefully it will work out fine. Each of the ten rules will be its own post, and I will keep updating this post with links to each rule once they get published. So without further ado, here is the unfinished list of rules, which I hope to complete over the next few weeks:

1. Do novel, interesting things

2. If you can't decide, choose change

3. Enthusiasm makes up for a lot

4. Surround yourself with people who are better than you

5. Get on board with tech

6. Say no

8. Be visible

9. Have alternatives

10. Be the best you can be, not the best there is

AI-phobia - a convenient way to ignore the real problems

Artificial intelligence - AI - is a big topic in the media, and for good reason. The underlying technologies are very powerful and will likely have a strong impact on our societies and economies. But it's also very easy to exaggerate the effects of AI. In particular, it's very easy to claim that AI will take away all of our jobs, and that as a consequence, societies are doomed. This is an irrational fear of the technology, something I refer to as AI-phobia. Unfortunately, because it is so appealing, AI-phobia currently covers the pages of the world's otherwise most intelligent newspapers.

A particularly severe case of AI-phobia appeared this past weekend in the New York Times, under the ominous headline "The Real Threat of Artificial Intelligence". In it, venture capitalist Kai-Fu Lee paints a dark picture of a future where AI will lead to mass unemployment. This argument is as old as technology - each time a new technology comes along, the pessimists appear, conjuring up the end of the world as we know it. Each time, when faced with the historical record that shows they've always been wrong, they respond by saying "this time is different". Each time, they're still wrong. But, really, not this time, claims Lee:

Unlike the Industrial Revolution and the computer revolution, the A.I. revolution is not taking certain jobs (artisans, personal assistants who use paper and typewriters) and replacing them with other jobs (assembly-line workers, personal assistants conversant with computers). Instead, it is poised to bring about a wide-scale decimation of jobs — mostly lower-paying jobs, but some higher-paying ones, too.

Where is Wikipedia's [citation needed] when you need it the most? Exactly zero evidence is given for the rather outrageous claim that AI will bring about a wide-scale decimation of jobs. Which is not very surprising, because there is no such evidence.

Lee then goes into the next paragraph, claiming that the companies developing AI will make enormous profits, and that this will lead to increased inequality:

This transformation will result in enormous profits for the companies that develop A.I., as well as for the companies that adopt it. Imagine how much money a company like Uber would make if it used only robot drivers. Imagine the profits if Apple could manufacture its products without human labor. Imagine the gains to a loan company that could issue 30 million loans a year with virtually no human involvement. (As it happens, my venture capital firm has invested in just such a loan company.)

We are thus facing two developments that do not sit easily together: enormous wealth concentrated in relatively few hands and enormous numbers of people out of work. What is to be done?

There are numerous problems with this argument. Technology has always helped to do something better, faster, or cheaper - any new technology that wouldn't do at least one of these things would be dead on arrival. With every new technology that comes along, you could argue that the companies that develop it will make huge profits. And sometimes they do, especially those that manage to get a reasonable chunk of the market early on. But does this always lead to an enormous wealth concentration? The two most recent transformative technologies, microprocessors and the internet, have certainly made some people very wealthy, but by and large the world has profited as a whole, and things are better than they have ever been in human history (something many people find hard to accept despite the overwhelming evidence).

What's more, technology has a funny way of spreading in ways that most people didn't intend or foresee. Certainly, a company like Uber could in principle use only robot drivers (I assume Lee refers to autonomous vehicles). But so could everybody else, as this technology will be in literally every car in the near future. Uber would probably lower its prices even further to stay competitive. Other competitors could get into this market very easily, again lowering overall prices and diminishing profits. New businesses could spin up, based on newly available cheap transportation technology. These Uber-like companies could start to differentiate themselves by adding a human touch, creating new jobs that don't yet exist. The possibilities are endless, and impossible to predict.

Lee then makes a few sensible suggestions - which he calls "the Keynesian approach" - about increasing tax rates for the super wealthy, using the money to help those in need, and also argues for a universal basic income. These suggestions are sensible, but they are sensible already in a world without AI.

He then singles out the US and China in particular, and this is where things get particularly weird:

This leads to the final and perhaps most consequential challenge of A.I. The Keynesian approach I have sketched out may be feasible in the United States and China, which will have enough successful A.I. businesses to fund welfare initiatives via taxes. But what about other countries?

Yes, what about them? Now, I do not for a moment doubt that the US and China will have many successful AI businesses. The US in particular has almost single-handedly dominated the technology sector in the past few decades, and China has been catching up fast. But to suggest that these countries can tackle the challenge because they have AI businesses that will be able to fund "welfare initiatives via taxes" - otherwise called socialism, or a social safety net - is ignoring today's realities. The US in particular has created enormous economic wealth thanks to technology in the past few decades, but has completely failed to ensure that this wealth is distributed fairly among its citizens, and is consistently ranked as one of the most unequal countries in the world. It is clearly not the money that is lacking here.

Be that as it may, Lee believes that most other countries will not be able to benefit from the taxes that AI companies will pay:

So if most countries will not be able to tax ultra-profitable A.I. companies to subsidize their workers, what options will they have? I foresee only one: Unless they wish to plunge their people into poverty, they will be forced to negotiate with whichever country supplies most of their A.I. software — China or the United States — to essentially become that country’s economic dependent, taking in welfare subsidies in exchange for letting the “parent” nation’s A.I. companies continue to profit from the dependent country’s users. Such economic arrangements would reshape today’s geopolitical alliances.

Countries other than the US and China beware! The AI train is coming, and you will either be poor or become dependent slaves of the US and Chinese AI companies that will dominate the world! You could not make this up if you had to (although there are some excellent sci-fi novels that are not too far from this narrative).

I am sure Kai-Fu Lee is a smart person. His CV is certainly impressive. But it strikes me as odd that he wouldn't come up with better alternatives, and instead offers only a classic case of a false dichotomy. There are many possible ways to embrace the challenges, and only a few actually have to do with technology. Inequality does not seem to be a technical problem, but rather a political one. The real problems are not AI and technology - they are schools that are financed by local property taxes, health insurance that is tied to employment, extreme tax cuts for the wealthy, education systems with exorbitant tuition fees, and so on.

Let's not forget these problems by being paralyzed by the irrational fear caused by AI-phobia.

Lee closes by saying 

A.I. is presenting us with an opportunity to rethink economic inequality on a global scale.

It would be a shame if we indeed required artificial intelligence to tackle economic inequality - a product of pure human stupidity.

Gender diversity in tech - a promise

It doesn't take much to realize that the gender ratio in technology is severely out of balance. Whether you look at employees at tech companies, computer science faculty members, graduates in computer and information sciences, or user surveys on Stack Overflow, you find almost the same picture everywhere.

From personal experience, it seems to me that the situation is considerably worse in Europe than in the US, but I don't have any data to back this up.

If there is any good news, it's that the problem is increasingly recognized - not nearly enough, but at least things are going in the right direction. The problem is complex, and there is a lot of debate about how to solve it most effectively. This post is not about going into that debate, but rather about making a simple observation, and a promise.

The simple observation is that I think a lot of it has to do with role models. We can do whatever we want: if a particular group is overwhelmingly composed of one specific phenotype, we have a problem, because anyone who is not of that phenotype is more likely to feel "out of place" than they would otherwise, no matter how welcoming that group is.

The problem is that for existing groups, progress may be slow because adding new people to the group to increase diversity may initially be difficult, for many different reasons. Having a research group that is mostly male, I am painfully aware of the issues. 

For new groups, however, progress can be faster, because it is often easier to prevent a problem than to fix one. And this is where the promise comes in. Last year, I became the academic director of a new online school at EPFL (the EPFL Extension School, which will focus on continuing technology education). This sounds more glorious than it should, because at the time, this new school was simply an idea in my head, and the team was literally just me. But I made a promise to myself: I would not build a technology program with practically no women teaching on screen. No matter how well the teachers taught, if they were predominantly male, we would once again be sending, without ill intent, the subtle signal that technology is something for guys.

Today, I want to make this promise publicly. At the EPFL Extension School, we will have gender parity for on-screen instructors. I can't guarantee that we will achieve this at all times, because we are not (yet) a large operation, and I also recognize that at any point in time we may be out of balance - hopefully in both directions - simply because people come and people go. But it will be part of the school's DNA, and if we're off balance, we know what we have to do; "it's a hard problem once you have it" won't be an acceptable excuse.

Technology in public health: A discussion with Caroline Buckee

A few weeks ago, I came across a piece in the Boston Globe entitled Sorry, Silicon Valley, but ‘disruption’ isn’t a cure-all. It's a very short op-ed, so I recommend reading it. The piece was written by Caroline Buckee, Assistant Professor at the Harvard T.H. Chan School of Public Health. I know Caroline personally, and given that she has written some of the key papers in digital epidemiology, I was surprised to read her rant. Because Caroline is super smart and her work extremely innovative, I started to ask myself if I am missing something, so I decided to write to her. My idea was that rather than arguing over Twitter, we could have a discussion by email, which we can then publish on the internet. To my great delight, she agreed, and I am now posting the current state of the exchange here.

From: Marcel Salathé
To: Caroline Buckee
Date: 16. March 2017

Dear Caroline,

I hope this email finds you well. Via Marc I recently found you on Twitter, and I’m looking forward to now seeing more frequently what you’re up to.

Through Twitter, I also came across an article you wrote in the Boston Globe (the "rant about pandemic preparedness", as you called it on Twitter). While I thought it hilarious as a rant, I also thought there were a lot of elements in there that I strongly disagree with. At times, you come across as saying “how dare these whippersnappers with their computers challenge my authority”, and I think if I had been a just-out-of-college graduate reading this, excited about how I could bring digital tools to the field of global health, I would have found your piece deeply demotivating.

So I wanted to clarify with you some issues you raised there, and share those with the broader community. Twitter doesn’t work well for this, in my experience; but would you be willing to do this over email? I would then put the entire discussion on my blog, and you can of course do whatever you want to do with it. I promise that I won’t do any editing at all, and I will also not add anything beyond what we write in the emails.

Would you be willing to do this? I am sure you are super busy as well, but I think it could be something that many people may find worthwhile reading. I know I would.

All the best, and I hope you won’t have to deal with snow any longer in Boston!


From: Caroline Buckee
To: Marcel Salathé
Date: 16. March 2017

Hi Marcel,

Sure, I would be happy to do that, I think this is a really important issue - I'll put down some thoughts. As you know, I like having technical CS and applied math grads in my group, and in no way do I think that the establishment should not be challenged. We may disagree as to who the establishment actually is, however. 

My concern is with the attitudes and funding streams that are increasingly prevalent among people I encounter from the start-up world and Silicon Valley more generally (and these look to become even more important now that NIH funding is going away) - the attitude that we no longer need to do real field work and basic biology, that we can understand complex situations through remote sensing and crowd sourcing alone, that short term and quick fix tech solutions can solve problems of basic biology and complex political issues, that the problem must be to do with the fact that not enough physicists have thought about it. There is a pervasive arrogance in these attitudes, which are ultimately based on the assumption that technical skill can make up for ignorance.

As for the idea that my small article would give any new grad pause for thought, I hope it does. I do not count myself as an expert at this stage of my career - these issues require years of study and research. I believe I know enough to understand that a superficial approach to pandemic preparedness will be unsuccessful, and I am genuinely worried about it. The article was not meant to be discouraging, it was supposed to make that particular echo chamber think for a second about whether they should perhaps pay a little more attention to the realities, rich history, and literature of the fields they are trying to fix. (As a side note, I have yet to meet a Silicon Valley graduate in their early 20's who is even slightly deflated when presented with evidence of their glaring ignorance... but I am a bit cynical...!)

In my experience, my opinion is unpopular (at my university and among funders), and does not represent "the establishment". At every level, there is an increasing emphasis on translational work, a decreasing appetite for basic science. This alarms me because any brief perusal of the history of science will show that many of the most important discoveries happen in pursuit of some other scientific goal whose original aim was to understand the world we live in in a fundamental sense - not to engineer a solution to a particular problem. In my field, I think the problem with this thinking is illustrated well by the generation of incredibly complex simulation models of malaria that are intended to help policy makers but are impossible to reproduce, difficult to interpret, and have hundreds of uncertain parameters, all in spite of the fact that we still don't understand the basic epidemiological features of the disease (e.g. infectious duration and immunity).

I think there is the potential for an amazing synergy between bright, newly trained tech savvy graduates and the field of global health. We need more of them for sure. What we don't need is to channel them into projects that are not grounded in basic research and deeply embedded in field experience.

I would enjoy hearing your thoughts on this - both of us are well-acquainted with these issues and I think the field is quite divided, so a discussion could be useful.

I hate snow. I hate it so much!

Take care,


From: Marcel Salathé
To: Caroline Buckee
Date: 18. March 2017

Dear Caroline,

Many thanks for your response, and thanks for doing this. I agree with you that it’s an important issue.

I am sorry that you encounter the attitude that we "no longer need to do real field work and basic biology, that we can understand complex situations through remote sensing and crowd sourcing alone". This would indeed be an arrogant attitude, and I would be as concerned as you are. It does not reflect my experience, however, which has largely been the opposite: that basic research and field work were treated as all that is needed, and the new approaches we and others tried to bring to the table were not taken seriously (the "oh you and your silly tech toys" attitude). So you can imagine why your article rubbed me a bit the wrong way.

I find both of these attitudes shortsighted. Let’s talk about pandemic preparedness, which is the focus of your article. Why wouldn’t we want to bring all possible weapons to the fight? It's very clear to me that both basic science and field work as well as digital approaches using mobile phones, social media, crowdsourcing, etc. will be important in addressing the threat of pandemics. Why does it have to be one versus the other? Is it just a reflection of the funding environment, where one dollar that is going to a crowdsourcing project is one dollar that is taken away from basic science? Or is there a more psychological issue, in that basic science is worthy of a dollar, but novel approaches like crowdsourcing are not?

You write that "the next global pandemic will not be prevented by the perfectly designed app. 'Innovation labs' and 'hackathons' have popped up around the world, trying to make inroads into global health using technology, often funded via a startup model of pilot grants favoring short-term innovation. They almost always fail." And just a little later, you state that "Meanwhile, the important but time-consuming effort required to evaluate whether interventions actually work is largely ignored." Here again, it's easy to interpret this as pitting one against the other. Evaluation studies are important and should be funded, but why can't we at the same time use hackathons to bring people together and pick each other's brains, even if only for a few days? In fact, hackathons may be the surest way to demonstrate that some problems can't be solved in a weekend. And while it's true that most ideas developed there end up going nowhere, some ideas take on a life of their own. And sometimes - very rarely, but sometimes - they lead to something wonderful. But independent of the outcome, people often walk away from these events enlightened, having made new connections that will be useful for their futures. So I would strongly disagree with you that they almost always fail.

Your observation that there is "an increasing emphasis on translational work, a decreasing appetite for basic science" is probably correct, but rather than blaming it on 20-year-old Silicon Valley graduates, I would ask ourselves why that is. Translational work is, by definition, directly usable in practice. No wonder people like it! Basic research, on the other hand, is a much tougher sell. Most of the time, it will lead nowhere. Sometimes, it will lead to interesting places. And very rarely, it will lead to absolutely astonishing breakthroughs that could not have happened in any other way (such as the CRISPR discovery). By the way, in terms of probabilities of success, isn't this quite similar to the field of mobile health apps, which you dismissed as "a wasteland of marginally promising pilot studies, unused smartphone apps, and interesting but impractical gadgets that are neither scalable nor sustainable"? But I digress. Anyway, rather than spending our time explaining the enormous value of basic research to the public, which ultimately funds it, we engage in petty fights over vanity publications and prestige: people holding back data so that they can publish more; people publishing in closed-access journals; hiring and tenure committees valuing publications in journals with high impact factors much more than public outreach. I know you agree here, because at one point you express this very well in your piece when you say that "the publish-or-perish model of promotion and tenure favors high-impact articles over real impact on health."

This is exactly what worries me, and it worries me much, much more than a few arrogant people from Silicon Valley. We are at a point where the academic system is so obsessed with prestige that it has created perverse incentives, leading to the existential crisis science finds itself in. We are supposed to have an impact on the world, but the only way impact is assessed is by measures that have very little relevance in the real world, such as citation records and prizes. We can barely reproduce each other's findings. For a long time, science has been moving away from the public, and now it seems that the public is moving away from science. This is obviously enormously dangerous, leading to "alternative fact" bubbles, and to politicians stating that people have had enough of experts.

Against this background, I am very relieved to see scientists and funders excited about crowdsourcing, about citizen science, about creating apps that people can use, even at the risk that many of them will be abandoned. I would just wish that when traditional scientific experts see a young college grad trying to solve a public health problem with a shiny new app, they would go and offer to help with their expertise - however naive the approach, or rather *especially* when the approach is naive. If they are too arrogant to accept the help, so be it. The people who will change things will always appreciate a well-formed critique, or advice that helps them clear a hurdle much faster.

What I see, in short, is that scientific experts, who already have a hard time getting resources, very often feel threatened by new tech approaches, while people trying to bring new tech approaches to the field get the cold shoulder from the more established experts. This, to me, is the wrong fight, and we shouldn't add fuel to the fire by presenting false choices. Why does it have to be "TED talks and elevator pitches as a substitute for rigorous, peer-reviewed science"; why can't it be both?

Stay warm,


PS Have you seen this “grant application” by John Snow? It made me laugh and cry at the same time…

From: Caroline Buckee
To: Marcel Salathé
Date: 18. March 2017

Hi Marcel,

First of all, I completely and totally agree about the perverse incentives, ridiculous spats, and inefficiencies of academic science - it's a broken system in many ways. We spend our lives writing grants, we battle to get our papers into "high impact" journals (all of us do even though we hate doing it), and we are largely rewarded for getting press, bringing in money, and doing new shiny projects rather than seeing through potentially impactful work. 

You say that I am probably right about basic science funding going away, but I didn't follow the logic from there. We should educate the public instead of engaging in academic pettiness - yes, I agree. Basic science is a tough sell - not sure I agree about that as much, but this is probably linked to developing a deeper and broader education about science at every level. Most basic science leads nowhere? Strongly disagree! If you mean by "leads nowhere" that it does not result in a product, then fine, but if you mean that it doesn't result in a greater understanding of the world and insights into how to do experiments better, even if they "fail", then I disagree. The point is that basic science is about seeking truth about the world, not about designing a thing. You can learn a lot from engineering projects, but the exercise is fundamentally different in its goals and approach. Maybe this is getting too philosophical to be useful.

In any case, I think it's important to link educating the public about the importance of basic science directly to the arrogance of Silicon Valley; it's not unrelated. Given that NIH funding is likely to become even more scarce, increasing the time and effort spent getting funding for our work, these problems will only get worse. I agree with you that this is a major crisis, but I do think it is important to think about the role played by Silicon Valley (and other wealthy philanthropists for that matter) as the crisis deepens. As they generously step in to fill the gaps - and I think it's wonderful that they consider doing so - it creates the opportunity for them to set the agenda for research. Large donations are given by rich donors whose children have rare genetic conditions to study those conditions in particular. The looming threat of mortality among rich old (mostly white) dudes is going to keep researchers who study dementia funded. I am in two minds about whether this increasing trend of personalized, directed funding from individuals represents worse oversight than we have right now with the NIH etc., but it is surely worth thinking about. And tech founders tend to think that tech-style solutions are the way forward. It is not too ridiculous, I don't think, to imagine a world where much if not most science funding comes from rich old white dudes who decide to bequeath their fortunes to good causes. How they decide to spend their money is up to them, but that worries me; should it be up to them? Who should set the agenda? It would be lovely to fund everything more, but that won't happen - there will always be fashionable and unfashionable approaches, not everyone gets funded, and Silicon Valley's money matters.  

Public health funding in low and middle income settings (actually, in every setting, but particularly in resource-limited regions) is also a very constrained zero sum game. Allocating resources for training and management of a new mHealth system does take money away from something else. Crowdsourcing and citizen science could be really useful for some things, but yes, in many cases I think that sexy new tech approaches do take funding away from other aspects of public health. I would be genuinely interested - and perhaps we could write this up collaboratively - to put together some case studies and try to figure out exactly how many and which mHealth solutions have actually worked, scaled up, and been sustained over time. We could also dig into how applied public health grants are allocated by organizations to short-term tech pilot studies like the ones I was critical of versus other things, and try to evaluate what that means for funding in other domains, and which, if any, have led to solutions that are being used widely. This seems like it might be a useful exercise.

We agree that there should be greater integration of so-called experts and new tech grads but I don't see that happening very much. I don't think it's all because the experts are in a huff about being upstaged, although I'm sure that happens sometimes. If we could figure this out I would be very happy. This is getting too long so I will stop, but I think it's worth us thinking about why there is so little integration. I suspect some of it has to do with the timescales of global health and requirements for long-term relationship building and slow, careful work in the field. I think some of it has to do with training students to value get-rich-quick start-up approaches and confident elevator pitches over longer term investments in understanding and grappling with a particular field. I do think that your example (a young tech grad trying to naively build an app, and the expert going to them to try to help) should be reversed. In my opinion, the young tech grad should go and study their problem of choice with experts in the field, and subsequently solicit their advice about how to move forward with their shiny app idea, which may by then have morphed into something much more informed and ultimately useful...


PS :)

From: Marcel Salathé
To: Caroline Buckee
Date: 19. March 2017

Dear Caroline

My wording of "leads nowhere" may indeed have been too harsh; I agree with you that, if well designed, basic research will always tell us something about the world. What I meant was that it doesn't necessarily lead to a product or a usable method. This is probably a good time to stress that I am a big proponent of basic research - anyone who doubts that is invited to go read my PhD thesis, which was on a rather obscure aspect of theoretical evolutionary biology!

I actually think that the success distribution of basic research is practically identical with that of VC investments. Most VC investments are a complete loss, some return the money, very few return a few X, and the very rare one gives you 100X - 1000X. So is it still worth doing VC investments? Yes, as long as that occasional big success comes along. And so it is with basic research, except, as you say, and I agree, that we will never lose all the money, because we always learn something. But even if you dismiss that entirely, it would still be worth doing.

The topic we seem to be converging on is how much money should be given to what. Unless I am completely misinterpreting you, the frustration in your original piece came from the notion that a dollar spent on new tech approaches is a dollar taken away from other aspects of public health. With respect to private money, I don't think we have many options. Whoever gives their wealth gets to decide how it is spent, which is only fair. I myself get some funding from private foundations and I am very grateful for it, especially because I am given the freedom I need to reach the goals I want to achieve with this funding. The issue we should debate more vigorously is how much public money should be spent on what type of approach. In that respect, I am equally interested in the funding-versus-outcome questions you raised.

As to why there isn't more integration between tech and public health, I don't have any answers. My suspicion is that it is a cultural problem. The gap between the two worlds is still very large. And people with tech skills are in such high demand that they can choose from many other options that seem more exciting (even if in reality they end up contributing to selling more and better ads). So I think there is an important role for people like us, who have a foot in both worlds, and who can at least try to communicate between the two. This is why I am so careful not to present them as "either/or" approaches - an important part of future work will be done by these approaches in combination.

(I think we’ve clarified a lot of points and I understand your view much better now. I’m going to go ahead and put this on the blog, also to see if there are any reactions to it. I am very happy to go on and discuss more - thanks for doing this!)


Self-driving cars: the public health breakthrough of the early 21st century

As readers of this blog know, I am a big fan of self-driving cars. I keep saying that self-driving cars are going to be the biggest public health breakthrough of the early 21st century. Why? Because the number of people who are injured or killed by human drivers is simply astounding, and self-driving cars will bring this number close to zero.

If you have a hard time believing this, consider these statistics from the Association for Safe International Road Travel:

In the US alone,

  • each year 37,000+ people die in car crashes - over 1,600 are children, almost 8,000 are teenagers
  • each year, about 2.3 million people are injured or disabled

Globally, the numbers are even more staggering:

  • each year, almost 1.3 million people die in car crashes
  • each year, somewhere between 20 and 50 million people are injured or disabled
  • Car crashes are the leading cause of death among young people ages 15-29, and the second leading cause of death worldwide among young people ages 5-14.

If car accidents were an infectious disease, we would very clearly make driving illegal.

Self-driving cars will substantially reduce those numbers. It has recently been shown that the current version of Tesla's Autopilot reduced crashes by a whopping 40% - and we are still in the early days when it comes to the sophistication of these systems.

All these data points lead me to the conclusion stated above, that self-driving cars are going to be the biggest public health breakthrough of the early 21st century. 

I cannot wait to see the majority of cars become autonomous. I have two kids, aged 4 and 7 - the only time I am seriously worried about their safety is when they are in a car, or when they play near a road, and the stats make this fear entirely rational. According to the CDC, injuries due to transportation are the leading cause of death for children in the US, and I assume this is not much different in Europe.

In fact, the only time I am worried about my own safety is when I am in a car, or near a car. I am biking to and from the train station every day, and if you were to plot my health risk over the course of a day, you'd see two large peaks exactly when I'm on the bike.

If there were any doubts that I am super excited to see fully autonomous vehicles on the street, I hope to have put them to rest. But what increasingly fascinates me about self-driving cars, beyond the obvious safety benefits, is what they will do to our lives, and how they will affect public transport, cities, companies, etc. I have some thoughts on this and will write another blog post later. My thinking has taken an unexpected turn after reading Rodney Brooks' blog post entitled "Unexpected Consequences of Self Driving Cars", which I highly recommend.