Marcel Salathé's Blog

Rule 6: Say no (November 24, 2017)

(This post is part of a bigger list of rules that I have found helpful for thinking about a career, and beyond. See this post for an explainer).

Ask anyone at a more advanced stage of their career about their biggest professional weaknesses, and you'll very often hear the phrase "I say yes too often".

So here is a simple rule: say no more often.

It sounds like bad advice. Shouldn't we be more open? Shouldn't we welcome new opportunities? Shouldn't we be excited if we are asked for input? Yes! Yes, we should be, but the more choices we have, the more selective we have to be.

Your time is very limited. Your time of full concentration is even more limited. The real problem is that with every yes, you're taking resources away from other things you've also said yes to. If you say yes to too many things, you either won't be able to give your projects the attention they need, or you'll disappoint people you said yes to previously (or both). Either way, it's bad.

"But isn't my CV more impressive if it has lots of stuff on it? The more, the better?" you may ask, especially early in your career. The advice I'd give here is the same as the advice I'd give on how to prepare a presentation - most people will be able to take away one thing from it, a few may be able to walk away with 2-3 things. That's it. I think the same is true for a CV - after some basic vetting, you will be mainly associated with one thing that you did exceptionally well. The thing that truly stood out. The thing nobody else did.

This reminds me of one of the many great pieces of startup advice that YC gives: "We often say that a small group of customers who love you is better than a large group who kind of like you." I would argue the same is true when people decide whether to hire you or fund you. If everybody feels OK about your work, you're in trouble. If you have a few people who love one or two things you did extremely well, you'll be doing much better - they'll be your champions. I know this flies in the face of the advice some people give, especially in academia, which boils down to "just be sure to have all checkboxes ticked off, and don't show any weaknesses". All I can say is these people are wrong. Of course, if you're looking to spend your working life at incredibly boring places, you should follow these rules. Which, coincidentally, reminds me of yet another great piece of advice I heard around YC: When looking for brilliant people, look for the presence of strength, not the absence of weakness.

What does this have to do with saying no? Simple: you can't do something great unless you devote very large chunks of time to it. With every yes, you dilute yourself. So be careful when you say yes. Say yes only to things you can absolutely commit to, and no to everything else. Don't feel bad about saying no - you're really saying yes again to the things that you've already committed to.

Rule 5: Get on board with tech (November 14, 2017)

(This post is part of a bigger list of rules that I have found helpful for thinking about a career, and beyond. See this post for an explainer).

This is one of the simpler rules, but I still find it surprising that even young people don't seem to grasp the extent to which technology is absolutely central to every job of the future (and increasingly of the present). Not being able to read and write code, and to understand at a fairly good level how the web and computers work, will increasingly be the same as not being able to read and write.

Part of the reason, I suppose, has to do with the fact that it's currently very popular to take the contrarian view - you can find op-ed pieces saying "don't learn to code". The best advice I can give is to completely ignore these pieces. If you bother looking up some of these articles, you will almost invariably find that they are written by people who have made a great career based on their very ability to code. It's really simple: those who understand and shape technology will lead, the rest will follow.

Of course, not everyone who can program will be a programmer, just like not everyone who can write will become a writer.

A slight extension of this rule is to fully embrace technology. I am not saying that all technology is always good, nor would I say generally that the more technology, the better. We can argue about this forever, but there is a clear historical pattern you must be aware of: there has always been more technology at time t+1 than at time t. Fully embracing technology is the only way to be able to deal with it. Even if you come to the conclusion that a given technology is bad (for whatever reason), you will be much better equipped to criticize it if you fully understand it.

So, get on board with tech. It's not optional anymore.



Rule 4: Surround yourself with people who are better than you (September 12, 2017)

(This post is part of a bigger list of rules that I have found helpful for thinking about a career, and beyond. See this post for an explainer).

There's a saying that I love: if you're the smartest person in the room, you're in the wrong room. 

As much as you will grow professionally on your own, I strongly believe that a large part - perhaps even the largest part - of your growth will come from the people you are surrounded by. 

One way to look at this is the following: imagine that you will become the average of the five people you spend the most time with. I don't think this way of thinking is too far from the truth. As a consequence, if you are surrounded by people who are in some ways better than you, you will be able to learn a lot from them, and grow. The opposite is also true, hence the saying that if you are the smartest person in the room, you should really find another room.

This doesn't feel natural to most of us. It certainly doesn't feel natural to me. For most of us, being the smartest person in the room gives us a cozy feeling; a feeling of being in control of the situation; a feeling that there is nothing to worry about. But in reality, you should be worried, because it means you are not growing as much as you could.

On the flip side, being the least smart person in the room can be quite painful (notice that I use smart somewhat liberally here, not necessarily to mean intelligent, but simply to mean very good at something). In my experience, the ability to withstand this pain is an extreme booster for anything you do. Whether it's personal development, scientific research, sports, or the arts: if you surround yourself with people who are better than you, you will grow.

When I was younger, I went through a phase where I was ambitious about becoming a decent squash player. At some point, one of my work colleagues at the time invited me to play squash with him. Never in my life had I been so humiliated doing sports. I did not stand a chance against this guy. Nevertheless, it became obvious very quickly that I had never learned faster, or more, than when playing against him. By playing against someone who was better than me, again and again, my own game improved dramatically. And ironically, my aspiration of becoming a decent squash player eventually came true (that was a long time ago ;-).

Another mantra that is relevant here, and that I am sure you have heard many times before, is to get out of your comfort zone. The idea is exactly the same: by challenging yourself - truly challenging yourself, so that it feels uncomfortable - you will build the resilience and strength that are important for growth.

So don't be afraid to feel stupid. Feeling stupid is a sure sign that you are exposing yourself to things you don't know. Feeling stupid is an opportunity to learn. A great read in this regard is the timeless essay The importance of stupidity in scientific research.

Rule 3: Enthusiasm makes up for a lot (September 12, 2017)

(This post is part of a bigger list of rules that I have found helpful for thinking about a career, and beyond. See this post for an explainer).

As mentioned in the first rule - do novel, interesting things - eighty percent of success is showing up, according to Woody Allen. Another famous quote, attributed to Edison, is "genius is 1% inspiration and 99% perspiration". Both of these quotes ring very true to me. And what you need in order to keep showing up, and to keep perspiring, is enthusiasm and drive.

Enthusiasm makes up for a lot. Not for everything, but for a lot. The best people I've worked with were deeply enthusiastic about the things they were working on. The vast majority of us are not born geniuses. But with enthusiasm, we can come as close as possible. Enthusiasm makes us continue in the face of difficulty and failure. Enthusiasm keeps us going through the rough spots, which we will inevitably hit. Enthusiasm is contagious.

The advice here is not so much a simple "be enthusiastic", but rather, that if you don't feel deep enthusiasm for a particular thing, it's going to be very challenging. On the flip side, if you do feel deep enthusiasm for something, but don't feel you can compete with others in terms of brilliance, don't let that discourage you. By consistently showing up, and by continuing to work hard on it, you will eventually get farther than most.

Because enthusiasm is contagious, be sure to surround yourself with people who are truly enthusiastic about the things they're working on. Which brings us to the next rule: if you're the smartest person in the room, you're in the wrong room.

Rule 2: If you can't decide, choose change (September 5, 2017)

(This post is part of a bigger list of rules that I have found helpful for thinking about a career, and beyond. See this post for an explainer).

It took me about 30 years to figure this out, but ever since I stumbled on it, I've found it applicable to any situation. 

We need to make decisions every single day, and it seems that much of the career angst that befalls all of us from time to time is based on the fear that we could make the wrong decisions. Decisions are easy when the advantages clearly outweigh the disadvantages (or the other way around). Things get tricky when the balance is less clear, and the lists of potential positives and negatives add up to roughly the same. The inability to make a decision is one of the most dreadful feelings.

Whenever I am in such a situation where I can't decide because all options seem roughly equal, I choose the one that represents the most change.

Here's why: on a path dotted with decisions, you are inevitably going to have regrets down the line. There are two possible types of regret: in the first, you regret a path not taken; in the second, you regret having taken a path. My philosophy is to avoid the "path not taken" regret. It's the worse kind of regret. You will at times have regrets about having taken the wrong path - but at least you took the bloody path! It meant change, and it was probably exciting, at least for a while. Even if it turns out to have been the wrong decision, you've learned something new.

As far as we know, we only have this one life. Explore! Thus: when in doubt, choose change. 


Rule 1: Do novel, interesting things (September 3, 2017)

(This post is part of a bigger list of rules that I have found helpful for thinking about a career, and beyond. See this post for an explainer).

This is possibly the most important rule, and thus I am putting it right at the start. The way this rule is phrased is partly inspired by Y Combinator's rule for startups: make something people want. If I were asked to condense career advice - specifically in academia, but I think also more broadly - into four words, it would be these: do novel, interesting things.

The rule is really composed of three sub-rules: First, do something (surprisingly underestimated, in my experience). Second, do something that is novel. And third, do something that is not just novel, but also interesting. Let's take a look at these, one by one.

Do something
I find it hard to overstate how important this is. I've met countless brilliant young people who were clearly very smart and creative but had nothing to show for it. In academia, this is often phrased in the more negative "publish or perish", which I think is slightly misleading and outdated. What it should really say is "show work that demonstrates your thinking, creativity, brilliance, determination, etc.". It doesn't have to be papers - it could really be anything. A blog. A book. Essays. Software. Hardware. Events you organized. Whatever - as long as it has your stamp on it, as long as you can say "I did this", you'll be fine.

I need to repeat that it's hard to overstate how important that is. As Woody Allen famously said, "Eighty percent of life is showing up". This is why I urge anyone who wants a job in a creative field - and I consider science and engineering to be creative fields - to actually be creative and make things, and make them visible. The most toxic idea in a young person's mind is that they have nothing interesting to say, and so they shouldn't say it in the first place. It gets translated into not showing what you've done, or worse, into not even doing it. Don't fall into that trap (I've written a bit more about this in a post entitled The curse of self-contempt).

Do something novel
Novelty is highly underrated. This is partly a matter of personal taste, but I prefer something that is novel but still has rough edges over something that is a perfect copy of something existing. I suppose the reason most people shy away from novelty, especially early in their career, is that it takes guts: it's easy for others to ridicule novel things (in fact, most novel things initially seem a little silly). But early in your career is when novelty matters the most, because that's when you are actually the most capable of doing novel things, since your brain is not yet completely filled up with millions of other people's ideas.

Novelty shouldn't be misunderstood as "groundbreakingly novel from every possible angle". It is often sufficient to take something existing and reinvent only a small part of it, which in turn may make the entire thing much more interesting. 

Do something that is also interesting
While novelty per se is often interesting, it's no guarantee. So make sure that what you do is interesting to you, and to at least a few other people. It's obvious that doing things that are interesting will be good for your career. This doesn't need a lot of explanation, but it's important to realize that what is interesting is for you to figure out. Most people don't think a lot about this, and go on doing what everybody else is doing, because that must mean it's interesting (when in reality it often isn't, at least not to you). The ability to articulate for yourself why something is interesting is extremely important. Practice it by pitching your ideas to an imaginary audience - you'll very quickly feel whether an idea excites you, or whether you're just reciting someone else's thinking.

10 rules for the career of your dreams ;-) (September 3, 2017)

A few weeks ago, I was asked to give a short presentation at a career workshop for visiting international students at EPFL. The idea of the event was to have a few speakers, who all shared an academic background, reflect on their (very different) career paths. As I started writing down the things that have guided me throughout the years, I began to realize that this would end up being one of those lists ("12 things to do before you die"), and naturally, I tried to come up with the tackiest title I could imagine, which is the title of this post.

Underneath the tacky title, however, is a serious list of rules that I've developed over the years. Some of these rules I've known to be helpful since I was a high school student. Others I've discovered much later, looking back on my path and trying to figure out, with hindsight, why I went one way, rather than the other.

Almost two years ago, I received an email from a student asking for career advice. I answered, and decided to post the answer on this blog. That post - advice to a student - has been viewed a few hundred times since then, and I figured I should also share the career workshop talk, as a few people, today and in the future, may find it helpful. There is little use in uploading the slides, since they were just a list of the rules. What I am going to do here instead is to expand on each of the rules a little more. This is a bit of an experiment for me, but hopefully it will work out fine. Each of the ten rules will be its own post, and I will keep updating this post with links to each rule once they get published. So without further ado, here is the unfinished list of rules, which I hope to complete over the next few weeks:

1. Do novel, interesting things

2. If you can't decide, choose change

3. Enthusiasm makes up for a lot

4. Surround yourself with people who are better than you

5. Get on board with tech

6. Say no

AI-phobia - a convenient way to ignore the real problems (June 26, 2017)

Artificial intelligence - AI - is a big topic in the media, and for good reasons. The underlying technologies are very powerful and will likely have a strong impact on our societies and economies. But it's also very easy to exaggerate the effects of AI. In particular, it's very easy to claim that AI will take away all of our jobs, and that as a consequence, societies are doomed. This is an irrational fear of the technology, something I refer to as AI-phobia. Unfortunately, because it is so appealing, AI-phobia currently covers the pages of the world's most (otherwise) intelligent newspapers.

A particularly severe case of AI-phobia appeared this past weekend in the New York Times, under the ominous headline "The Real Threat of Artificial Intelligence". In it, venture capitalist Kai-Fu Lee paints a dark picture of a future where AI leads to mass unemployment. This argument is as old as technology - each time a new technology comes along, the pessimists appear, conjuring up the end of the world as we know it. Each time, when faced with the historical record that shows they've always been wrong, they respond by saying "this time is different". Each time, they're still wrong. But, really, not this time, claims Lee:

Unlike the Industrial Revolution and the computer revolution, the A.I. revolution is not taking certain jobs (artisans, personal assistants who use paper and typewriters) and replacing them with other jobs (assembly-line workers, personal assistants conversant with computers). Instead, it is poised to bring about a wide-scale decimation of jobs — mostly lower-paying jobs, but some higher-paying ones, too.

Where is Wikipedia's [citation needed] when you need it the most? Exactly zero evidence is given for the rather outrageous claim that AI will bring about a wide-scale decimation of jobs. Which is not very surprising, because there is no such evidence.

Lee then continues in the next paragraph, claiming that the companies developing AI will make enormous profits, and that this will lead to increased inequality:

This transformation will result in enormous profits for the companies that develop A.I., as well as for the companies that adopt it. Imagine how much money a company like Uber would make if it used only robot drivers. Imagine the profits if Apple could manufacture its products without human labor. Imagine the gains to a loan company that could issue 30 million loans a year with virtually no human involvement. (As it happens, my venture capital firm has invested in just such a loan company.)

We are thus facing two developments that do not sit easily together: enormous wealth concentrated in relatively few hands and enormous numbers of people out of work. What is to be done?

There are numerous problems with this argument. Technology has always helped to do something better, faster, or cheaper - any new technology that wouldn't do at least one of these things would be dead on arrival. With every new technology that comes along, you could argue that the companies that develop it will make huge profits. And sometimes they do, especially those that manage to get a reasonable chunk of the market early on. But does this always lead to an enormous wealth concentration? The two most recent transformative technologies, microprocessors and the internet, have certainly made some people very wealthy, but by and large the world has profited as a whole, and things are better than they have ever been in human history (something many people find hard to accept despite the overwhelming evidence).

What's more, technology has a funny way of spreading in ways that most people didn't intend or foresee. Certainly, a company like Uber could in principle use only robot drivers (I assume Lee refers to autonomous vehicles). But so could everybody else, as this technology will be in literally every car in the near future. Uber would probably lower its prices even further to be more competitive. Other competitors could get into this market very easily, again lowering overall prices and diminishing profits. New businesses could spin up, based on a newly available cheap transportation technology. These Uber-like companies could start to differentiate themselves by adding a human touch, creating new jobs that don't yet exist. The possibilities are endless, and impossible to predict.

Lee then makes a few sensible suggestions - which he calls "the Keynesian approach" - about increasing tax rates for the super wealthy and using the money to help those in need, and also argues for a universal basic income. These suggestions are sensible, but they would be sensible even in a world without AI.

He then singles out the US and China in particular, and this is where things get particularly weird:

This leads to the final and perhaps most consequential challenge of A.I. The Keynesian approach I have sketched out may be feasible in the United States and China, which will have enough successful A.I. businesses to fund welfare initiatives via taxes. But what about other countries?

Yes, what about them? Now, I do not for a moment doubt that the US and China will have many successful AI businesses. The US in particular has almost single-handedly dominated the technology sector in the past few decades, and China has been catching up fast. But to suggest that these countries can tackle the challenge because they have AI businesses that will be able to fund "welfare initiatives via taxes" - otherwise called socialism, or a social safety net - is ignoring today's realities. The US in particular has created enormous economic wealth thanks to technology in the past few decades, but has completely failed to ensure that this wealth is distributed fairly among its citizens, and is consistently ranked as one of the most unequal countries in the world. It is clearly not the money that is lacking here.

Be that as it may, Lee believes that most other countries will not be able to benefit from the taxes that AI companies will pay:

So if most countries will not be able to tax ultra-profitable A.I. companies to subsidize their workers, what options will they have? I foresee only one: Unless they wish to plunge their people into poverty, they will be forced to negotiate with whichever country supplies most of their A.I. software — China or the United States — to essentially become that country’s economic dependent, taking in welfare subsidies in exchange for letting the “parent” nation’s A.I. companies continue to profit from the dependent country’s users. Such economic arrangements would reshape today’s geopolitical alliances.

Countries other than the US and China beware! The AI train is coming, and you will either be poor or become dependent slaves of the US and Chinese AI companies that will dominate the world! You could not make this up if you had to (although there are some excellent sci-fi novels that are not too far away from this narrative).

I am sure Kai-Fu Lee is a smart person. His CV is certainly impressive. But it strikes me as odd that he wouldn't come up with better alternatives, and instead only offers a classic case of false dichotomy. There are many possible ways to embrace the challenges, and only a few actually have to do with technology. Inequality does not seem to be a technical problem, but rather a political one. The real problems are not AI and technology - they are schools that are financed by local property taxes, health insurance that is tied to employment, extreme tax cuts for the wealthy, education systems with exorbitant tuition fees, and so on.

Let's not forget these problems by being paralyzed by the irrational fear that is AI-phobia.

Lee closes by saying 

A.I. is presenting us with an opportunity to rethink economic inequality on a global scale.

It would be a shame if we indeed required artificial intelligence to tackle economic inequality - a product of pure human stupidity.

Gender diversity in tech - a promise (March 24, 2017)

It doesn't take much to realize that the gender ratio in technology is severely out of balance. Whether you look at employees at tech companies, computer science faculty members, graduates in computer and information sciences, or user surveys on StackOverflow, you find almost the same picture everywhere.

From personal experience, it seems to me that the situation is considerably worse in Europe than in the US, but I don't have any data to back this up.

If there is any good news, it's that the problem is increasingly recognized - not nearly enough, but at least things are going in the right direction. The problem is complex, and there is a lot of debate about how to solve it most effectively. This post is not about going into this debate, but rather about making a simple observation, and a promise.

The simple observation is that I think a lot of it has to do with role models. We can do whatever we want: if a particular group is overwhelmingly composed of one specific phenotype, we have a problem, because anyone who is not of that phenotype is more likely to feel "out of place" than they would otherwise, no matter how welcoming that group is.

The problem is that for existing groups, progress may be slow because adding new people to the group to increase diversity may initially be difficult, for many different reasons. Having a research group that is mostly male, I am painfully aware of the issues. 

For new groups, however, progress can be faster, because it is often easier to prevent a problem than to fix one. And this is where the promise comes in. Last year, I became the academic director of a new online school at EPFL (the EPFL Extension School, which will focus on continuing technology education). This sounds more glorious than it should, because at the time, this new school was simply an idea in my head, and the team was literally just me. But I made a promise to myself: I would not build a technology program and have practically no women teaching on screen. No matter how well they would do it, if the teachers were predominantly male, we would once again be sending, without ill intent, the subtle signal that technology is something for guys.

Today, I want to make this promise publicly. At the EPFL Extension School, we will have gender parity for on-screen instructors. I can't guarantee that we will achieve this at all times, because we are not (yet) a large operation, and I also recognize that at any point in time we may be out of balance, hopefully in both directions, due to the fact that people come and people go. But it will be part of the school's DNA, and if we're off balance, we know what we have to do, and the excuse that it's a hard problem once you have it won't be acceptable. 

Technology in public health: A discussion with Caroline Buckee (March 19, 2017)

A few weeks ago, I came across a piece in the Boston Globe entitled Sorry, Silicon Valley, but 'disruption' isn't a cure-all. It's a very short op-ed, so I recommend reading it. The piece was written by Caroline Buckee, Assistant Professor at the Harvard T.H. Chan School of Public Health. I know Caroline personally, and given that she has written some of the key papers in digital epidemiology, I was surprised to read her rant. Because Caroline is super smart and her work extremely innovative, I started to ask myself whether I was missing something, so I decided to write to her. My idea was that rather than arguing over Twitter, we could have a discussion by email, which we could then publish on the internet. To my great delight, she agreed, and I am now posting the current state of the exchange here.

From: Marcel Salathé
To: Caroline Buckee
Date: 16. March 2017

Dear Caroline,

I hope this email finds you well. Via Marc I recently found you on Twitter, and I’m looking forward to now seeing more frequently what you’re up to.

Through Twitter, I also came across an article you wrote in the Boston Globe (the "rant about pandemic preparedness", as you called it on Twitter). While I thought it hilarious as a rant, I also thought there were a lot of elements in there that I strongly disagree with. At times, you come across as saying “how dare these whippersnappers with their computers challenge my authority”, and I think if I had been a just-out-of-college graduate reading this, excited about how I could bring digital tools to the field of global health, I would have found your piece deeply demotivating.

So I wanted to clarify with you some issues you raised there, and share those with the broader community. Twitter doesn’t work well for this, in my experience; but would you be willing to do this over email? I would then put the entire discussion on my blog, and you can of course do whatever you want to do with it. I promise that I won’t do any editing at all, and I will also not add anything beyond what we write in the emails.

Would you be willing to do this? I am sure you are super busy as well, but I think it could be something that many people may find worthwhile reading. I know I would.

All the best, and I hope you won’t have to deal with snow any longer in Boston!

Cheers,
Marcel


From: Caroline Buckee
To: Marcel Salathé
Date: 16. March 2017

Hi Marcel,

Sure, I would be happy to do that, I think this is a really important issue - I'll put down some thoughts. As you know, I like having technical CS and applied math grads in my group, and in no way do I think that the establishment should not be challenged. We may disagree as to who the establishment actually is, however. 

My concern is with the attitudes and funding streams that are increasingly prevalent among people I encounter from the start-up world and Silicon Valley more generally (and these look to become even more important now that NIH funding is going away) - the attitude that we no longer need to do real field work and basic biology, that we can understand complex situations through remote sensing and crowd sourcing alone, that short-term and quick-fix tech solutions can solve problems of basic biology and complex political issues, that the problem must be to do with the fact that not enough physicists have thought about it. There is a pervasive arrogance in these attitudes, which are ultimately based on the assumption that technical skill can make up for ignorance.

As for the idea that my small article would give any new grad pause for thought, I hope it does. I do not count myself as an expert at this stage of my career - these issues require years of study and research. I believe I know enough to understand that a superficial approach to pandemic preparedness will be unsuccessful, and I am genuinely worried about it. The article was not meant to be discouraging; it was supposed to make that particular echo chamber think for a second about whether they should perhaps pay a little more attention to the realities, rich history, and literature of the fields they are trying to fix. (As a side note, I have yet to meet a Silicon Valley graduate in their early 20s who is even slightly deflated when presented with evidence of their glaring ignorance... but I am a bit cynical...!)

In my experience, my opinion is unpopular (at my university and among funders), and does not represent "the establishment". At every level, there is an increasing emphasis on translational work, a decreasing appetite for basic science. This alarms me because any brief perusal of the history of science will show that many of the most important discoveries happen in pursuit of some other scientific goal whose original aim was to understand the world we live in in a fundamental sense - not to engineer a solution to a particular problem. In my field, I think the problem with this thinking is illustrated well by the generation of incredibly complex simulation models of malaria that are intended to help policy makers but are impossible to reproduce, difficult to interpret, and have hundreds of uncertain parameters, all in spite of the fact that we still don't understand the basic epidemiological features of the disease (e.g. infectious duration and immunity).

I think there is the potential for an amazing synergy between bright, newly trained tech savvy graduates and the field of global health. We need more of them for sure. What we don't need is to channel them into projects that are not grounded in basic research and deeply embedded in field experience.

I would enjoy hearing your thoughts on this - both of us are well-acquainted with these issues and I think the field is quite divided, so a discussion could be useful.

I hate snow. I hate it so much!

Take care,

Caroline  


From: Marcel Salathé
To: Caroline Buckee
Date: 18. March 2017

Dear Caroline,

Many thanks for your response, and thanks for doing this. I agree with you that it’s an important issue.

I am sorry that you encounter the attitude that we "no longer need to do real field work and basic biology, that we can understand complex situations through remote sensing and crowd sourcing alone". This would indeed be an arrogant attitude, and I would be as concerned as you are. It does not reflect my experience, however, which has largely been the opposite: that basic research and field work are treated as all that is needed, and that the new approaches we and others tried to bring to the table were not taken seriously (the "oh you and your silly tech toys" attitude). So you can imagine why your article rubbed me a bit the wrong way.

I find both of these attitudes shortsighted. Let’s talk about pandemic preparedness, which is the focus of your article. Why wouldn’t we want to bring all possible weapons to the fight? It's very clear to me that both basic science and field work as well as digital approaches using mobile phones, social media, crowdsourcing, etc. will be important in addressing the threat of pandemics. Why does it have to be one versus the other? Is it just a reflection of the funding environment, where one dollar that is going to a crowdsourcing project is one dollar that is taken away from basic science? Or is there a more psychological issue, in that basic science is worthy of a dollar, but novel approaches like crowdsourcing are not?

You write that “the next global pandemic will not be prevented by the perfectly designed app. “Innovation labs” and “hackathons” have popped up around the world, trying to make inroads into global health using technology, often funded via a startup model of pilot grants favoring short-term innovation. They almost always fail.” And just a little later, you state that "Meanwhile, the important but time-consuming effort required to evaluate whether interventions actually work is largely ignored.” Here again, it’s easy to interpret this as putting one against the other. Evaluation studies are important and should be funded, but why can’t we at the same time use hackathons to bring people together and pick each other’s brains, even if only for a few days? In fact, hackathons may be the surest way to demonstrate that some problems can’t be solved on a weekend. And while it’s true that most ideas developed there end up going nowhere, some ideas take on a life of their own. And sometimes - very rarely, but sometimes - they lead to something wonderful. But independent of the outcome, people often walk away enlightened from these events, and have often made new connections that will be useful for their futures. So I would strongly disagree with you that they almost always fail.

Your observation that there is "an increasing emphasis on translational work, a decreasing appetite for basic science" is probably correct, but rather than blaming it on 20-year-old Silicon Valley graduates, I would ask ourselves why that is. Translational work is directly usable in practice, as per its definition. No wonder people like it! Basic research, on the other hand, is a much tougher sell. Most of the time, it will lead nowhere. Sometimes, it will lead to interesting places. And very rarely, it will lead to absolutely astonishing breakthroughs that could not have happened in any other way (such as the CRISPR discovery). By the way, in terms of probabilities of success, isn't this quite similar to the field of mobile health apps, which you dismissed as "a wasteland of marginally promising pilot studies, unused smartphone apps, and interesting but impractical gadgets that are neither scalable nor sustainable"? But I digress. Anyway, rather than spending our time explaining this enormous value of basic research to the public, which ultimately funds it, we engage in petty fights over vanity publications and prestige. People holding back data so that they can publish more; people publishing in closed access journals; hiring and tenure committees valuing publications in journals with high impact factors much more than public outreach. I know you agree here, because at one point you express this very well in your piece when you say that "the publish-or-perish model of promotion and tenure favors high-impact articles over real impact on health."

This is exactly what worries me, and it worries me much, much more than a few arrogant people from Silicon Valley. We are at a point where the academic system is so obsessed with prestige that it has created perverse incentives, leading to the existential crisis science finds itself in. We are supposed to have an impact on the world, but the only way impact is assessed is by measures that have very little relevance in the real world, such as citation records and prizes. We can barely reproduce each other's findings. For a long time, science has been moving away from the public, and now it seems that the public is moving away from science. This is obviously enormously dangerous, leading to "alternative fact" bubbles, and to politicians stating that people have had enough of experts.

Against this background, I am very relieved to see scientists and funders excited about crowdsourcing, about citizen science, about creating apps that people can use, even at the risk that many of them will be abandoned. I would just wish that when traditional scientific experts see a young out-of-college grad trying to solve public health with a shiny new app, they would go and offer to help them with their expertise - however naive the approach, or rather *especially* when the approach is naive. If they are too arrogant to accept the help, so be it. The people who will change things will always appreciate a well-formed critique, or advice that helps them jump over a hurdle much faster.

What I see, in short, is that very often, scientific experts, who already have a hard time getting resources, feel threatened by new tech approaches, while people trying to bring new tech approaches to the field are getting the cold shoulder from the more established experts. This, to me, is the wrong fight, and we shouldn’t add fuel to the fire by providing false choices. Why does it have to be "TED talks and elevator pitches as a substitute for rigorous, peer-reviewed science”; why can’t it be both?

Stay warm,

Marcel

PS Have you seen this “grant application” by John Snow? It made me laugh and cry at the same time… tinyurl.com/lofaoop


From: Caroline Buckee
To: Marcel Salathé
Date: 18. March 2017

Hi Marcel,

First of all, I completely and totally agree about the perverse incentives, ridiculous spats, and inefficiencies of academic science - it's a broken system in many ways. We spend our lives writing grants, we battle to get our papers into "high impact" journals (all of us do even though we hate doing it), and we are largely rewarded for getting press, bringing in money, and doing new shiny projects rather than seeing through potentially impactful work. 

You say that I am probably right about basic science funding going away, but I didn't follow the logic from there. We should educate the public instead of engaging in academic pettiness - yes, I agree. Basic science is a tough sell - not sure I agree about that as much, but this is probably linked to developing a deeper and broader education about science at every level. Most basic science leads nowhere? Strongly disagree! If by "leads nowhere" you mean that it does not result in a product, then fine, but if you mean that it doesn't result in a greater understanding of the world and insights into how to do experiments better, even if they "fail", then I disagree. The point is that basic science is about seeking truth about the world, not about designing a thing. You can learn a lot from engineering projects, but the exercise is fundamentally different in its goals and approach. Maybe this is getting too philosophical to be useful.

In any case, I think it's important to link educating the public about the importance of basic science directly to the arrogance of Silicon Valley; it's not unrelated. Given that NIH funding is likely to become even more scarce, increasing the time and effort spent getting funding for our work, these problems will only get worse. I agree with you that this is a major crisis, but I do think it is important to think about the role played by Silicon Valley (and other wealthy philanthropists for that matter) as the crisis deepens. As they generously step in to fill the gaps - and I think it's wonderful that they consider doing so - it creates the opportunity for them to set the agenda for research. Large donations are given by rich donors whose children have rare genetic conditions to study those conditions in particular. The looming threat of mortality among rich old (mostly white) dudes is going to keep researchers who study dementia funded. I am in two minds about whether this increasing trend of personalized, directed funding from individuals represents worse oversight than we have right now with the NIH etc., but it is surely worth thinking about. And tech founders tend to think that tech-style solutions are the way forward. It is not too ridiculous, I don't think, to imagine a world where much if not most science funding comes from rich old white dudes who decide to bequeath their fortunes to good causes. How they decide to spend their money is up to them, but that worries me; should it be up to them? Who should set the agenda? It would be lovely to fund everything more, but that won't happen - there will always be fashionable and unfashionable approaches, not everyone gets funded, and Silicon Valley's money matters.  

Public health funding in low and middle income settings (actually, in every setting, but particularly in resource-limited regions) is also a very constrained zero-sum game. Allocating resources for training and management of a new mHealth system does take money away from something else. Crowdsourcing and citizen science could be really useful for some things, but yes, in many cases I think that sexy new tech approaches do take funding away from other aspects of public health. I would be genuinely interested - and perhaps we could write this up collaboratively - in putting together some case studies to try to figure out exactly how many and which mHealth solutions have actually worked, scaled up, and been sustained over time. We could also dig into how applied public health grants are allocated by organizations to short-term tech pilot studies like the ones I was critical of versus other things, and try to evaluate what that means for funding in other domains, and which, if any, have led to solutions that are being used widely. This seems like it might be a useful exercise.

We agree that there should be greater integration of so-called experts and new tech grads but I don't see that happening very much. I don't think it's all because the experts are in a huff about being upstaged, although I'm sure that happens sometimes. If we could figure this out I would be very happy. This is getting too long so I will stop, but I think it's worth us thinking about why there is so little integration. I suspect some of it has to do with the timescales of global health and requirements for long-term relationship building and slow, careful work in the field. I think some of it has to do with training students to value get-rich-quick start-up approaches and confident elevator pitches over longer term investments in understanding and grappling with a particular field. I do think that your example (a young tech grad trying to naively build an app, and the expert going to them to try to help) should be reversed. In my opinion, the young tech grad should go and study their problem of choice with experts in the field, and subsequently solicit their advice about how to move forward with their shiny app idea, which may by then have morphed into something much more informed and ultimately useful...

C

PS :)

From: Marcel Salathé
To: Caroline Buckee
Date: 19. March 2017

Dear Caroline

My wording of "leads nowhere" may indeed have been too harsh; I agree with you that, if well designed, basic research will always tell us something about the world. My point there was that it doesn't necessarily lead to a product or a usable method. This is probably a good time to stress that I am a big proponent of basic research - anyone who doubts that is invited to go read my PhD thesis, which was on a rather obscure aspect of theoretical evolutionary biology!

I actually think that the success distribution of basic research is practically identical with that of VC investments. Most VC investments are a complete loss, some return the money, very few return a few X, and the very rare one gives you 100X - 1000X. So is it still worth doing VC investments? Yes, as long as that occasional big success comes along. And so it is with basic research, except, as you say, and I agree, that we will never lose all the money, because we always learn something. But even if you dismiss that entirely, it would still be worth doing.

The topic we seem to be converging on is how much money should be given to what. Unless I am completely misinterpreting you, the frustration in your original piece came from the notion that a dollar in new tech approaches is a dollar taken away from other aspects of public health. With respect to private money, I don’t think we have many options. Whoever gives their wealth gets to decide how it is spent, which is only fair. I myself get some funding from private foundations and I am very grateful for it, especially because I am given the necessary freedom I need to reach the goals I want to achieve with this funding. The issue we should debate more vigorously is how much public money should be spent on what type of approach. In that respect, I am equally interested in the funding vs outcome questions you raised.

As to why there isn't more integration between tech and public health, I don't have any answers. My suspicion is that it is a cultural problem. The gap between the two worlds is still very large. And people with tech skills are in such high demand that they can choose from many other options that seem more exciting (even if in reality they end up contributing to selling more and better ads). So I think there is an important role for people like us, who have a foot in both worlds, and who can at least try to communicate between the two. This is why I am so careful not to present them as "either or" approaches - an important part of the future work will be done by the approaches in combination.

(I think we’ve clarified a lot of points and I understand your view much better now. I’m going to go ahead and put this on the blog, also to see if there are any reactions to it. I am very happy to go on and discuss more - thanks for doing this!)

Marcel

Self-driving cars: the public health breakthrough of the early 21st century (February 17, 2017)

As readers of this blog know, I am a big fan of self-driving cars. I keep saying that self-driving cars are going to be the biggest public health breakthrough of the early 21st century. Why? Because the number of people who get injured or killed by humans in cars is simply astounding, and self-driving cars will bring this number close to zero.

If you have a hard time believing this, consider these statistics from the Association for Safe International Road Travel:

In the US alone,

  • each year 37,000+ people die in car crashes - over 1,600 are children, almost 8,000 are teenagers
  • each year, about 2.3 million people are injured or disabled

Globally, the numbers are even more staggering:

  • each year, almost 1.3 million people die in car crashes
  • each year, somewhere between 20 and 50 million people are injured or disabled
  • car crashes are the leading cause of death among young people ages 15-29, and the second leading cause of death worldwide among young people ages 5-14

If car accidents were an infectious disease, we would very clearly make driving illegal.

Self-driving cars will substantially reduce those numbers. It has recently been shown that the current version of Tesla's Autopilot reduced crashes by a whopping 40% - and these are early days when it comes to the sophistication of these systems.
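To get a rough sense of the magnitude, here is a quick back-of-envelope sketch in Python. The 37,000 baseline comes from the statistics above; the reduction rates, beyond the 40% reported for Autopilot, are purely hypothetical:

    # Back-of-envelope sketch: annual lives saved in the US if autonomous
    # systems cut fatal crashes by a given (hypothetical) rate.
    US_ROAD_DEATHS_PER_YEAR = 37_000  # baseline figure from the list above

    for reduction in (0.40, 0.70, 0.90):
        saved = US_ROAD_DEATHS_PER_YEAR * reduction
        print(f"{reduction:.0%} fewer fatal crashes -> ~{saved:,.0f} lives saved per year")

Even at 40%, that would be on the order of 15,000 lives per year in the US alone.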

All these data points lead me to the conclusion stated above, that self-driving cars are going to be the biggest public health breakthrough of the early 21st century. 

I cannot wait to see the majority of cars being autonomous. I have two kids, aged 4 and 7 - the only time I am seriously worried about their safety is when they are in a car, or when they play near a road, and the stats make this fear entirely rational. According to the CDC, transportation injuries are the leading cause of death for children in the US, and I don't assume that this is much different in Europe.

In fact, the only time I am worried about my own safety is when I am in a car, or near a car. I am biking to and from the train station every day, and if you were to plot my health risk over the course of a day, you'd see two large peaks exactly when I'm on the bike.

If there was any doubt that I am super excited to see fully autonomous vehicles on the street, I hope to have put it to rest. But what increasingly fascinates me about self-driving cars, beyond the obvious safety benefits, is what they will do to our lives, and how they will affect public transport, cities, companies, etc. I have some thoughts on this and will write another blog post later. My thinking on this has taken an unexpected turn after reading Rodney Brooks's blog post entitled "Unexpected Consequences of Self Driving Cars", which I highly recommend.


Some News(letter) (December 8, 2016)

(It appears that as I was updating this website, I accidentally sent out a test post - my sincere apologies. The digital transformation is hard. QED.)

I've recently deleted my Facebook account, once again. I'm pretty sure that it's final this time. I respect Facebook as a company and I can see how one can love their products. But it's not for me. It used to be great for staying in touch with friends, and seeing pictures and news of friends and family. But Facebook ended up being mostly ads and "news" about Trump. Then came the fake news story. Then came the realization about micro-targeting for political purposes. Then came the censorship deal with China's government. And the realization that what I see on Facebook, and what my friends see about me, is entirely driven by an algorithm - an algorithm designed to maximize profits for Facebook. None of this on its own is a deal breaker, but in combination, and given that there was little upside to being on Facebook, I had to quit. In a strange way, I felt dirty using Facebook - I knew I was the product being monetized, but I nevertheless checked multiple times a day, only to think each and every time, "why am I doing this?".

I have seen a lot of "quit social media, you'll be better off" posts lately. For me, Twitter has been too much of a benefit, professionally and personally, to just walk away from it. It truly has become my major source of professional news. I try to keep politics and personal stuff to a minimum, and appreciate it if others do that as well. To me, Twitter is what LinkedIn wanted to be - a professional network. The fact that my Twitter client does not algorithmically filter the tweets is a great benefit. The fact that Twitter has remained true to its roots - short messages where you need to get to the point fast - is a great benefit. The fact that Twitter is public by default is a great benefit - it means that I think twice about posting rants (sometimes I fail at this).

But I feel uneasy about Twitter too. The fact that I am delegating communication to a third party is troubling. What if Twitter gets sold to another company and becomes a horrible product? What if they change in a way I don't like? What if they go out of business? I cannot contact the people who follow me without Twitter's blessing. If Twitter is gone, my contacts are gone. This is not good.

That is why I decided to start a newsletter called Digital Intelligence. I know there are some people who follow me on Twitter because I provide them with interesting bits of news, typically around anything digital - technology, education, academia, economy, etc. - that I find on the web and that I love to share. I think for many it would in fact be more efficient to subscribe to the newsletter instead. Email may not feel like the hip thing in 2017, but I think there are many benefits to email that are simply not there with Twitter. All of us already use email. It's easily searchable. It's not typically filtered by an algorithm (beyond the spam filter, of course). It allows us to go beyond 140 characters when necessary.

But ultimately, I want to be able to communicate with other people without the dependency of a third party platform. In the digital world, email is the only way to do that.

Data of the people, by the people, for the people (June 14, 2016)

About 150 years ago, the American president Abraham Lincoln gave a very short speech - only a few minutes long - on a battlefield in Gettysburg, Pennsylvania. The occasion was to honor the soldiers who had died in a fierce battle at the height of the American Civil War. Despite the brevity of the speech, and the fact that almost nobody understood what Lincoln was saying, it is now perhaps the most famous speech in US history by a US president. It is only ten sentences long, but to condense it even further here, Lincoln essentially said that there is nothing anyone could do to properly honor the fallen soldiers, other than to help ensure that the idea of this newly conceived nation would continue to live on, and that "government of the people, by the people, for the people, shall not perish from the earth."

Why is this such a powerful line? It’s powerful because it expresses in very simple terms the basic idea of democracy, that we the people can form government, and that we the people can make political decisions, which is in itself the best guarantee that the decisions are made in the best interest of us, the people.

So, what does all of this have to do with open data?

Fundamentally, government is about organizing power. The vast majority of us agree that power should be distributed among the many, not the few. To quote John Dalberg-Acton: "Liberty consists in the division of power. Absolutism, in the concentration of power." That is what democracy is about. And that is the discussion we should have about data. Because data is power. And if liberty consists in the division of power, or in divided access to power, then liberty also consists in the division of data.

But what does it even mean to say that data equals power?

Data contains information, and information can be used for commercial gain. We all understand that. But the power of data is much more fundamental than that. To understand this, we need to reflect on where we are as humans, at this point in time. We have now entered the second machine age - an age where machines will not only be much stronger than us physically, as they have been for centuries, but also much, much smarter than we are. Not just a little smarter, but orders of magnitude smarter. Most of us have come to terms with the fact that machines will achieve human intelligence. But think about machines that are ten times smarter, a hundred times smarter. How do you feel about a machine that is a million times smarter than a human? It's a question worth asking, because while we may not live to see such a machine, our children, or grandchildren, probably will. In any case, even a machine that's 100 times smarter than us is something you wouldn't want to compete against. You wouldn't feel comfortable if such machines were controlled by a small elite group. However, if such a machine were an agent, at your service, and if everyone had such agents, which they'd use to make their lives better, that would be an entirely different story. Thus, when AI - artificial intelligence - becomes very powerful, it would be a disaster if that power were in the hands of a few. We would go back to absolutism, and despotism. We therefore need to ensure that the power of AI is distributed widely.

There are some efforts, like the non-profit organization OpenAI, that aim to ensure that this is the case. In fact, if you follow the field of machine learning a little bit - a field that is currently at the heart of many of the AI-relevant breakthroughs - you will see that most organizations are now open-sourcing the code that's behind these AI breakthroughs. That's a good thing, because it helps ensure that the raw machinery to build AI - the algorithms - is indeed in the hands of many.

But this is not enough - not nearly. It's very important to recognize that the power of AI is not simply in the algorithms; it's not simply in the technology per se. It's in the data. AI becomes intelligent when it can quickly learn on large amounts of data. AI without data does not exist. The analog version, the human brain, can perhaps help us understand this idea a bit better. A human brain, in isolation, can only do so much. It's when the brain can learn on data that the magic happens. We call this education, or learning more generally. The brain itself is necessary, but it is the access to data - in the form of knowledge, and education - that makes us the most intelligent individuals to ever walk the face of the earth; so intelligent that we can even create artificial intelligence. And to take this analogy one step further, if you learn on small, false, or just generally crappy data, your brain will consistently make the wrong predictions. Coincidentally, this is why science has been such a boon for mankind: the scientific method helps ensure that our brains get trained on high-quality data.

So this is the central idea here: 

The enormous power of AI is based on data. If we want everyone to have access to this power, we need widespread access to data.

Put slightly differently:

Broad open data access is an absolute necessity for human liberty in the machine age.

If we accept this, then the question immediately arises: how do we get there? The fact that AI power is derived from data also means that, from an economic perspective, privileged data access is incredibly valuable. Market players with privileged data access have absolutely no interest in losing this privilege. This is understandable - in the information economy, being able to extract commercially usable information from data is a matter of life and death, economically speaking. Forcing these players to give up their privileged access to data, which they generally collected themselves, would likely have severely negative economic consequences. It would also be highly unethical - for example, I'd be very upset if we forced Google to open up their data centers so that anyone could have access to my data. There has to be another way.

I would like to offer a suggestion for another way. Access to personal data should be controlled by those who generate the data, not by those who collect it. The data generator is the person whose data is collected. In order for the data generator to be able to control access, the collector needs to provide that person with a copy of the personal data.

Let’s make an example. Let’s say you use a provider’s map on your smartphone to drive from A to B. As you’re driving, GPS data of your trip is collected by the app maker. The app maker uses this kind of data to give you real-time traffic information. Great service - but you’ll never be able to access this data. You should be able to access this data, either in real time or with some delay, and do whatever you please to do with it, from training your own AI to sharing or selling it to third parties.

Another example. Let's say you track your fitness with some device, you always shop for food at the same grocery store, and you also took part in a cohort study where your genome was sequenced, with your permission of course. The fitness device maker may reuse your data to make a more compelling product; the grocery store may direct ads at you for new products that fit your profile; and the cohort study will use your DNA data for research. All good - but is it easy for you to combine these three data sources? Not at the moment. You should be able to access all three - your fitness data, your nutrition data, and your DNA - without having to ask anyone for permission, for whatever reason. If you're now asking, "why would anyone want that data", you are asking exactly the wrong question. It's not anyone's business why you would want that data - the point is that you should be able to get it with zero effort, in machine-readable form, and then you should be allowed to do with it whatever you want. It's your data.
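
To make this concrete, here is a purely hypothetical sketch of what such zero-effort, machine-readable access could enable. None of these files, column names, or genetic variants exist - they are placeholders, assuming each data collector simply handed you a CSV export:

```python
# Hypothetical sketch: combining personal data exports that today live
# in three separate silos. All file names, columns, and the genetic
# variant are invented for illustration.
import pandas as pd

fitness = pd.read_csv("fitness_export.csv", parse_dates=["date"])    # e.g. steps per day
nutrition = pd.read_csv("grocery_export.csv", parse_dates=["date"])  # e.g. calories bought per day
genome = pd.read_csv("genome_export.csv")                            # e.g. variant IDs and genotypes

# Join the two day-by-day data sources on their shared date column
daily = fitness.merge(nutrition, on="date", how="inner")

# Ask a question that no single data collector could answer alone:
# how does my activity relate to my diet, given my genotype?
print(daily[["steps", "calories"]].corr())
is_carrier = (genome.loc[genome["variant"] == "rs0000000", "genotype"] == "AG").any()
print(f"Carrier of the (hypothetical) rs0000000 AG genotype: {is_carrier}")
```

The point is not this particular analysis - it's that with standardized, machine-readable copies of your own data, a few lines of code suffice to combine sources that are currently impossible to bring together.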

In some situations, we're already close to this scenario. For example, when you open a bank account, of course you will be able to access every last detail of any transaction at any point in time, whenever and wherever you want to, without having to ask anyone. Any banking service without this possibility would be unthinkable. Why isn't it like this with every service? If I can have my financial data like that, why can I not have the same access to my health data, my location data, my shopping data?

Once our own data is easily accessible to us, it will be possible for us to let others access the data, provided we allow it. We can for example give the data to third parties such as trusted research groups, not-for-profit organizations, or even trusted parts of the government or trusted corporations. At the moment, this sounds very futuristic. But imagine, for example, a trusted health data organization, perhaps a cooperative, where hundreds of thousands or even millions of people share their health data. This would be an enormous data pool that could be studied by public health officials to make better recommendations. It could be investigated by pharmaceutical companies to design new drugs. And, to bring this back to the original thought about AI, anyone could use this data to improve the artificial intelligence agents that will increasingly make health decisions on our behalf.

Today, we’ll hear many excellent arguments that make the case for open data, highlighting social, political, economical and scientific aspects. My argument is that human liberty cannot exist in the machine age that is run by algorithms, unless people have broad access to data to improve their own intelligent agents. From this perspective, it makes no sense to be concerned about “smart machines”, or “smart algorithms” - the major concern should be about closed data. We won’t be able to leverage the phenomenal power of smart, learning, machines for the public good, and for distributed AI - for distributed power, really - if all the data is locked away, accessible only to select few. We need data of the people, by the people, for the people. 


]]>
Marcel Salathé
tag:blog.salathe.com,2013:Post/1043141 2016-04-28T20:59:32Z 2016-04-28T21:01:06Z 1 Year Apple Watch

It's now been roughly one year since I started wearing Apple Watch. I must say I find it quite a compelling device. I use it primarily as an activity tracker, and I also really love the calendar. Generally, its tight integration with Apple's ecosystem makes it a real winner for me.

It's hard to objectively justify a device like this. For sure, I have become more aware of how much I move and exercise per day. I put it on more or less first thing in the morning, and when I forget to do that, I quickly feel that something is missing. In other words, it has become part of my daily routine, something only a few devices have managed to do. By comparison, I've tried pretty much every iPad since version 1, and none of them ever managed to become irreplaceable.

I do hope Apple will use its "win by continuous improvements" strategy for the watch as well. It's clear we're in the very early days of the wearable space. I'm excited to see what's ahead.

I have only one very urgent request: please fix Siri. ("I'm sorry Marcel, I did not get what you said").

]]>
Marcel Salathé
tag:blog.salathe.com,2013:Post/889407 2016-03-25T11:46:21Z 2016-03-28T20:35:15Z The curse of self-contempt

(I wrote this post almost 9 months ago, but never published it, for reasons that now escape me. Realizing this omission today, I decided to publish it since I haven't changed my mind about the issue).

This morning, a friend shared an article on Twitter, originally published in the Guardian, with the title "Sophie Hunger: Sadly, I don't need a history to be able to exist somewhere". Sophie Hunger is a Swiss musician who, as the daughter of a diplomat, spent large parts of her life abroad (in other European countries). The article is about authenticity, home, and identity. In it, she writes:

"I can't be proud to be Swiss, although I'm predestined to have these kind of feelings. I'm afraid, I'm not an entirely humble person, but I do have the typical European extra dose of self-contempt. Yet, I discipline myself not to feel proud about my country because I know it is a dishonourable kind of feeling. What have I done to be Swiss, and why should it be an achievement? You see, there's a philosophical problem there."

Eight years ago, I left Switzerland - where I was born and raised - to travel the world a bit, and then to permanently move and live in the US. When I left the country I grew up in, I had the exact same feelings that Sophie expressed in her article. Now, having just returned, I see these feelings in an entirely different way: as part of the root of European angst, perhaps the root of European arrogance, and to some extent as a terrible curse: the curse of self-contempt.

Before I go on, let me clarify that I don't think all Europeans are angsty or arrogant. But while in the US, I was often astounded by the arrogance of some Europeans, criticizing everything - especially the ones who were either just visiting, or hadn't been in the country for long. Even more surprisingly, Americans would usually take it lightly and laugh with the visitors, which in turn infuriated the visitors even more - did the Americans not get that they were being criticized (stupid!), or were they making fun of their critics (arrogant!)?

The curse of self-contempt is almost entirely absent in the US, at least compared to Europe. Instead, Americans are brought up to be proud of what they have achieved, and full of hope for where they can go. It's widely known that American students are off the charts when it comes to self-esteem and self-confidence. And it's easy for the European critic to laugh this off, especially when the stats on important measures like reading ability and math skills are much more average. But I now believe that it's better to be overconfident about yourself than underconfident. Extremes in both directions are harmful. But in the long run, modest chronic underconfidence is much more harmful than modest chronic overconfidence.

In the culture I grew up in, I was taught that "Eigenlob stinkt", which literally means "self-praise stinks". And that feeling is still part of the national identity - just two days ago, it was the headline of a paragraph in one of Switzerland's major newspapers (NZZ), in an article about the Swiss national holiday. Think about this expression for a moment. It quite clearly states that it is very bad if you praise yourself. How dare you praise yourself? What have you done that is worthy of praise? Let others be the judges of who is worthy of praise.

This is the curse of self-contempt - the inability to be content with yourself, or to praise yourself. It is almost unavoidable that arrogance follows. And as Sophie Hunger's paragraph shows, not only are you not supposed to praise yourself, you shouldn't dare to be proud of your country either, because that is dishonorable too. After all, you have not done anything to be Swiss, so how dare you be proud?

This is no critique of Sophie Hunger (the irony would be unbearable). As I mentioned, I had the exact same feelings, and I am grateful that she expressed those complex feelings in a few clear sentences. I am merely pointing out that I now consider these feelings harmful to any one person, and certainly harmful to a society. Of course it's crucial to live in a society where critical thought is possible and even encouraged, and an occasional dose of self-reflection and self-criticism is certainly healthy too. But not being allowed to praise yourself, or to be proud of the place you grew up in - that strikes me as highly destructive to the development of healthy people and a healthy population.

I have no internal conflict feeling proud of what the Swiss, my ancestors, have achieved, while at the same time feeling disgust at some of the dark historical moments, and some current developments. Just like I have no conflict feeling happy with myself, without ignoring, and working on, my darker sides. On a higher level, I can also feel proud to be European - part of a continent that inflicted so much harm on others, and on itself, until 60 years ago, but that has been remarkably peaceful and resilient in the past few decades, and that managed to keep its cool when others (cough, USA, cough) lost their temper for a decade. And I can do this perfectly well while being alarmed by the current inability to find good solutions to deep economic and social problems. Feeling love for something and retaining the ability to see and point out problems, with the goal of improving - these things are not mutually exclusive, but rather depend on each other.

It is my hope that I can retain this attitude for as long as possible. Just as living abroad for many years has changed me, being back at home will probably change me again over the years. Perhaps this is why I feel compelled to write this, so I can remember in the future. 

]]>
Marcel Salathé
tag:blog.salathe.com,2013:Post/1008906 2016-03-08T16:08:14Z 2016-03-08T16:17:16Z Jack of all trades

The Swiss National Science Foundation just published an interview with me, in the form of an article (you can read the article in English, French, or German). The last paragraph reads as follows:

So he wears the caps of scientist, entrepreneur, author and musician. Can he manage them all? "I envy those scientists who spend all of their energy on a single pursuit. Being active in a number of different research fields sometimes leads you to think that you lack depth in a number of them. But given that modern science is interdisciplinary, becoming involved in areas outside of one’s comfort zone is also an asset. After all, why choose one approach over another?"

I could probably write an entire book on the idea expressed in this paragraph. Interdisciplinary research has fascinated me from the beginning of my career as a scientist. Doing interdisciplinary science is hard. It's hard because, despite best efforts by the various institutions involved in science, the cards are stacked against you:

  1. A truly interdisciplinary research project is hard to get funded; experts in one discipline won't understand - or worse, trivialize - the challenges in the other disciplines.

  2. A truly interdisciplinary research project is hard to execute; different domains speak different languages, have different theories, consider different issues relevant.

  3. A truly interdisciplinary research project is hard to get published; it doesn't fit in the neat categories of most journals, which are rooted in their disciplines, and there are only a few multidisciplinary journals. Also, point 1.

  4. A truly interdisciplinary research project is hard to get noticed; there are almost no conferences, prizes, recognitions, societies, etc. for interdisciplinary work.

These challenges are increasingly recognized. Unfortunately, almost nothing substantial is being done to address them. And it's not for lack of trying. It is simply a very, very hard problem to solve. Disciplines may be arbitrary, but they exist for a good reason.

But the key point I tried to address in the interview - and which led to the highly condensed last paragraph cited above - is that the biggest hurdle for doing interdisciplinary science is found in oneself. At least, that is my experience. Doing interdisciplinary science means spending much time trying to understand the other disciplines. You can't do interdisciplinary science without having a basic grasp of the other disciplines. The more you understand of the other disciplines, the more interesting your interdisciplinary research will be. 

And here's the catch: all the time you spend keeping up with at least a superficial understanding of what's going on in the other disciplines is time you'd normally spend keeping up with your own field. As a consequence, you are constantly in danger of becoming a "jack of all trades, master of none". I highly recommend reading the Wikipedia entry on the etymology of this term. When it first emerged, it was simply "jack of all trades", meaning a person who was able to do many different things. The negative spin "master of none" was only added later, but it's deeply engrained in our culture. The fact that similar sayings exist in many other languages, as listed on the Wikipedia page, speaks volumes.

In science, not being perceived as an outstanding expert in one particular field is a real danger to one's career, especially in the mid-career stage. The incentive structure of science is hugely influenced by reputation, which is the main reason scientists are so excited about anything with prestige. At the beginning of your career, as a student, it's clear you're not an expert; at the end, it's clear you're an expert, which presumably is why you survived in the system for so long (exceptions apply). But in the ever-growing stretch in between - especially the roughly ten years between PhD and tenure - you definitely do not want to be seen as a "jack of all trades, master of none".

Unless you don't give a damn, which, if you're like me, is what I advise you to do. 

I wasn't sarcastic when I said that I envy scientists who spend all of their time working on a single topic. Focus is something I strive for in everything I do. How marvelous to be consumed by one particular question! How satisfying it must be to point all one's neurons to a single problem, like a laser! What a pleasure to be fully in command of all the literature in your speciality! How wonderful to go back to the same conferences, knowing everyone by name, being friends with most of them. Alas, it is not for me.

I'm drawn to many different fields, just like I'm drawn to experiencing many different types of food. Goodness knows I can get obsessed with one particular food item, spending years trying to perfect it. But that doesn't mean I'm not intensely curious about all the other things that surround me. In science, I've decided I find the space between disciplines too interesting to focus exclusively on one discipline.

But this is the catch-22: you need to be able to deal with the fact that you're not as much of an expert in your main discipline as you could be. Are you able to deal with this?

One piece of advice I would give, completely unsolicited, like everything on this blog, is to first become very, very good in one particular field. Good enough that you find it easy to publish, get funding, get jobs, get invited to conferences, and so on. At that point, you'll be in a much stronger position to branch out. You'll still face all the negative incentives listed above, but at least you'll have a home base you can return to if things get too crazy.

And when everything goes haywire, always remember:

Specialization is for insects.
]]>
Marcel Salathé
tag:blog.salathe.com,2013:Post/990231 2016-02-10T08:17:47Z 2016-02-11T20:35:43Z Open Data: Our Best Guarantee for a Just Algorithmic Future

(Two days ago I gave a talk at TEDxLausanne - I'll post the video when it becomes available. This is the prepared text of the talk.)

Imagine you are coming down with the flu. A sudden, rapid onset of a fever, a sore throat, perhaps a cough. Worried, you start searching for your symptoms online. A few days later, as you're not getting better, you decide it's time to go see a doctor. Again a few days later, at your appointment with the doctor, you get diagnosed with the flu. And because flu is a notifiable disease, your doctor will pass on that information to the public health authorities.

Now, let's pause for a moment and reflect on what just happened. The first thing you did was to go on the internet. Let's say you searched on Google. Google now has a search query from you with typical flu-related search terms. And Google has that information from millions of other people who are coming down with the flu as well - one to two weeks before that information makes it to the public health authorities. In other words, from the perspective of Google, it will be old news.

In fact, this example isn’t hypothetical. Google Flu Trends was the first big example of a new field called “digital epidemiology”. When it launched, I was a postdoc. It became clear to me that the data that people generate about being sick, or staying healthy, would increasingly bypass the traditional healthcare systems, and go through the internet, apps, and online services. Not only would these novel data streams be much faster than traditional data streams, they would also be much larger, because - sadly - many more people have access to the internet through a phone than access to a health care system. In epidemiology, speed and coverage are everything; something the world was painfully reminded of last year during the Ebola outbreak.

So I became a digital epidemiologist - and I wondered: what other problems could we solve with these new data? Diseases like the flu, Ebola, and Zika get all the headlines, but there is an entire class of diseases that regularly kill on a large scale and that almost nobody talks about: plant diseases. Today, 500 million smallholder farmers in the world depend on their crops doing well, but help is often hard to get when diseases start spreading. Now that the internet and mobile phones are omnipresent, even in low-income countries, it seemed that digital epidemiology could help, and so a colleague, David Hughes, and I built a platform called PlantVillage. The idea was simple - if you have a disease in your field or garden, simply snap a picture with your phone and upload it to the site. We'll immediately have an expert look at it and help you.

This system works well - but there are only so many human experts available in real time. Can we possibly get the diagnosis done by a machine too? Can we teach a computer to see what’s in an image? 

A project at Stanford called ImageNet tried to do this with computer vision - they created a dataset of hundreds of thousands of images, showing things like a horse, a car, a frog, a house. They wanted to develop software that could learn from the images, to later correctly classify images that the software had never seen before. This process is called "machine learning", because you are letting a machine learn on existing data. Another way of saying this is that you are training an algorithm on existing data. And when you do this right, the end product - the trained algorithm - can work with information it hasn't encountered before. But the people at ImageNet didn't just use machine learning. They organized a challenge - a friendly competition - by saying "here, everybody can have access to all this data - if you think you can develop an algorithm that is better than the current state of the art, go for it!" And go for it, people did! Around the world, hundreds of research teams participated in this challenge, submitting their algorithms. And a remarkable thing happened. In less than five years, the field experienced a true revolution. In the end, the algorithms weren't merely better than the previous ones. They were now better than humans.
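
To make the "training an algorithm on existing data" idea tangible, here is a minimal sketch in Python. It uses scikit-learn's small built-in dataset of handwritten digit images as a stand-in for a large collection like ImageNet or PlantVillage - the dataset, model, and parameters are illustrative assumptions, not what the challenge teams actually used:

```python
# Minimal sketch of the machine learning loop described above:
# train on labeled images, then classify images never seen before.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

digits = load_digits()  # 1,797 small 8x8 grayscale images of handwritten digits

# Hold some images back so we can test on data the algorithm has never seen
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# "Training an algorithm on existing data"
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# The trained algorithm now classifies images it has never encountered
predictions = model.predict(X_test)
print(f"Accuracy on unseen images: {accuracy_score(y_test, predictions):.2f}")
```

The pattern is always the same: learn from data the algorithm has seen, evaluate on data it hasn't. The ImageNet-style challenges simply ran this loop at a much larger scale, with far more sophisticated models.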

Machine learning is an incredibly hot and exciting research field, and it's the basis of all the "artificial intelligence" craze that's going on at the moment. And it's not just academic: it is how Facebook recognizes your friends when you upload an image. It is how Netflix recommends which movies you will probably like. And it is how self-driving cars will bring you safely from A to B in the very near future.

Now, take the ImageNet project, but replace the images of horses and cars and houses with images of plant diseases. That is what we are now doing with PlantVillage. We are collecting hundreds of thousands of images of diseased and healthy plants around the world, making them open access, and we are running open challenges where everyone can contribute algorithms that can correctly identify a disease. Imagine how transformational this can be! Imagine if these algorithms can be just as good, or perhaps even better, than human experts. Imagine what can happen when you build these algorithms into apps, and release those apps for free to the billions of people around the globe with smartphones.

It’s clear to me now that this not only the future of PlantVillage, but a future of applied science more generally. Because if you can do this with plant diseases, you can do this with human diseases as well. You can in principle do it with skin cancer detection. Basically, any task where a human needs to make a decision based on an image, you can train an algorithm to be just as good. And it doesn’t stop at images, of course. Text, videos, sounds, more complex data altogether - anything is up for grabs. As long as you have enough good data that a machine learning algorithm can train on, it’s only a matter of time until someone will develop an algorithm that will reach and exceed human performance. And here, we're not talking science fiction, in the next 50 years, we're talking now, in the next couple of years. And this is why these large datasets - big data - are so exciting. Big data is not exciting because it’s big per se. It’s exciting because that bigness means that algorithms can learn from vast amounts of knowledge stored in those datasets, and achieve human performance.

If algorithms derive their power from data, then data equals power. So who has the data?  Things may be ethically easy with images of horses, cars, houses, or even plant diseases -  but what about the data concerning your personal health? Who has the data about our health, data which will form the basis for smart, personalized health algorithms? The answer may surprise you, because it’s not just about your past visits to doctors, and to hospitals. It’s your genome, your microbiome, all the data from your various sensors, from smartphones to smartwatches. The drugs you took. The vaccines you received. The diseases you had. Everything you eat, every place you go to, how much you exercise. Almost anything you do is relevant to your health in one way or another. And all that data exists somewhere. In hospital databases. In electronic health records. On the servers of the Googles and Apples and Facebooks of this world. In the databases of the grocery stores, where you buy your food. In the databases of the credit card companies who know where you bought what, when. These organizations have the data on which to train the future algorithms of smart personalized healthcare.

Today, these mostly commercial organizations provide us with compelling services that we love to use. In the process, they collect a lot of data about us, and store them in their mostly secure databases. They use these data primarily for potential commercial gains. But the data are closed, not accessible to the public - we imprison our data in silos that only a select few have access to, because we are afraid of privacy loss. And because of this fear, we don't let the data work for us.

Remember Google Flu Trends that I mentioned a few minutes ago? Last year, Google shut it down. Why? We can only speculate. But what this reminds us of is that those who have the data with which they can build these fantastic services... can also shut them down. And when it comes to our health, to our wealth, to our public infrastructure, we should be really careful to think deeply about who owns the data. I applaud Google for what they have done with Google Flu Trends. I am a happy consumer of many Google services that I love to use. But it is our responsibility to ensure that we don’t start to depend too strongly on systems that can be shut down any day without warning, because of a business decision that's been made thousands of miles away. 

So, how can we strike the right balance between protecting individual privacy and unleashing big data for the good of the public? I think the solution lies in giving each of us a right to a copy of our data. We can then take that copy and either choose to retain complete privacy - or choose to donate parts of these data to others, to research projects, or into the public domain to pursue a public good, with the reassurance that these data will not be used by insurance companies, banks and employers to discriminate against us.

Implementing this vision is not going to be easy, but it is possible. It has to be possible. Why? Two reasons (at least). First, our data is already digital, stored in machines somewhere and hence eminently hackable. We should have regulations in place to manage the risks of the inevitable data breaches. Second, we are now running full speed into a second machine age, where machines will not only be much stronger than us - as they have been in the past decades - but also much, much smarter than us. We need to continue to ensure that the machines work in our common interest. It's not smart machines and artificial intelligence we should be concerned about - they are smart and intelligent because of the data. Our concern should be about closed data. We won't be able to leverage the phenomenal power of smart, learning machines for the public good if all the data is locked away.

Open data is not what we should be afraid of - it's what we should embrace. It’s our best guarantee that we remain in control of the algorithms that will rule our digital world in the future.

]]>
Marcel Salathé
tag:blog.salathe.com,2013:Post/922011 2015-11-11T08:47:00Z 2015-11-12T18:17:58Z Forty

Today I'm forty. 

40.

I'm staring at this number in disbelief, even though I've seen it coming for, say, 35 years. It's not because I feel old (I don't). It's not because I can't believe how fast it's gone (I can).

It's because I can't believe how lucky I've been. I've lived a life of privilege from the day I was born. 

I was born in what is today the world's wealthiest country by almost any measure. I was born to loving parents who supported me in every decision I've made, who didn't pressure me into any particular direction. I was lucky to meet many wonderful friends across the decades and across the continents, some of whom I've only known for a few months or a few years. As far as I can tell, none of them has ever betrayed me. I have a wonderful wife who supported me in all my decisions, who stuck with me in my low times, and who is simply the best partner and mother to our children I can imagine. I have two wonderful children who make me smile every single day. With the exception of the regular childhood sniffles, they have never been seriously ill.

I have never been in a hospital as a patient in the past 40 years. I would love to keep this record going for the next 40 years.

I have a wonderful job, a tenured position in one of the world's best and most innovative universities. I'd like to think I had something to do with this - but it's hard not to see how everything has also simply lined up perfectly: a free educational system, wonderful mentors, perfect timing.

In other words, I was lucky beyond imagination.

I'm curious what's in store for me. Statistically speaking, I'm not even at half time. But ever since I've had children, I've been thinking about death more often. I'm not sure why, but there is something about seeing your children grow up that somehow reminds you that for new life to begin, old life must eventually end. I am now keenly aware that my heart could stop beating tomorrow, or that a car may run me over on my ride to work next week. So what? If anyone reads this after I've passed away, I hope they can see that I've had a wonderful life, and that I've been incredibly grateful for it. 

Perhaps I'll luck out even more, and I will see my children grow up to become independent and responsible adults. Perhaps I will see my wife make her dream come true of becoming a winemaker. Perhaps I can one day hold a grandchild in my arms. I will surely shed a tear then. Perhaps I can work on interesting projects for another two to three decades, and try to make the world a slightly better place than it was before. Perhaps I can see thousands of sunrises while marveling at the beauty of the universe, and my astronomical luck to be among the few bags of atoms that understand where they came from and why they are here. Perhaps I can drink thousands of wonderful bottles of wine, with friends new and old, laughing and crying about whatever it is life has thrown our way.

Perhaps not. Perhaps my luck runs out in a few days, a few months, a few years. So be it. Today, I am simply pausing, after having circled around the sun 40 times, somewhere in a distant corner of the universe, to be grateful for what I've had. 

]]>
Marcel Salathé
tag:blog.salathe.com,2013:Post/927334 2015-11-05T14:22:00Z 2015-11-05T17:24:47Z Advice to a student

I was recently asked by a student for advice on what to do, and how to plan a career. The student had just completed a degree in biology, but had also discovered a love for programming. I wrote a response and decided to publish it here, with very light edits, in the hope it might be helpful for others who find themselves in a similar situation.

The description of your situation very much reminds me of my own, many years ago. So maybe the easiest way for me to give you advice is to tell you how I got out of the confusion at that time.

I became a biology student by the "exclusion principle". None of my high-school teachers managed to kindle any kind of passion for any kind of subject. For a long time, I thought this was my fault, that I was just an unexcitable person. Now I know of course that nothing could be further from the truth, and that my biggest challenge these days is not to get too excited about too many things, given that the day only has 24 hours. When it came to deciding what to study, biology simply seemed the least boring. There was something else, too - it seemed the most meaningful. This may sound weird, but in my early twenties, I was in a state of constant, undefined anxiety about what to do with my life, and the meaning of it all. The only thing that seemed to give me inner peace was to be out in nature, in the woods, or in the mountains. So I ultimately decided to study biology, because that would allow me to spend my professional life out in nature.

In my second year of studies, I had to take a class called "Evolution, Ecology, and Behavior". It completely changed my life. First, by introducing me to Darwin's theory of evolution, which I had heard of, but not grasped in all its implications. It connected my search for meaning and my love for nature in one beautiful idea that underlies all of life. Second, by showing me that there is a field of biology where one can reason about the big questions of life, using simple math and computers. This was an enormous discovery for me, and to this day, I consider myself first and foremost a theoretical biologist. Indeed, writing "Nature, In Code" was like writing a love letter to this field - the kind of book I wish I had had access to when I was younger.

Despite this discovery, I took a break from university after the second year. I had discovered something else that I found just as exciting: the Internet (this was in the late nineties). Not being a programmer, and never having programmed before, I was amazed by the endless possibilities of the Internet, and picked up some basic web coding skills (HTML and JavaScript). I started creating websites, first for friends, then for some businesses. With a high-school friend, I decided to seize the opportunity and start a company, so I quit university. We had a great time, and business was going fine, but after just a year, I realized that I very much missed my just-found true love, the theory of evolution, and I decided to go back to it. Thankfully, we were acquired by another web company in the area that was developing super exciting web tools, and I joined them. This allowed me to work in parallel on both areas of interest, web development and evolutionary biology. I was in heaven.

Looking back, it’s very obvious to me that these were crucial years. At the company, I learned extremely valuable skills. I learned more coding, more web technologies, and I learned some of the business aspects, too. Most importantly, I learned how to anticipate the future, and how to get some basic working knowledge in domains I knew nothing about. At the university, I learned all about evolutionary biology, but was suddenly in a position where I could use my newly acquired coding skills to do better science. To this day, I benefit tremendously from this experience. And that is why, whenever students ask me if they should join a company, I say yes, especially if it’s a tech company. Even if you later decide that science is where your heart is at, the skills you learn during the time working in tech companies are invaluable.

The world has changed rapidly since then. We now live in a completely digital world, with MOOCs, smartphones, and social networks. It must be overwhelming to grow up in an environment where you have access to absolutely everything, for free. But here's the catch. You shouldn't lose sleep over which courses you take, which MOOCs you take, which phones you use, how many followers you have, which schools you went to. In other words, you shouldn't care so much about what you're consuming. What you should care about, deeply, is what you're producing. This view may not be very widespread yet, but it will be. In Silicon Valley and similar innovation clusters, it most certainly is. With any innovative group, team, or company I've interacted with, it's the same thing. Your career opportunities will be defined by what you have given, not what you have taken. Put differently, your work is more important than your education credentials. Don't make the mistake of thinking that this will only be true later on - it may be even more true at the very early career stage.

As a consequence, I cannot give you any specific advice on industries, research groups, or internships. The advice I would give you is to choose a place where you can start producing, tinkering, expressing yourself. Surround yourself with people who make things. People who put their work out in the open. Creators. Go to meetups, hackathons, coworking spaces, and start contributing. In no time, you'll find there are more opportunities than you've imagined. Try to hang out with people who are smarter than you, and better at whatever it is you want to learn. Intimidation is for fools. If you're the smartest person in the room, you're in the wrong room.

If you start building things, your CV will very quickly start to look very impressive. When you learn new coding skills, for example during a MOOC, push your solutions to GitHub. Start working on side projects, and finish them (the latter part cannot be emphasized enough). Even if they seem crappy to you now - you can always go back later and improve on them. This repository of things you’ve done will become much, much more important than your CV. In fact, it will become your CV. Interesting companies and research groups will look at it and love what they see there, because they see that you make things happen, which is really all that matters. But what about the companies and groups that don’t care about that? Easy: ignore them - they are not interesting places anyways.

As a final note, let me just say that I think you are perfectly prepared at this stage for the future, even if you may not feel like it. You have a background in the natural sciences, you really seem to like coding, and you’re not shy to reach out for help when you feel stuck. Honestly, one can’t be in a much better position than that. The world is your oyster. 

Let me close with at least one concrete piece of advice: ask yourself, what would I passionately like to work on during the next 5-10 years? It helps to imagine that life may not last much longer after that, to create a sense of urgency. Then ask yourself, where would be the most amazing place to do this kind of work? If you can't answer the second question, go figure it out - talk to people in the relevant fields, read up on the topic online, go to events. Then, once you've found 2 or 3 places that would be amazing, go and ask to join them. If they decline, ask them why, and then go and fix whatever was missing. Keep at it until you are there.

I hope this is helpful, in some small way, and at some point in time. I wish you all the best.

]]>
Marcel Salathé
tag:blog.salathe.com,2013:Post/913985 2015-10-30T17:00:00Z 2016-11-22T14:38:53Z Creating a European Culture of Innovation

(These are the prepared notes for a speech I gave at Lift Basel on October 30, 2015.)

Good evening.

It's a great pleasure to be here with you. You are among the brightest and most creative innovators in Europe. We meet at a critical juncture. At stake is the future of Europe. And we, the innovators, entrepreneurs, scientists, activists, and artists, need to step up and take ownership of this future. Because if we don't, Europe will continue on its current downward trajectory and become what in many places it already has become - a museum of history.

Let's not fool ourselves - things are not looking too great. We only need to look at the situation in some of the southern European economies. In many countries in Europe, unemployment rates are still above 10%, seven years after the crisis. But what's worse, you can go almost anywhere in Europe and talk to young people, and with a few exceptions, you will find them pessimistic about the future of Europe. Many would take the opportunity to leave if it came along. The youth unemployment rates are now so staggeringly high that the problem seems insurmountable. But is it? Let's take a look at the data.

The information and communication technology sector is now the dominant economic driver of growth. Think Apple, Google, Facebook, Amazon, Uber. Noticed something? Not a single European company. Only one out of four dollars in this sector is made by European companies, and all the indicators for the future are pointing down. Some numbers are even more dire: when you list the top 20 global leaders among publicly traded internet companies, you know how many are European? Zero. And among all publicly listed companies in the digital economy, 83% are American, and a mere 2% are European. 2%!

There's something that's even more worrisome. Software is eating the world, and all sectors of the economy are being disrupted by software, but the software sector is where the predictions for Europe are most dire. This is a real problem. The disruptive process is so fast nowadays that if you are not careful and miss a key technological development, you are basically gone within less than a decade. Remember Nokia? The hotel industry is being disrupted by AirBnB, an American software company. The transportation industry is being disrupted by Uber, an American software company. The Swiss watch industry is now being disrupted by Apple, an American software and hardware company. Again, let the numbers speak for themselves: the $200-500 segment of Swiss watch sales has collapsed by 20% in the past three months. Other domains where Europe has been strong are now existentially threatened. Volkswagen may not survive the disaster they created for themselves. But already before the Volkswagen scandal, it was clear that the most exciting technology for cars no longer comes from Europe, but from the US; from Tesla, from Google, apparently from Apple, and certainly also from Uber. Meanwhile, we have the new Volkswagen CEO on the record last month saying that autonomous vehicles are hype.

Ok, so some of the old European companies are not moving nearly fast enough. What about the new ones, the up-and-comers, the startups? Let's look at the top ones, the ones whose valuation is more than 1 billion dollars - the so-called unicorns. Europe now has 40 of them, and the number is growing, which is good. They have a collective value of 120 billion dollars, which is also good. But that is still less than half of the value of Facebook alone. Put differently, all European high-value tech companies together are worth less than half of your mum's favorite social network. We clearly have a long way to go. And we need to go much faster - to catch up, and then to take the lead.

So where's the problem? Some say it's VC funding, which is only partially true. Yes, the culture of VC funding is probably less mature in Europe than it is in the US, especially for series A, B, and C funding. But money will find its way into good ideas and market opportunities one way or another. Others say it's simply the European market, and European regulation. I think that is an illusion. Look at AirBnB, the US startup that now has a valuation of over 25 billion dollars. It was started as a three-person startup in California's Y Combinator, but it now gets over half(!) of its revenues from within Europe. And by the way, San Francisco is probably one of the worst regulatory environments you can find yourself in. AirBnB is currently facing huge battles in San Francisco, and a Californian judge recently ruled that Uber drivers are employees, causing a minor earthquake in the booming sharing economy. Indeed, California is probably one of the most regulated American states, and yet it does exceedingly well.

I think that the problem is actually quite simple. But it's harder to fix. It's simply us. We, the people. We, the entrepreneurs. We, the consumers. I have lived in the San Francisco Bay Area for more than three years. What's remarkable about the area is not its laws, or its regulations, or its market, or its infrastructure. What's truly remarkable is that almost everyone is building a company in one way or another. Almost everyone wants to be an entrepreneur, or supports them. Almost everyone is busy building the future. Indeed, you can almost physically feel that the environment demands it from you. When someone asks you what you are doing professionally, and you don't respond by saying that you're building a company, they look at you funny, as if to say, "then what the hell are you doing here?"

Needless to say, for entrepreneurs, this is an incredibly exciting and stimulating environment. You are one of them. You are part of the ecosystem. And like most of them, you will often fail in one way or another. But since everyone is failing all the time, it's completely normal. You just keep going, you fail better next time, until that one time when you don't.

It's not a trivial point, I think. The other day, I was in Turin in Italy, and I desperately needed a coffee. I walked into a random coffee shop, where I was served a heavenly cappuccino, with a chocolate croissant that still makes my mouth water when I just think about it. Was I just lucky? No - all the coffee shops there are that good. Because the environment demands it. Sure, you can open a low-quality coffee shop in Turin if you want to, but you'll probably have to file for bankruptcy before you have time to say buongiorno. The environment will simply not accept bad quality. In another domain, I had the same personal experience when I was a postdoc at Stanford. Looking back, I wrote all of my best and most cited papers there. I don't think that's a coincidence. Every morning, as I was walking across campus to my office, I could sense the environment demanding that I do the most innovative work - if I didn't, then what the hell was I doing there?

So this is my message to you. I'm asking you to create those environments, both by doing the best and most innovative work you can, and by demanding the same from everyone else around you. These two things go together; they create a virtuous circle. Since everyone knows what doing the best and most innovative work means, allow me to share some thoughts on the latter part - demanding it from everyone else.

It means that we do not accept the status quo. It means that we continuously ask ourselves, our collaborators, our co-workers, and our leaders how we can do better. It means that we put our wallets where our mouths are - that we stop buying products from companies that don't innovate, and that we support those that do. That we embrace technology, quickly adopt new tools, buy from startups. It means that we speak up, in person, on social media, on blogs, and so on, when we see conservative, backwards thinking. And let us not be afraid to speak up even when we seem to be alone in our views. In fact, that may be the most important time to speak up. Recall the child in "The Emperor's New Clothes" who pointed out that the king was not wearing any clothes at all - her initial remarks sparked a chain reaction, with everyone else suddenly feeling comfortable pointing out the obvious.

It also means letting go, and disassociating from people who are not moving forward as fast as you'd like. This may sound harsh, but really, it isn't. In my career as both a scientist and an entrepreneur, I have seen the painful moments when people realize much too late in the game that they were on the wrong track. You can undo many things in life, but you cannot roll back time. Last year, I started a company with a friend from Stanford, and we got into the famed Y Combinator program. We failed so hard, I can't even begin to describe it. We didn't even get traction. The partners at YC dropped us like a hot potato. At the time, I felt incredibly frustrated - I thought it was their role to help us get on track. But in hindsight, I realized that they were doing us an incredible favor by letting us fail fast, instead of letting us morph into one of those zombie startups that end up going nowhere, but suck up all the energy and time of their founders for too long. Time they could spend exploring the next idea. Time they could spend with family and friends.

And this brings me to another point about demanding excellence and innovation. I hope it was clear that I've spoken exclusively about work. I do not for a moment believe that excellence and innovation demand making inhuman sacrifices. They do demand very hard work, dedication, and commitment, for sure. But what do you want written on your gravestone? "He created a billion dollar company" or "he was the most wonderful father and friend anyone could imagine"? Ideally both, of course. But you are going to have regrets. Choose them wisely.

There is also something that we shouldn't do, but unfortunately is a very common European tradition. It's to complain, and wait for politics to step in. To think that if only we had the right political conditions, we would be just like Silicon Valley. Don't get me wrong - there is much that can be improved. Politics can make things harder, and it can make things easier. But it cannot on its own create innovation. It can support it a little bit, and it should. And if you are working in the political sphere, working to make life easier for startups and innovation, I say: all power to you! I support you, I will vote for you. But to the rest of us, I say: just do it. Don't ask for permission, ask for forgiveness if necessary. If you are waiting for permission, you will wait for the rest of your life. Most rules exist for a simple reason: to protect incumbents. Don't ask for permission, just go and do it. 

My final point on this topic is probably the most important one, and it is about you. I strongly believe in role models, and I believe that the only way out of the current mess for Europe is to have strong role models. Role models are absolutely essential. Take a look at Facebook. It's easy to quantify the economic value that Mark Zuckerberg has created through the company. But think about the entire generation of high schoolers and college kids he inspired. Think about all the companies that were, and will be, started by young people who want to be the next Zuckerberg. The next Sergey or Larry. The next Steve Jobs. They were all in their early twenties when they started their companies. I bet the value of simply having these role models - measured by the companies they inspired - approaches or exceeds the value of the companies they themselves created.

I've had my own share of role models, right here in this very town. I used to work in two software startups - before they were called startups - that have both gone on to become very successful. The first was called Bidule, which later morphed into Day Software. Its technical founder, David Nüscheler, is perhaps the closest equivalent to Mark Zuckerberg that we have in Switzerland. An ETH dropout, David co-founded the company and brought it to IPO in the year 2000. A decade later it was acquired by Adobe for over a quarter billion dollars. The second company was called obinary, which later turned into Magnolia. In 2003, the two co-founders Pascal Mangold and Boris Kraft realized their company was faltering, and so they decided to take a huge risk and develop a new product, which has gone on to do extremely well. They just moved into a brand new building specifically designed for the company, which now employs close to 100 people.

What these founders have in common is that they were smart, and took a chance when they saw an opportunity. They didn't wait for a more mature VC market, for better political conditions, or for anyone's permission. They just went and built these companies. We need to talk more about them. We need to celebrate them. We need to make sure they want to work here, not anywhere else. Many people who have worked for these two companies have since left and started their own companies. And that's of course how you build a thriving startup environment. Not by government mandate, but by letting smart people innovate, by letting them get back on their feet when they stumble, and by celebrating them when they succeed - because if they do, we all do.

And so we need you to be our role models. We need you to show what's possible, beyond what we believe is possible. It may not be easy, because you may still find yourself in an environment that does not demand innovation - or worse, an environment that actively dismisses it. I am here to ask you: please keep going, keep creating the future. When you are being dragged down by the doubters, by the know-it-alls, by the why-do-we-need-thisers, please know that there are many like me, cheering you on, waiting for you to be even more daring. Not everybody gets to be at this podium, not everybody gets the chance to have your attention. But I'm telling you now, we are here, and we need you to pull us into the future.

It is not going to be easy. As George Bernard Shaw so pointedly said: "The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man." Paul Graham, Y Combinator co-founder, advises: "Live in the future, then build what's missing." It is probably the shortest and best recipe for innovation I've ever come across. But living in the future means you will appear a little crazy, and a little arrogant. You are living in the future, after all! And what's worse, many of your contemporaries don't just want to live in the present, they'd prefer to live in the past. The history of humanity is one long fight by those living in the future against those living in the past. To some extent, it is remarkable that we have actually come as far as we have. Every single human invention had to struggle - even things that everyone now takes for granted. From creating time zones to washing our hands between operations to communicating through phones: every invention you see around you was once considered stupid, useless, unnecessary, dangerous. But those unreasonable inventors kept going, because they lived in the future, and saw what was missing.

So please, let us all live in the future and build what's missing - here in Europe. I am worried sick that the easiest way for me to live in the future is to buy a ticket to San Francisco. Just like the easiest way for Americans to relive the past is to buy a ticket to Europe, rich in history. I'm asking you to become even more ambitious, more daring, and more demanding, both of yourself and, most importantly, of your environment.

Thank you very much.

]]>
Marcel Salathé
tag:blog.salathe.com,2013:Post/912514 2015-10-03T20:29:34Z 2015-10-04T07:35:47Z Rules

"You know, these are the rules".

How often have you heard someone say this? Me, too many times. I have nothing against rules per se. I am certainly not fond of breaking them, even if it is sometimes necessary. But there is one thing that people often forget about rules.

They are manmade.

This sounds trivial, but it has profound consequences. The most important is that because rules are manmade, they can be changed. People often cite rules as if they were some ancient wisdom, or even natural law. They are not - somebody made them, and it's up to us to change them. If the rules work against you, you must change them. If you can't change them, you are in the wrong system, and you should leave (e.g. your workplace). It's harder if the system is political, and the rules are the law, but it's possible - see this fascinating 16 min talk about voice and exit.


The second important consequence is that most people may actually disagree with a rule, in which case it might not be enforceable (again, this depends on the context). Think about a rule - any rule. Do you know when it was established? No matter how far back in time the rule was put in place, we now know much more about the world than people did back then. Thus, it's quite possible that the rule doesn't make sense anymore, based on the data we have now. It's for this reason that I have some respect for rules: when they were put in place, they must have made a lot of sense, and we should always assume that there was a good reason back then. At the same time, we should never stop questioning whether the rule makes sense today - and change it if the answer is no.

This brings me to the third consequence: the rule was put in place to defend and protect someone's interest. This may have been the interest of the majority of the public; the interest of a company; the interest of a lobbying group, etc. It may not be your interest. The rule may not be there to help you thrive, and in that case you need to change it.

Steve Jobs, with his uncanny ability to simplify complexity, said it best:

“When you grow up you tend to get told that the world is the way it is and your life is just to live your life inside the world. Try not to bash into the walls too much. Try to have a nice family life, have fun, save a little money. That's a very limited life. Life can be much broader once you discover one simple fact: Everything around you that you call life was made up by people that were no smarter than you. And you can change it, you can influence it… Once you learn that, you'll never be the same again.”

]]>
Marcel Salathé
tag:blog.salathe.com,2013:Post/910073 2015-09-27T12:03:22Z 2015-09-29T12:38:30Z Dear Volkswagen: you are probably doomed (but not because of diesel)

What a week it's been. You've admitted to the world that you have sold 11 million vehicles that are far more polluting than you advertised them to be. You've admitted that you've installed software in your cars that specifically detects when a car's emissions are being measured (i.e. the engine is running but the car is not moving) in order to fake the measurements. You've set aside 6.5 billion euros to deal with this scandal, and have replaced your CEO.
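
To make the mechanism concrete, here is a minimal sketch of the kind of logic being described - purely hypothetical, and emphatically not your actual code, which none of us have seen: the engine controller infers a test stand from a handful of sensor readings and only then switches to a clean calibration.

```python
# Hypothetical sketch of a software "defeat device" - an illustration of
# the logic described above, not VW's actual code. A dyno emissions test
# is recognizable because the engine runs while the car does not move
# and nobody touches the steering wheel.

def looks_like_emissions_test(engine_rpm, wheel_speed_kmh, steering_angle_deg):
    """Heuristic: engine running, wheels stationary, steering untouched."""
    return engine_rpm > 600 and wheel_speed_kmh < 1 and abs(steering_angle_deg) < 2

def select_calibration(engine_rpm, wheel_speed_kmh, steering_angle_deg):
    """Switch to the clean calibration only when a test is suspected."""
    if looks_like_emissions_test(engine_rpm, wheel_speed_kmh, steering_angle_deg):
        return "low-NOx test mode"   # full exhaust treatment: passes the test
    return "road mode"               # less treatment: far more real-world pollution

print(select_calibration(engine_rpm=850, wheel_speed_kmh=0, steering_angle_deg=0))
# prints "low-NOx test mode" on the test stand; on the highway, "road mode"
```

A dozen lines of logic like this is all it takes - which is exactly why a software-based deception is so chilling.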

This episode will go down in corporate history as a textbook example of large-scale industrial fraud for a number of reasons. First, there is the fact that the mechanism of deception was software-based. Second, you haven't just duped millions of people into buying a product they would never have bought had they known the truth - in the process, you've contributed to massive pollution that's affecting everyone's health, not just that of your customers. Third, the betrayal of trust is at an unparalleled scale - you've sold polluting machines specifically to a group that's sensitive to environmental issues. Many who bought a VW diesel bought it precisely because they assumed it to be less polluting than the competing products. Fourth, we're not just talking about selling candy - we're talking about products that cost tens of thousands of dollars. In other words, we're talking about tens of billions of dollars of products sold under completely false advertising.

Frankly, it's hard to see how you can come out of this alive. The fines you will have to pay will be in the tens of billions of dollars. The class-action lawsuits will be equally costly. Many of your top executives will go to jail. You currently have 30 billion euros in cash reserves, which probably won't even begin to cover your liabilities. And all of this is just the tip of the iceberg. Not only are the 11 million people you lied to less likely to buy another VW in the future, but you've probably also lost everyone else who was considering buying one. As someone who just recently bought a VW diesel - my first VW ever, and almost certainly my last - I can of course only speak for myself. But everyone I've talked to feels pretty much exactly as I do. And if, after some more independent analysis, the numbers come back and they are even close to what we currently read in the press (10-40x more pollutants than advertised), I will sell the car immediately, even if at an almost total loss.

Such a shame. You've spent all this energy over the years and decades to make people like me buy your products. And just as you seem to succeed, the world comes crashing down on you. I feel particularly sorry for the vast majority of your employees who are completely innocent, but who will lose their jobs over this. I feel sorry for every world-class German engineer whose reputation you are dragging down with you, and who doesn't deserve any of it. 

Some cynics say that's just the name of the game. In a couple of weeks, no one will talk about it anymore. They couldn't be more wrong. Every time VW diesel owners start their engines, they will think of how you betrayed them. I've noticed it myself - even though there is less press on the scandal with every passing day, I actually get angrier with every passing day. This is not just a temporary thing. For the first time in my life, I am willing to join groups who take legal action against a company.

But there's hope. As the saying goes, a crisis is a terrible thing to waste. 

There is one, and only one, reason why I would consider buying a VW in the future: if you massively beat Tesla at their own game. Abandon all fuel-powered development today and invest every single cent into long-range electric cars, and build the electric charging infrastructure throughout Europe and the US (and the rest of the world). In addition, the development of the self-driving car has to be your top priority. The car of the future has no human driver in it, and of course you know this (anyone at VW who doesn't, let go of them immediately). Lobby your government and the EU to change regulation so that safe self-driving vehicles are allowed on the road the day they roll out of your factories. Since you are in an existential crisis, and with you the millions of jobs that indirectly depend on you, they'll listen.

The strategy can be summarized in two words: batteries & software. Everything else is gone. It may be difficult for some of the more traditional, non-software engineers (like your ex-CEO) to embrace the very thing - software - that brought you to your knees. But there is simply no alternative. Europe is already lagging behind in the software-powered technological revolution. Aggressively start hiring the most brilliant software engineers away from Tesla, Google, Apple, and Uber to make your new strategy come true. If any resources remain, give them to those universities that are building new - or expanding existing - world-class software engineering curricula.

So why the title of this post? Because all of this is probably not going to happen. The new CEO, Matthias Müller from Porsche, thinks autonomous vehicles are unjustifiable hype. I wish I were kidding, but I'm not: the VW board thought that the best person to replace the guy who oversaw the cheating-software scandal (or was unaware of it) is a guy who seems to have even less appreciation of the ongoing software revolution. As far as I can tell, the new VW CEO Matthias Müller is the only CEO of a large car maker who has gone on record saying that autonomous vehicles are hype. 

Unless Müller makes a 180-degree turn, it's quite obvious to me that VW is doomed. They may survive as the Foxconn of car making - a pure hardware manufacturer with extremely low margins and mostly terrible, low-wage jobs. If that's what they want, they are on the right trajectory. But for Europe, this is extremely bad news, as another major economic player will go down the road of Nokia, and the economy will suffer badly.



]]>
Marcel Salathé
tag:blog.salathe.com,2013:Post/907126 2015-09-19T20:44:05Z 2015-09-20T14:27:30Z Serendipity in the Digital Age

I'm currently attending a meeting in Chamonix, France, and one of the questions discussed tonight was: what is going to happen to serendipity in the digital age? Serendipity means fortunate happenstance, and some people seemed concerned that the digital age would reduce serendipity, because in their view, it requires real face-to-face human interaction. I think this argument is weak and perhaps reflective of a certain generation that feels nostalgic for the olden days. But I do agree that serendipity is a good that needs to be preserved in the digital age, and this is no trivial task.

In evolutionary biology, serendipity is captured by the concept of beneficial mutations. DNA, the underlying chemical basis of all life on earth, mutates all the time. Most of these mutations are bad in that they either render a cell inviable or cause some type of malfunction. Less often, the mutation is neutral - the encoded protein might not change, or might change in a way that has no effect. Rarely, but occasionally, the mutation is beneficial, and the new version of the gene is better in the evolutionary sense, i.e. it directly or indirectly contributes to getting itself more successfully copied into future generations.

I would be surprised if the distribution of outcomes in random human interactions were very different. Most interactions are simply not interesting enough to be even started. Imagine the hundreds of people you bump into every week - even if you started a conversation with every single one of them, you would readily find that most of them are not your type, boring, or just not interesting enough to talk to beyond polite chit-chat. A few people may be interesting enough for you to keep the conversation going for a while, but the relationships will fizzle out after some time. Rarely, but occasionally, you meet a stranger with whom you connect strongly, and they become colleagues, friends, or even lovers. Many years in the future, we will think of the serendipitous moments when we met those people, while conveniently forgetting about all the other interactions that led nowhere.

Can anything more be said about this somewhat vague comparison between genetic mutations and random human interactions? I think so. The interesting question is, how can we optimize the rate of random interactions in a way that ensures we get the most out of them? Simply more randomness cannot be the answer, and indeed, that's not what we find in nature. In fact, mutation rates are generally extremely low, as low as one mutation per billion base pairs. But do not think for a moment that the mutation rates in nature are fixed - like everything else, they are subject to natural selection, and they vary widely both within and - even more so - between species. Viruses, for example, have some of the highest mutation rates among living things (if you grant them the status of being "alive"). The reason for this high mutation rate is that they are constantly attacked by their hosts' immune systems, and they need a way to escape from that threat. If they mutate, the chance of not being discovered can increase substantially. Not mutating is simply not an option, or else the immune system will bring the infection under control before it has a chance to be passed on - an evolutionary dead end. Indeed, the genetic makeup of a population of viruses within a person infected with HIV can change drastically in just a few days.
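
To see why there should be an optimal, intermediate rate, here is a toy simulation - my own illustration with invented numbers, not data from the literature. A viral lineage needs a rare escape mutation before the immune system wins, but most mutations are deleterious, so both too little and too much mutation are fatal:

```python
import random

# Toy model with made-up numbers, purely to illustrate the trade-off.
# Each generation, a viral lineage mutates with probability u. A mutation
# is deleterious with prob 0.70 (the lineage dies on the spot), an
# immune-escape mutation with prob 0.01, and neutral otherwise. The
# lineage "wins" if it picks up an escape mutation and never a
# deleterious one within 100 generations.

P_DELETERIOUS = 0.70
P_ESCAPE = 0.01

def lineage_survives(u, generations=100):
    escaped = False
    for _ in range(generations):
        if random.random() < u:            # a mutation occurs
            r = random.random()
            if r < P_DELETERIOUS:
                return False               # deleterious: dead end
            if r > 1 - P_ESCAPE:
                escaped = True             # escapes the immune system
    return escaped

def survival_rate(u, trials=20000):
    return sum(lineage_survives(u) for _ in range(trials)) / trials

for u in (0.001, 0.003, 0.01, 0.03, 0.1, 0.3):
    print(f"mutation rate {u:g}: survival {survival_rate(u):.4f}")
```

Survival peaks at an intermediate mutation rate: mutate too little and the lineage never escapes in time; mutate too much and it kills itself with deleterious mutations.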

To bring this back to human interactions - how much serendipity we want often depends on the circumstances in our lives. In moments of great stability, we may not be all that interested in too many random interactions. When we are alone, or looking to expand into new social circles, we may actively connect with as many new people as possible. Professionally, it's probably best to always maintain a relatively high "mutation rate", especially if you work in fast-moving environments (and don't we all these days?).

In the digital age, many of these random interactions occur online. On one hand, since the online community is global, you can now sample from a vastly larger group of people, which may or may not lead to more serendipity. On the other hand, your sample is increasingly driven by algorithms, and in my experience, these algorithms try to predict what I like based on what I already like, which is a kind of targeted serendipity. But there is also the danger that I am getting too much of what I already like, and the true, random encounter with a new person or a new idea becomes increasingly rare. I would love for these algorithms to allow me to fine-tune the level of random interactions.
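
To make that wish concrete, here is a minimal, hypothetical sketch of what such a dial could look like - essentially what the machine learning literature calls an epsilon-greedy mix, not any existing platform's API: a slider decides how often the algorithm ignores my profile entirely.

```python
import random

# Hypothetical "serendipity dial" for a recommender system - an
# epsilon-greedy mix, not any real platform's API. epsilon = 0 means
# pure personalization, epsilon = 1 means pure chance.

def recommend(ranked_by_taste, full_catalog, epsilon=0.1):
    """With probability epsilon, serve a uniformly random item;
    otherwise serve the top personalized pick."""
    if random.random() < epsilon:
        return random.choice(full_catalog)   # true happenstance
    return ranked_by_taste[0]                # more of what I already like

catalog = ["jazz", "metal", "opera", "folk", "techno", "ambient"]
my_taste = ["ambient", "techno", "jazz"]     # the algorithm's model of me
print(recommend(my_taste, catalog, epsilon=0.3))
```

In stable phases of life I could turn the dial down; when looking to expand into new circles, I could turn it up.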

Ultimately, it is too early to say what the digital age will do to serendipity. My sense is that it will be a good thing, but I guess that's why my friends call me a tech optimist. 

]]>
Marcel Salathé
tag:blog.salathe.com,2013:Post/903903 2015-09-11T13:33:52Z 2015-09-12T15:08:24Z Europe's Technological Irrelevance: Time To Panic.

A few days ago, I was watching Apple's unveiling of new products in San Francisco. I've used and loved Apple products for a long time, and these events are always very exciting to me personally. I'm also heading the Digital Epidemiology Lab at EPFL, and whatever is happening in Silicon Valley is of great professional interest to me as well. Increasingly, however, when Apple announces new products, I'm not just feeling excitement, but also dread. The dread of realizing that the mobile revolution, which dwarfs both the internet revolution and the PC revolution, is happening completely outside of Europe.

In a few days, I will be able to download a new, much improved operating system onto my Apple Watch. I'm essentially getting a completely new watch, with completely new functionality, for free. I was already amazed by Apple Watch 1.0, but the second iteration sounds just spectacular. For the rest of my life, there is no reason whatsoever for me to ever buy a watch that is not smart. I see no reason why any of my children would ever even think about buying a watch that is not smart. And since the smartness of a watch is largely driven by its software, and the ecosystem of services it integrates with, Apple is going to dominate this space thoroughly. Perhaps there will be competition from other players that can challenge Apple's dominance. But whoever those players will be, they will be major software companies (such as Google). In other words, they will not be European.

I'm Swiss, and I have just recently returned to Switzerland after almost a decade in the US. So it's particularly painful to me to see the clear writing on the wall, which is that the watch industry will suffer drastically. Of course, there will always be people who are willing to spend hundreds of thousands of dollars for a beautiful mechanical watch. But this is not a big enough industry to build an economy on. Worse, it's not an exciting industry where innovation will come from. It's not the future.

And Apple's entrance into the watch business is just one example of a repeating pattern: software companies entering markets that seemed immune to competition from software, and offering a vastly superior product. It's happened before with typewriters and business machines; it's happened before with phones; it's happening now with watches; and it will happen again and again. The phrase "software is eating the world" is now so clichéd, trivial, and true that you have to wonder why people are still debating it. The question is not if software companies will provide better cars, better drugs, better homes, etc., but when. The only companies that will survive this disruption are the ones that manage to transform themselves into software companies as well. But time is running out very fast.

What software company is there in Europe to speak of, at the moment (I am not discounting the hopeful possibility that a major one is in the making right now somewhere in a European garage)? Where is Europe's Google, Europe's Apple, Europe's Facebook, Europe's Twitter, Europe's Uber? All we can see are the flames of cars burning in Paris, protesting innovation. All we can read about are Spain's efforts to ban Google News, misunderstanding how the internet works. Remember just a decade ago, when the mobile phone industry was largely a European affair? That's how fast things are moving now, and they will move even faster in the future. The time to feel dread is over. It's time to panic.

There are many smart people in Europe who see this. And they are working like crazy to steer the ship around. Everybody wants to build the next Silicon Valley. Now that it's becoming clear that nobody seems to be able to do that, everybody is trying to build at least some valley (health valley, drone valley, fintech valley, etc.). But how to do that? Marc Andreessen, Netscape co-founder and tech visionary, argues that deregulation is a major component. If true, that would be bad news for Europe, which has a hard time with deregulation. But I'm not entirely sure I'm buying the argument. The US is also a heavily regulated market, and its government is in my experience much more bureaucratic than the Swiss one, for example. In fact, California is probably one of the most regulated, if not the most regulated, of the states in the US. So something else must be going on. That something else is outstanding universities. 

If you look at the distribution of innovation hotspots in the US, there is a very clear correlation with outstanding academic institutions. It turns out there is more to this than mere co-location, though. First Round Capital, a leading seed-stage VC firm with investments in Uber, Square, Mint, and others, recently crunched ten years of its own data. The strongest predictor of company success was having gone to a top school (Ivy League, Stanford, MIT, Caltech). Even without those data, the connection between innovation and top academic institutions is so obvious that it's completely non-controversial. Indeed, the source of Silicon Valley itself can be found at Stanford University. And this is the area where Europe can move the needle in the right direction relatively quickly.

Europe has a great system of universities, but has in the last century lost its dominance to US institutions. Nevertheless, there are still many European institutions that are at the very top. There should be more. And the way to achieve this is to provide more resources, and to ensure that these resources are allocated smartly (i.e. pick the right leaders who understand the software-driven mobile revolution). The correlation between the ranking and the endowment of a school is striking. Even if you are skeptical about rankings (as you should be, since their methodologies are mostly flawed), it's not hard to see that the top schools have huge endowments, and schools with large endowments are more likely to be at the top. While correlation is not causation, recent data from Germany shows that more funding leads to better schools (no one working at a university would object to that - we could all do more and better with more resources).

One very obvious source of additional funding is the wealthy. And by that, I mean pretty much everyone with a university degree in Europe. European universities are a bargain. You can go to ETH or EPFL, two of the best technical schools in the world, and pay $600 a semester. Yes, you read that correctly. No, there is no zero missing. Other European institutions are even free. And this is as it should be. The one colossally bad idea we should not copy from the US is to raise tuition fees, and to put our students into debt when they want to get an education. The one good idea we absolutely should copy is to ask them, once they have received their education and probably gone on to well-paying jobs, to pay it forward to the next generation. Yes, they already pay some of it through taxes, but everyone does. Those of us who have benefited from a virtually free education should, and I think would, voluntarily pay more.

I graduated from the University of Basel in 2002. Since then, I have never been asked to contribute even a single dollar (or rather, Swiss Franc). This is insane. I would very gladly pay a decent amount of money to support my alma mater in its quest to become a world-leading institution. I would love the swag, and I would love to go to events, concerts, soccer games, and network with my peers. What a colossal non-use of potential resources! And it's also the fairest system: it asks us for our contributions at a time in our lives when we can actually afford to make them, unlike the system that asks us to pay tuition fees when we are essentially broke teenagers.

So please, European universities, stop playing in the second league when it comes to fundraising. Go out and ask your alumni for resources to help you build the next Stanford. Because the next Stanford will give rise to the next Google, the next Apple, and the next Uber, ensuring that we are the ones who are doing the eating, rather than being eaten. 


]]>
Marcel Salathé
tag:blog.salathe.com,2013:Post/879189 2015-07-10T12:52:46Z 2015-07-10T12:52:46Z Ch-ch-ch-ch-changes

On August 1, 2015, I will begin my new position as Associate Professor at EPFL in Switzerland. This post is a reflection on that transition.

I'm incredibly excited about the opportunity at EPFL, for a number of reasons. First, it is an excellent research university (don't take it from me). Second, unlike many other European institutions, they are very enthusiastic about the potential of online education, as evidenced by their Center for Digital Education, dozens of MOOCs, and close to a million online students. They were also the first European partner of Coursera, pretty much immediately after it launched. Since I share the enthusiasm about the potential of online education, this is a very exciting environment for me. Third, they have created a great environment for startups over the years, and the Lausanne / Geneva area, where EPFL is located, now takes in the majority of VC funding in Switzerland.

In addition to many other reasons, I'm also really excited to be back in the Swiss research environment. The Swiss, a small population in a landlocked nation without any natural resources to speak of, are well aware that education is their best bet to compete in a global economy, and correspondingly invest heavily in research and development (about 2.3% of GDP, the 12th highest share internationally). In return, the results are impressive: output in terms of papers per capita is off the charts, and they win more scientific prizes per capita than almost anyone else. These may not be the most useful metrics, but they are at least to some extent indicative of a productive scientific environment. In addition, it's very exciting for me to be closer to some institutions that I've enjoyed visiting in the past few years, most notably the Institute for Scientific Interchange in Turin, Italy, where I will be a fellow next year. 

But before you get the wrong impression: I'm very sad to leave the US. I have had a wonderful time here, first as a postdoc at Stanford, and later as a faculty member at the Center for Infectious Disease Dynamics at Penn State. The "can do" attitude at these places is just phenomenal. It's no wonder that US institutions so thoroughly dominate the scientific enterprise. Professionally, I will try to be as American as I possibly can, by which I mean maintaining the fast-moving, risk-taking, stand-up-and-try-again, pioneering attitude that is so pervasive here. 

As much as I hope to bring that American spirit to my new European home, I am equally hopeful - if somewhat pessimistic - that US institutions will adopt a more European approach to access to higher education. I have never felt comfortable working at an institution that charges hundreds of thousands of dollars for an education. It's impossible to blame a single factor here, and there is obviously huge variance on both sides of the pond. But these extreme tuition costs are a relatively new phenomenon in the US, which is why I will observe the developments in Europe with heightened sensitivity. What's remarkable about the situation is that I haven't met a single person at any US institution who personally thinks this is a good development. Everyone agrees that it's bad, and yet the ball keeps rolling in the wrong direction. The reason why I remain hopeful is that the US is highly adaptable, and the years that I've spent here are strong evidence of that (a black president, the health insurance reform, the gay marriage decision, etc.). 

Personally, I have had an absolutely fantastic time, and I will always remember it fondly. Actually, I can't quite believe I'm leaving. I'll miss the people, I'll miss the landscapes, I'll miss the open space (sooooooo much space), and I'll miss the attitude. But then again, I have much to look forward to. 

So long!

]]>
Marcel Salathé
tag:blog.salathe.com,2013:Post/878756 2015-07-08T18:49:22Z 2015-07-09T18:36:21Z A new digital home

This is my final attempt at blogging.

I've blogged on and off since around the year 2000, but have never managed to stick with one platform. For the past few years, I've greatly enjoyed Twitter as a micro-blogging outlet, but for longer thoughts, it's not a good fit.

After many attempts with different platforms, this is my attempt to make it final. Posthaven seems like a perfect solution for what I'm looking for. It's a simple platform, and it promises to stick around forever. It's not made by people who want to get rich(er), and you pay for it, ensuring long-term financial stability without ads. And the reason I believe all of this is that it was created by Garry Tan. Of all the partners at Y Combinator that I had the chance to meet during my time there in 2014, he was by far the smartest, most creative, and most relaxed (a rare combination anywhere). If he puts his name on the pledge, then I'm buying.

I'll transfer this to salathe.com as soon as possible. (done)




]]>
Marcel Salathé