Rule 2: If you can't decide, choose change

(This post is part of a bigger list of rules that I have found helpful for thinking about a career, and beyond. See this post for an explainer).

It took me about 30 years to figure this out, but ever since I stumbled on it, I've found it applicable to any situation. 

We need to make decisions every single day, and much of the career angst that befalls all of us from time to time seems to be based on the fear of making the wrong decision. Decisions are easy when the advantages clearly outweigh the disadvantages (or the other way around). Things get tricky when the balance is less clear, and the lists of potential positives and negatives come out roughly even. The inability to make a decision is one of the most dreadful feelings.

Whenever I am in such a situation where I can't decide because all options seem roughly equal, I choose the one that represents the most change.

Here's why: on a path dotted with decisions, you are inevitably going to have regrets down the line. There are two possible types of regret: in the first, you regret a path not taken; in the second, you regret having taken a path. My philosophy is to avoid the "path not taken" regret. It's the worse kind. You will at times regret having taken the wrong path - but at least you took the bloody path! It meant change, and it was probably exciting, at least for a while. Even if it turns out to have been the wrong decision, you've learned something new. You moved. You lived.

As far as we know, we only have this one life. Explore! Thus: when in doubt, choose change. 


Rule 1: Do novel, interesting things

(This post is part of a bigger list of rules that I have found helpful for thinking about a career, and beyond. See this post for an explainer).

This is possibly the most important rule, and thus I am putting it right at the start. The way this rule is phrased is partly inspired by Y Combinator's rule for startups: make something people want. If I were asked to condense career advice - specifically in academia, but I think also more broadly - into four words, it would be these: do novel, interesting things.

The rule is really composed of three sub-rules: First, do something (surprisingly underestimated in my experience). Second, do something that is novel. And third, do something that is not just novel, but also interesting. Let's take a look at these, one by one.

Do something
I find it hard to overstate how important this is. I've met countless brilliant young people who were clearly very smart and creative but had nothing to show for it. In academia, this is often phrased as the more negative "publish or perish", which I think is slightly misleading and outdated. What it should really say is "show work that demonstrates your thinking, creativity, brilliance, determination, etc.". It doesn't have to be papers - it could really be anything. A blog. A book. Essays. Software. Hardware. Events you organized. Whatever - as long as it has your stamp on it, as long as you can say "I did this", you'll be fine.

I need to repeat that it's hard to overstate how important that is. As Woody Allen famously said, "Eighty percent of life is showing up". This is why I urge anyone who wants a job in a creative field - and I consider science and engineering to be creative fields - to actually be creative and make things, and make them visible. The most toxic idea in a young person's mind is that they have nothing interesting to say, and so they shouldn't say anything in the first place. It gets translated into not showing what you've done, or worse, into not even doing it. Don't fall into that trap (I've written a bit more about this in a post entitled The curse of self-contempt).

Do something novel
Novelty is highly underrated. This is a bit of a personal taste, but I prefer something novel that still has rough edges over a perfect copy of something existing. I suppose the reason most people shy away from novelty, especially early in their career, is that it takes guts: it's easy for others to ridicule novel things (in fact, most novel things initially seem a little silly). But early in your career is when novelty matters most, because that's when you are most capable of doing novel things, since your brain is not yet completely filled up with millions of other people's ideas.

Novelty shouldn't be misunderstood as "groundbreakingly novel from every possible angle". It is often sufficient to take something existing and reinvent only a small part of it, which in turn may make the entire thing much more interesting. 

Do something that is also interesting
While novelty is often interesting in itself, it's not a guarantee. So make sure that what you do is interesting to you, and to at least a few other people. It's obvious that doing interesting things will be good for your career. This doesn't need a lot of explanation, but it's important to realize that what is interesting is for you to figure out. Most people don't think much about this, and go on doing what everybody else is doing, because that must mean it's interesting (when in reality it often isn't, at least not to you). The ability to articulate for yourself why something is interesting is extremely important. Practice it by pitching your ideas to an imaginary audience - you'll very quickly feel whether an idea excites you, or whether you're just reciting someone else's thinking.

10 rules for the career of your dreams ;-)

A few weeks ago, I was asked to give a short presentation at a career workshop for visiting international students at EPFL. The idea of the event was to have a few speakers who all shared an academic background to reflect on their (very different) career paths. As I started writing down the things that have guided me throughout the years, I began to realize that this would end up being one of those lists ("12 things to do before you die"), and naturally, I tried to come up with the tackiest title I could imagine, which is the title of this post.

Underneath the tacky title, however, is a serious list of rules that I've developed over the years. Some of these rules I've known to be helpful since I was a high school student. Others I've discovered much later, looking back on my path and trying to figure out, with hindsight, why I went one way, rather than the other.

Almost two years ago, I received an email from a student asking for career advice. I answered, and decided to post the answer on this blog. That post - advice to a student - has been viewed a few hundred times since then, and I figured I should also share the career workshop talk, as a few people, today and in the future, may find it helpful. There is little use in uploading the slides, since they were just a list of the rules. What I am going to do here instead is expand on each of the rules a little more. This is a bit of an experiment for me, but hopefully it will work out fine. Each of the ten rules will be its own post, and I will keep updating this post with links to each rule as they get published. So without further ado, here is the unfinished list of rules, which I hope to complete over the next few weeks:

1. Do novel, interesting things

2. If you can't decide, choose change

3. Enthusiasm makes up for a lot

4. Surround yourself with people who are better than you

5. Get on board with tech

6. Say no


8. Be visible

9. Have alternatives

10. Be the best you can be, not the best there is

AI-phobia - a convenient way to ignore the real problems

Artificial intelligence - AI - is a big topic in the media, and for good reason. The underlying technologies are very powerful and will likely have a strong impact on our societies and economies. But it's also very easy to exaggerate the effects of AI. In particular, it's very easy to claim that AI will take away all of our jobs, and that, as a consequence, societies are doomed. This is an irrational fear of the technology, something I refer to as AI-phobia. Unfortunately, because it is so appealing, AI-phobia currently covers the pages of the world's most (otherwise) intelligent newspapers.

A particularly severe case of AI-phobia appeared this past weekend in the New York Times, under the ominous headline "The Real Threat of Artificial Intelligence". In it, venture capitalist Kai-Fu Lee paints a dark picture of a future where AI leads to mass unemployment. This argument is as old as technology itself - each time a new technology comes along, the pessimists appear, conjuring up the end of the world as we know it. Each time, when faced with the historical record showing that they've always been wrong, they respond by saying "this time is different". Each time, they're still wrong. But, really, not this time, claims Lee:

Unlike the Industrial Revolution and the computer revolution, the A.I. revolution is not taking certain jobs (artisans, personal assistants who use paper and typewriters) and replacing them with other jobs (assembly-line workers, personal assistants conversant with computers). Instead, it is poised to bring about a wide-scale decimation of jobs — mostly lower-paying jobs, but some higher-paying ones, too.

Where is Wikipedia's [citation needed] when you need it the most? Exactly zero evidence is given for the rather outrageous claim that AI will bring about a wide-scale decimation of jobs. Which is not very surprising, because there is no such evidence.

In the next paragraph, Lee claims that the companies developing AI will make enormous profits, and that this will lead to increased inequality:

This transformation will result in enormous profits for the companies that develop A.I., as well as for the companies that adopt it. Imagine how much money a company like Uber would make if it used only robot drivers. Imagine the profits if Apple could manufacture its products without human labor. Imagine the gains to a loan company that could issue 30 million loans a year with virtually no human involvement. (As it happens, my venture capital firm has invested in just such a loan company.)

We are thus facing two developments that do not sit easily together: enormous wealth concentrated in relatively few hands and enormous numbers of people out of work. What is to be done?

There are numerous problems with this argument. Technology has always helped us do something better, faster, or cheaper - any new technology that didn't do at least one of these things would be dead on arrival. With every new technology that comes along, you could argue that the companies developing it will make huge profits. And sometimes they do, especially those that manage to capture a reasonable chunk of the market early on. But does this always lead to an enormous concentration of wealth? The two most recent transformative technologies, microprocessors and the internet, have certainly made some people very wealthy, but by and large the world has profited as a whole, and things are better than they have ever been in human history (something many people find hard to accept despite the overwhelming evidence).

What's more, technology has a funny way of spreading in ways that most people didn't intend or foresee. Certainly, a company like Uber could in principle use only robot drivers (I assume Lee is referring to autonomous vehicles). But so could everybody else, as this technology will be in literally every car in the near future. Uber would probably lower its prices even further to stay competitive. Competitors could enter this market very easily, again lowering overall prices and diminishing profits. New businesses could spin up, based on newly available cheap transportation technology. These Uber-like companies could start to differentiate themselves by adding a human touch, creating new jobs that don't yet exist. The possibilities are endless, and impossible to predict.

Lee then makes a few sensible suggestions - which he calls "the Keynesian approach" - about increasing tax rates for the super wealthy and using the money to help those in need, and he also argues for a universal basic income. These suggestions are sensible, but they would be sensible even in a world without AI.

He then singles out the US and China in particular, and this is where things get particularly weird:

This leads to the final and perhaps most consequential challenge of A.I. The Keynesian approach I have sketched out may be feasible in the United States and China, which will have enough successful A.I. businesses to fund welfare initiatives via taxes. But what about other countries?

Yes, what about them? Now, I do not for a moment doubt that the US and China will have many successful AI businesses. The US in particular has almost single-handedly dominated the technology sector in the past few decades, and China has been catching up fast. But to suggest that these countries can tackle the challenge because they have AI businesses that will be able to fund "welfare initiatives via taxes" - otherwise called socialism, or a social safety net - is to ignore today's realities. The US in particular has created enormous economic wealth thanks to technology in the past few decades, but has completely failed to ensure that this wealth is distributed fairly among its citizens, and it is consistently ranked as one of the most unequal of the world's developed countries. It is clearly not money that is lacking here.

Be that as it may, Lee believes that most other countries will not be able to benefit from the taxes that AI companies will pay:

So if most countries will not be able to tax ultra-profitable A.I. companies to subsidize their workers, what options will they have? I foresee only one: Unless they wish to plunge their people into poverty, they will be forced to negotiate with whichever country supplies most of their A.I. software — China or the United States — to essentially become that country’s economic dependent, taking in welfare subsidies in exchange for letting the “parent” nation’s A.I. companies continue to profit from the dependent country’s users. Such economic arrangements would reshape today’s geopolitical alliances.

Countries other than the US and China, beware! The AI train is coming, and you will either be poor or become dependent slaves of the US and Chinese AI companies that will dominate the world! You could not make this up if you tried (although there are some excellent sci-fi novels that are not too far from this narrative).

I am sure Kai-Fu Lee is a smart person. His CV is certainly impressive. But it strikes me as odd that he wouldn't come up with better alternatives, and instead offers only a classic case of a false dichotomy. There are many possible ways to embrace the challenges, and only a few actually have to do with technology. Inequality does not seem to be a technical problem, but rather a political one. The real problems are not AI and technology - they are schools financed by local property taxes, health insurance tied to employment, extreme tax cuts for the wealthy, education systems with exorbitant tuition fees, and so on.

Let's not ignore these problems because we are paralyzed by AI-phobia.

Lee closes by saying:

A.I. is presenting us with an opportunity to rethink economic inequality on a global scale.

It would be a shame if we indeed required artificial intelligence to tackle economic inequality - a product of pure human stupidity.

Gender diversity in tech - a promise

It doesn't take much to realize that the gender ratio in technology is severely out of balance. Whether you look at employees at tech companies, computer science faculty members, graduates in computer and information sciences, or user surveys on StackOverflow, you find almost the same picture everywhere.

From personal experience, it seems to me that the situation is considerably worse in Europe than in the US, but I don't have any data to back this up.

If there is any good news, it's that the problem is increasingly recognized - not nearly enough, but at least things are going in the right direction. The problem is complex, and there is a lot of debate about how to solve it most effectively. This post is not about entering that debate, but rather about making a simple observation, and a promise.

The simple observation is that I think a lot of it has to do with role models. We can do whatever we want: if a particular group is overwhelmingly composed of one specific phenotype, we have a problem, because anyone who is not of that phenotype is more likely to feel "out of place" than they would otherwise, no matter how welcoming that group is.

The problem is that for existing groups, progress may be slow, because adding new people to increase diversity can initially be difficult, for many different reasons. Having a research group that is mostly male, I am painfully aware of these issues.

For new groups, however, progress can be faster, because it is often easier to prevent a problem than to fix one. And this is where the promise comes in. Last year, I became the academic director of a new online school at EPFL (the EPFL Extension School, which will focus on continuing technology education). This sounds more glorious than it is, because at the time, this new school was simply an idea in my head, and the team was literally just me. But I made a promise to myself: I would not build a technology program and have practically no women teaching on screen. No matter how well they would do it, if the teachers were predominantly male, we would once again be sending, without ill intent, the subtle signal that technology is something for guys.

Today, I want to make this promise publicly. At the EPFL Extension School, we will have gender parity among on-screen instructors. I can't guarantee that we will achieve this at all times, because we are not (yet) a large operation, and I also recognize that at any point in time we may be out of balance - hopefully in both directions - simply because people come and people go. But it will be part of the school's DNA, and if we're off balance, we will know what we have to do, and the excuse that it's a hard problem once you have it won't be acceptable.