My current plans

Author’s Note: These plans are no longer current. Many of my opinions and beliefs have changed between the writing of this post and now. I take no responsibility for inferences made from the content of this post. Let me know if you’d like to check any of the specifics.


Some people have recently asked me what my life plans are. I thought I’d write them up, for a couple reasons: I don’t want to keep giving bad ad-hoc answers; I want to improve my own thinking about them; and I want to run them by other people for comments.

There are many gaps in my plans as they currently stand. However, I thought I’d publish what I have now, for the sake of getting faster feedback and to avoid the perfect being the enemy of the good.

(Note: for things related to this post, I declare Crocker’s Rules. I’m not particularly attached to any part of these plans, except my terminal values. I won’t be offended if you disagree with parts or suggest alternatives. However, I’d ask that you don’t discuss cause selection in the comments. I’m currently undecided; I’ve already seen many arguments pro and con for each cause, and I don’t want useful comments about my plans to be drowned out by a rehashing of these points.)

Note II: this is quite long and written more for my benefit than for anyone else’s. If you do want to read it, you may not want to go linearly. I tried to make it at least somewhat skimmable.

The structure of this document is as follows. First, I’ll go over background knowledge and assumptions that my plans are based on. Then I’ll detail what I think my best few options are. Afterward I’ll go over my current thoughts and beliefs about these options, and what my next steps are. Where I’m aware of gaps or opportunities to gain more information, I’ve inserted italicized questions throughout.

1. Background

1a. Values and ethics

My goal with this plan is to maximize the amount of good that I can do in the world while maintaining high quality of life and life satisfaction.

i. Philosophical details

I’m a consequentialist. I wouldn’t call myself any particular flavor of utilitarian yet, because I’m still worried about some technical issues: how do we measure utility, given that people frequently have conflicting and inconsistent preferences? How do we aggregate it across lots of people? Are utilities between people comparable? To what extent?

However, I think the conclusion that we should be spending most of our resources to make the world better holds across many different ethical systems. And while I’m confused about the particulars of utilitarianism, the space of possible answers is small enough that most everyday ethical decisions are robust to those variations. So in practice, my ethical uncertainties aren’t a big issue.

Question: am I right in ignoring these technical details for now?

ii. Personal details

One might think that the goals of maximizing the good that I can do, and maintaining high quality of life and life satisfaction, would conflict: for example, given a fixed sum of money, I could spend it on charity (maximizing good) or a better computer (maximizing quality of life) or accordion lessons (maximizing life satisfaction).

However, this is frequently a false tradeoff. First, my life satisfaction is very much tied to how much good I do in the world, so I can spend much of my time achieving both of these goals simultaneously. Although I wish I could play the accordion, knowing that I’ve helped materially improve many people’s lives would be much more satisfying for me. While trying to live a fulfilling life can sometimes pull me away from the precisely optimal action, I don’t believe that this effect is large enough to worry about, at least for me personally.

Second, the amount of resources I can devote to doing good in the world is mostly limited by my quality of life. So if spending money would improve my quality of life noticeably, chances are it would increase, rather than decrease, the resources I could spend on charity. For example, if I buy a better computer, and the difference is noticeable, it will make me more productive, leading to me producing more resources to put towards improving the world. Of course, this line of thinking is prone to rationalization, which I have to be careful of, but that’s a problem with implementation, not with the goals of doing good and having a high-quality life actually being in conflict.

My non-altruistic goals are fairly prosaic. Most of these are the usual suspects: health, close friendships, respect, social approval, sex, fun, power, knowledge, solving interesting problems, etc. I don’t think any of these are particularly unusual, so I won’t discuss them too much.

Question: do I have any unusual personal preferences that I’m neglecting?

iii. Altruistic details

As I mentioned, I’m currently undecided on which causes I think are the most effective. I’ve been donating to GiveWell’s recommended charities because that was the easiest place to start giving money. I intend to do more thinking about less certain causes, such as animal suffering and the far future, before I donate substantially more.

1b. My knowledge and skills

I know a lot of math, computer science, and physics, and some economics and statistics. I have basic knowledge in the humanities and almost none in the softer natural sciences (biology and chemistry). I haven’t studied any high-level humanities (anthropology, anything with “studies” at the end) or any soft sciences (linguistics, sociology, psychology) beyond first-year economics.

With respect to actual skills, my strongest comparative advantage is probably programming or math. (For calibration, I’ve interned at Fog Creek and Jane Street and scored in the Putnam top 200 last year.) I’m also a decent writer and musician.

I’m currently weak at interpersonal skills, logistics, and doing busy work.

Question: is my assessment of these as strengths and weaknesses correct? Am I missing others?

If it were necessary I think I could improve these skills somewhat, but I’m guessing that it’s better to do things that play to my strengths and leave the other stuff to other people, unless I come across a seriously under-resourced opportunity in one of my weak areas.

Question: is this intuition reasonable?

I’m self-motivated enough to run a student organization here (Harvard High-Impact Philanthropy), but it went somewhat poorly when I was running it on my own—it’s going much better now that there are two main organizers than it was when I was doing everything myself (a potential confounder is that I got much more enthusiastic about it and better-organized in the meantime). While I was working essentially by myself over the summer, I sometimes found it hard to stay motivated (a potential confounder is that I thought that what I was doing was unlikely to succeed).

Question: where am I in terms of self-motivation and self-direction?

2. Options

I’ve identified a few main things that I could do. First, I could earn a lot of money and donate it to effective charities; it seems like the best ways to earn a lot of money are finance or startups. Second, I could do research into important topics in effective altruism, like cause prioritization, identifying effective organizations, etc. Third, I could work for an effective organization in some non-research capacity.

Question: am I leaving out major options?

For choosing a plan, the important considerations are:

  1. its altruistic value (on expectation)
  2. whether it’s sustainable for me to pursue (e.g., reasonably nice, not too much stress, won’t cause me to burn out)
  3. over what time horizon it pays off (if all the payoff is in 60 years, it’s less valuable because I’m likely to find a better opportunity before then and change plans)
  4. relatedly, its option value—how easily I could switch to a higher-value plan if I discover one later

Question: for how long should I expect to pursue the plan I choose?
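One crude way to start answering this question is with a toy model of considerations 3 and 4 above: if each year there’s some chance I find a better plan and switch, then payoffs beyond the switch date shouldn’t count toward the current plan. Here’s a minimal sketch in Python; the 10%-per-year switching probability and the 40-year horizon are made-up placeholders of mine, not figures from anywhere.

```python
# Toy model: expected number of years I actually stay on a plan, if each
# year there's an independent probability p_switch that I move to a better
# plan. (Both inputs are illustrative assumptions, not real data.)
def expected_plan_years(p_switch, horizon=40):
    # Expected years on the plan = sum of per-year survival probabilities.
    return sum((1 - p_switch) ** t for t in range(horizon))

print(expected_plan_years(0.10))  # ~9.9 years out of a 40-year horizon
```

Under those inputs, most of a plan’s decision-relevant payoff arrives in its first decade, which is why the time-horizon and option-value considerations carry real weight.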

2a. Earning to give in finance

i. Expectation

I’ve done a winternship at Jane Street Capital, and it seems likely that I could get a full-time job there. Given what I know about salaries at Jane Street and 80k’s data generally, I’d estimate my expected lifetime earnings somewhere in the double-digit millions pre-tax.1 This is based on the following assumptions:

Question: how likely is it that Jane Street will stay around in time for me to extract a significant chunk of that value?

If my earnings turn out to be on the low end of the distribution, I’ll be able to tell this fairly quickly (probably within two years).
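For concreteness, here’s a minimal sketch of how an estimate like this could be assembled. Every input below is an illustrative placeholder of mine (the real figure is deliberately not published in this post; see the footnote), not actual Jane Street salary data.

```python
# Sketch: lifetime earnings under an assumed salary trajectory with fast
# early growth that then plateaus. All numbers are placeholder assumptions.
def lifetime_earnings(start=200_000, growth=1.10, growth_years=10, career_years=30):
    total, salary = 0, start
    for year in range(career_years):
        total += salary
        if year < growth_years:
            salary *= growth  # raises during the early-growth phase
    return total

print(f"${lifetime_earnings():,.0f}")  # ~$13.6M: "double-digit millions" under these inputs
```

The point is not the particular output but that plausible inputs land in the double-digit-millions range, and that the estimate is most sensitive to the growth assumptions.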

ii. Personal factors

Jane Street plays well to my comparative advantages at math, programming, and (possibly) rationality. Additionally, my impression is that the work is less stressful than most high-paying finance jobs (e.g. investment banking), and probably less stressful than doing a startup. As such, working at Jane Street seems quite good from a personal-preferences angle.

iii. Information

I’ve already done a winternship at Jane Street, so I have a fair amount of information about how they work. If I wanted to gather more information about working in finance the next step would probably be to interview at other firms and compare, or possibly do another internship.

Question: should I be looking into other finance firms as well? Which ones?

2b. Earning to give at a startup

i. Expectation

The expected value of this option is much hazier to me. According to 80k’s research, the risk-neutral expected value of a funded startup is $1.4 million per year. If I’m going to do a startup, the best plan seems to be to try for some set amount of time (a year, perhaps?) and fold if it can’t get funding or other signals of success by then. The previous paragraph relies on the following assumptions:

Question: are these reasonable assumptions?
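As a back-of-the-envelope sketch of the “try for a set time, fold without funding” logic: every number below except 80k’s $1.4 million/year figure is a placeholder assumption of mine, and the structure itself (probability of funding, years of funded operation, one year of forgone salary) is my own simplification rather than 80k’s model.

```python
# EV of a time-boxed startup attempt. All inputs except ev_per_year_funded
# are made-up placeholders; ev_per_year_funded is 80k's risk-neutral figure
# for a *funded* startup, so it gets multiplied by the chance of funding.
p_funded = 0.2                  # assumed chance of funding within the trial year
ev_per_year_funded = 1_400_000  # 80k: risk-neutral EV of a funded startup, per year
years_if_funded = 4             # assumed length of a typical funded run
forgone_salary = 200_000        # assumed opportunity cost of the trial year

ev = p_funded * ev_per_year_funded * years_if_funded - forgone_salary
print(f"${ev:,.0f}")  # $920,000 under these made-up inputs
```

The main value of writing it down this way is seeing which inputs dominate: the answer swings much more on `p_funded` and `years_if_funded` than on the opportunity cost.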

ii. Personal factors

Personally, I have some distaste for the startup “scene”, which might limit my ability to succeed and make the work somewhat psychologically difficult. My impression is that compared to other options, doing a startup would be somewhat more stressful, somewhat less enjoyable, and carry a mildly elevated risk of burnout, but this effect isn’t very large.

Question: to what extent is this aversive reaction a handicap?

Question: is my self-motivation high enough to do a startup?

Question: to what extent should I downgrade my estimate due to psychological factors (stress, burnout)?

iii. Information

My indicators for startup success are less clear to me than my indicators for Jane Street. I attempted something that was basically a startup over the summer, and it did not go as well as hoped (we didn’t get very much done, and it looks like it isn’t going anywhere, despite receiving an early indicator of success in the form of some competitive grant funding). However, there were some confounders: I was working alone for most of the time, we weren’t clear on our direction, we weren’t clear about expectations of commitment going forward, and our idea was too complex. I think the lessons I’ve learned from that experience would make a second attempt significantly more likely to get off the ground.

Question: do I have good enough indicators that I should seek more information?

If I wanted to gain more information, I could either intern for a small startup, or attempt a second one myself.

Question: which one would be better?

2c. Research

Research is an option I hadn’t considered until recently; I assumed that it was mostly replaceable (I could earn to give and fund several researchers) but this turns out to be at least somewhat false. As such, I’m now looking into effective altruism research as an option.

i. Expectation

I don’t know how to think about the expected value of EA research. I expect significant leverage, since strong research supporting underfunded charities could draw non-EA funding to them. This seems like it might be a good time to do some modeling. I know that Giving What We Can has been doing a bit of related work, but something more general might be useful.
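Here is one crude sketch of what such a model might look like; every input is a made-up placeholder of mine, not a Giving What We Can or GiveWell number. It models a year of research as moving some pool of other people’s donations to a more effective target.

```python
# Toy leverage model for EA research. All inputs are placeholder assumptions.
money_moved = 1_000_000  # assumed donations influenced per year of research
multiplier = 1.5         # assumed effectiveness ratio: new target vs. counterfactual
attribution = 0.3        # assumed share of the shift attributable to the research

# Moving $X to a charity `multiplier` times as effective does as much extra
# good as donating X * (1 - 1/multiplier) directly to the better charity.
leverage_value = money_moved * (1 - 1 / multiplier) * attribution
print(f"${leverage_value:,.0f}/yr")  # ~$100,000/yr donation-equivalent under these inputs
```

Even this toy version suggests the comparison with earning to give hinges almost entirely on `money_moved` and `attribution`, both of which I currently have no good estimates for.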

Question: how do I think about comparing this to earning to give?

Anyway, my instinct is that there are fewer people who are suited to do research than to do earning to give, and there’s a lot of uncertainty about causes right now, so if I turn out to be a good fit to do research I should weight it more highly.

ii. Personal factors

The personal value of doing research probably varies a lot with where I do it, and I don’t have a good sense of what my options are. GiveWell would probably be between Jane Street and a startup in terms of intensity; academia could be as intense as I made it (and would have to be intense for a while if I wanted a tenured job). Either way, though, I don’t think personal factors would be a large concern if I chose to research.

iii. Information

I have fairly good indicators for research. I come from an academic family, get good grades in difficult classes, and have strong quantitative and analytic skills and good mathematical intuition.

I’m currently doing some programming work for GiveWell to see if I’m a good fit for working with them. If that goes well I might be able to intern with them and do some research. I could also intern with other organizations, or be a research assistant for a professor on campus.

I’m also currently taking a class on “Reasoning via Models” that should give me some information on how much I enjoy modeling work (although it’s more on the theoretical/philosophical side). I plan to take at least one more modeling course (in economics) before I graduate.

Question: how do I assess whether I’m comparatively better at earning to give or research?

2d. Other EA work

I could also do other kinds of work for other effective or potentially effective organizations (80,000 Hours, Giving What We Can, CFAR, MIRI, Leverage Research, etc.).

i. Expectation

Again, the impact here is hazier. It seems that many of these organizations have large potential upside and are relatively small right now, so the value I’d add by working there could be fairly large on expectation, but I don’t know how to compare this to other options. Many of these organizations are at least somewhat cash-limited in addition to people-limited, so even if I could work for them, earning to give to them could also be valuable.

Question: how do I think about this decision?

ii. Personal factors

I can’t think of any specific competencies I have that would make me a particularly good fit for one of these organizations in a non-research role. From my experience running Harvard High-Impact Philanthropy, I seem to be a competent but not exceptional organizer, fundraiser and communicator. As such, unless these organizations are in dire need of warm bodies, I’d rather leave those roles to other people.

Question: is this right? Are there options I’m an especially good fit for that I’m missing?

iii. Information

My current self-evaluation is based on my experience running a campus group, Harvard High-Impact Philanthropy, which involves fundraising, organizing events, doing logistics, writing communications, and recruiting new members, essentially learning as we go (the group is new enough that we don’t have much institutional memory).

There’s a lot of information I’m missing here:

Question: why aren’t the answers to the first two questions available somewhere publicly?

Many of these can be answered by doing research and speaking with folks from these organizations. I could also visit them in person or intern with them.

3. Current beliefs affecting my plan

Here are some beliefs that I’ve acquired, mostly from conversations with people and reading things. In the future I aim to talk to more people to get a better sense of whether these are true and whether there are things I’m missing.

  1. My expected earnings from working in finance are greater than my expected earnings from doing a startup.
  2. Effective altruism organizations are bottlenecked on both funding and people, and the difference in severity of the bottlenecks is close enough that my comparative advantage is the dominant factor.
  3. I might have a comparative advantage at research, and probably do not at any of the other relevant skills for an EA organization.
  4. Replaceability largely does not apply to any of the options I’m considering (finance companies, tech companies or EA organizations).

Question: are these correct? Are there other background beliefs I’m missing?

4. Next steps

I would like to acquire a lot more information before deciding to exploit one of these options. As such, my current plan essentially boils down to “1) get the most valuable information; 2) make a better plan”. In more detail, some things that seem like good ideas are:

  1. Get more information about how well I could do research by e.g. working for GiveWell
  2. Get more information about direct work by asking EA organizations about what resources they’re limited by and what they would do with extra
  3. Figure out how to think better about option value and time horizons
  4. Model the effectiveness of research with a couple different models to get a sense for what the value might be

Question: what else goes here?

Anyway, that’s the plan. If you’ve made it this far, congrats! I hope you have some thoughts that you’ll tell me in the comments :)


  1. This used to name a specific number (30m) for concreteness, but I obfuscated it by multiplying it by a random factor and there wasn’t much evidence behind it in the first place, so I’ve removed that. ↩︎

Comments

Eliezer Yudkowsky

Thoughts: Jane Street combined with earning-to-give to a truly effective charity sounds very hard to beat. $30M is on the rough order of my estimate for how much money MIRI might need over its whole existence, though definitely on the low side.

If you were giving to a charity that I consider ultimately unimportant to the fate of the galaxy (most obviously animal altruism though that’s setting the bar low) then you could probably do more good as a researcher at GiveWell.

Unless you feel particularly inspired to be a decision theorist your comparative advantage probably does not lie at MIRI.

Unless you are an extraordinarily good inventor of ways to teach rationality your comparative advantage probably does not lie at CFAR compared to earning-to-give there if CFAR were your chosen priority.

I cannot speak for how earning-to-give to Givewell, vs. working for Givewell, would compare; you’d have to ask Holden for that. My estimates of Givewell’s importance to the fate of the universe are overwhelmingly dominated by its ability to grow the effective altruist movement some of which will overflow into, you know, really effective altruism (defined as the sort of altruism which, as a class, might be regarded as having been of some deliberate importance to final outcomes, 200 million years from now).

If you feel aversive about startups then you probably should not do a startup. It might be worth trying to gain valuable information about this by working at a startup for 3 months or something. If you then decide that you feel good about startups but have no ideas, I can talk to you about that but only if you’ve chosen CFAR or MIRI as a donations target.


Anonymous

Looks like you’re headed towards finance - good luck :) !


Charles

First off, I don’t think your personality is well suited to running a startup. You don’t have much affinity for cleverly marketed bs / low quality work, which seems to be the staple for startups.

Between research and Jane Street, I would say Jane Street is almost certainly higher in expectation and the safer choice. “Progress” (i.e. money donated) is roughly linear. Assuming you enjoy the day to day work and the social atmosphere, you’ll have your ~$30 million.

Research could get frustrating. Especially in a field like EA that isn’t firmly grounded in anything, expect to see lots of handwaving. The problems you’re trying to tackle might seem hopelessly complex, and you might stop believing your research could possibly get anywhere.

I think a decent predictor of how prone you are to burnout is to ask yourself: suppose your last 1000 cool ideas turned out to be rather disappointing upon further investigation. You just came up with something new to try in the shower. Would you still be excited by the prospect of discovering something big, or will you have the “probably won’t work” attitude? When you’re 70 and looking back at your life’s work, depending on your expectations, you may just think “well shit”.

That said, the far tail for research is way more favorable than Jane Street (i.e. if you can come up with a better model / broadly improve efficiency). However, if you’re prone to seeing any outcome left of the Jane Street - research equivalence point as failure, you’ll be happier at Jane Street. You should also consider the social atmosphere of who you’ll be working with. Do you connect better with the GiveWell folks or Jane Street?


Ben

Eliezer: how much of that money would you have raised anyway? There are several people earning to give to MIRI already, right?

Anon: I’m not sure why you have that impression; I think all four options are competitive.

Charles: I don’t care about “safe”, I care about most valuable on expectation. I agree that research is higher variance, but the benefits aren’t accruing to me, they’re accruing to the world, and the world has no noticeable diminishing returns, hence no risk aversion.
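(Spelled out, under the assumption that utility is linear in money donated: if U(x) = kx, then E[U(X)] = k·E[X] = U(E[X]), so a risk-neutral donor ranks options purely by expected value, no matter how high the variance.)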


Eliezer Yudkowsky

Ben, there are, but not enough. Humanity is not out of the woods on FAI research yet. Our current funding is on the order of $1m/year and that needs to go up to at least $3m/year to put together any kind of decent team, and there will be good use for marginal dollars beyond that (no sane world would seriously consider funding all their FAI effort at $3m/year).

CFAR is arguably an even more urgent disposition of marginal dollars due to their youth, though I am hoping that they can get those dollars from other sources and won’t need to funge against MIRI. (Obviously I must think this, since otherwise we could just tell some donors to transfer donations from MIRI to CFAR - we’re all effective altruists here.)


Eliezer Yudkowsky

(I should also remark that MIRI’s spending should be well south of $1m this year, $333k of the money is one-time gains from the sale of the Singularity Summit and this is being earmarked for a fund we can use to stably employ FAI researchers who may worry about stability of their funding. Also this money has not yet actually arrived.)


Nick Beckstead

Glad you made this post! Seems like you are thinking about good options.

I agree with your broad “next steps.” Spending time working with GiveWell seems like a great idea. I would favor more concrete steps like “spend more time working for GiveWell and/or working for another EA organization” over “model the effectiveness of research” or “figure out how to think about option value and time horizons better.”

You might also consider doing an 80,000 Hours case study. At the very least they could have the discussion with you about their money vs. talent constraints and we could put the results online. I’d be happy to participate/facilitate.

Re: your question about why the answers to these questions aren’t available publicly, 80,000 Hours has made a very clear post about its funding gap and GiveWell has made it pretty clear that they are more constrained by talent than money. See here: http://80000hours.org/blog/249-finance-report and here: http://blog.givewell.org/2013/08/29/we-cant-simply-buy-capacity/


Anthony

It would be beneficial to convert expected earnings into lives saved and lives improved. Even if you don’t have a precise utility function, a rough estimate should help you make a decision (for instance, half your projected income ($15 million) translates to about 8,333 lives at $1,800* per life).

*Assuming your income isn’t adjusted for inflation and the cost per life does not outpace inflation


Eliezer Yudkowsky

I automatically convert lives to time. 8333 lives is roughly 1.4 hours worth of the planetary death rate.


Anonymous

But that’s not a stable unit over a lifetime. Barring drastic medical improvements, the death rate is going to increase as the relatively large young population ages. Even if you try to account for it, it still introduces more uncertainty into your calculation.


Anonymous

What about personal life ambitions, such as getting married, raising a family, etc.?


Eliezer Yudkowsky

Eh, screw that.


Anonymous

Well, I’d like to hear what OP thinks.


Ted S

Here’s another data point for expected startup value: Y Combinator companies are worth about $22.5 million on average [1]. I think you have the right profile for YC, so it’s not silly to assume you could perform at that level.

If so, a startup may be higher EV than Jane Street, since on average a startup is shorter than a full career at Jane Street.

Much of the value comes from companies that have done extraordinarily well (e.g. Airbnb, Dropbox, Weebly). This might have some implications, even if you’re entirely risk neutral:

[1] https://news.ycombinator.com/item?id=5773159


Pablo Stafforini

Answers to some of your questions below.

Question: am I right in ignoring these technical details for now?

I think Eliezer’s ‘fragility of value’ thesis is relevant here. Two ethical systems that differ but slightly, such as negative and classical utilitarianism, might have very different practical implications. (Some people will dispute that these two theories do in fact differ so much, but that’s contingent on certain empirical claims, which even these people would agree are speculative.) One way to resolve this problem is to think harder about ethics, and hope that this will reduce the uncertainty. Another way is to think harder about the problem of moral uncertainty, and hope that this will reduce the second-order uncertainty about how to deal with moral uncertainty.

Question: do I have any unusual personal preferences that I’m neglecting?

This is not a direct answer to your question, but it bears on the issue of potential conflict between altruistic and self-interested goals. My experience is that many people have become less altruistic as a consequence of being in a romantic relationship. I won’t mention names because this is a sensitive topic, but my advice would be to become involved only with women that share your altruistic commitments, insofar as this is realistically possible.

Question: is my assessment of these as strengths and weaknesses correct? Am I missing others?

My impression is that you are actually very good at interpersonal skills, contrary to your self-assessment. I was, for instance, very impressed by a comment that you wrote in response to a Georgist a while ago, and I know of others that share my opinion (and none that don’t).

Question: is this intuition reasonable?

I am inclined to agree, though I don’t know you enough to determine how weak you are on logistics or doing busy work.

Question: where am I in terms of self-motivation and self-direction?

Many EAs (e.g. Adriano Mannino) have highlighted the importance of surrounding yourself with like-minded folk for staying motivated and preventing value-drift. This coheres with my own experience. Some psychological studies also seem to back this up.

Question: am I leaving out major options?

I don’t think you are leaving out any promising contenders.

Question: for how long should I expect to pursue the plan I choose?

This is an important question that is not often addressed explicitly. Many of the EAs I respect the most, such as Jason Gaverick Matheny or Carl Shulman, have made major career changes along the way. So I wouldn’t expect you to pursue your chosen plan for a very long time.


Ben

Thanks for the detailed reply, Pablo! I only have time to write a bit right now, but I’m thinking about all of them (and agree with most) :)

Question: do I have any unusual personal preferences that I’m neglecting?

This is not a direct answer to your question, but it bears on the issue of potential conflict between altruistic and self-interested goals. My experience is that many people have become less altruistic as a consequence of being in a romantic relationship. I won’t mention names because this is a sensitive topic, but my advice would be to become involved only with women that share your altruistic commitments, insofar as this is realistically possible.

(First, congratulations on correctly guessing my sexuality.)

Hmm. Perhaps I haven’t been hanging around effective altruists long enough, but I can’t think of any examples of this happening, and I can think of examples of the reverse (e.g. Jeff Kaufman and Julia Wise, from my understanding). I also know at least a few folks in the EA group who are in relationships with less altruistically committed people, and it doesn’t seem to have impacted them.

However, if this is happening, it sounds like a problem that we should try pretty hard to mitigate rather than just avoid, especially given the gender ratio issues that effective altruism has inherited from math and philosophy. The advice of “become involved only with women that share your altruistic commitments, insofar as this is realistically possible” is not going to be practical for all heterosexual male EAs right now, purely as a matter of numbers.

Maybe we should talk more offline?


Aaron Gertler

(Repost from Facebook)

Something I didn’t notice on my first pass: Consideration of your impact on coworkers and business friends.

In the Jane Street world, you might work in an office with 20 millionaires. In the tech world, you might befriend a hundred-millionaire or two. Elie and Holden had $300,000 in Bridgewater seed funding (from friends) when they started Givewell. Someone at some point convinced Peter Thiel to start giving money to MIRI (whether it was a personal friend or someone who wrote a great essay on FAI).

You’re a better writer than you give yourself credit for, and good writers tend to be good persuaders. There’s a good chance you could become an “influencer” wherever you go.

(Though I don’t know whether influence has higher EV in finance or tech, I’d guess tech, because it seems like a more social environment in general.)
