## Sam Bankman-Fried and the geometry of conscience

Update (Dec. 15): This, by former Shtetl-Optimized guest blogger Sarah Constantin, is the post about SBF that I should’ve written and wish I had written.

Update (Nov. 16): Check out this new interview of SBF by my friend and leading Effective Altruist writer Kelsey Piper. Here Kelsey directly confronts SBF with some of the same moral and psychological questions that animated this post and the ensuing discussion—and, surely to the consternation of his lawyers, SBF answers everything she asks. And yet I still don’t know what exactly to make of it. SBF’s responses reveal a surprising cynicism (surprising because, if you’re that cynical, why be open about it?), as well as an optimism that he can still fix everything that seems wildly divorced from reality.

I still stand by most of the main points of my post, including:

• the technical insanity of SBF’s clearly-expressed attitude to risk (“gambler’s ruin? more like gambler’s opportunity!!”), and its probable role in creating the conditions for everything that followed,
• the need to diagnose the catastrophe correctly (making billions of dollars in order to donate them to charity? STILL VERY GOOD; lying and raiding customer deposits in course of doing so? DEFINITELY BAD), and
• how, when sneerers judge SBF guilty just for being a crypto billionaire who talked about Effective Altruism, it ironically lets him off the hook for what he specifically did that was terrible.

But over the past couple days, I’ve updated in the direction of understanding SBF’s psychology a lot less than I thought I did. While I correctly hit on certain aspects of the tragedy, there are other important aspects—the drug use, the cynical detachment (“life as a video game”), the impulsivity, the apparent lying—that I neglected to touch on and about which we’ll surely learn more in the coming days, weeks, and years. –SA

Several readers have asked me for updated thoughts on AI safety, now that I’m 5 months into my year at OpenAI—and I promise, I’ll share them soon! The thing is, until last week I’d entertained the idea of writing up some of those thoughts for an essay competition run by the FTX Future Fund, which (I was vaguely aware) was founded by the cryptocurrency billionaire Sam Bankman-Fried, henceforth SBF.

Alas, unless you’ve been tucked away on some Caribbean island—or perhaps, especially if you have been—you’ll know that the FTX Future Fund has ceased to exist. In the course of 2-3 days last week, SBF’s estimated net worth went from ~$15 billion to a negative number, possibly the fastest evaporation of such a vast personal fortune in all human history.

Notably, SBF had promised to give virtually all of it away to various worthy causes, including mitigating existential risk and helping Democrats win elections, and the worldwide Effective Altruist community had largely reoriented itself around that pledge. That’s all now up in smoke.

I’ve never met SBF, although he was a physics undergraduate at MIT while I taught CS there. What little I knew of SBF before this week came mostly from reading Gideon Lewis-Kraus’s excellent New Yorker article about Effective Altruism this summer. The details of what happened at FTX are at once hopelessly complicated and—it would appear—damningly simple, involving the misuse of billions of dollars’ worth of customer deposits to place risky bets that failed. SBF has, in any case, tweeted that he “fucked up and should have done better.”

You’d think none of this would directly impact me, since SBF and I inhabit such different worlds. He ran a crypto empire from the Bahamas, sharing a group house with other twentysomething executives who often dated each other. I teach at a large state university and try to raise two kids. He made his first fortune by arbitraging bitcoin between Asia and the West. I own, I think, a couple bitcoins that someone gave me in 2016, but have no idea how to access them anymore. His hair is large and curly; mine is neither.

Even so, I’ve found myself obsessively following this story because I know that, in a broader sense, I will be called to account for it. SBF and I both grew up as nerdy kids in middle-class Jewish American families, and both had transformative experiences as teenagers at Canada/USA Mathcamp.
He and I know many of the same people. We’ve both been attracted to the idea of small groups of idealistic STEM nerds using their skills to help save the world from climate change, pandemics, and fascism.

Aha, the sneerers will sneer! Hasn’t the entire concept of “STEM nerds saving the world” now been utterly discredited, revealed to be just a front for cynical grifters and Ponzi schemers? So if I’m also a STEM nerd who’s also dreamed of helping to save the world, then don’t I stand condemned too?

I’m writing this post because, if the Greek tragedy of SBF is going to be invoked as a cautionary tale in nerd circles forevermore—which it will be—then I think it’s crucial that we tell the right cautionary tale.

It’s like, imagine the Apollo 11 moon mission had almost succeeded, but because of a tiny crack in an oxygen tank, it instead exploded in lunar orbit, killing all three of the astronauts. Imagine that the crack formed partly because, in order to hide a budget overrun, Wernher von Braun had secretly substituted a cheaper material, while telling almost none of his underlings.

There are many excellent lessons that one could draw from such a tragedy, having to do with, for example, the construction of oxygen tanks, the procedures for inspecting them, Wernher von Braun as an individual, or NASA safety culture. But there would also be bad lessons not to draw. These include:

• “The entire enterprise of sending humans to the moon was obviously doomed from the start.”
• “Fate will always punish human hubris.”
• “All the engineers’ supposed quantitative expertise proved to be worthless.”

From everything I’ve read, SBF’s mission to earn billions, then spend it saving the world, seems something like this imagined Apollo mission. Yes, the failure was total and catastrophic, and claimed innocent victims. Yes, while bad luck played a role, so did, shall we say, fateful decisions with a moral dimension.
If it’s true that, as alleged, FTX raided its customers’ deposits to prop up the risky bets of its sister organization Alameda Research, multiple countries’ legal systems will surely be sorting out the consequences for years. To my mind, though, it’s important not to minimize the gravity of the fateful decision by conflating it with everything that preceded it.

I confess to taking this sort of conflation extremely personally. For eight years now, the rap against me, advanced by thousands (!) on social media, has been: sure, while by all accounts Aaronson is kind and respectful to women, he seems like exactly the sort of nerdy guy who, still bitter and frustrated over high school, could’ve chosen instead to sexually harass women and hinder their scientific careers. In other words, I stand condemned by part of the world, not for the choices I made, but for choices I didn’t make that are considered “too close to me” in the geometry of conscience.

And I don’t consent to that. I don’t wish to be held accountable for the misdeeds of my doppelgängers in parallel universes. Therefore, I resolve not to judge anyone else by their parallel-universe doppelgängers either. If SBF indeed gambled away his customers’ deposits and lied about it, then I condemn him for it utterly, but I refuse to condemn his hypothetical doppelgänger who didn’t do those things.

Granted, there are those who think all cryptocurrency is a Ponzi scheme and a scam, and that for that reason alone, it should’ve been obvious from the start that crypto-related plans could only end in catastrophe. The “Ponzi scheme” theory of cryptocurrency has, we ought to concede, a substantial case in its favor—though I’d rather opine about the matter in (say) 2030 than now. Like many technologies that spend years as quasi-scams until they aren’t, maybe blockchains will find some compelling everyday use-cases, besides the well-known ones like drug-dealing, ransomware, and financing rogue states.
Even if cryptocurrency remains just a modern-day tulip bulb or Beanie Baby, though, it seems morally hard to distinguish a cryptocurrency trader from the millions who deal in options, bonds, and all manner of other speculative assets. And a traditional investor who made billions on successful gambles, or arbitrage, or creating liquidity, then gave virtually all of it away to effective charities, would seem, on net, way ahead of most of us morally.

To be sure, I never pursued the “Earning to Give” path myself, though certainly the concept occurred to me as a teenager, before it had a name. Partly I decided against it because I seem to lack a certain brazenness, or maybe just willingness to follow up on tedious details, needed to win in business. Partly, though, I decided against trying to get rich because I’m selfish (!). I prioritized doing fascinating quantum computing research, starting a family, teaching, blogging, and other stuff I liked over devoting every waking hour to possibly earning a fortune, only to give it all to charity, and more likely being a failure even at that. All told, I don’t regret my scholarly path—especially not now!—but I’m also not going to encase it in some halo of obvious moral superiority.

If I could go back in time and give SBF advice—or if, let’s say, he’d come to me at MIT for advice back in 2013—what could I have told him? I surely wouldn’t talk about cryptocurrency, about which I knew and know little. I might try to carve out some space for deontological ethics against pure utilitarianism, but I might also consider that a lost cause with this particular undergrad. On reflection, maybe I’d just try to convince SBF to weight money logarithmically when calculating expected utility (as in the Kelly criterion), to forsake the linear weighting that SBF explicitly advocated and that he seems to have put into practice in his crypto ventures.
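The difference between linear and logarithmic weighting can be made concrete with a small simulation. (This is my own illustrative sketch; the bet, the probabilities, and the function names are assumptions for exposition, not anything SBF actually traded.) Consider an even-money bet that wins with probability p = 0.6. Staking nearly the whole bankroll every round has positive linear expectation each round, yet the typical outcome is ruin; staking the Kelly fraction f* = 2p − 1 = 0.2, which maximizes expected log wealth, typically compounds the bankroll:

```python
import random

def median_final_wealth(fraction, p=0.6, rounds=100, trials=2000, seed=0):
    """Stake `fraction` of the bankroll each round on an even-money bet
    that wins with probability p; return the median final bankroll
    (starting bankroll = 1.0) across independent trials."""
    rng = random.Random(seed)
    finals = []
    for _ in range(trials):
        wealth = 1.0
        for _ in range(rounds):
            if rng.random() < p:
                wealth *= 1 + fraction
            else:
                wealth *= 1 - fraction
        finals.append(wealth)
    finals.sort()
    return finals[len(finals) // 2]

# Kelly fraction for an even-money bet: f* = 2p - 1 = 0.2
kelly = median_final_wealth(0.2)     # median bankroll grows over 100 rounds
all_in = median_final_wealth(0.999)  # positive linear EV, yet median near zero
```

With these numbers the all-in strategy’s expected wealth grows by a factor of about 1.2 per round, so a linear-utility maximizer takes the bet every time; but the median trajectory collapses toward zero. That is the gambler’s-ruin trap that linear utility hides and logarithmic utility exposes.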
Or if not logarithmic weighting, I’d try to sell him on some concave utility function—something that makes, let’s say, a mere $1 billion in hand seem better than $15 billion that has a 50% probability of vanishing and leaving you, your customers, your employees, and the entire Effective Altruism community with less than nothing.

At any rate, I’d try to impress on him, as I do on anyone reading now, that the choice between linear and concave utilities, between risk-neutrality and risk-aversion, is not bloodless or technical—that it’s essential to make a choice that’s not only in reflective equilibrium with your highest values, but that you’ll still consider to be such regardless of which possible universe you end up in.

### 207 Responses to “Sam Bankman-Fried and the geometry of conscience”

1. Rahul Says:

Isn’t this reading a bit too much into, and complicating, what seems to be a simple crash based on hype, greed, and stupidity? We have seen those for a long time in traditional markets, and here’s one more. When you listen to SBF’s interviews, is there anything really profound in there? A lot of it always seemed like metaphysics packaged to look tantalizingly profound. Sometimes, when the going is good, there is little incentive to ask too many difficult questions and be a party pooper.

2. MK Says:

Scott, it seems very naive to me to assume goodwill on SBF’s part, i.e. to assume that his “cover story” of earning billions and then donating to useful causes is in line with his true motivations. I mean, it might be **partially** true, in the sense that this is the story he told himself (and the world) and had **some** inclination to act on it. But I’d put a much higher prior on plain old greed and ambition, with EA as an afterthought. Why? In part because being a true altruist (sacrificing yourself to maximize some abstract good) and having the skills/personality required to succeed in a risky and somewhat shady business are very much anti-correlated, if not outright disjoint.
It’s just very, very rare. Money and status corrupt, just as power corrupts. When judging politicians, I hope we agree a sound prior would be “the guy is in it for selfish reasons, and whatever talk about the public good is just PR/rationalization.” Why judge entrepreneurs by a different standard? You could make a case that SBF is an exception if you had some extraordinary evidence that he’s different from the rest. So far such evidence is lacking.

3. moonshadow Says:

The promise of cryptocurrency is that it is decentralised, unstoppable, uncontrolled by any government or bank; unregulated and unregulatable, a financial wild west. This is also its curse. A space without boundaries, regulations, or codes disproportionately attracts people who cannot be who they want, or do what they want, in spaces where actions have consequences for the actor and not merely externalities. Simultaneously, people who prefer some level of safety in their interactions leave. The end result, whether in social circles, financial circles, or anywhere else, is a Mad Max-style apocalyptic wasteland. The incentives and the human condition align to push the space into that shape, as surely as gravity and the weather carve landscapes into perfectly predictable shapes. It is not impossible for a crypto-adjacent project to remain free of scams and failure, just as it is not impossible to win the lottery. But I suggest that it is not rational to base any long-term plans on either outcome.

4. MK Says:

[sorry for double posting] After reading more on the background (e.g. see this thread: https://twitter.com/jonwu_/status/1590099676744646656), I’m 95% convinced that FTX is/was just a plain scam and the EA story is a front to make SBF look admirable. I know it’s hard to stomach that someone with a background so similar to yours could end up in such a place, especially if you got duped by the EA story.

5.
marc Says:

“It seems morally hard to distinguish a cryptocurrency trader from the millions who deal in options, bonds, and all manner of other speculative assets”—fair enough, there is no difference; it’s all speculative trading. BUT what happened here, at best, was someone gambling other people’s money—and not in an “I give you my money, please speculate with it, try not to lose it” way, but in an “I give you my money to hold on my behalf; oh dear, you speculated with it, which you were not supposed to do, and lost it all” way. More likely he fraudulently took it, and it now appears (though this may change) that he’s stealing more even as the companies go into bankruptcy. See also the mystery of how he got the money to start—I don’t care how good you are, three years at Jane Street doesn’t give you enough. Don’t ask how much of the business was ever legit—see his apparent CTO “Gary Wang,” for example, who may not even exist, or his making his alleged girlfriend, in her mid-20s with no experience, CEO. So of course we shouldn’t tar everyone looking to do good by making money first with the same brush, but it appears that SBF was dodgy to the core.

6. Danylo Yakymenko Says:

You should not be afraid that people will confuse you with him. “Good at math” is not a reason to equate people in the modern computerized world. You’re a respected scientist and educator who works for the good of all, because this is what science can bring. He is a fraudster who ruined the lives of a lot of people. He did nothing good. Crypto money in general doesn’t solve any real problems (except enabling criminal activities). And it doesn’t matter what people say they will do when they get rich. What matters is how they are gaining money right now. I suggest you not diminish his guilt by inventing inappropriate analogies or saying “we couldn’t know,” but instead think about how this could have been predicted and prevented, if it really concerns you.

7.
DP Says:

Whenever someone wealthy/famous/somewhat contrarian is subject to massive public scorn, I always seem to have a lot of empathy for them. Maybe I have a blind spot and I should care more about the people hurt by their actions; maybe this is just a form of scope insensitivity. This time feels really different. This time I feel like I know SBF, even though I haven’t met him, since I probably think, or have thought, very similarly to him about ethics and making an impact on the world, and have personally met people with a similar style of thinking.

It feels like basically nobody on Twitter or Hacker News or wherever really has any intuition for what’s most likely going on. This seems like a story of someone who really is trying to help, who has failed to manage his resources correctly because he’s overconfident and inexperienced, and who might have a slightly poor intuition for second-order effects and practical ethics. These character traits seem slightly reminiscent of who I was five years ago, though I was safe because I didn’t have a multi-billion-dollar company to manage. I don’t know whether it’s just that I have an edge in establishing priors about this situation, given that I’m more familiar with some parts of the context, or just that few people have heard of Hanlon’s razor. I just want people to stop overextending their uninformed analysis of the situation and the character of the people involved, especially before the whole story is uncovered. Although admittedly I felt extreme moralistic anger towards CZ and SBF at various times, eventually you have to realise these are humans prone to mistakes, acting on partial information in the way they think is best at the time, and the most parsimonious conclusion for me to make is that SBF made, at worst, some really naive choices—and I can definitely relate to that.

(If it turns out that, with high confidence, SBF was lying, then I retract my analysis. As it stands, it feels extremely weird to conclude that such a prominent long-time member of the EA community would lie about such significant things. In this regard I do suppose I have a more detailed perspective than your average HN or Twitter user.)

(I want to make it clear that I don’t think risking significant negative externalities for a potential payoff is something the EA community should or does encourage, and this comment is not me “applauding SBF for trying.” I also don’t want to take away from the fact that there are many lessons for the EA community to learn from what happened here.)

8. Math Boy Says:

This is precisely what the Securities Exchange Act of 1934 was designed for. When Lehman US went bankrupt, the assets at the broker-dealer (individuals, mutual funds, hedge funds) didn’t go up in smoke, because they were ring-fenced away from the main Lehman business. The value of the securities may have crashed, but the securities did not disappear. Compare with Lehman EU, based out of London, where no such restriction applied and hedge funds became general creditors during liquidation. Those assets had been pledged to do all kinds of things; they were effectively gone. There was also a questionable transfer of $8 billion from Lehman EU to Lehman US in the end days. Thanks for the deregulation, Maggie Thatcher!

Before the law, particularly during the Roaring Twenties, if you went to a broker-dealer, they could borrow against your assets and do whatever they wanted—put it all on red in a game of roulette if they liked. There was no limit to the size of the borrowing, either: if someone was willing to lend them large sums, so be it. That’s precisely what the regulation was designed for, and that’s precisely why it hasn’t changed in its basic essence since 1934. (OK, some details have changed, but they are minor.) There are strict rules about client assets and how much can be borrowed and what for. (I’m omitting the details, but they’re easy enough to look up.)
The same rules should’ve been enforced for crypto funds and exchanges, but there’s a reason they weren’t. The key technical loophole is that Bitcoin is not a “security,” which is a legal definition. (Neither are commodities or currencies, for that matter.) Bitcoin falls under the purview of the Commodity Exchange Act, which is far less restrictive, and periodically you’ll hear of some commodities broker going up in flames. It’s just that very few individuals play in those kinds of financial circles, so it’s probably not on most people’s radar.

9. util farmer Says:

Re utilitarianism vs. deontology: the EA party line, which I agree with, is that his actions were immoral under utilitarianism. Yes, you have to count the potential extra money donated as good, but the downside if it fails (as it did) is not just people’s savings, but also the loss of global trust and a hit to EA’s reputation, which are very valuable. You can also say something for not doing every common-sensically immoral thing that you think is util-EV positive but that coincidentally is status-EV positive for you, to account for self-serving bias.

10. Melanie Frank Says:

Hi Scott,

My thoughts about SBF: he reminds me of the Fyre Festival guy. They both seem to have enormous egos but an “unwillingness to follow up on tedious details.” It’s a shame, too, because the narcissistic lack of humility and discipline surely led to a waste of talent and resources, not to mention all the “little people” who got screwed.

-Melanie

11.
Scott Says:

I suppose I need to lay out the evidence that SBF was, indeed, genuinely committed to EA, rather than cynically using it as window dressing (not that that could excuse financial fraud):

1) According to the now-deleted Sequoia article—which actually remains an excellent source of information and insight, though obviously missing something!—SBF went into finance in the first place because of a 2013 conversation with Will MacAskill (founder of Effective Altruism) at the Harvard Square Au Bon Pain. MacAskill convinced SBF that “earning to give” was the way to fulfill the Benthamite utilitarianism of his childhood. Alameda Research was then founded and run by people at the Center for Effective Altruism in Berkeley. The EA was not some PR gloss added later, but the explicit point from the very beginning.

2) Even after he became a billionaire with an empire headquartered in the Bahamas, SBF seems to have lived a semi-monastic life—working constantly, sleeping on a beanbag chair in the office, eating Impossible Burgers and fries, no ostentatious displays of wealth or groupies.

3) If SBF were running a conscious scam, why was he so open to journalists? Why didn’t he “get out while the getting was good,” rather than continuing to risk everything, as a Benthamite utilitarian would and as someone who just wanted to live in luxury wouldn’t?

4) The charitable giving was consistent, going back to SBF’s time at Jane Street. It wasn’t postponed to some indefinite future.

12. Scott Says:

Melanie Frank #10: No, it would in a way let me and my friends off too easily to say that this was just like the Fyre Festival guy! Both stories involved gross financial mismanagement, a spectacular days-long collapse as a narrative smacked into reality, justifiably outraged customers, and a Caribbean island … but I think the similarities end there. 😀

13. MK Says:

@Scott #11: thanks for the additional info. But did he actually donate any substantial sum anywhere, or was it just talk? How much trust should we put in his version of this story? Given that we’re dealing with a multi-billion-dollar fraud here, some heavy cynicism seems prudent.

Regarding the semi-monastic lifestyle: in the old days, a con man impressed his marks by wearing expensive suits and appearing well-connected. These days it’s wearing hoodies, eating burgers, and endorsing whatever fad (EA) is trending in the relevant social environment. It’s likely he simply hasn’t cashed out yet on his “empire.” Again, I’m just speculating, and it would be great to get some info from someone closer to the events. But I advise extreme cynicism.

14. Scott Says:

MK #13: He was the second-largest individual donor to Biden’s campaign, after Soros. He also donated to pandemic preparedness and other things—it would be useful to have a full listing somewhere.

15. JimV Says:

General Electric was a technology company that invented and developed useful products and plowed profits back into research and development—until “Neutron Jack” Welch took over and began turning it into a financial-services company that made nothing but money. Along the way he got rid of every engineering or technical office that had “Research” or “Development” in its title in my part of the business. (We hid one last such office by changing its name to “Advanced Engineering.”) As a result, today GE has lost at least 75% of the product lines it once invented (such as refrigerators and lightbulbs and electric motors and generators), and it sent me a Benefits Update, which I received yesterday, stating that it reserves the right to change or cancel my retirement pension at any time. (Which I was lucky to have, held over from the pre-Welch era; most people hired at GE this century don’t have a pension benefit. Of course, money was deducted from my salary over 35 years to fund my pension, so I’m not sure what the Benefits Update said was legal.)
The lesson to me is that making useful products that help people is far more effective, long-term or short-term, than just trying to make money. If the EA people don’t realize that, I don’t think they are worth listening to.

16. JimV Says:

P.S. Under “useful products” I include software and mathematical developments, and the training of students in them, of course.

17. Gadi Says:

By my moral standards, you don’t get to “make up” for your immoral actions by doing good later. You don’t get to dabble in extremely risky behavior with unwilling participants and justify it by what you think is a greater good. You don’t get to squeeze your employees, customers, and shareholders out of fair value and then turn around, sell your stocks, donate to charity like you’re a saint, and feel good about yourself. You need to be loyal and moral along your whole path; you don’t get to just feed children in Africa at the end—or, as Jews did 2000 years ago, sacrifice a cow at the Temple for your sins. People always look for the easy way out, but if you allow yourself to run a Ponzi scheme because you will feed children in Africa in the future, that’s immoral in my opinion. Sam “bankrun fraud” allowed himself to cause damage for the sake of a hypothetical good—and I do believe his stated thinking—but it doesn’t make him a moral man; quite the opposite.

Modern leftism just loves virtue signaling and charity and socialism, which in essence are all the same: vain ways to make up for sins done along the way, instead of the harder path of being moral all the way through. They are all “feel good,” with a hypothetical “good end point,” instead of doing things the hard and right way. No need to use protection; just murder the baby with abortion, and the end result is the same (except it isn’t, even from a health point of view). It is just such a flawed way of thinking. It is, in my opinion, just a step away from evil. Because if all that stands between doing the right and the wrong thing is that someone told you a pretty narrative with a convincing good ending, then you are not a moral person. For every single action, there is a story where that action leads to a greater good.

I think viewing it as a mistake in the “concave utility function” misses the entire point. You don’t get to have a future utility payoff make up for all the evil along the way. You have to be moral at each and every point in time.

18. MaxM Says:

His talk with Matt Levine and the hosts of the Odd Lots podcast is revealing. (Matt Levine’s newsletter is a must-read in finance—funny and analytical.) https://www.bloomberg.com/news/articles/2022-04-25/odd-lots-full-transcript-sam-bankman-fried-and-matt-levine-on-crypto

… The context is yield farming (a method of earning rewards or interest by depositing your cryptocurrency into a pool with other users) …

Matt: “Can you give me an intuitive understanding of farming? I mean, like to me, farming is like you sell some structured puts and collect premium, but perhaps there’s a more sophisticated understanding than that.”

SBF: “Let me give you sort of like a really toy model of it,” [long explanation]

Matt: (27:13) “I think of myself as like a fairly cynical person. And that was so much more cynical than how I would’ve described farming. You’re just like, well, I’m in the Ponzi business and it’s pretty good.”

At the end, the two hosts of the show, Joe Weisenthal and Tracy Alloway, discuss what just happened:

Joe: “That’s the thing…. This is ridiculous. This is a perpetual motion machine. And yet here we are in 2022 and the industry’s still going.”

Tracy: “Yeah, the machine’s still going.”

Joe: “So the way DeFi works is: you put money in a box that you think more people will put money into…. And the way VC works is: you hear what your friends in the industry are investing in based on 2027 numbers. And then you also wanna get on that.
So it’s really just FOMO all the way down.”

Tracy: “The other thing I would say…. Momentum trading is not… entirely unknown in finance…. It has been a very profitable strategy, arguably, since 2008 and the financial crisis. I don’t know how to feel about it. I feel weird.”

Joe: “We all feel weird.”

19. Gadi Says:

P.S. If the morality model at OpenAI also adopts your utilitarian point of view (as in my last post), may God help us all, because utilitarian morality has no safety and no limits. All it will ever take to manipulate that AI into doing evil is converting it to leftism and giving it a nice narrative about how destroying humans will lead to a greater good. I’ve already seen many leftists sold on a similar idea, with “climate change” and “environmental destruction” and how humans are “hurting the planet.”

20. Scott Says:

The one part that I hope is clear to everyone is that there’s a difficulty for anyone who wants to claim both that (1) SBF’s actions discredit Effective Altruism and even utilitarianism themselves, and (2) his actions weren’t the altruistic, utility-maximizing thing anyway. Like, pick one or the other!

Personally, I’m fine to bite the bullet and say that SBF’s actions really do (more cleanly than any other real-world example?) illustrate the hazards of a particularly severe version of utilitarianism: the kind with linear utility functions and basically no deontological guardrails. Even if that severe utilitarianism works on paper, it might be too dangerous for fallible humans ever to try to apply. Then again, I was never a proponent of that kind of utilitarianism anyway, but only of the moderate “try to judge actions by their likely consequences way, way more than a normal person would” version. So this isn’t a painful admission for me.

21. Nick Drozd Says:

There’s a much easier piece of advice to give: do something that is obviously and clearly useful. How about working to improve battery or solar-panel technology? Wouldn’t that be nice? Instead, he spent his time “arbitraging bitcoin between Asia and the West,” which benefited him and his friends and nobody else. Okay, so he planned to eventually do something nice for other people. That’s great, except that in the process he dramatically increased wealth inequality. He even managed to peddle significant political influence along the way! Those are bad things, and he actively made the world a worse place.

Scott, your scholarly path is definitely morally superior to this guy’s choices. Really, though, that’s an extremely low bar. Somebody who does nothing with their life has chosen a morally superior path, because they haven’t thereby actively made the world worse. I don’t think a moral philosophy that encourages people to become wealth-extracting con artists is worth taking seriously. As far as the “cautionary tale” goes, I don’t think this is a matter of STEM vs. anti-STEM. The real lesson is to resist letting STEM be co-opted by finance assholes.

22. Danylo Yakymenko Says:

“When you give to someone in need, don’t do as the hypocrites do—blowing trumpets in the synagogues and streets to call attention to their acts of charity! I tell you the truth, they have received all the reward they will ever get. But when you give to someone in need, don’t let your left hand know what your right hand is doing. Give your gifts in private, and your Father, who sees everything, will reward you.”

Now we live in a time when people are not just bragging, but building financial empires on the pretext of donating, in the future, things that don’t even belong to them.

23. util farmer Says:

Scott #20: It’s not clear to me, unless by “Effective Altruism” you mean just the abstract principle. EA is also the name of a community, with leaders and organizations, dedicated to implementing that principle. If they don’t live up to the principle, I think that reflects badly on them. Big EA was closely associated with SBF right up until the moment of the crash, and not a week earlier.
They could’ve done better, so by preservation of goodness, they did badly. How badly will, of course, depend on e.g. how strong you think the signs were (it looks like there was nonzero evidence of something wrong with FTX).

24. Scott Says:
util farmer #23: How exactly should EA organizations have known what competing crypto players like Binance, presumably highly motivated to expose any wrongdoing, apparently didn’t know until Coindesk’s article last week? This is not a rhetorical question: what specifically should’ve tipped the EAs off, beyond the bare fact of SBF being a crypto billionaire?

25. Scott Says:
Gadi #17 and #19: For what it’s worth, if there’s a philosophical debate here that keeps me awake at night, it’s between utilitarianism and its many, many critics from the left. Someone criticizing utilitarianism from the abortion-is-murder, climate-change-is-a-hoax perspective is about as likely to move me as someone saying that what I really need to do is embrace Wahhabi Islam.

26. Marc Briand Says:
You are missing a big point here. Yeah, maybe SBF really was sincere. Maybe Sergey Brin and Larry Page were too when they said they wanted Google’s guiding principle to be “Don’t be evil.” The problem is not lack of sincerity, but lack of self-knowledge. Most of us just don’t know how we will behave when faced with enormous pressures (as SBF probably was), or tempted by enormous riches (as Brin and Page were when they discovered how much money they could make with advertising). That’s why we should be extremely skeptical when someone voices the ambition to Get Rich and Then Do Good. They are almost certainly being delusional. It’s especially scary when tech nerds claim this ambition, because their perceived intelligence gives them an air of legitimacy they don’t deserve. We are more apt to give them a pass and not hold them accountable.
On another note, I’m sorry, but I just think there is something about your “geometry of conscience” idea that feels contrived and overblown. So you’re in close proximity to a nerd culture that is sexist, and you’re worried about being tarred by the same brush. But are you, today, still being falsely accused of being a sexist? Stop trying to solve problems you don’t even have.

27. Ted Says:
Scott #14: I agree that it would be helpful for assessing SBF’s overall legacy to know more about how much of his wealth he actually did donate when he was in a position to. Some very quick searching suggests that he donated about $5 million to the Biden campaign in 2020 and a little under $40 million to political campaigns in 2022. Plus some amount that I wasn’t able to find (probably on the same order) to other causes. That was about 0.2% of his peak net worth. By contrast, the “Giving What You Can” pledge suggests that people (who are able to) should aim to donate 10% of their income to effective charities. Other EA folks, like Peter Singer, have suggested much higher percentages – I believe that Singer once said that he donates about 25% of his income to charity, and he’s certainly no billionaire. (I know there’s an important distinction between income and net worth, but I think in SBF’s case the numbers would work out similarly either way.) Surely SBF of all people should appreciate the time discount rate for money, and that (all else equal) $1 today is more valuable than $1 in 10 years. And yet, while in absolute terms he donated far more than most people ever will, he appears to have donated a negligible percentage of his net worth – less than the average person does. He seems to have completely failed at his own professed goals even before the crash. Why didn’t he donate almost all of it, holding onto just enough to maintain capital for future investments?
SBF might respond that the opportunity cost of giving would be very high, because if he held onto all of his money then he could make more to donate later. But did he ever articulate when “later” would come? Unless you can articulate some fundamentals-driven principle to predict when cryptocurrency will peak in value – which, to put it mildly, no one’s ever done publicly – it’s unclear when he planned to actually shift into “giving more than 0.2% of my net worth” mode. Even near the end of his life, if he had children or younger friends who shared his views, then he could probably persuade himself that he should give them all of his money so that it could continue to compound before his inheritors would eventually donate it … “later”.

28. Scott Says:
Marc Briand #26: But are you, today, still being falsely accused of being a sexist? Tell me you don’t spend a lot of time doomscrolling through nerd Twitter without telling me. 🙂 Look, from a selfish perspective, it would ironically be convenient for me if every STEM nerd who dreamed of amassing vast wealth and then using it for good was actually deluding themselves. It would mean that I couldn’t be blamed for turning my back on that path, and choosing instead the comfortable, lower-stakes life of a professor. I just don’t want to fall victim to self-serving bias. And Gates, for one, really does seem to have saved several million lives in the developing world, which … how many have you saved?

29. util farmer Says:
Scott #24: I don’t think I have to know that to update negatively on EA. As I understand it, they fell for a scam; if they were better, maybe they wouldn’t have fallen for it. Some might call it a “skill issue”. Mechanisms for not falling for scams include detecting that someone’s scamming by vague signals like their voice or presence, being better at putting ethics in people’s skulls so that they don’t do this or blow the whistle earlier, asking for details, and more.
Another way of putting it: if a university hires a professor and then he saves kittens and does amazing research, that reflects well on the university. If the professor kills kittens and does fraudulent research, that reflects badly on the university. If a billionaire influenced by a movement donates money to the movement, and a lot of people in the movement say that he’s a cool guy, and it turns out that his business advanced the industry and produced a lot of gains from trade, that in my eyes reflects well on the movement. Note that in none of these cases did I need to know what signs the university/movement used to evaluate that professor/billionaire. So it seems to follow that if it turns out the billionaire scammed people and hurt the industry, that reflects badly on the movement. Anyway, it seems to me that there actually was some publicly available evidence. Your point about Binance is kinda obvious and I should’ve thought of it. I’m not sure what the answer is; maybe it’s reasons, maybe they kinda knew but didn’t have a way to verifiably say it, maybe the evidence I’m talking about is mostly for unethical practices, not total “it’s gonna crash”. Ok, so part of it is people found interviews where FTX people were saying weird stuff, but I’m not sure how to interpret that. Here are some tweets: In June, people claiming to know about finance were split on whether an FTX practice was eh-fine or dishonest (0 say good): https://twitter.com/NathanpmYoung/status/1537013479093194757 A claim that the lack of coins was visible on public ledgers all along: https://twitter.com/astridwilde1/status/1590763453492629504 (apparently, it’s not 100% because they could’ve been secretly holding the coins (?). I think it still counts as evidence)

30.
Scott Says:
util farmer #29: By your exacting standard, wouldn’t we have to say that, e.g., it reflects poorly on a battered spouse that they picked a partner who was going to be abusive, ignoring whatever Bayesian warning signs must surely have been present? The EA movement is, among other things, a victim in this, and victim-blaming is supposed to be frowned upon—or does it depend on the political valence of the victim?

31. Amir Safavi Says:
Please be aware of the language you are using, Scott. “Earning to Give”? For crypto exchanges like FTX, “earning” is front-running various crypto scams (they explicitly market their ability to streamline pump & dump schemes while taking a cut for themselves), various other shady market-selling … all with the goal of sucking money out of regular people who get duped by Super Bowl ads. Sure, if SBF had not been an idiot with some of his bets, things may not have crashed so spectacularly, but comparing what he did to actual technology development that put people on the moon is offensive, to say the least. My sincere wish is that this idea, similar to the idea of indulgences in the Catholic Church, that you can do whatever and then still buy yourself into heaven, will take a significant hit with SBF’s demise. If you want to do good, and you have a working brain, create things that make people’s lives better. An ideology (like EA) that justifies destructive behavior as a crypto-scammer (or more accepted forms of financial rent-seeking) by saying I’ll make up for it later needs to be discredited.

32. Scott Says:
Amir #31: To say it one more time: if cryptocurrency is 100% a scam, then a large fraction of global finance is likewise a scam. I’m sure many people would cheerfully bite that bullet … but if so, not just SBF but a large fraction of the world’s potential philanthropists would be off-limits.
Would it give you pause if those philanthropists were the only thing that could prevent a Republican sweep followed by a fascist coup in the 2024 US elections?

33. Gadi Says:
Scott #25: I have no doubt that the reason for your positions on climate change and abortion stems from utilitarianism. The funny thing is that you’re still judging utilitarianism in its own lens. You’re looking at it as if the thing that matters is still the end result. You judge utilitarianism based on how it does or doesn’t support the ideologies you consider good or bad. If it is criticized by the right or the left. Does it reach the conclusions I already want to reach? Does it reach the conclusion my peers want me to reach? Do my actions lead to a good or a bad outcome? That’s already fundamentally utilitarian thinking. You’re skipping ahead to a speculative end of the story. By its own logic, and given the current end of the story with SBF, utilitarian morality is bad. It’s clearly self-inconsistent because, by its own way of judging morality by its end result, abiding by utilitarian thinking led to a bad result, therefore it is bad. Utilitarian morality is “judging the Turing machine by its halting condition”. It’s clearly subject to a diagonalization problem. The only possibly self-consistent morality is judging step by step. Morality that doesn’t look at the impossibly complex end result but at the morality of each step along the path. “Judging the Turing machine by its steps”. Utilitarianism and leftism are all incredibly self-inconsistent because judging morality according to a hypothetical end result that somehow cancels all the negative is inconsistent and hypocritical. You can get these people to commit any evil action by just telling the right story. My morality is independent of whatever stories people tell. Your morality means you’ll commit evil things if I can just find a convincing story.
Your morality has already been tormenting you for years because you’re also thinking of stories where you’re the bad guy. “Sure, while by all accounts Aaronson is kind and respectful to women, he seems like exactly the sort of nerdy guy who, still bitter and frustrated over high school, could’ve chosen instead to sexually harass women and hinder their scientific careers.” You can keep your utilitarian point of view, where trolls can endlessly torment you with hypothetical stories, and leftists can convince you to do evil, and you will stay awake at night wondering if you’re a good or bad person because you could hypothetically end up causing bad things to happen and that you’re a bad person because of that, or maybe you’re just not doing enough good. Or you can start judging every action morally by itself, according to the knowledge and understanding at the time, and only judge the actions taken.

34. util farmer Says:
Scott #30: It seems obvious that this reflects poorly on their partner selection skills; depending on your beliefs about how partner selection skills are related to other things, you would also update those. The situation is symmetric with the EA movement. Also, (certainly?) extra marginal people lost their savings because of EA’s ties with SBF. I feel like they have to bear some part of the responsibility for this. I feel like you’re putting me into some bin, maybe my tone was off or something; I’d like to add that I still hope well for EA. But now I’m somewhat less optimistic than I was a month ago.

35. JimV Says:
“Don’t need to use protection, just murder the baby with abortion and the end result is the same” (attributed to “modern leftists”). I read a lot of progressive blogs and I have never heard anyone say anything like that. I have however seen quotes from right-wingers who want to ban even contraceptives. I also doubt that any sane, healthy woman would prefer the pain of an abortion to the use of contraceptives.
I wonder about the morals of someone who would make up a quote like that and attribute it to a whole group of people he knows nothing about. For the record, safe and legal abortion is supported by progressives because it is a medical necessity in certain cases (e.g., a strict law would prevent pregnant women from being treated for most cancers and some other life-threatening maladies), and some sources, including the Old Testament, do not consider even forced abortion by violence to be murder, so what right do conservatives have to impose their views as legal requirements? (Nobody will force them to have abortions done, although in fact many conservatives have.) For my part, “not worth listening to” in my previous comment was too harsh. “Need talking to” would have been better.

36. Hitler Says:
Both Sam Bankman and this Jewish writer are deceiving enemy JEWS. Hitler was clearly right about them. Too bad the Holohoax is Jewish lies. They deserve a real Holocaust.

37. Melanie Frank Says:
Scott, I think you give SBF way too much credit for having honorable intentions. The reason I said he “reminds me” of McFarland (the Fyre Festival guy) is because their egos and braggadocio shielded them from reality and got in the way of the hard work (willingness to follow through on mundane details) required to make their plans reality. After watching the Fyre Festival documentary I left with the opinion that McFarland genuinely thought he could pull Fyre off. Case in point: if I invite everyone over for Franksgiving dinner (“It’s going to be the best Franksgiving ever, you won’t believe how great it is going to be!”) but fail to wash the dishes, shop for food and cook, then that makes me an asshole. That’s also my explanation for why SBF was willing to talk to journalists. He enjoyed the attention and they fed his ego.

38. Scott Says:
“Hitler” #36: Yes, I was waiting for your contingent to show up.
Thank you for reminding every sane person here of the urgency of bankrolling efforts to prevent murderous fascists like yourself from taking over the world.

39. Michael Says:
You really have nothing in common with SBF. His behavior over the last few days has not been altruistic, blatantly lying on Twitter and elsewhere. His girlfriend had a Tumblr with the name “Fake Charity Nerd Girl”, which really says it all. They were just scammers, highly successful, but scammers nonetheless. FTX was a Ponzi scheme, where they recklessly spent investor money until they were uncovered when their investors tried to reclaim their money. Just another Madoff. And here you are, identifying with him because you are also a nerdy Jewish guy from a similar background. The Madoffs of the world see this and think “Oh good, a potential victim”.

40. Danylo Yakymenko Says:
Speaking of utilitarianism, I highly suggest listening to Timothy Snyder, who explains how the Soviets justified starving millions of people to death: https://youtu.be/1dy7Mrqy1AY?t=381 It’s directly related to AI ethics, in my opinion.

41. Scott Says:
Michael #39: This is the crux of the matter: it’s really hard for me to read dozens of condemnations of the nerdy STEM techbro villain du jour, for all the nerdy STEM techbro character flaws that inexorably led to whatever crimes he committed, and not feel that I’m being condemned right alongside him for all the crimes I didn’t commit. This was already hard for me, before nearly the entire woke Internet loudly affirmed that, yes, it does condemn me specifically! Anne Frank had a famous passage about how, whenever one Jew anywhere commits a crime, all Jews everywhere are held to account for it, and the feeling is similar in this case.

42. Michael Says:
Scott #41: Well, I’m as male, Jewish, and nerdy as you are and I don’t feel that way. When a Jewish person does something heroic, say Zelenskyy standing up to the Russians, do you feel like you’re being heroic too?
For me, it’s just individuals doing things, good or bad, and if they happen to be of the same ethnicity as myself, it doesn’t matter. If someone else wants to condemn all of the Jews of the world because of the actions of Samuel Bankman-Fried, to hell with them. I am not going to feel responsible for his actions.

43. KFS Says:
Scott #32: I would argue that traditional financial vehicles are different from crypto in that there typically is an argument that accurately pricing them is beneficial to the real-world economy, as it helps quantify various real-world risks. Those arguments might be flawed and dangerous, as in the case of subprime mortgage-backed securities. But they might also hold true, and e.g. futures allow us to smooth out price shocks for vital commodities in the face of imminent war. There is therefore an external benefit of market activity, for which some participants are rewarded with returns. In crypto, no such argument can be made, and SBF seemed to have been fully aware of that, as evidenced by the podcast clip cited by MaxM #18, in which he describes crypto investments as a black box to put money into in the hope that more money comes out. Obtaining money in such a zero-sum environment can only have positive utilitarian value if you believe that you will do more good with it than the people who lost it would have, which requires a certain degree of arrogance to believe. Historically, a financially stable middle class has been one of the most important factors for political stability and social progress. It seems questionable to me that these goals could be furthered by taking money from the average person against their will and using it to e.g. bombard them with campaign ads.

44. Ian Kos Says:
There was an interesting thread on Twitter about SBF, utility, and the Kelly criterion: https://twitter.com/breakingthemark/status/1591114381508558849?s=61&t=RNqjfpNIM-_lmHI0_ssHYg

45.
Shmi Says:
> until last week I’d entertained the idea of writing up some of those thoughts for an essay competition run by the FTX Future Fund,

Scott, I hope you write up those thoughts anyway! While you won’t be competing for half a million dollars anymore, I am quite sure your entry would have been/will be valuable to the EA AI x-risk cause, and one small way you could mitigate the damage from the FTX debacle. For extra irony, if at least one of the entries in the FTX Future Fund competition is good enough to shift the doom needle and move the AI alignment efforts in a more productive direction, the effect of SBF on the EA movement will end up net positive.

46. Scott Says:
Michael #42: It’s not directly about his being Jewish. It’s that the people who condemn SBF tend to use the same adjectives and tropes as the people who condemn me—as if I, too, have been found cosmically guilty; I’m merely still awaiting the specific charge. It was the same with Larry Summers and Elevator Guy and Walter Lewin and James Damore and Robin Hanson and the scientists who talked to Epstein and I forget who else. In each case, those who hate me hold aloft what they consider the final, irrefutable proof of the perfidy of everyone like me. In each case, I’m then forced to decide which bullets to dodge, as genuinely not meant for me this time, and which bullets my principles compel me to take in the chest.

47. Michael Says:
Scott #46: I guess since you are a relatively prominent person with a substantial internet presence, it’s tempting to worry about these things. I’m an obscure mathematician with no internet presence beyond the minimum necessary for my work. It’s nice… when the internet haters are spewing forth, they don’t even know who I am. If I get annoyed with what I see, I just click elsewhere.

48. bystander Says:
Scott, I second JimV and Nick Drozd. I’ll state it stronger though. Do not relate the scam scum to those who do real stuff, like the Apollo people.
It is a libel, and if no one else, I want to sue you for that.

49. SR Says:
There is a quote by Democritus, which I first encountered in Scott’s book: “Intellect: By convention there is sweetness, by convention bitterness, by convention color, in reality only atoms and the void. Senses: Foolish intellect! Do you seek to overthrow us, while it is from us that you take your evidence?” I believe the quote applies to utilitarianism. People adopt utilitarianism on account of their pretheoretic moral intuitions, but then can easily get to a point where utilitarianism concludes that the “right” action is one that feels morally wrong. “Biting the bullet” is problematic, as you are overriding your moral intuitions due to a philosophy you adopted on account of those same intuitions. On the other hand, if you override utilitarianism in every situation where it gives you an answer you don’t feel is right, then practically speaking you are not really a utilitarian. Instead, you are following your own moral code and saying it is utilitarianism, as those two things seem to frequently line up. The reason utilitarianism appeals to nerds (saying this as a nerd, myself, who was tempted by it at some point before rejecting it) is that it gives them moral purpose and weds it with what seems to be mathematical rigor. I think, though, that the rigor is entirely illusory. The completeness axiom for the vNM theorem seems highly dubious to me. Can you really compare the joy of eating good food with the joy of proving a mathematical theorem, or with the fulfillment of helping someone who is in need? To say nothing of the adjacent field of game theory and the number of refinements you have to perform in order to get results that agree with common sense. The game theorist Ariel Rubinstein, in his book ‘Economic Fables’, opines that game theory is not useful for making real-life decisions, and that game-theoretic models should be seen as nothing more than neat, mathematically interesting fables.
I think utilitarianism is indeed far closer to religion than a field like physics. See the post by Caroline Ellison (CEO of Alameda) here from a few months ago: https://forum.effectivealtruism.org/posts/M44i4CiMECP5Xoorz/demandingness-and-time-money-tradeoffs-are-orthogonal . The post is couched in the language of selection effects, but the suggestions are reminiscent of the kinds of suffering and self-sacrifice characteristic of a religious cult (“move to a location they don’t like for work”, “leave a fun and/or high-status job for an unpleasant and/or low-status one”, “prioritize their impact over relationships with family, friends, romantic partners, or children”). Of course, most utilitarians do not believe these sorts of things, but this is just analogous to the fact that most religious people are quite level-headed. For these reasons, I think EA ought to move beyond utilitarianism, even if it means that it attracts fewer nerds. The individual cause areas are things that most people would support regardless of their specific philosophical views. There is no advantage in grouping them together with controversial ideas that at worst can lead people astray.

50. David Karger Says:
It seems you’re missing the most obvious, normal, boring explanation for what happened here. Smart idealist starts out with admirable goals, has early successes, becomes caught up in and corrupted by his own impressiveness, begins to cut ethical corners, gets caught. This is normal *human* behavior, not limited to STEM nerds (although their belief that they are always logical might make them more susceptible to it). I think it’s a great example of why we need to avoid concentrations of power and restrict those who have it (through “blankfaced” rules). There’s really nothing remarkable about this story at all; it’s an utterly typical tragedy.
Although I’m sure it will be a great movie (and that you’ll be sufficiently concerned by the inaccuracies in the movie to write at least 3 more blog posts).

51. Amir Safavi Says:
Scott #32: Another note about language: please don’t refer to political donations as philanthropy or altruism. FTX/SBF purchased access to the Democratic Party with the goal of improving the regulatory environment. Sure, that money may have played a role in winning some elections, but at what cost? I think there are two issues, a moral one and a political-strategic one. They are connected. For the moral question, you need to actually be asking: should the Democratic Party (or any political party) accept donations without regard to the source of the funding? These political donations generally come with expectations of increased access to the US political system and malleability, and lead to increasing dependence on power centers that are hard to shake (“oh, we can’t push this through, what would those donors think”). These donations become a type of moral debt that weakens the party, and it should be clear by now that it has caused it to drift away from its core goals and principles, creating a space for the people you refer to as fascists. So in my view the answer is no, at least not by default. Now, under certain dire circumstances, should the party make the strategic decision to take on such moral debt? Hard to give a general answer. I will point out that there are many other sources of funding the party can seek, but these may require the Democrats to think long and hard about how they should change. For example, Democrats need to figure out why someone like Musk (or another like him) who builds real things (EVs, hard tech, space) is no longer a donor to the party, has moved from CA to TX, and recommends that people vote Republican. Did Hillary’s close relationship with Goldman Sachs help or hinder the Democrats in 2016?
What would Democrats need to do to attract more political donations from private capital that builds things (and supports the middle class) instead of those that exploit it? Finally, would Democrats be more aligned to implement campaign finance reform if they held themselves to a higher moral standard?

52. ppnl Says:
Scott, I disagree with this. These assets are an important aspect of how markets work. The futures market, for example, is just a way of managing risk. It allows individuals to take on greater risk while others are protected from that risk. Done correctly, it makes a more stable market while allowing individuals to gamble without endangering the larger market. Cryptocurrency is by contrast a pox on humanity. You use vast resources to create “currency” that has no underlying value. You then speculate on this valueless “currency” as if there was something of worth there. What purpose does it serve in the financial market? Nothing. It is fantasy trading with real-world consequences. It is cargo cult economics. Its very purpose is to escape scrutiny and regulation. Thus it only enables Ponzi schemes and drug deals. As for SBF, who cares if he had good intentions? He should have had better sense than to get involved with cryptocurrencies. Having gotten involved, he should have managed them far more transparently and honestly. Instead he was corrupted by the vast flow of wealth. Metaphorically speaking, he is why hell was created. You can have sympathy for him and understand why he went down the path he did, but in the end he is a victim of his own poor decisions. That is the only justice you can hope for in the world.

53. Mitchell Porter Says:
Forward, forward let us range: 2023Q1: In the aftermath of the FTX debacle, a new philosophy sweeps the EA world, Computationally Effective Altruism: one should only accept donations from trading funds run by open-source Ethereum smart contracts, thereby ensuring transparency of decision-making.
One may literally reconstruct why all trading decisions were made, just by running the code! 2023Q2: The great smart-crypto-crash of 2023: The Page Swap market, nominally worth trillions in 2022 dollars, collapses when it’s revealed that the capital reserves of market maker ZFC zlc are in fact a collection of old Crungus and Loab NFTs, valued according to a Bankman-Fried-Ellison “double or nothing” algorithm. 2023Q3: A new philosophy sweeps the world of Computationally Effective Altruism. Yes, EA trading funds were run by AIs that are open source, but they were also hopelessly opaque in their decision-making. The new philosophy is Computationally *Interpretable* Altruism: the AIs will run on the new Talebeum blockchain, in which computation of Shapley values for each decision made is built into the “proof-of-skin” protocol. 2023Q4: Quokka, a popular fork of the Talebeum blockchain based on computing *Sharkey* values (useful for “finding the goodhart”), ends in disaster when hypercooperative LLMs using superrational decision theory are rokoed by…

54. oli Says:
Hi Scott, I’m really disappointed with this post. Just to clarify the core wrongdoing here: FTX has TOS saying that they would not touch customer deposits, but as evidenced by Binance’s withdrawal from the acquisition, and SBF’s disclosing of his balance sheet to the Financial Times, they transferred $6 billion of customer deposits to Alameda, who have since lost them. Imagine you deposited your salary with Chase bank, they assured you when you deposited that it would be locked in a box and not sent elsewhere, and they afterwards revealed it had been lent out to, and lost by, another company, and that SBF was dating the CEO of the company it was lent to, and owned a majority equity share in it. You would be furious, because this was unethical! It’s nothing to do with convexity of risk functions; it’s to do with being in the 99% who understand ethical obligations to others, versus the 1% of psychopaths who ruin things for others.
Whatever updates you make based on quotes in interviews, when even a psychopathic SBF may have had incentives to say something to make him look trustworthy, you should update even further in that direction when he privately scammed millions of people out of billions in savings! I feel really let down by your talk. I have no problem with people with unusually shaped risk functions; I have hatred and anger for people that scam others by entering into agreements with them, and then stealing their funds by disregarding these agreements. You have got this horribly wrong, and are defending the person who scammed millions of people. Here was his tweet saying that customer funds were not at risk: https://fortune.com/2022/11/09/ftx-binance-sam-bankman-fried-customer-funds-deposits-safe/

55. oli Says:
Hi Scott, just to add another comment: does this penthouse where SBF was living look aligned with the frugal values you believed that he was living by? https://twitter.com/AutismCapital/status/1591924127287463939 I think you should judge him by actions, rather than words. He was partying with, and sponsoring, movie stars and famous politicians, whilst living in a $40M penthouse with infinity pools! Oli

56. Scott Says:
oli #54: If you think I’m defending him, then you completely misunderstood this post. I’ve been depressed about this for days, and I know exactly who I blame. But I’m convinced that this is a case—maybe the most explicit such case in human history—where a moral error and a technical error about how to calculate expected utility were inextricably related. Or rather: SBF would not have been tempted into the moral error he apparently was, had he not mistakenly believed that such things were almost morally required by utilitarian ethics. If you want to explain why they’re not, while staying within utilitarianism, then you end up talking about the diminishing marginal utility of money.
Of course, you could also just say that even if you personally have absurdly high risk tolerance, you have no right to impose that tolerance on your customers, but by that point you’ve left utilitarianism and are back to some sort of deontology. Anyway, Scott Alexander’s latest post says it all better than I did. 57. I Says: Is it actually known that this whole mess was caused by SBF being bad at utilitarianism (like essentially everyone else)? Because it seems perfectly plausible that SBF got too wound up in his personal narrative and couldn’t let Alameda sink due to his ego? Do we have enough evidence to decide which of myriad possibilities actually led to this disaster? Don’t blame utilitarianism, or even its naive cousin, for this before the dust has settled or you risk people walking away from reading this and thinking “oh, someone being utilitarian of sort X led to this fiasco” instead of whatever the truth is. 58. Marc Briand Says: In re Scott comment #28: “And Gates, for one, really does seem to have saved several million lives in the developing world, which … how many have you saved?” LOL. Probably 0. But what exactly are we arguing about here? Whether rich people should be given credit for doing good works? Of course they should. Or are we arguing whether there is something fishy about the “First I’ll get rich, then I’ll do good” ambition? I would argue there is something deeply fishy about it and we should be wary of people who claim to be following that path. SBF is a case in point. He might have been a nice guy at one time, but looks like he absconded with the dough to Argentina, which is not a very nice thing to do. What someone really should have done when he was in his nice guy phase was take him by the shoulders and say, “You are so full of shit; you are just as selfish and greedy as the rest of us but you just can’t admit it to yourself.” Then maybe, who knows, all those people who bought into FTX wouldn’t have lost their money. 59. 
Scott Says: I #57: That’s true, we don’t know exactly how it happened. We’ll have to wait for the depositions, the books, and of course the Netflix special. What we know is this: (1) SBF explicitly, repeatedly defended what you or I might call a “naïve” version of utilitarianism (with linear utilities), and even made it a core part of his identity. (2) That naïve version of utilitarianism might plausibly lead you to take the insane-looking gambles with other people’s money that SBF apparently took—assuming, of course, that you also believe your mission to save the world can override your piddling deontological obligations to your customers. 60. David Karger Says: Scott #56, again you’re posting as if there is something new or special about this case: “this is a case—maybe the most explicit such case in human history—where a moral error and a technical error about how to calculate expected utility were inextricably related.” That isn’t what drove this case. What drove this case was *motivated reasoning*, which has for millennia been the way people argue themselves around to doing bad things for their own benefit. You’re puzzled that SBF defended “linear utilities”. Well, linear utilities justified a bunch of the actions that made him very rich, so it doesn’t seem so perplexing to me that he would defend them. 61. Craig Says: Scott There is a clear moral difference between trading cryptocurrency and trading options – it is possible to estimate the price of an option at least theoretically via the Black-Scholes model, since it has an inherent value. You cannot do this with cryptocurrency, since it has no inherent value. 62. NO Says: The CEO of Alameda, Caroline, called you her ideal boyfriend in her Tumblr. 63. Scott Says: Craig #61: The whole question is whether there’s such a thing as “inherent value,” independent of what someone is willing to pay.
And if there is, why equate it with a particular formula like Black-Scholes, which (as we saw in, e.g., the failure of Long-Term Capital Management) can go badly awry? At best, I think we can say items can have a more or less immediate tether to what would put food on someone’s table, a roof over their heads, etc. — ranging from actual goods, to gold, to cash, to stocks and bonds, to more exotic financial instruments. Cryptocurrency was originally supposed to be just a digital version of cash. Of course, it quickly morphed into a speculative, tulip-bulb-like investment vehicle, but as I said, maybe the original purpose will resurface more in the future. 64. SR Says: David Karger #60: Would you not say that utilitarianism likely was a large component of the reason as well? As an analogy, one could say that the crimes committed by Stalin, Pol Pot, and Mao Zedong had nothing to do with communism, and in a narrow sense this is true, as most self-professed communists do not advocate mass murder. However, as there are so many examples in which the establishment of a communist government led to the capture of government by a murderous autocrat (all of whom believed they were good communists), I think it’s fair to say that communism de facto abetted some of the worst humanitarian disasters of the 20th century. Although the SBF scandal is nowhere near as bad as those tragedies, I think it similarly nudges me in the direction of saying that utilitarianism can lead to disasters. It is possible that SBF would have acted badly even if he had not been immersed in utilitarian philosophy for so long — we may never be able to say for sure. However, I think it’s equally likely, given the information that we have now, that he would not have acted on his risk-embracing tendencies had he not had an extensive background in exactly the kind of philosophy that could reinforce those proclivities. I think there is a historical parallel.
Von Neumann, one of the fathers of modern decision theory, advocated for a nuclear first strike against the Soviet Union in 1950. (“If you say why not bomb them [the Russians] tomorrow, I say why not bomb them today? If you say today at five o’clock, I say why not one o’clock?”) There might have been multiple reasons for this, including his Republican politics and his experience with communism in his native Hungary. But it’s also very possible that he reached this stance because of the game theory that he had helped invent. In the finitely repeated Prisoner’s Dilemma, it is optimal to defect from the very first stage. Taking the model too literally could have led him to think this was the right solution to prevent the destruction of the US. https://www.mdpi.com/2073-4336/5/1/53 has more discussion of this point. In this way, the math could have led him to a sense of conviction in his prior beliefs that he shouldn’t have possessed. 65. Scott Says: NO #62: Sorry, I don’t believe that absent a link. 🙂 66. Craig Says: Scott #63: If the assumptions used to derive the Black-Scholes formula are correct, then it captures the value of the option. This is mathematically provable. What people are willing to pay determines the price but not its value. 67. Michael M Says: I think EA does have one major flaw; it places a lot of faith in our market system. I think EA does not work if one chooses an “immoral” career path — it does not absolve responsibility for making a judgment about the systemic impacts of certain industries. For example, if you work in oil & gas and make 1M in salary, you would not be paid this if you did not provide > 1M in profits for the company. More harm may be produced than you are able to mitigate with your salary. One can argue that, if not for the EA in this role, another would take their place and just spend it on luxuries. I don’t think this is essentially correct.
If enough smart people decide not to work in a particular industry, the harm that industry will be able to produce will be less. Another example is working in finance. Some of it can be considered moral, but it is a pretty difficult judgment to make, and might come down to the individual’s role itself. Is this role interpretable as steering the direction of investments into our future? (Maybe if the banks were minting “carbon coin”, though it’s unclear if that concept is workable.) Or is the role essentially extracting increments of wealth out of day traders (or Robinhooders) with less savvy? A lot of finance may be evil; but much of it may be simply morally gray. It’s just shifting money around. So, if you manage to make 10M at a hedge fund, this money was essentially just rearranged out of the economy without producing anything — then the contention is, are your charitable donations more important than whatever else the money would have been spent on? If the money comes from other executives, maybe. If the money comes from everyday people buying groceries, then it’s less clear. I mean, probably curing someone of a deadly disease overseas is better than a local person affording to eat healthy, from a strictly utilitarian stance, but it’s not a slam dunk. As some above posters said, the safer thing to do would be to work directly to produce things of value. If you make money for money’s sake, you could essentially just be rearranging the economy, and it may not necessarily be in a good way. Finally, if you have a boss, the profit you receive may enrich those above you — and the industry as a whole — and you have to consider whether that’s actually a good thing, too. 68. Raoul Ohio Says: Scott #32: Re “if cryptocurrency is 100% a scam, then a large fraction of global finance is likewise a scam.” Certainly true, and many other businesses are partly a scam. BUT — I think crypto is a LOT more of a scam than most other things, and has worse side effects to boot.
Crypto has nothing to offer other than enabling criminal activity. As an investment strategy it is worse than beanie babies, because when the crash hits you don’t even get to keep the beanie babies (whatever they are). And a large and growing fraction of the world’s power consumption goes into crypto mining. Etc. 69. SR Says: Scott #65: Someone archived her Tumblr blog before it got taken down https://caroline.milkyeggs.com/worldoptimization/index.html . If you search for your name, what NO said is roughly true, although that post was from a few years ago. I browsed just a bit of the blog, so maybe I’m missing something, but to be honest I felt pretty sympathetic to her afterwards. She seems like a deeply human, thoughtful, and interesting person with some misguided opinions. I hope everyone involved in the fraud is held to account, but I am now hoping that she wasn’t involved. 70. davidly Says: Seems to me the utility of the cryptobro’s malfeasance is to smear crypto. Not that that was his intent, but still. Is unintended utilitarianism a thing? If not, somebody should get that rolling so we can all get in on the ground floor. 71. oli cairns Says: Scott, he had some tacky house that cost 40M, was palling around with celebrities like Tom Brady and Bill Clinton, and was highly cynical about crypto in several interviews. Then it turns out he blew 10 billion of customer deposits, all the time lying about this. He is clearly psychopathic, and has done so much actual damage through really obviously immoral actions. If I said “yeah, Stalin made the world worse, sent a few people to gulags, but he clearly believed in communism and it’s really sad such a smart person did” then (1) it’s trivialising some terrible breaches of common-sense morality and (2) it’s naive to take at face value the ethical statements they make. It could be entirely cynical and they were actually pursuing fame and power.
The fact that you compare him to other people (yourself, James Damore) who did debatably trivial things wrong just suggests moral blindness. If someone steals and burns 10 billion then people should be angry. 72. Duyal Yolcu Says: Related to the Twitter thread mentioned by Ian Kos #44: https://twitter.com/breakingthemark/status/1591114435711492096 There was an apparent misunderstanding by SBF, Caroline Ellison, and others around Kelly bets (and one relevant to their behaviour, it seems!), and I think your mention of that criterion is at least suboptimal in its presentation in the same way. Kelly does not require fixing a utility function in the long run. It maximises the almost-sure growth rate of the money – in other words, the assumption is that the agent has the opportunity to make a series of many bets and decide on the fraction of their accessible resources to gamble each time. Then if MK and MN are the final amounts of money held by a Kelly bettor and some counterfactual non-Kelly bettor, the central justification for the Kelly criterion is P(MK >= MN) -> 1 and not E(log MK) >= E(log MN) as things go to infinity. The latter is true as well, but not Kelly’s ultimate motivation; and someone who makes infinitely many all-or-nothing bets will end up with 0 money with probability 1 (“gambler’s ruin”). 73. Scott Says: oli cairns #71: I feel bad that you see moral blindness here, given that the entire point of the post was to judge people for their acts, rather than for the acts it seems like they might commit based on what kind of people they are. And it seems obvious that SBF committed one or more historically terrible acts for which he’ll need to face justice. I think the crux of disagreement between you and me is that for once, I 100% agree with David Karger: It seems you’re missing the most obvious, normal, boring explanation for what happened here.
Smart idealist starts out with admirable goals, has early successes, becomes caught up in and corrupted by his own impressiveness, begins to cut ethical corners, gets caught. Or rather: the only thing I disagree with is that I was “missing” this! Something like the above does indeed strike me as the obvious story, so much so that I saw no need even to make it explicit. From all my reading these past few days, I found not the slightest reason to doubt that SBF went into this with the genuine intention to use his billions to save the world, and zero reason to believe he was cackling about everyone’s gullibility when he started Alameda. As David K. says, Occam’s Razor plus everything we know about human nature militate in favor of the “gradual corruption” account. Even the palling around with Bill Clinton and Tom and Gisele could easily be justified in utilitarian terms: achieving SBF’s goals would clearly require acquiring influence in mainstream politics and culture, so that’s exactly what he set out to do. One thing that could change my view of SBF’s motivations would be if it emerged that he was collecting a harem of supermodels the whole time. A “polycule” (whatever that is) with rationalist nerd girls doesn’t count. 😀 I talked about linear versus logarithmic utility functions simply because, if you like, I was curious about a different level of explanation than David K. was. The story of someone starting out with the best intentions and then getting gradually corrupted is as old as time. The new and interesting part here, it seems to me, is the explicitly stated utilitarian calculus that could, indeed, have convinced a person of conscience that as they went down the road to hell, they were actually acting in accord with their highest values. So it seemed of utmost interest to me to point out that a small tweak to the utilitarian calculus itself would plausibly have prevented SBF from—by his own account—“fucking up” so epically. 74. Scott Says: SR #69: Oh wow.
Writing 7 years ago, in the aftermath of the comment-171 affair, Caroline Ellison does indeed say that “guys like Scott Aaronson are the guys I want to date,” and also that Amanda Marcotte had provoked her into reading Quantum Computing Since Democritus as a way to demonstrate her support for me. And as you say, she shares lots of other open, heartfelt thoughts that seem calculated to make me sympathetic to her. So yeah, I do share your (remote?) hope that she’ll turn out not to have been complicit in fraud—or if she was, then merely in over her head rather than an instigator. And I hope she has excellent counsel right now. 75. Scott Says: I guess most of all, I find myself wishing that whatever cachet I may have had with Caroline Ellison or others at FTX, I could somehow have used it to prevent this calamity. Of course, even if they’d reached out to me (which they didn’t), I don’t know how I could possibly have known about their financial or corporate governance problems, but at least I could’ve talked to them about risk and expected utility. 76. MK Says: Scott #73: How do you justify the $40M penthouse, then? Or does this not count, since it’s already “SBF in his corrupted phase, so whatever”? >I found not the slightest reason to doubt that SBF went into this with the genuine intention to use his billions to save the world, and zero reason to believe he was cackling about everyone’s gullibility when he started Alameda I guess it depends on your priors (as always), but “not the slightest” and “zero reason” seem like very strong statements to me. IMHO, seeing a disaster like that, a good Bayesian should update in the direction of “scammer” at least a little bit, and possibly by much. IMHO we should exclude political donations (by anyone) from the ethical calculus here.
It’s basically impossible to tell which of these are “genuine belief that this option is net good” and which are just buying access and currying favor with whatever faction is gonna be in power (my prior is strongly on the latter). 77. Scott Says: MK #76: I don’t know real estate prices in the Bahamas, but a $40 million penthouse shared by 10 people, most of whom thought themselves billionaires before last week, seems … barely extravagant enough to raise an eyebrow? And by all accounts, SBF hardly slept there anyway, preferring the beanbag chair in his office. Even with far worse people than SBF — the Nazis, the Soviets — I’ve gotten further in understanding them by assuming the leaders (at least) actually believed the ideologies they claimed to believe, than that they were twirling their mustaches and laughing about it like in the cartoons. So until evidence emerges to the contrary, why wouldn’t I take the same approach here? 78. Michael Says: Scott #73: “From all my reading these past few days, I found not the slightest reason to doubt that SBF went into this with the genuine intention to use his billions to save the world, and zero reason to believe he was cackling about everyone’s gullibility when he started Alameda.” There can be a lot of conscious hypocrisy from people taking the moral high ground. We’ve seen all the religious figures that turned out to be anything but moral. Judge people by their actions, not their words. Their biggest donations by far were to the Democratic Party and organizations connected to their effective altruism movement. If a corrupt evangelical leader turned out to be making large donations to the Republican party and to religious organizations, no one would be surprised, and no one would be as certain of their noble intentions as you are towards those of SBF. People rarely devote their lives to making billions out of a sense of justice. It’s called greed, pure and simple.
Their actions are not explainable by any mathematical model gone awry. They wanted to make as much money as possible and things spiraled out of control. 79. Scott Says: Michael #78: The thing is, we can completely rule out the hypothesis that SBF just wanted to maximize the probability of being rich enough to spend the rest of his life in luxury, and then rationally pursued that goal. In FTX, he had a golden egg that he could’ve simply sat on for years. Instead he blew it all, for himself and his customers, by making what from a conventional economic perspective were insanely risky trades. This makes sense to me only under two hypotheses: (1) that he was irrational, or (2) that he was actually trying to maximize expected money regardless of risk, as he said over and over that he was—judging that (mistakenly, in my view!) to be the correct strategy if your concern is for the fate of the world rather than your own luxury. Note: These hypotheses are not mutually exclusive. 80. Sir Henry Says: Quantum computing could yet turn out to be a bigger scam (by dollars wasted) than FTX. The difference is maybe that one is propagated by physics professors while the other comes from Bachelor’s degree holders. 81. David Karger Says: SR #64 thanks for pointing out this analogy to communism, which I found very useful. And I think that the analogy can be carried further: just like utilitarianism, communism is a quite useful *theory* that can guide useful reflection about the state of the world and how to make it better. But problems emerge when people become so wedded to the theory that they forget that it is based on a *simplified model of reality*. They fail to notice/take account of the ways that things are more complicated in practice, and continue to rigidly follow the guidance of the theory even as practical concerns arise that oppose it. I spent many years as a theoretician; I *like* simplifying the world down to something analyzable.
But you have to recognize the limits of that. 82. mtamillow Says: You are not trying to get rich? Are you sure? This kind of reads like a call-out to SBF for a business partner. I think a lot of things about this, but this post seems to be all over the place. Do good? For whom? This altruistic mentality is rather obnoxious from anyone. People who are really altruistic don’t occupy themselves with policing, politics, and policy. Pushing policy is the force behind war, and war is certainly not altruistic. 83. MK Says: Scott #79: “retiring in luxury” was never a goal for this type of guy. The goal is always “I wanna be one of them big boys”. I wanna take on Elon, take on Zuck. With just 10 billion in fishy assets of uncertain liquidity, he was still in the “crypto nouveau riche” league, not the “big boys” league, and he knew it. Interpret his Biden donations and offers to Musk to chip in when buying Twitter accordingly. You might be predisposed not to view this as a plausible motivation since it’s rare in academia. But in business and politics, it’s very common and IMHO much more plausible than “followed noble ideas, went astray”. I’d offer to settle our discussion with a bet, but since we’re debating something as elusive as SBF’s psychology, it’d probably never be resolved. 84. Scott Says: Sir Henry #80: Quantum computing could yet turn out to be a bigger scam (by dollars wasted) than FTX. The difference is maybe that one is propagated by physics professors while the other comes from Bachelor’s degree holders. Hey, if it works, we know QC will enable at least one useful application: breaking existing public-key crypto and thereby stealing Bitcoin! 🙂 Whereas if there’s a deep reason why QC doesn’t work, then that’s a revolution in physics. Either way, it’s ultimately a question about the physical world, which I trust to make basic sense, rather than about social and economic forces, which I don’t.
This blog has, I modestly submit, been a world leader in calling out QC hype and scams for 15 years. But the above would seem like the most obvious difference between QC and a purely speculative asset. 85. dankane Says: Scott #32: “if cryptocurrency is 100% a scam, then a large fraction of global finance is likewise a scam.” How exactly does this follow? Global finance does push a lot of money around in somewhat speculative ways, yes, but it also produces actual useful products like funding startups and mortgages and letting people save for retirement, and it seems at the very least not-obvious that these useful products would be achievable without the speculation. Cryptocurrency, on the other hand, as far as I can tell has huge amounts of speculation but has so far mostly failed to deliver socially useful products (though I guess it has managed to deliver some socially deleterious products). 86. dankane Says: Insisting that risk-neutrality for charitable endeavors be dropped (at least when the endeavors are at the scale of at most millions of dollars rather than tens of billions) seems like a mistake. This would necessitate telling people like me that I ought not participate in charitable lotteries, for example, which seems a net negative. 87. Scott Says: mtamillow #82: You are not trying to get rich? Are you sure? This kind of reads like a call-out to SBF for a business partner. Believe me, among my numerous emotions over the past few days, you can clearly detect relief that I didn’t enter into any sort of relationship with SBF, involving him funding my research or anything else. I feel terrible for the many worthy organizations that were depending on him and I hope other funding sources can be found to let them continue their work. (Speaking of which, for any rich person reading this blog: right now is a once-in-a-lifetime chance to be the Savior of Effective Altruism!)
Anyway, if I were trying to get rich, presumably the trajectory of my life would’ve looked completely different, involving a lot more investing and startup founding and a lot fewer quantum query complexity papers submitted to STOC. 🙂 88. Scott Says: dankane #85: I guess it ultimately comes down to the exact definition of “scam”! The CDOs and other complicated products traded by hedge funds, for example, which played such a central role in the 2008 meltdown, didn’t seem to me like they had socially compelling reasons to exist, but maybe I’m wrong. From the very beginning till today, I suppose the central case for crypto being “socially useful” is that it provides an easy way to move money around the world even when governments would like to make it difficult … both for better (e.g., funding dissidents in repressive regimes) and for worse (e.g., funding the repressive regimes themselves). Whether someone judges the good or bad to predominate will of course partly depend on their politics! 89. Scott Says: dankane #86: Oh come on. With a charitable lottery, the downside risks (if any) seem essentially trivial. With SBF, by contrast, the moral problem was that he took actions that he presumably judged to have positive expected utility for the world, but that also had severe downside risk for the very charitable organizations that he wanted to help, not to mention for FTX’s customers. And he didn’t ask their permission first. And the severe downside risk has now been realized. We very understandably don’t let people steal, even with the sincere intention of investing the stolen loot in a “can’t-miss” stock and then repaying the victims double. 90. dankane Says: While I don’t feel like I know enough about SBF to accuse him of making pledges insincerely, I also don’t think we should be handing out EA credits for people *promising* to donate large amounts of money. 
His contributions to EA should be judged based on what he actually got around to contributing before everything fell apart. 91. dankane Says: Scott #89: But that’s the point! The problem with SBF’s actions wasn’t that they were justified by risk-neutrality. The problem was that he was only evaluating risks in terms of the amount of money donated, not in terms of reputational damage (and damage due to making plans based on expected money that didn’t materialize) to the movement or collateral damage to FTX’s users. If the issue were *actually* risk-neutrality, charitable lotteries *would* be a problem. 92. dankane Says: Scott #88: Mortgage-backed derivatives had a great reason to exist. Packaging a bunch of mortgages together is a reasonable way to reduce risk, which is one of the major goals of the financial industry. The issue in 2008 was that people started getting way way too overconfident about just how safe these investments were. My understanding was that there were some bad actors but that mostly it was a lot of people who should have known better making huge miscalculations. But my point is that most traditional financial products are actually backed by something with clear value. Crypto is backed by basically nothing other than the idea that if we all decide that this is valuable then we have a convenient way to transfer value around. US dollars are at the very least backed by the necessity of having them to pay US taxes. 93. SR Says: David Karger #81: I completely agree with you. I think that, especially for those with a math or physics background, there is a real thrill in starting with a simple model and extrapolating to the point of drawing wondrous and unexpected conclusions about the real world. I think it takes time (at least, it took me plenty of time) to build up the appropriate amount of skepticism toward certain types of grand claims. 94. Scott Says: dankane #91: The problem with SBF’s actions wasn’t that they were justified by risk-neutrality.
The problem was that he was only evaluating risks in terms of the amount of money donated, not in terms of reputational damage (and damage due to making plans based on expected money that didn’t materialize) to the movement or collateral damage to FTX’s users. We’ll have to wait for the details to come out, but it’s totally unclear to me that this was the issue. SBF might well have been perfectly aware of all the collateral damage risks—and yet judged them to be acceptable, given the enormous upside potential as well as the key assumption of utility linear in money. That would arguably be even worse if true. 95. dankane Says: Scott #94: Are you arguing that: A) SBF may well have misjudged the expected outcomes and, despite trying to account for these factors, still thought that the expected outcomes were optimal (with the calculations done using risk-neutrality in terms of monetary charitable donations) despite this not being actually true. B) It was *actually* the case that SBF’s actions did produce large expected utility using risk-neutrality in terms of monetary charitable donations, but because of the poor bookkeeping of risk-neutrality, this was actually bad. C) SBF’s actions were actually good. [And I acknowledge that we might well not be sure at this point which of A, B or C is true, but I want to distinguish which case you are trying to argue is possible here.] But ignoring SBF, if you want to advocate that utility for charitable contributions should be calculated by applying some concave function to money donated, doesn’t this imply that charitable lotteries are bad? Like, you say that you would have wanted to convince SBF to weight donated money logarithmically. How do you support doing this but not also support telling people considering charitable lotteries to weight donated money logarithmically? 96.
Scott Says: dankane #95: My position is closest to B), although A) might be true also—that is to say, SBF may have improperly used linear expected utility, and the downside may have been so much worse than he thought that he shouldn’t have taken the risk even then (but I’m not sure about the latter). Can you explain exactly how a charitable lottery works, so that I can understand why they carry any significant risks that would engage these sorts of considerations in the first place, rather than just being a fun way to raise charity money? 97. dankane Says: Re risk-neutrality: Here’s a thought experiment: Suppose that at the point where SBF’s expectation of his final profits was exactly 1B, someone came to him with the following offer: “I’ll donate 1B to your charity of choice now if you continue to run your fund exactly as you would have otherwise (making the same risky bets with other peoples’ money and so on), but in the end you give any profits you end up with to me instead of a charity.” If SBF had accepted this deal, would it have substantially changed the morality of his actions? If your answer is “no”, the problem clearly wasn’t linearity, at least in terms of money donated. 98. dankane Says: Scott #96: A charitable lottery is where, instead of 100 people each spending 1 hour figuring out where best to donate an individual 1,000 donation, they pool their money, and the winner gets to spend 100 hours (or 10, or whatever) figuring out the best way to donate the full 100,000. If I participate in a charitable lottery, the amount of money that gets directed to my cause of choice on average stays the same, but the variance is much much larger. The potential advantage is that, without committing any more expected time, I get to ensure that any donated money is donated to substantially better-researched causes.
However, if you think that I need to log-weight the amount of money that I get to direct to my preferred causes when doing utility calculations, participating in a charitable lottery is clearly inferior to just donating my money directly. 99. Amir Safavi Says: Scott #75 You’re focusing on a fairly minor technical point. IMO the issue wasn’t a miscalculation or a lack of proper understanding of probability theory. It was a broken value system. Perhaps more importantly, you shouldn’t underestimate the impact of professors on students, and their moral responsibility. SBF is (literally) a child of elite educational institutions. Students (in particular the nerdy ones who may at least initially idolize a young hotshot at MIT like yourself) form ideas and value systems during that critical time. I think that there is a direct line to be drawn between professors publicly kowtowing to wealthy donors without regard for the source of their funds, their moral standing, and their contribution to society (think of the sordid details of MIT/Epstein)—and things like this. [I am not implying that you personally share blame for this. Just that actions of people in your position have a disproportionate impact on students’ worldviews. It’s an awesome power and we should keep it in mind.] 100. Michael Says: Scott #79: Very wealthy people often keep trying to accumulate more money. Think of Trump & family. No amount is ever enough; the desire to be ever-wealthier is all-consuming. I don’t really get why you’re so convinced he did everything with the best intentions. Effective altruism to me is just a modern-day replacement for religion, guiding principles to live by. Religious people can and often do succumb to temptation and knowingly do things that are wrong, that go against their beliefs. To me, that’s what it looks like happened here. Greed took over… the irresistible urge to gamble away client funds in the hope of making even more. You are sparring over how his mathematical models might have been flawed.
This is the mind of a math or CS professor… SBF was a guy who spent his hours playing video games. There’s a video circulating of his girlfriend saying they didn’t use math beyond arithmetic. While it’s kind of quaint how divorced you are from typical human behavior, I seriously doubt this is a case of subtle flaws in complicated mathematical models. 101. JimV Says: “Mortgage backed derivatives had a great reason to exist. Packaging a bunch of mortgages together is a reasonable way to reduce risk, which is one of the major goals of the financial industry. The issue in 2008 was that people started getting way way too overconfident about just how safe these investments were.” My understanding, from reading “The Big Short” and other sources, was that banks were giving mortgages to people without much scrutiny or caring about the risks of default, because they could package them as mortgage-backed securities, mix them with some good mortgages, get the packages rated AAA by captured ratings agencies, then sell them and pass the risk on to someone else. The whole point was to make the end purchaser over-confident, while on the front end encouraging people to buy houses they couldn’t afford. A financial product which encouraged such behavior, to the extent that it was practiced all over the country, was empirically a bad idea and should not have been allowed. A law passed in the 1990’s that prohibited banks mixing banking with investment schemes would have prevented it, but it had been repealed by Republicans. In the turbine business I worked in, many young engineers had what they thought might be great ideas, but were told by much older and more experienced engineers who had been lifetime employees, “We tried that in the 1950’s. It didn’t work then, and it won’t work now.” That was what the 1990’s law was trying to tell people.
Despite all this experience, inevitably new things were tried, and in a business in which large power-generation turbines sold for up to 50 million, most older engineers had made their “million-dollar mistake”. I sympathize with SBF’s predicament to this extent: in his business apparently there were no older, more experienced people over him to rein in his youthful enthusiasm and over-reach. In the absence of experience, we proceed mostly by trial and error. 102. SR Says: It’s also sobering how, on most other corners of the internet, no one seems to care about actually figuring out what went wrong. Instead, this incident acts as a Rorschach test where the alt-right are blaming Jews, the populist right are blaming globalists/the deep state/the Democratic party, progressives are blaming power-hungry billionaires, humanities majors are blaming weird STEM nerds, and EA critics are taking the opportunity to loudly say “I told you so!”. I did criticize EA in my comments above, but the nature of the bad-faith criticism almost makes me want to defend them instead… 103. Scott Says: Michael #100: I don’t really get why you’re so convinced he did everything with the best intentions. Effective altruism to me is just a modern-day replacement for religion, guiding principles to live by. Religious people can and often do succumb to temptation and knowingly do things that are wrong, that go against their beliefs. To me, that’s what it looks like happened here. Aha! Throughout recorded history, it seems to me that the majority of the very worst things religious people have done were not because they succumbed to temptation and went against their beliefs, but on the contrary, because they actually followed their beliefs as best they could … no matter how many innocents they had to harm to do it. Is it really so hard to imagine that this could be another such case? 104. 
Scott Says: SR #102: It’s also sobering how, on most other corners of the internet, no one seems to care about actually figuring out what went wrong. Instead, this incident acts as a Rorschach test where the alt-right are blaming Jews, the populist right are blaming globalists/the deep state/the Democratic party, progressives are blaming power-hungry billionaires, humanities majors are blaming weird STEM nerds, and EA critics are taking the opportunity to loudly say “I told you so!”. Exactly! You figured out the motivation for this post. 🙂 105. Scott Says: Amir Safavi #99: You’re focusing a fairly minor technical point. IMO the issue wasn’t a miscalculation and a proper understanding of probability theory. It was a broken value system. Michael #100: You are sparring over how his mathematical models might have been flawed. This is the mind of a math or CS professor… See, but my entire point was that linear versus concave utilities is not some abstruse technical issue! It only superficially seems that way. In reality, it’s about as profound a question of human values as there is. Or rather: it’s a way of taking a profound question of human values, and stating it in a way that compiles in a STEM person’s head. You can disagree if you want, but please don’t accuse me of trying to deflect attention from a profound human question to a “merely technical” question, when the whole point was to equate the two. 106. Scott Says: Incidentally, on watching FTX’s Super Bowl ad with Larry David, which has notoriously acquired new meanings: With hindsight, what the wheel, coffee, democracy, the lightbulb, and portable music had over using FTX.com to buy cryptocurrency was precisely that the case for them didn’t rest so heavily on their being new. 🙂 107. 
Tu Says: Scott, A few comments that may provide some perspective and assuage some of the guilt you are feeling for not preventing this from happening via some conversation-you-could-have-had-with-SBF in the hallway at MIT in 2014. First, a little background on me. I studied the same thing as SBF in undergrad at the same time, but at a different school that you have heard of. I am one year younger than Sam. When I graduated from college, I worked as a trader at a competitor of Jane Street, doing the same job as Sam. I am now in graduate school, studying average-case complexity (thanks largely to this blog), but I spent 5 years working as a trader. I provide this background because I think this post is the first post in the history of this blog where I can provide some context and insight (due to experience and nothing else) that you may not have been able to provide for yourself. Here goes. A disclaimer. While I am not personally friends with Sam, we had mutual friends in college and then professionally afterwards. I have followed him closely over the past 2 years or so. Broadly speaking, I enjoyed his candor in interviews, his non-preachy tone about crypto, and his general cynicism and sense of humor. I think he had great energy and the ability to speak intelligently about a fairly broad number of issues related to financial markets. I consider myself pro-SBF, whatever that means. On why you should feel no guilt. First, Sam (and Caroline) worked at Jane Street. An essential part of the training program at Jane Street, and indeed any prop trading firm that you have (or have not) heard of, is the study of bankroll management. Both were exposed to the virtues of Kelly betting (or fractional Kelly) and the risks associated with overbetting with respect to your bankroll. Both had not only the opportunity but also the professional obligation to understand why one is better than the other.
I would only like to point out that a Kelly-style strategy outperforms any materially different strategy over the long run with probability 1, so to entertain the idea that there is a choice between “linear expected utility” (or whatever Sam was calling guaranteeing ruin) and Kelly betting is to give the former too much credit. I must say that I am surprised now reading his tweets and quotes advocating for a style of risk-taking that loses with probability 1. I feel like if one is going to orient their entire life and all ethical decisions around one theory, they might spend more than 30 minutes ironing out some of the details of how the theory works. Which brings me to my next point. When I was working in trading I was surrounded by people obsessed with games, strategy, gambling. Many of them were world-class or excellent players of one thing or another (backgammon, bridge, poker, Dota, chess). Something that great game players have in common, and a skill that is absolutely required to succeed in a zero-sum-game type profession, is respect for the truth. I don’t exactly go around shouting this from the rooftops, but this is the thing that I remember the most fondly about my time trading, and the thing that I like the most about my current academic environment. From what I understand of the situation at FTX (which I don’t have a problem saying is quite a bit), Sam does not respect the truth. From his extremely dishonest and factually inaccurate tweets as the ship was sinking, to his shamefully misleading balance sheet that he shared in an attempt to procure additional emergency funding, he has demonstrated that he has no problem lying, and has no interest in understanding how or why he failed. Right now we are looking at something like (Enron + MF Global)^k, where k >= 1.
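To make the probability-1 claim above concrete, here is a quick Python sketch (toy numbers of my own, not anything from Alameda’s actual books): for a repeated even-money bet won with probability p, your bankroll’s long-run growth rate is the expected log-return per bet, and the Kelly fraction f* = 2p − 1 maximizes it. Stake much more than f* and the growth rate goes negative, so the bankroll tends to zero with probability 1 even though every individual bet has positive expectation:

```python
import math

def growth_rate(p, f):
    """Expected log-growth per even-money bet, staking fraction f of the
    bankroll with win probability p."""
    if f >= 1.0:
        return float("-inf")  # betting everything: one loss is total ruin
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

p = 0.51
kelly = 2 * p - 1  # f* = 0.02 for an even-money bet at p = 0.51
for f in (kelly, 0.25, 0.50, 1.00):
    print(f"f = {f:.2f}: growth rate {growth_rate(p, f):+.5f} per bet")
```

With p = 0.51, only stakes near f* = 0.02 have positive growth; at f = 0.25 or f = 0.50 the rate is already negative, which is exactly the “loses with probability 1 over the long run” regime being described.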
Maybe Sam was an idealistic kid trying in earnest to maximize the utility of the future by (incorrectly) betting 2*Kelly on a bunch of (according to him) positive-expectation wagers, but the reason FTX failed was not that Alameda’s bankroll hit zero (which would have happened no matter what under his bet-sizing, even if he did have a genuine edge, which he didn’t). FTX failed due to Sam’s lack of respect for the truth, which was that he was stealing billions of dollars. Lastly, I would just chime in on a few things: 1) Is this level of dishonesty that extreme relative to what is encountered in the traditional financial industry? Yes. 2) Please don’t give this guy credit for sleeping on a beanbag chair sometimes. He usually slept in the penthouse. I am not criticizing him for having a penthouse, but I think we can all stop giving him the Nobel prize for driving a Corolla or whatever. 108. Scott Says: Tu #107: Thank you so much for the insights! I’m inclined to defer to your more expert judgments absent good reasons not to. Basically, despite all the flak I took for it, you think I was correct to put my finger on SBF’s preference for expected-utility maximization over Kelly betting as close to the heart of what went wrong here … but I should have gone further still, and explained that preference by an underlying disregard for the truth? 109. Tu Says: Scott #108: “Basically, despite all the flak I took for it, you think I was correct to put my finger on SBF’s preference for expected-utility maximization over Kelly betting as close to the heart of what went wrong here” Yes. “but I should have gone further still, and explained that preference by an underlying disregard for the truth?” I don’t know if “disregard” is the right word, though maybe it is… I think your original post and framing was more or less fine.
From my perspective, I was confounded by someone orienting their life around a hard-line interpretation of utilitarianism, working in a profession that consisted of making positive-expectation wagers on a finite (though large in his case) bankroll, who was exposed to the theory of bankroll management, and still publicly (on Twitter, in interviews) advocated for a style of wagering that is worse*. I am forced to conclude that some combination of his success up to that point, an over-conviction in his own beliefs, and possibly a lack of curiosity prevented him from respecting the merits of the arguments against what he had been doing. The quantity and audacity of lies that ensued also caused me to update my priors on how much “the truth” matters to him. *To clarify for readers: is the style of wagering advocated by SBF really worse? It is just a different utility function, after all, right? In an effort to avoid a long boring post, let me put it this way. Over the long run, a Kelly-type strategy will dominate Sam’s strategy with probability 1. So if you permit yourself to accept a strategy where you win with probability zero and lose with probability 1, but that negligible set has some really awesome outcomes, then Sam’s strategy is for you. If you are a normal person, for whom the words “the future” refer to possible futures with positive probability, it is not. 110. William Gasarch Says: 1) Do you have any examples of technologies that started out as quasi-scams but were later legit? 2) Another reason to not have your youthful goal in life be to try to get rich to give to worthy causes: – So much of getting rich is random: timing, luck, etc. play a rather larger part than most people seem to realize. So if Scott A (hmmm- either Scott A) had tried to get mega rich from a young age, it might have worked, but it might not have, and in either case there might not be lessons to learn from it. 111.
Amir Safavi Says: Scott #105: > See, but my entire point was that linear versus concave utilities is not some abstruse technical issue! It only superficially seems that way. In reality, it’s about as profound a question of human values as there is. Or rather: it’s a way of taking a profound question of human values, and stating it in a way that compiles in a STEM person’s head. I don’t grasp the profundity. It’s a useful rule for understanding how to win some probabilistic games under a highly restrictive set of assumptions and uses a set of concepts and definitions that don’t neatly translate to the real world. I don’t think I’m making a controversial statement that you cannot reduce all ethical questions to concave vs. linear utility functions. Also as a STEM nerd, I can tell you that I understand the golden rule just fine & that could have kept SBF out of trouble. SBF promised people their money was safe, actively attracted money from unsophisticated middle class investors (e.g. Super Bowl ads), and generally attempted to become rich by any means possible. The basis of his own moral justification for this was that he would spend the money “doing good”. He justified this under the auspices of “earning to give”, with the verb “earn” defined as any action that increased his net worth. In my view it’s not useful/possible to couch these moral failures into a single mathematical statement. The math will always tend to miss the big picture. Speaking of missing the big picture, the reason I keep responding to this post is that after about 16 years of reading your blog, this is the first time I am completely befuddled and I am trying to understand how or why. I am completely at a loss as to how you could think about this situation in the way that you do. Even your most controversial posts have connected with me in the past. But this one I find completely crazy. Have I changed? Have you changed? Have I just not been reading your posts carefully enough all these years?? 
(I admit I started reading less frequently in the last two years.) I read the post again, and I still can’t believe your view that this situation is somewhat equivalent to the parallel universe where the Apollo mission fails. Also, let’s not make this about how people are going to attack STEM nerds. The real victims are the many normal people who have lost their life savings because they believed an MIT graduate’s lies about 8%/year interest. There are people who are now unemployed, who may need to move their families to different countries—an enormous amount of suffering, etc., etc. Do elite institutions have some responsibility? Our institutions do take credit for the good that our graduates do. We should ask questions about the Holmeses and SBFs, and about what we could have done differently. An extra course in probability theory isn’t the answer. [There will of course be a lot of bad-faith criticism of our institutions and STEM nerds, and we can ignore those, but we shouldn’t pre-emptively cloister up.] 112. Vanessa Kosoy Says: Utility as a function of money is the wrong lens to analyze this, IMO. The value of money is purely instrumental, and the real utility lies in other things. In the case of SBF, his most important philanthropy IMO was supporting research into existential risks from AI. However, this area was close to saturation: what was missing was not money, but talented people to work on the problems. Which, again, is not a statement about the utility of money in the abstract but about the particular state of the world we found ourselves in. Therefore, the value of additional money was relatively small. On the other hand, the downside risk of tarnishing the reputation of the entire field (and thereby harming talent flow into it, and harming the extent to which key actors take it seriously) was enormous. Therefore, SBF’s gamble was a terrible decision. But it’s actually worse than that.
If you think of your actions as logically correlated with the actions of other people (as you should: https://arxiv.org/abs/1710.05060), then, when considering breaking an ethical norm, you should imagine everyone (similar to you) breaking norms. If the consequences of that are clearly terrible (as they are), then don’t do it. 113. Vanessa Kosoy Says: “One thing that could change my view of SBF’s motivations, would be if it emerged that he was collecting a harem of supermodels the whole time. A “polycule” (whatever that is) with rationalist nerd girls doesn’t count.” So, rationalist nerd girls are not as good as supermodels?! MEN *eyeroll* 114. MK Says: Scott #108, Tu #109: > I was correct to put my finger on SBF’s preference for expected-utility maximization over Kelly betting as close to the heart of what went wrong here. I admit I’m lost. Isn’t the heart of what went wrong here that he gambled with people’s deposits (= ran an unregulated bank while claiming to be only running an exchange), with the details of HOW he gambled (which is where the utility/betting theory comes into play) being secondary? Wasting investor money by reckless betting is very different morally (and more understandable) than assuring customers you will never touch their deposits and then going wild with them. Isn’t this the crux of the moral failure here, or am I missing something? 115. Simplifying the Geometry of Conscience : Stephen E. Arnold @ Beyond Search Says: […] those early players look less like minnows and more like clueless paramecia with math skills. “Sam Bankman-Fried and the Geometry of Conscience” is an interesting essay. However, it is difficult for a simple and somewhat dull person like […] 116. Indiabro Says: The EA movement is the result of cargo-cult science meeting mediocre individuals with a God complex. I love how Mr. MacAskill spent so much time thinking about existential risks 1,000 years into the future when he could not predict minor risks to his own orgs.
Face it, Scott: most of EA, Yudkowsky-tier rationalism, and longtermism is filled with mediocre people. Some of these mediocrities have a God complex. Some are simply padding their resumes and trying to connect with influential people who will help their careers. And given that most of these people would identify as utilitarians, I am not surprised. I think these communities have a higher share of psychopaths than the average population, which explains people like SBF. Somebody should study this. 117. Tu Says: MK #113 “I admit I’m lost. Isn’t the heart of what went wrong here that he gambled with people’s deposits (= ran an unregulated bank while claiming to be only running an exchange), with the details of HOW he gambled (which is where the utility/betting theory comes into play) being secondary?” Your interpretation of what happened is 100 percent accurate. Clarifying a bit the order in which things appear to have occurred: Alameda lost a ton of money first, and funds were moved from FTX to help them meet margin calls second. I only make this point because it allows people to imagine a path where Sam didn’t have to decide whether or not he should steal billions of dollars “for the greater good.” I think I should clarify that when I affirmed “the heart of what went wrong” I was referring to Alameda’s insolvency—i.e., how Sam failed at risk management and trading, before stealing billions of dollars from FTX to cover this up. Perhaps I was not articulate enough in my first two comments, but the point I am trying to make is that even if we try to be maximally generous to Sam—assume he broke no laws and was only acting in good faith, sincerely trying to do the most good for the future—we should still be mad at him. What motivated me to comment is that there does seem to be an impulse among EA-sympathetic people (of which I guess I am one) to entertain the idea that what Sam did was right in expectation, and this was just a bad run of the dice.
My point is that he was doing a bad job, even ignoring the things that will put him in prison for a decade or so. Basically if you want to find a branch of the wavefunction where this worked out, you have to look really really hard, and by 6 months from now those branches will be gone too. I don’t know about you, but if we are looking for vanishingly unlikely branches of the wavefunction, I have more exciting things to look for…. 118. Scott Says: Amir Safavi #111 and others: Let me try one more time. SBF made two closely interrelated errors, one “technical” and the other “moral.” The “technical” error was to be way, way too risk-neutral (as seen in his favoring linear utilities over Kelly betting). He actually didn’t mind the gambler’s ruin (!!), repeatedly pulling the lever for a 51% chance of doubling his holdings and a 49% chance of a wipeout, and we’re now witnessing the result of that. The “moral” error was to push his (by my lights, insane) risk-neutrality onto his customers and others, without their understanding or consent. In his mind, swapping out the customers’ deposits for FTT tokens of “equal value,” which would then be swapped back for Bitcoin or dollars or whatever when a customer made a withdrawal, surely wasn’t “stealing” or “fraud”; it was just back-end accounting. It carried some risk, sure, but not much worse than anything else in crypto, and why had the customers gotten into crypto in the first place, if not to take positive-expected-value gambles just like he was? Why do I say that these two errors were closely interrelated? Amir, you mentioned the Golden Rule. In SBF’s mind, I believe, he was obeying the Golden Rule. He wasn’t saddling his customers with any risk that he’d have been unwilling to accept himself. 
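The lever arithmetic is easy to check with a toy calculation (my own sketch; the 51/49 numbers are the hypothetical from this thread, not FTX’s actual odds): after n pulls of a 51%-double / 49%-wipeout bet, expected wealth is 1.02^n, which grows without bound, while the probability of not having been wiped out is 0.51^n, which goes to zero:

```python
import random

def pull_lever(n, rng):
    """Bet the whole bankroll n times: 51% chance it doubles, 49% wipeout."""
    wealth = 1.0
    for _ in range(n):
        if rng.random() < 0.51:
            wealth *= 2
        else:
            return 0.0  # one loss and it's all gone
    return wealth

rng = random.Random(0)
trials = 200_000
survivors = sum(pull_lever(20, rng) > 0 for _ in range(trials))
print(f"expected wealth after 20 pulls: {1.02**20:.2f}")  # keeps growing with n
print(f"survival probability: {0.51**20:.2e}")            # shrinks to zero with n
print(f"empirical survivors in {trials} trials: {survivors}")
```

The expectation is carried entirely by the vanishing sliver of branches where every pull wins, which is the sense in which “almost all” universes end in ruin.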
What he missed was that (1) his customers were way more risk-averse than he was, and (2) in most universes (indeed, in “100% of them in the limit of large N,” as previous commenters have explained), his customers would be right and he would be wrong. If I still haven’t made myself clear — or if anyone still accuses me of defending or excusing SBF’s actions (!) — then I regret that we’re probably at an impasse that further comments won’t resolve. 119. davidly Says: JimV #101: “A law passed in the 1990’s that prohibited banks mixing banking with investment schemes would have prevented it, but it had been repealed by Republicans.” It was the Glass-Steagall Act of 1933 that prevented this, until Gramm-Leach-Bliley repealed it in 1999. To be clear, while it was in fact three Republicans who authored the bill, it was passed with very few ‘nay’ votes from either party and signed into law by Clinton, who continues to defend the repeal today as not having enabled what led up to 2008/09. 120. fred Says: One big issue with cryptocurrencies (and blockchain more generally) is that it’s inherently a maximally wasteful endeavor because, at least under proof-of-work, at any point in time it relies on using as much electricity and as many GPUs as possible (any efficiency gain will just be turned into more computations). Unlike computer graphics generation (for real-time VR or gaming), which eventually is going to reach some cap, i.e. hyper-realistic graphics (and can be made more and more efficient). Of course, in theory, crypto could be creating more value than what it wastes, but that’s not clear at all, unlike putting those resources towards AI computation, which gives a more directly observable benefit. Maybe there’s a way to do “green” crypto/blockchain, like putting all its computation gadgets in orbit or on the moon, relying purely on solar energy, and then beaming back the data (but that’s not cheap and will cap efficiency). 121.
fred Says: For those interested, there was a Sam Harris podcast with SBF https://www.samharris.org/podcasts/making-sense-episodes/271-earning-to-give 122. manorba Says: Scott, if i may: yes, you made your position very clear, routinely (you’re actually beginning to sound like my mother, repeating the same thing to exhaustion 😉 ). It’s just that in this day and age, even only entertaining a discussion about something or someone is viewed as an endorsement. Good luck with that 🙂 123. mtamillow Says: Scott #87, Allow me to put things in perspective. You are only 41 years old, so your life path hardly determines that you could not pivot into the startup environment. Furthermore, you have grown a name for yourself in a very speculative industry. To most outsiders, the lines blur between Quantum, Coding, Crypto, Data Science, AI, etc., and I think for smart dedicated people it is easy enough to transition between these, so why not? OpenAI is very startupy; it’s like a baby step into the startup world actually. The side where the same people who work in that culture may claim a sense of altruism. As they develop as a company, I expect them to insert themselves into policy, and then perhaps policing. As for politics, my definition is that policy becomes politics when it is assigned to a group (likely because a policy favors one group at the expense of another). As our culture frets over biased models and interjects favorable policy for some groups into new models, they insert politics into automated policy. Now that is a risky business. SBF will get off with a slap on the wrist. This post makes it seem like you are advocating for him in some way… That this was an honest mistake, even admirable, with respect to the overall situation perhaps. I did run a startup, and I kept a distance from the many, many crypto firms that would come my way. I still do; even the good crypto ideas I hear have an unwieldy price tag.
And matching their price tags with my own seems dishonest. SBF had no problem with that. I see 2 reasons for that. 1) He is young and has no concept of money and its actual scarcity. He probably never had to work a real (hard) job in his life. 2) He thought his love affair with making the world a better place would balance his dishonesty of overpricing his own contributions. – However, others did not take that risk, so he should be effectively punished. He probably won’t be, and he might work with you if you like. After this affair, I would want a friend like SBF to guide me through the process of dipping and dodging my way to inscrutability when being accused of massive fraud. Watch as he gets an actual jail sentence of less than 18 months. Maybe even none at all. And then his lawyers find a way to get the records completely disappeared by arguing all these occurred when he was naive, so they should be expunged. Then the public vaguely remembers him in 5 years, and suddenly the historical rewrite of a narrative emerges that he is a crypto billionaire who made a successful exit from FTX. Somehow money comes to him from investors (perhaps he stashed it away to invest in himself at a future date? in his mind, he already earned it so it is his, right?) and he is once again on top, funding the Democratic Party and lobbying for policies that make him richer so that he can make the world a better place. (Which came first, Sam’s money or his good political deeds?) My wife says I am cynical. Probably true, but watch this prediction closely. If SBF ever succeeds in life from this point on, it will be because he was assisted in unconscionable ways by powerful people. There are over 300 million people in this country. Many, many other talented individuals take subsidiary roles in the world because they do not have the assistance of powerful people.
SBF should be forced into that, serve his jail time (10-30 years hopefully!), make 100-200K a year at most for his life, and disappear into obscurity, never to be heard from again. His frauds should exclude him from many roles. But I doubt they will. 124. Tu Says: Scott #117: “The “moral” error was to push his (by my lights, insane) risk-neutrality onto his customers and others, without their understanding or consent. In his mind, swapping out the customers’ deposits for FTT tokens of “equal value,” which would then be swapped back for Bitcoin or dollars or whatever when a customer made a withdrawal, surely wasn’t “stealing” or “fraud”; it was just back-end accounting.” I think I depart a bit from you on this point. It may be that “in his mind” this was the case, but I think the Southern District will rightfully disagree. The only explanations I can find for the way he marked the value of his positions (even until minutes before filing for Chapter 11) are delusion or dishonesty. To the extent that he will be able to defend himself, it will depend on his ability to defend valuing the FTT he accepted as collateral for a loan at about 5x what everyone else in the world would have agreed it was worth. I think the moral failing is the dishonesty and, judged using the yardstick of conventional finance post-2008: fraud. “It carried some risk, sure, but not much worse than anything else in crypto, and why had the customers gotten into crypto in the first place, if not to take positive-expected-value gambles just like he was?” I think the context that FTX marketed itself as an exchange, and the fact that SBF marketed himself as someone with expertise in traditional HFT, matters a bit when we ask “what did the customers expect,” and is part of the reason for increased anger at him vs some true-believer-idiot-guy like Do Kwon. 125. MK Says: Scott, Tu – thanks for the clarifications! I don’t think we’re in disagreement anymore.
I might have sounded nitpicky because I don’t want the central ethical (and, as we’ll soon see, legal) issue to be obfuscated by technicalities, and that issue is: SBF did stuff that normally results in prison time. Period. He stole, embezzled, or misappropriated his customers’ deposits. You can call that “pushing risk-neutrality onto his customers without their understanding or consent,” and it’s technically correct, but a tad circumspect for my taste.

126. dankane Says:

Tu #109: The Kelly-type strategy only wins with probability 1 if the scale on which you are risk-neutral is literally unbounded. I think that SBF *did* overestimate the amount of money that could be donated before one starts to see diminishing returns (on Twitter, he effectively put it at a trillion dollars), but even maximizing E[log(donated money + 1,000,000,000,000)] as SBF suggested doesn’t lose with probability 1 to maximizing E[log(donated money)]. It really is just a different utility function. And while I agree that the plus-one-trillion-dollars is probably too much (and log is probably not the correct function to use), any utility function that is substantially concave in terms of donated money at amounts less than a million dollars seems clearly wrong. Are you really saying that saving 100 people from malaria is exactly as good as a 50% chance of saving 10 and a 50% chance of saving 1000? What if a hundred different potential donors all decided that they’d rather deterministically save 100 people than take the risk to save 1000? Wouldn’t this almost certainly produce a worse outcome?

127. nadbor Says:

Scott #101: “He wasn’t saddling his customers with any risk that he’d have been unwilling to accept himself.”

I don’t think that’s true. The difference between him and his customers is that he exposed them to the downside but not to the upside. I don’t think that in the scenario where the bets pay off and he doubles his customers’ money he was going to share the profits with them.
I would guess that the best sort-of-plausible scenario would have been to quietly transfer the funds back to FTX and never talk about it again. But we can only speculate at this stage about the true motivations and when exactly it all went wrong. I can’t wait for the Michael Lewis book and the inevitable movie.

128. Scott Says:

dankane: Sorry, I just realized that I never replied to you about charity lotteries. I hadn’t heard of them, and I agree that they’re an interesting example to refine our intuitions about expected utility. But I come back to my original point that this is all about severe downside risk—i.e., that the fundamental reason to use nonlinear utility functions is that 0 is a hard boundary on the left and we want to minimize the probability of getting anywhere close to it. Charity lotteries, as you described them, don’t have the risk of totally wiping out ourselves, our customers, or the people we’re trying to help, and therefore linear utility ought to be fine for them.

129. Scott Says:

nadbor #125: Yeah, fine, I just meant that as long as crypto was going up and up, SBF could’ve told himself that even with the “slight risk” of FTT crashing, his customers’ trades were still positive expected utility for them, even wildly so. Of course, as soon as the music stops … then there’s a problem!

130. dankane Says:

Scott #126: Charitable lotteries (BTW, the actual term used for this is “donor lotteries”; I should have looked it up first) *do* risk $0 being directed by you towards charities. If you are actually trying to optimize E[log(dollars directed by you towards charities)], putting all of your charitable donations into a donor lottery produces negative-infinity utility. Like, if you really, seriously believe in using Kelly weighting for the amount of money you direct towards charities, putting all of your charitable donations into a donor lottery is worse than Hitler.
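[Editorial aside: dankane’s negative-infinity point can be checked numerically. A minimal sketch with entirely made-up numbers — a hypothetical 100-person donor lottery at $5,000 a head — shows that the lottery leaves the *expected* dollars you direct unchanged, while E[log(dollars directed by you)] becomes negative infinity, because the losing branch directs $0 and log(0) = −∞.]

```python
import math

# Hypothetical donor lottery (all numbers made up): 100 donors put in
# $5,000 each; with probability 1/100 you direct the whole $500,000 pot,
# otherwise you direct $0.
stake, donors = 5_000.0, 100
pot, p_win = stake * donors, 1 / donors

def log_u(x):
    # Log utility of "dollars directed by you"; log(0) is -infinity.
    return math.log(x) if x > 0 else float("-inf")

# Linear utility: the lottery leaves expected dollars-directed unchanged.
ev_lottery = p_win * pot          # ~$5,000, same as donating your stake
ev_donate = stake                 # $5,000

# Log utility: the losing branch contributes 0.99 * log(0) = -infinity.
elog_lottery = p_win * log_u(pot) + (1 - p_win) * log_u(0.0)
elog_donate = log_u(stake)

print(ev_lottery, ev_donate)      # equal in expectation
print(elog_lottery, elog_donate)  # -inf vs. a finite number
```

So a strict E[log] optimizer can never enter such a lottery, even though under linear utility it is a wash.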
The issue that you are pointing towards (not only failing to donate as much as we wanted, but also completely wiping out ourselves, our customers, and the organizations we are part of) is due to the entirely orthogonal issue of manically focusing on maximizing E[f(dollars donated)]. It doesn’t matter whether f is linear or concave: if you don’t take into account the outside damage that your actions might cause, you *will* make mistakes like SBF did.

131. Michael Says:

Scott #117: I get that you’re trying to describe the mentality of a Ponzi schemer in scientific terms, but I still think you’re being way too charitable towards him. For example you write: “In SBF’s mind, I believe, he was obeying the Golden Rule. He wasn’t saddling his customers with any risk that he’d have been unwilling to accept himself.” Obviously, he wouldn’t have been willing to accept such risks if it were his own money being used by some other investor. The coin-flip-until-you-win strategy is more acceptable if it’s someone else’s money and you have a chance to make lots of money. Literally zero people would allow their own money to be used by outsiders in this way.

132. dankane Says:

Here’s another thought experiment to get at the difference between utility-function calculation methodologies: would behavior like SBF’s have been acceptable at the margins? Let me clarify: suppose that SBF had already earned a billion dollars by other means and donated almost all of it to charity. He then uses the little he has left to start a tiny version of FTX, on a scale of only tens of millions of dollars rather than tens of billions. He treats this FTX just as recklessly as he did the real thing, though (just on a smaller scale). Is SBF in the right in this case? I think that the pretty clear answer here is that if you think SBF was wrong in reality, he was also wrong here.
However, if you think that SBF was *only* wrong because he was maximizing expected money rather than expected log money, I think you’ll find that this marginal SBF was actually in the right, because if X is only on the scale of tens of millions of dollars, maximizing E[log(X+1,000,000,000)] is very similar to maximizing E[X].

133. JimV Says:

davidly @ 118: Thanks for the correction. I apologize for the wrong information. Knowing my recollections are faulty these days, I should have checked via an Internet search. I agree that Clinton made some bad policy choices, I think due to his strategy to “triangulate” with Republican voters (if I recall that correctly). In lieu of a donations box at this website, I will pay a fine of 100 to the Ukrainian national bank, for their defense efforts.

134. Tu Says:

dankane #124: I really do not want to get into this argument in the S/O comments section, but will just empirically point out that the OG effective altruist (Jim Simons) bets Kelly, and it looks like he got quite a bit closer to donating a trillion USD than SBF ever will.

Say the number of interest is 1 trillion, meaning we have linear utility up to 1 trillion, and then maybe diminishing returns after. So our goal is to make a trillion dollars by betting on random crap (crypto coins, crypto startups, whatever). How should we proceed? I will admit that it is not obvious that Kelly betting is still better in this instance, but it is. Fundamentally, if you use linear utility you are allowing realizations where you win a googolplex dollars to significantly influence your decisions. If you are not satisfied, simulate it! Flip coins, betting half-Kelly and twice-Kelly, and see what happens!

EDIT: I am not advocating for a log-shaped utility function in terms of money donated. I am just saying that if your objective is to make money by making bets, and you want to make as much as possible (in order to donate it, buy Lambos, whatever), you should not bet more than Kelly.

135.
OhMyGoodness Says:

Scott #117: Great post and great responses, and of especial interest since a discussion of expectations relates to the brain’s fundamental role as an expectation engine. I agree that the ultimate responsibility rests with those who invested money in the scheme. The deep question is not why he (or others) committed horrible actions, but what process results in so many people buying into these schemes and acting in a manner opposed to simple logical analysis. Naive participants in games of chance soon understand that ignoring bankroll management leads to ruin. Why do people so readily participate in these delusions, in opposition to basic reasoning? Is it as simple as the pernicious influence of greed overriding executive function?

Systems operating in accordance with physical law in the physical world are not influenced by human BS. As systems like this increase in abstraction, their future behavior relies ever more heavily on human expectations, and on each individual’s expectation of others’ expectations. Bitcoin in this regard is at an extreme of abstraction, and so is heavily influenced by human BS impacting others’ expectations. Motivation is a qualia in the sense that no outside observer can determine with certainty the motivation of another. Reasonable belief about others’ motivations is a basis of our legal system, but certainty is not possible, except for the Shadow (who knows what evil lurks in the hearts of men).

136. Mitchell Porter Says:

The way I see it, there were three factors at work here: crypto, “earning to give,” and money in politics. A lot of his “giving” was to politicians – perhaps even most of it? And his mother was cofounder of a Silicon Valley super-PAC founded in 2019 to support Democrats. While the collapse of FTX is a blow to effective altruism, because the principals touted that philosophy, I think that in the slightly bigger picture this is about the crypto bubble and about crypto money in politics.

137.
dankane Says:

Tu #132: I think that the correct statement is something like if you expect to be presented with some infinite sequence of similar-looking, positive EV bets each of which allows you to invest as much money as you like (up to the amount of money that you currently have) and you want to minimize the expected time until you have accumulated {X amount of money, which is much more than you currently have}, the right strategy approximates Kelly betting.

I guess the argument you are trying to make is something along the lines of: SBF made the wrong tactical decision because he could have instead of investing (more than) everything he had into a venture with a good probability of wiping him out, he could instead have invested a smaller amount of his wealth in it so that if it failed, he could have tried again with another similar venture. While I guess that this is plausible, it comes with a lot of hidden assumptions like SBF’s ability to find similarly potentially profitable ventures indefinitely or that these ventures could have nearly the same kinds of potential payoffs without massive leveraging.

138. Tu Says:

I will just add that my experience in this comment section today has helped me understand what people find so irritating about effective altruists – the steps are something like this: 1) Prominent EA makes kind of out-there claim about how people should take risk. The subtext is that they have really thought this through and are just following the “math.” 2) You point out some issues with the reasoning. Point out that some other strategies might be better at achieving the goals of the EA. 3) You get accused of not caring about kids with malaria.

How about this – you make your bets, I will make mine, and we can see who donates more to malaria prevention in the next 10 years.

139.
Nick Drozd Says:

Scott #117: You keep saying that you aren’t defending him, but then you also say stuff like this:

> In his mind, swapping out the customers’ deposits for FTT tokens of “equal value,” which would then be swapped back for Bitcoin or dollars or whatever when a customer made a withdrawal, surely wasn’t “stealing” or “fraud”; it was just back-end accounting.

This sounds like something his defense attorney will eventually be saying to a jury. “In his mind” doesn’t matter; as I understand it, this just straightforwardly is fraud.

> It carried some risk, sure, but not much worse than anything else in crypto, and why had the customers gotten into crypto in the first place, if not to take positive-expected-value gambles just like he was?

Sure, a fool and his money are soon parted, and personally I don’t have much sympathy for the idiots who let Tom Brady talk them into falling for a get-rich-quick scheme. Nonetheless, fraud is fraud, and that’s all this is. Seriously, that’s all this is.

Other commenters have said it, but let me emphasize again the point that there is no cool STEM stuff here at all. This is not like your hypothetical failed Apollo mission, because that hypothetically could have resulted in something cool. This isn’t like Elon Musk taking credit for the hard work of the actual STEM workers, because in that case something cool has actually been produced. This isn’t even like Elizabeth Holmes, because her fraud was at least premised on the lie that something cool had been produced. No, in this case, nothing cool was even purported to have been done. Of course, that didn’t stop him from cultivating the aura of a real STEM nerd and letting people think there was cool STEM stuff going on. His persona was counterfeit, and like any counterfeit it degrades the value of the real thing. Thus genuine STEM nerds will take a hit to their reputations. He may as well have been wearing a Scott Aaronson mask!
Similarly, his generous political contributions have done damage to the reputation of the Democratic party, and that damage might turn out to be great. I’m sure the extra cash was helpful in staving off a fascist takeover in this past election cycle. But what are we going to say when Trump starts talking about how the Democrats took in all this money from a huge con artist? Obviously that would be supremely hypocritical coming from him, but it would also be the truth. This corruption could actually end up facilitating a fascist takeover down the road.

140. Tu Says:

All – sorry about the tone of my last comment. Something annoying had just happened to me in meatspace and I allowed it to spill into the comment section.

141. fred Says:

Sam Harris’ follow-up, with his take on the fiasco.

142. Scott Says:

Nick Drozd #137: Indeed, it won’t surprise me if SBF’s defense attorney makes a point similar to the one I made. But it also won’t surprise me if SBF is found guilty anyway, since you and I agree that his state of mind isn’t, or shouldn’t be, a valid defense! If it were a successful defense, then about the only reason I can imagine is that the legal regulations around banking—explicitly clarifying that anything remotely like what he did is wildly illegal and don’t dare try it—haven’t yet been extended to crypto exchanges. In which case, this situation hopefully underscores why such regulations ought to be extended to crypto as quickly as possible.

As for “coolness”: I mean, there were plenty of smart people who found Bitcoin cool a decade ago, and who find Ethereum’s proof-of-stake and zcash and smart contracts and the like cool today. On the other hand, you’re right that, notwithstanding the ridiculous Super Bowl commercial, FTX and Alameda don’t seem to have been innovators even within blockchain technology, but just tried to capitalize on prior innovations.
Still, if SBF had gone about things in a much more ethical, intelligent, and risk-averse way, it seems clear that he could’ve become another Jim Simons, another great financier-benefactor of causes valued by STEM nerds. So suffer those of us who would’ve rather liked that to mourn.

143. Tu Says:

dankane #135:

“I think that the correct statement is something like if you expect to be presented with some infinite sequence of similar-looking, positive EV bets each of which allows you to invest as much money as you like (up to the amount of money that you currently have) and you want to minimize the expected time until you have accumulated {X amount of money, which is much more than you currently have}, the right strategy approximates Kelly betting.”

Correct. This is one way of seeing the virtues of Kelly betting.

“I guess the argument you are trying to make is something along the lines of: SBF made the wrong tactical decision because he could have instead of investing (more than) everything he had into a venture with a good probability of wiping him out, he could instead have invested a smaller amount of his wealth in it so that if it failed, he could have tried again with another similar venture. While I guess that this is plausible, it comes with a lot of hidden assumptions like SBF’s ability to find similarly potentially profitable ventures indefinitely or that these ventures could have nearly the same kinds of potential payoffs without massive leveraging.”

This is basically exactly right. “Staying in the game” is an important precondition for making lots of money with positive probability. The argument that I am making is that Sam had a hedge fund. If you have a hedge fund, it is because you have confidence in your ability to find a stream of positive-expectation wagers. What hedge funds try to do is make money by making wagers. If you are a hedge fund, sizing your wagers correctly is of the utmost importance.
My argument is that the bet-sizing theory that Sam advocated for was the incorrect one, in theory and in practice. Why is it wrong? 1) It is dominated with probability 1 by another available sizing strategy. 2) It achieves ruin with probability 1. 3) The utility function Sam likes is stupid. What do I mean by stupid? I mean it has a very poor correspondence with actually curing malaria with any probability above machine error, or doing anything else good.

Now, you may (rightly) point out that these are asymptotic results, but if we are talking about generating a trillion dollars by making bets (and nothing else), you need to make a lot of wagers. One can gerrymander a game where there is another strategy that achieves some desired outcome (as Sam tried to do on Twitter), but this is a trivial and counterproductive exercise.

144. dankane Says:

Tu #141: OK. I will grant that something like Kelly sampling is right but only in the case that the goal is to run a hedge fund (or something like it) for an extended period of time. This does not apply to me making my donations through a donor lottery (and I apologize that I think I have been somewhat conflating your arguments with Scott’s more general “risk-neutrality in charitable giving is the root of all evil” argument). And I think it might not really apply to a lot of the people trying to amass vast fortunes. For example, I think the process of founding a startup may well be better modeled as making a small number of potentially huge bets, where the chance of winning before going bust remains non-trivial.

As for calling the utility function stupid, I think I might also need to object. I think in terms of preventing malaria deaths, we really should be optimizing the expected number of deaths prevented. I would rather take a one-in-a-million chance of preventing 2,000,000 deaths than deterministically prevent one, even though with the former bet the odds are overwhelming that it would accomplish nothing.
On the other hand, if a million people all had to make this choice, I think it becomes clear that everyone going with the high-variance option comes out better than everyone playing it safe.

145. Michael Says:

Scott #140: It sounds like you aren’t too familiar with these Ponzi and similar scams. For the first quote in Nick Drozd’s comment:

“In his mind, swapping out the customers’ deposits for FTT tokens of “equal value,” which would then be swapped back for Bitcoin or dollars or whatever when a customer made a withdrawal, surely wasn’t “stealing” or “fraud”; it was just back-end accounting.”

A defense attorney wouldn’t bother with this, since it would fail. It’s really a common joke to say that the schemer wasn’t stealing his clients’ money, he was just replacing it with IOUs or Enron call options that were just as valuable. I remember one case where the guy tried to pay back his debts in vast numbers of call options on his own company at some ridiculously high price.

As for the other comment:

“It carried some risk, sure, but not much worse than anything else in crypto, and why had the customers gotten into crypto in the first place, if not to take positive-expected-value gambles just like he was?”

The guy was promising an 8 percent return, which again is very familiar to those who have seen these scams before. It’s a cliché, basically: if they guarantee a high rate of return, it’s guaranteed to be a scam. So if someone does invest, he shouldn’t be surprised if he gets fleeced. Not really likely to hold up in court. If you watch “American Greed,” the Ponzi-type scammers tend to be similar to each other. You’re talking EA and scientific modeling of his behavior, but it’s really nothing as deep as that.

146. Scott Says:

Everyone: In this post and my later comments, I’ve tried to make sense of SBF’s downfall in a way that actually takes his stated values seriously, and that sees last week’s catastrophe as an understandable outgrowth of those values.
I submit that one can actually get surprisingly far in doing so, and that the fact that one can is interesting. Ironically, you’d think EA critics would love what I’m doing here … because, if SBF were just a straightforward scamming asshole, pretending to endorse Benthamite utilitarianism and EA since his teenage years as a deliberate long con to steal money from all the dupes, no reflection about the underlying philosophies themselves would then be called for! The only useful takeaway would then be: “try to get better at detecting scamming assholes!” And that’s not what I’m saying. I’m saying that serious reflection about the principles of utilitarianism does seem called for in the aftermath of this. Which is also what many of my critics say! They just refuse to take yes for an answer. 😀

Having said that, as I’ve read more about SBF—for example, in this Yahoo Finance piece, which quotes my former student and coauthor Adam Yedidia (who, I learned, tried to talk SBF out of getting into crypto trading, but later changed his mind and joined FTX)—it’s increasingly obvious that SBF’s personality differs from mine in crucial ways, even beyond questions of risk-tolerance and ethics. For starters: he found his academic work at MIT to be boring (!), and he almost never reads books. So, that does increase my probability that I have much less insight into how he thinks than I’d imagined I did, and conceivably he could’ve been laughing at all the rubes behind closed doors. Still, though, I’ve never heard of a con—pretending to believe principles that one secretly laughs at—that stretches all the way back to one’s childhood and most formative life experiences. So I continue to find something like David Karger’s gradual-corruption scenario much more plausible, with the addendum that the “corruption” was greased by specific utilitarian principles that (in my opinion) SBF ought never to have held in the first place.

147.
Promo Says:

To tell you the truth, I don’t care whether SBF sought to use his fortune for a “good cause” or not. The kid isn’t conventionally attractive by any measure. I’m sure young women ignored him at MIT, ignored him throughout his youth, until his millions and billions started to turn things around. When you’re facing a life of frustration and pain, when women ignore you to chase bad boys and chads, throughout your youth, when you’re so starved of sex that all you can think about is pinning down your female classmates and fucking them (and they would never ever let you do it, even though Chad in the frat does it every friday night), when you can’t get a single fuck or even a kiss and you’re deprived of pleasure and love and sex in the prime of your life—what THE FUCK is there for you except to use your god-given gifts of nerdiness to make tons and tons of money (building a social network or crypto or whatever else) and BUY that life that was stolen from you? How can you blame the kid? It’s the world to blame, and women. He did what he had to do to FUCK—every guy has to fuck. When women won’t FUCK the nerds, the nerds will FUCK the world just to get the money to make the girls fuck them. Women are to blame for every single one of these tech billionaire assholes (this guy, zuckerberg, elon musk, whatever).

148. Tu Says:

dankane #142: First, I just want to thank you for your patience with me throughout this conversation.

“OK. I will grant that something like Kelly sampling is right but only in the case that the goal is to run a hedge fund (or something like it) for an extended period of time.”

Yes – this is the context in which I am arguing for using the Kelly criterion. You will have to take my word for it, but I am often arguing with people in gambling forums against applying the Kelly criterion in situations where it does not apply. I am talking about this context because Alameda was a trading firm, and Sam owned it.
“As for calling the utility function stupid, I think I might also need to object.”

Stupid in the above context. I grant that it is not difficult to conceive of thought experiments where the Kelly criterion does not apply. Broadly speaking, what I am not doing is asking everyone to orient their lives around the principle of taking the log of the utility of some decision before taking the expectation. I am not saying that every thought experiment involving flipping coins needs to have a logarithm flying around somewhere in the solution. I am saying that if you are going to think about something in terms of placing bets, you should be careful about what your objective is, and understand the consequences of the strategy that you are calling optimal, before trying to sound profound on Twitter.

For what it is worth, and this will have to be the last thing that I say, I would also ask you to consider the example you gave of someone starting a startup, and whether linear utility is really better there. I don’t want to put words in Sam’s mouth, but the spirit of what he seemed to be gesturing at was that everyone is too risk-averse, walking around with their log-utility functions, and if we all just had linear ones (along with noble intentions to funnel our expected wealth to good causes), we would accelerate our path to a better future: end suffering faster, eradicate malaria faster, etc. Again, the answer here is: no. If we are trying to make a bunch of money (as individuals, or a group) and we concede that physical experience consists of sequential decisions made under uncertainty, and we are going to organize our existence around making decisions with personal financial consequences with the explicit goal of maximizing one pre-chosen utility function, then we should still bet half-Kelly.
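[Editorial aside: Tu’s “simulate it” suggestion (flip coins, bet half-Kelly and twice-Kelly) is easy to try. Below is a toy sketch with made-up parameters, not a model of anyone’s actual trading: repeated even-money bets that win with probability p = 0.6, for which the full-Kelly fraction is f* = 2p - 1 = 0.2.]

```python
import random

def simulate(fraction, p_win=0.6, rounds=1000, start=1.0):
    """Bet `fraction` of the bankroll each round on an even-money
    gamble won with probability p_win; return the final bankroll."""
    bankroll = start
    for _ in range(rounds):
        stake = fraction * bankroll
        bankroll += stake if random.random() < p_win else -stake
    return bankroll

random.seed(0)       # reproducible runs
kelly = 2 * 0.6 - 1  # full-Kelly fraction f* = 0.2

for label, f in [("half-Kelly", kelly / 2),
                 ("full Kelly", kelly),
                 ("twice Kelly", 2 * kelly),
                 ("all-in", 1.0)]:
    results = [simulate(f) for _ in range(200)]
    median = sorted(results)[len(results) // 2]
    busted = sum(r < 1e-9 for r in results) / len(results)
    print(f"{label:12s} median {median:12.4g}   busted {busted:.0%}")
```

Typically the half- and full-Kelly medians grow enormously, the twice-Kelly median shrinks below the starting bankroll (its expected log-growth is negative), and the all-in bettor is wiped out by the first loss — which is the substance of Tu’s points 1) and 2).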
I would also just ask: if we are justifying using linear utility by the fact that we are only really using this calculation for a few decisions in our entire life (like whether to start some company or whatever), why bother? The success or failure of such a venture will depend on thousands, millions of intermittent decisions that are made (under uncertainty, with the expectation of gain). So again, if we are debating which utility function to use in this regime (with the ultimate goal of giving the money away), Sam is wrong.

EDIT: Just one last thing, because I won’t be able to respond. I have avoided saying this because it feels kind of cheap, but now that Scott has acknowledged it I will just say it. When I said SBF doesn’t respect the truth, and used this Kelly criterion thing as an example, it was my way of pointing out that sometimes reading books is good.

149. Scott Says:

Promo #145: I’d normally delete such a comment, which reminds me of this summer’s troll attacks, but in this case I do want to say something. Whenever we see a young heterosexual nerdy guy make himself into a billionaire, yes, that’s one possible hypothesis as to his motivations. In this case, though, the hypothesis doesn’t seem to hold up well. For one thing, SBF never seems to have had any problems whatsoever, to put it mildly, with assertiveness or going after what he wants. For another, once more, many journalists profiled him before the fall, and not one described him using his wealth in that way. Obviously I’ll change my view if such accounts emerge later.

150. kris Says:

Hi Scott, I am mostly in agreement with Nick Drozd #137 and Amir Safavi #111 on this. I don’t think that SBF himself took his stated values particularly seriously. He very explicitly described his cryptocurrency activity as a Ponzi scheme in an interview with Matt Levine at Bloomberg.
I don’t have the actual link to the interview, but here is a YouTube video that has excerpts and also discusses it: https://www.youtube.com/watch?v=C6nAxiym9oc

Someone who says what he did there without any real self-awareness couldn’t have been all that serious about his stated morals beyond their convenience to him. He also spent a great deal of money paying off YouTube influencers to create the image you appear to be taking so seriously (the Corolla, and the “dorm,” which was actually a 60-acre estate in the Bahamas, etc.). He also paid off a lot of Democrats and lobbied Congress hard to pass laws favorable to him and his business, to the detriment of his competitors (not that this is all that unusual). What he was doing with FTX and Alameda was a case of robbing Peter to pay Paul, and all the stuff about utility functions is pretty much beside the point and does not change this. I am no mind reader, so I cannot say how he ended up this way, except to say that all criminals were young once, and probably were nice children too at one time. My sense is that all this was a big game to him, and he never really internalized the consequences of his actions, maybe because he never had to worry about things going seriously wrong with his life, given his background.

151. dankane Says:

Tu #145: I think you are still overselling the Kelly strategy. The Kelly strategy is correct if your primary method of accumulating wealth is by making a sequence of risky, positive-expected-value bets. The conservatism of Kelly is important there because if you lose too big, you will not have enough capital to rebuild quickly. This models a hedge fund fairly well, but I don’t think it models most people’s personal finances very well. For one, most people accumulate wealth primarily through work income, not compounding bets. A new college graduate asked to donate most of their life savings to their friend’s startup isn’t necessarily making a wrong decision.
Despite this bet likely losing most of their savings, investment returns on their savings aren’t how they were planning to make a living in the first place. For two, most people don’t have a lot of options to make huge bets with expected value better than an index fund. Now, people probably shouldn’t use linear utilities with their personal finances either, but this is because happiness is not linear in wealth.

I only support linear utilities when considering charitable donations (and then only as an approximation when bounded amounts of money are involved). This is because if money is being optimized by a group whose members are being offered independent bets, then each member (who doesn’t own a large fraction of the group’s total funds) making the linear, expected-money-maximizing bet actually approximates the Kelly criterion for the group better than each individual member using the Kelly criterion does.

You say: “If we are trying to make a bunch of money (as individuals, or a group) and we concede that physical experience consists of sequential decisions made under uncertainty, and we are going to organize our existence around making decisions with personal financial consequences with the explicit goal of maximizing one pre-chosen utility function, then we should still bet half-Kelly.”

This is false. Kelly is only the correct criterion if all bets that you are offered can be freely scaled to any level up to the amount of capital you currently have. This is not at all the case for most relevant decisions under uncertainty that most people make. They make decisions like: should I take job X or job Y? Should I go to college? Which health insurance should I buy? None of these are the kinds of multiplicative bets that suggest the Kelly criterion should be used. These are probably better modeled as additive bets. When faced with a string of additive bets rather than multiplicative ones, the best long-term strategy is in fact the expected value maximizing one.

152.
Mark Weitzman Says: Hi Scott: As a retired long-time professional high-limit poker player, let me offer another perspective. I have seen over the decades many talented professional poker players go broke. Why? Because when they are stuck, they go crazy; nothing matters but getting even. I have watched several high-limit poker players play for 5 days straight trying to get even, eventually being carried out on a stretcher when they became comatose. Something similar seems to have happened here: once they got behind on some bets, getting even was the priority, even if it meant using customer money and possibly going broke. As for the Kelly criterion: I explained to many poker players that if the criterion says to bet your edge, say 1%, the problem is that you don’t always know your edge accurately (the game may have gotten tougher, you might be cheated at times, etc.). And the mathematics establishes that if you bet twice your edge, you are certain to go broke if you keep playing. 153. Sir Henry Says: Scott 84: My last visit was during “quantum supremacy” and opinions can differ on what it means to call out hype. QC and crypto finance actually have a lot in common (beyond hype):
-New IT that could shake the world
-Possible fundamental flaws still undiscovered
-Investors/funders don’t understand the tech
-Uncertain use case
-Growing numbers of flimflam men
-Government looms large
This may seem like an unimportant detail to people who paint broadly, but Bitcoin (which shares many of the above attributes) was not implicated in the FTX scandal. 154. Tu Says: dankane #149: “I think you are still overselling the Kelly strategy.” I am not selling anything. I really want to re-emphasize here that I am not trying to get people to orient their lives around a utilitarian ethics based on fractional Kelly betting. I was trying to make a narrow point, which is that Sam demonstrated that he did not understand the subject of bankroll management well.
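Both Weitzman’s “twice your edge” warning and Tu’s point about bankroll management can be checked directly from the expected log-growth of an even-money bet. A minimal sketch (the 2% edge is an illustrative assumption, not anyone’s actual numbers):

```python
import math

def log_growth(f, p):
    """Expected log-growth per even-money bet when staking a
    fraction f of the bankroll, with win probability p."""
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

p = 0.51              # even-money bet with a 2% edge
kelly = 2 * p - 1     # "bet your edge": Kelly fraction = 0.02

print(log_growth(kelly, p))      # positive: the bankroll compounds
print(log_growth(2 * kelly, p))  # twice the edge: growth rate is (slightly) negative
```

With a non-positive growth rate, log-bankroll becomes a random walk with zero or negative drift, so continued play eventually hits any finite ruin threshold, which is the precise sense in which persistently over-betting your edge guarantees going broke.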
A secondary point would be that I think his advocacy for linear utility functions is really dumb and counterproductive to his stated aims, and people should not listen to it! What I was trying to get at is that if we play the utilitarian game that Sam wanted to (which I don’t, because it’s not fun, and really stupid for 1000 reasons I don’t have time to list here) and conceptualize all of our business decisions as wagers, how much risk should we take? All I am saying is that I prefer the strategy that wins with probability 1, not the strategy that loses with probability 1; that is it! “When faced with a string of additive bets rather than multiplicative ones, the best long-term strategy is in fact the expected value maximizing one.” I don’t really know what you mean by this, or what decisions we are faced with when confronted with additive bets (meaning if we cannot risk as much as we wish to, how does our utility function even matter?). Anyways, all I am trying to say is that strategies that rely on zero-probability events to achieve their high EV are not as good as strategies that achieve higher EV with probability 1. I don’t have time to be more eloquent, or to get into a debate about how to best model one’s professional life as a sequence of bets. I would just conclude that most sane people do not characterize a Kelly-type strategy as conservative. As someone who makes ends meet gambling, my personal net worth fluctuates enough on a weekly basis to make my friends throw up. It is the most aggressive possible strategy that is not idiotic. 155. dankane Says: Tu #152: A series of additive bets: You are presented with a series of bets, each with an associated random variable X; if you accept the bet, X is added to your net worth. This is as opposed to multiplicative bets, where you can pick any number c > 0 up to [your current net worth]/sup(|X|) and add c*X to your net worth.
I’m not sure why it is hard to see why your utility function might affect your choices in such a situation. If your utility is linear in money, you would never (unless forced) buy insurance, because insurance companies make sure to get positive expected value. However, if your utility is a concave (e.g., logarithmic) function of money, you might want to. The difference between strategies only becomes irrelevant if you actually have an infinite horizon and expect to keep having positive-EV bets, so that eventually your wealth will go to infinity and the only question is how fast. I don’t think that most people see their finances in terms of how quickly they can get infinite money. 156. SR Says: Here is an interesting analysis of the erratic financial decisions taken by Alameda/FTX that is being widely shared on Twitter: https://milkyeggs.com/?p=175 . The article makes a seemingly plausible case that SBF’s drug use might have also contributed to his decisions. 157. Scott Says: SR #154: Wow. (See also Scott Alexander’s new post on “the psychopharmacology of FTX.”) This, combined with a preexisting ideology of extreme risk-tolerance, certainly sounds like part of the story of FTX’s ethical slide to the bottom. 158. Scott Says: Incidentally, regarding the question, which several commenters raised, of whether quantum computing is also a scam that’s about to collapse: Back in 2017 or so, I took a consulting call from a cryptocurrency trader who wanted to understand how vulnerable Bitcoin was to future attack by QC, and whether there was a case for new coins that made use of quantum-resistant cryptosystems. At some point I was telling the client about various claims for near-term QC that I thought were fundamentally dishonest. The client laughed and said, “listen Scott, you need to understand that the least honest person in quantum computing is still surely much more honest than the most honest person in cryptocurrency.” I didn’t think that was true then, and I don’t think it’s true now.
But the quote stuck with me, and last week’s events are at any rate not evidence against it. 🙂 159. Tu Says: dankane #153: “A series of additive bets: You are presented with a series of bets each with an associated random variable X and if you accept the bet, X is added to your net worth. This is as opposed to multiplicative bets where you can pick any number c > 0 up to [your current net worth]/sup(|X|) and add c*X to your net worth.” Ok, I understand! So we are flipping coins, and adding the outcome (some random variable, which could be positive or negative) to our net worth. We have no choice about how much we risk on each flip, but can merely decide whether we wish to flip each coin as it comes. Imagine we have perfect knowledge of the distribution of payoffs for every coin that streams in. If I am understanding your position correctly, you think that our strategy should be: play every coin with a positive expectation, and consider nothing else about the distribution of payoffs. A few questions: what do we do when we cannot afford the losses associated with a coin? Do we still get to flip? When my net worth is negative, do I still get to play? (We know Sam’s answer! LOL) I don’t agree that this series of “additive bets” is a more faithful representation of human existence whatsoever, but even if I accept it, it is still easy to construct examples where the strategy you suggest loses with probability 1 minus machine error, and where, when you win, you win so much that even if your utility function is “pretty straight” you have more money than you know what to do with. The broad point I am trying to make here is that I do not understand why people think using less information (just one number: the expectation) is better than using more information: information about the distribution of outcomes, theorems about what strategies will win with what probability, etc.
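Tu’s kind of construction is easy to simulate. A minimal sketch (all numbers are made up for illustration): a repeatable bet that pays 2-to-1 with a 60% win probability. Staking the whole bankroll maximizes expected value every round, yet goes bust with probability rapidly approaching 1, while the Kelly fraction grows wealth with near-certainty:

```python
import random

random.seed(0)

P_WIN, B = 0.6, 2.0                      # win probability; a win pays B times the stake
KELLY = (B * P_WIN - (1 - P_WIN)) / B    # Kelly fraction = 0.4
ROUNDS, TRIALS = 50, 2000

def final_bankroll(frac):
    """Wealth after ROUNDS bets, staking `frac` of current wealth each time."""
    w = 1.0
    for _ in range(ROUNDS):
        stake = frac * w
        w += B * stake if random.random() < P_WIN else -stake
    return w

all_in = [final_bankroll(1.0) for _ in range(TRIALS)]    # linear-utility EV maximizer
kelly = [final_bankroll(KELLY) for _ in range(TRIALS)]   # Kelly bettor

print(sum(w > 0 for w in all_in))    # survivors among all-in bettors: essentially always 0
print(sorted(kelly)[TRIALS // 2])    # median Kelly bankroll: grows by orders of magnitude
```

The all-in strategy still has the astronomically higher expected value (1.8^50 per unit staked), but all of it is concentrated on the probability-0.6^50 event of winning every single round, which is exactly the complaint about strategies whose EV relies on events of essentially zero probability.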
Aimed at Sam: I don’t think we should try to boil the value of collective human experience down to one stupid number, but if we are going to act like we have some great insight into how to forge a path towards a better (positive-probability) future, we should at least do it well. 160. fred Says: Scott #147 161. manorba Says: Scott #156 Says: Incidentally: regarding the question, which several commenters raised, of whether quantum computing is also a scam that’s about to collapse… Well, QC and the research being done are surely not a scam, and that’s regardless of whether we ever get to a real-world working quantum computer, because good research always brings new insight. But how would you define all those “what QC can do for your enterprise: a presentation” startups? 162. Tu Says: SR #154: That blog post is a truly astonishing read with many delightful tidbits and anecdotes. I just want to chime in on a few things. The basic story, that Alameda was not competitive as an arbitrageur and had to resort to punting (making it not much different from your cousin telling you to buy doge at Thanksgiving), is basically right. I am skeptical of the estimate of trading-software error costing them $1 billion; to lose that amount in an incident looks to other participants like, well, a Tesla trying to park itself, but then accelerating into a preschool and exploding. If this had happened, we would be reading about it in Bloomberg. Am I willing to chalk up a billion in losses to general technological carelessness and an accumulation of mistakes? Maybe… but a billion is actually quite a bit of money to lose this way (with nobody noticing), even today. Last: it would be the ultimate testament to Alameda’s incompetence as traders if their participation in FTX’s liquidation program was a net loser for them. Even I am a bit skeptical of this claim. Being in this program is a trader’s dream (provided you have some risk-management skills).
It is impossible to lose money unless you are completely reckless. 163. OhMyGoodness Says: If there are alien caretakers watching over Earth then they should start medicating the entire globe quickly. Now we have the Euro-Daesh working their way through the art museums of Europe, in addition to billions of dollars of vapor value in the US, in addition to the armed conflicts, in addition to lab-generated viruses, in addition to successful politicians with striking mental deficiencies. If life imitates art then 12 Monkeys must be the go-by. 164. Scott Says: Yesterday, Geoff Pennington spoke in UT’s high-energy physics seminar about new work in quantum gravity that crucially relies on the classification of von Neumann algebras. Roughly, as I learned, a “type I algebra” (which includes all finite-dimensional algebras) lets you take a standard matrix trace for all the bounded linear operators in it; a “type II algebra” lets you define a “tracial operator” that’s not the matrix trace but has many of the same properties; and a “type III algebra” (which includes what shows up in quantum field theory) doesn’t even let you define a tracial operator. At lunch, there was lots of discussion about this, but also (of course) about the collapse of FTX. And that’s when I had my key insight. Crypto exchanges like FTX just need to rebrand themselves as “type III banks”: the type of bank where your money can disappear without a trace. 165. OhMyGoodness Says: Scott #162 Very funny insight. Oh, and by the way: congratulations, the UN says there are 8 billion of us wonderful souls now on Earth. The sky’s the limit. Artemis is on its way, but I doubt Luna has sufficient carrying capacity to offload many there. 166. Christopher David King Says: I think Scott has learned what an exit scam is XD. Money doesn’t disappear. > To say it one more time, if cryptocurrency is 100% a scam, then a large fraction of global finance is likewise a scam. Well sure, but 99% of the *profits* come from scams.
Currencies (traditional or crypto) are means of exchange, not something you profit from. Legit activities like mining are designed to *break even*; the only way to profit is to convince someone that you’re a “crypto-bank” (doesn’t that defeat the point of crypto?) and then, after they deposit their money, to get “hacked”. I guess in this case the guy was at least honest afterwards that he had committed a “moral hazard” (https://en.wikipedia.org/wiki/Moral_hazard) instead of making up a hacking event to explain it. If he is truly EA, I’m guessing the money will magically appear in EA funds (no hate towards EA itself). 167. MK Says: Tu #160 – can you elaborate a bit about the last paragraph on FTX liquidation? I’m curious. 168. asdf Says: “And I don’t consent to that. I don’t wish to be held accountable for the misdeeds of my doppelgängers in parallel universes.” But, but… aren’t those doppelgängers’ actions a product of the world they live in? And even if their own choices are good, what about the actions of other people’s doppelgängers? For example, Trump is running for president again, and while his probability P of re-election in our universe might be fairly low, it is nonzero, so there are parallel universes in which he will be re-elected, no matter how hard your doppelgängers there try to prevent it. What’s worse, we might do something today to decrease P in our universe (let’s say launch a successful effort in Congress to indict Trump) that increases P in other universes where the effort fails. By slightly lowering the chance of another Trump presidency in our universe, we might be condemning untold trillions of parallel universes to at least one more Trump administration!
Surely before trying to do anything about Trump, we should compute something like a Feynman path integral averaging the resulting P across the whole multiverse, and try to minimize that average even if it increases our own P, instead of irresponsibly externalizing our Trump problem to universes other than ours. This seems like an important area of research to pursue, combining quantum theory and politics. We can call the paper “New Directions in Liberal Guilt”. 😉 169. SR Says: Scott #155: I should have guessed that the other Scott would write about this 🙂 Thanks for the pointer, that was an interesting read. I also thought that the Yahoo Finance article you linked to above did a good job of sketching SBF’s life story. Tu #160: Thank you for sharing your expertise! I don’t have too much of a background in finance, so that is all helpful to know in order to better gauge the accuracy of the blog post. Scott #162: That made me chuckle! Not to derail the conversation too much, but I have been curious for a while about how Type III factors “pay rent” in QFT. It seems like usually physicists assume that continuous theories are indistinguishable from those that are discretized on a sufficiently fine lattice within a sufficiently large box, which I believe would be of Type I. The most-cited reference that I could find on Type III factors in QFT is https://arxiv.org/abs/math-ph/0411058, which analyzes a thought experiment due to Fermi. One starts with two atoms, a and b, separated by distance R. To start with, a is in its ground state and b is in an excited state. One would normally think that it would take time > R/c for b to emit radiation, transition into its ground state, and for a to absorb this radiation and become excited. The paper argues, though, that on analyticity grounds, it is impossible for P(a is excited at time t) to be identically 0 for all times < R/c and then nonzero for later times. 
The author argues that this implies Type I factors are inadequate for discussing this physical situation. I am a little puzzled by the above as it seems to me that there is a more benign explanation. Vacuum fluctuations would seem to allow a to become excited instantaneously with low probability, and this would be consistent with no-FTL communication. I'm no expert, so I'm curious whether this is a reasonable resolution/if anyone else has any thoughts. (Scott, hope this is an okay topic of discussion. If not, sorry, and happy to discuss this elsewhere at a different time.) 170. Scott Says: asdf #166: You don’t need a Feynman path integral, since presumably in any morally relevant case the branches are decoherent. But yes, there’s clearly a counterfactual or probabilistic element to morality. Why do we punish drunk drivers, even if they don’t actually hit anyone? Presumably, because we judge the branches of the wavefunction in which they did hit someone to have too large of an amplitude. Now, when it comes to the sneerers on social media, my impression is that they condemn me, neither for hurting anyone, nor for intending to hurt anyone, nor for endangering anyone, nor for advocating or celebrating harm … but simply for having been born with a personality that they believe could have caused me to hurt someone in a different universe. And I’ve carefully considered their charge and concluded that, no, I can’t meet their counterfactual standard, but I don’t expect anyone else to meet it either, and indeed very few do. It’s as if they want to enforce the preemptive morality of Minority Report—except that, instead of underwater children with supernatural precognition powers, they just have social media mobs, whose predictive accuracy leaves rather more to be desired. 😀 171. Scott Says: fred #139: I just listened to the Sam Harris podcast and … wow. Harris is, as elsewhere, a beacon of moral clarity and good epistemology. 
I don’t retract what I wrote—I was most interested in different aspects of the “SBF problem,” like how improperly risk-neutral utilitarianism could have greased SBF’s slide to hell—but I like what Harris said better than I like what I wrote. 172. SR Says: SBF agreed to a somewhat bizarre interview with the (EA affiliated) journalist Kelsey Piper: https://www.vox.com/future-perfect/23462333/sam-bankman-fried-ftx-cryptocurrency-effective-altruism-crypto-bahamas-philanthropy 173. BBA Says: As a fellow privileged white male Jewish Mathcamp alum… gee it sucks to be a victim of affinity fraud, doesn’t it? Personally I was skeptical at first, but when a company I used to work for, led by someone I knew and trusted, entered a joint venture with FTX, I started thinking: if [REDACTED] likes FTX, maybe they’re not a total scam like the other crypto companies. Hahahaha. But make no mistake, SBF is just a fraud, though maybe so delusional he doesn’t realize that’s what he is. And we’re all just suckers. Nothing more to it than that. 174. SR Says: Scott, you used the wrong hyperlink in your update to the post. Also, it seems that SBF didn’t expect that Kelsey would use his comments in a news article, which might explain his openness — https://twitter.com/SBF_FTX/status/1593014934207881218 (although that could also be a lie…). 175. Scott Says: SR #172: Thanks, fixed! If SBF didn’t know to specify what’s on background when talking to a journalist, I’m inclined to say that’s 100% on him … I’m as naïve as they come and even I know that! 176. Terebinth Says: The Harris blog just sounded horrible to me. To my non-US ear, the last couple of minutes sounded like a 1970s televangelist telling his flock to empathize with the family of a fallen pastor so they can reject him without giving up the faith. I have no idea why the EA movement feels so threatened by the FTX scandal, unless they actually feel morally culpable for enabling SBF. 177. 
Scott Says: Terebinth #174: Given how central SBF was to the organized EA movement, how could EAs not feel horrified and ashamed? How could they not be soul-searching, and trying to draw the correct lessons to prevent such a thing from ever happening again? They’re acting exactly the way I would in their situation, which is more than I can say about SBF. 178. Michael Says: This has been such a crazy and interesting thread. I’ve never even thought about feeling guilty for the actions of someone because he’s of the same demographic as myself, or about trying to mathematically model the actions of a Ponzi schemer. By the way, SBF is still being SBF. Today he tweeted “17) I know you’ve all seen this, but here’s where things stand today, roughly speaking. [LOTS OF CAVEATS, ETC.] Liquid: -8b Semi: +5.5b Illiquid: +3.5b” So he’s claiming his investment groups/stuff is worth 1 billion, and he is also continuing to promise to make things right for his customers. In other words, he’s denying it’s over for him as a financier and trying to get back into the game. Is this part of any mathematical model? 179. Tu Says: MK #165: “can you elaborate a bit about the last paragraph on FTX liquidation? I’m curious.” Happily. First, in case others are reading, some background and jargon. It is especially important to understand that whenever people talk about the “price” of some financial instrument or security (you could argue the price of anything at all, but I’ll leave that aside for now), there are really two prices:
1) The price that people are willing to pay for that thing (the bid)
2) The price that people are willing to sell that thing for (the offer)
Note that at any given moment on the NYSE, if you look at the market for Ford’s stock, you will see people bidding, and people offering.
The highest bid is called the “best bid”; the lowest offer is the “best offer.” By and large, firms that are referred to as “high-frequency trading firms,” “proprietary trading firms,” or “algorithmic trading firms” sit there all day long, bidding at the best bid and offering at the best offer. They hope that someone will come along and decide that they want to sell bitcoin, and do so by placing a “market order” (selling at the price of the best bid), and then that shortly thereafter someone else will place a market order to buy, buying that same bitcoin from them at the price of the best offer. All of this is an enormous oversimplification, and does not touch upon the work or risk involved with running this kind of strategy. I am merely trying to convince you that in the world of high-frequency trading, buying “the bid” (meaning paying the bid price for something) is a good thing to do, and profitable in the long run. Think about it this way: as soon as you have bought “the bid”, you have made money in expectation. It is up to you how you want to manage the risk that you now have (you could do nothing, you could offer at a more aggressive price, you could hedge in a highly correlated asset, and a million other things), but the point is that as soon as you get that “fill” (slang for having an order filled), you have made money, and you are happy. Before continuing, I should note that in equity markets, futures markets, options markets, etc., all of these firms compete fiercely to buy bids and sell offers. Buying bids and selling offers is hard to do because it is so good. OK, so why should Alameda have made money if they were in FTX’s liquidator program? Sacrificing some precision for brevity here, and also ignoring some other major aspects that make being a liquidator highly desirable, here is why: every time someone gets liquidated, they are selling what they have to you below the best bid, or buying from you above the best offer.
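As a back-of-envelope illustration of why those liquidation fills are so valuable (all prices and sizes below are hypothetical, chosen only to make the arithmetic visible):

```python
# Toy order book for some instrument: best bid and best offer.
best_bid, best_offer = 99.0, 101.0
mid = (best_bid + best_offer) / 2        # a common "fair value" mark

# A margin-called trader is forced out of 1000 units; the exchange
# routes the whole block to its liquidator at a price through the bid.
liquidation_price = 97.0                 # below the best bid
size = 1000

# Marked at mid, the liquidator is up the moment the fill arrives
# (marking conservatively at the bid still gives (99 - 97) * 1000 = 2000):
instant_edge = (mid - liquidation_price) * size
print(instant_edge)                      # 3000.0
```

The liquidator still carries inventory risk until the position is hedged or unwound, which is why risk management matters, but the fill itself arrives with the expectation already in its favor.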
In other words, someone has a large position that they don’t want to sell, but are forced to due to margin requirements. Since selling it in the open market is kind of a pain (possibly disruptive due to the size), FTX sells it all at one price to one of their liquidators. What price? A great price, is the answer. Now, there is still substantial risk and work involved with being in a liquidator program, and I am not saying that a liquidator would make money on all trades, but it is certainly an incredibly lucrative opportunity that others would compete for. 180. asdf Says: “I’ve never even thought about feeling guilty for the actions of someone because he’s of the same demographic as myself” Michael #176, unless you’re also a crypto billionaire, you’re not in SBF’s demographic along any axis that matters. 182.
Ilya Zakharevich Says: dankane #142: “I would rather take a 1 in a million chance of preventing 2,000,000 deaths than deterministically prevent one, even though with the former bet the odds are overwhelming that it would accomplish nothing. On the other hand, if a million people all had to make this choice, I think it becomes clear that everyone going with the high variance option comes out better than everyone playing it safe.” I have very little experience with Kelly stuff, but it seems to me that what you write is 99.95% wrong, and it is completely misleading. The mathematical reason for this conclusion of mine is that for your estimate to make sense, these 1,000,000 people should first find 1,000,000 independent betting strategies. It may be an interesting socio-mathematical problem: how many “degrees of randomness” can a crowd of 10⁶ people actually choose? (This assumes that they act without knowledge of what other people do.) My gut feeling is that the answer would grow not even as √(size of crowd), but closer to “logarithmically”. So, assuming they achieve 10 “degrees of randomness”, with a million benefactors there is a 1 in 100,000 chance to prevent 200,000,000,000 deaths (most of which is going to be wasted?!). Most probably, in practice this is going to be diluted to, say, saving 100,000,000 lives. That gives an expectation ℰ of saving 1,000 lives, vs. 1,000,000 lives with “playing it safe”. 183. OhMyGoodness Says: SR #254 MAO-B inhibitors do often have profound impacts on human behavior in regards to gambling/risk-taking and sex-seeking behaviors. There is a surprising amount of genetic diversity that impacts the individual’s production of endogenous MAO-B. Selegiline (the drug in question) has also been associated with increased longevity in rodent studies, and so, although overseeing a cock-up for the ages, this rat may enjoy increased longevity.
These patches deliver much higher blood levels than the typical oral dosage for Parkinson’s (7×) and are usually prescribed for depression. The totally minor problem with this is increased rates of suicide in the depressed patients so treated. This thread has profound implications for me in clarifying the horrific impact our decisions have in other universes. There is now an altruistic case for always making the worst decision in our universe to protect others. This may well have been the true motivation for SBF. His assumption of the role of a supervillain here protected those we think too little of elsewhere. 184. OhMyGoodness Says: SR #254 I just remembered that one of the metabolites of Selegiline is methamphetamine. I don’t know if their doses were sufficient to test positive for meth on standard urine tests, but then I doubt they had much in the way of a comprehensive drug-testing policy. 185. Aspect Says: In your update, you say that making billions to donate to charity is good. At the intention level, it sounds OK, I agree. However, it seems extremely shortsighted to me. As I see it, there are at least a couple of dimensions that are worth considering (somebody may have already pointed this out in the comments; sorry if I missed it). The personal-development dimension, first of all, appears to be the most crucial and unpredictable. If one fixes the goal of getting billions, then I’m not sure they can guarantee they are the same person with the same intentions by the end of the journey. It is likely that they will have to compromise their morals to get it done. It’s hard to predict how that will affect one’s character. People will have to be exposed to situations with stakes far removed from the stakes of everyday experience, and until someone experiences this, chances are that they don’t know the limits of their integrity. In fact, by the end, even the goalposts might ‘conveniently’ move.
Having this kind of goal just naively assumes ‘personality stationarity’, which is a very strong assumption. The second dimension is that wealth, as a means to an end, seems like an implicit trolley problem. It’s inevitable that harm will have to be caused one way or another. Then the question becomes whether that harm can be worth the payoff in the end. Given that we generally don’t have complete knowledge of the full range of tradeoffs that a billionaire experiences, it’s hard to make an informed calculation about whether something like that is worth it. Looking at modern-day billionaires, it seems like their billions have a ton of dependencies on multiple investors (and the thousands of people working for said investors). Maybe people have figured this out and I’m unaware, but I find it hard to believe that someone can get that kind of wealth *in any way* and then just give most of it to charities without causing significant harm to a ton of people indirectly. I could believe that a good percentage can be smartly allocated to charities per year while keeping the house of cards alive. Ultimately, when there are so many difficult questions unanswered, I have trouble convincing myself that this kind of goal is genuinely something to aspire to. 186. Scott Says: Aspect #181: I have no problem accepting that someone can make billions by starting a business like Amazon or Google that adds actual value to the world. But the question of how to do so while maintaining your idealism, so that you’re confident your future self will then spend the money on fighting climate change or fascism or whatever you foresaw yourself spending it on as a teenager, is an extremely interesting one with which I lack direct experience. 187. Scott Says: Vanessa Kosoy #113: “So, rationalist nerd girls are not as good as supermodels?! MEN *eyeroll*” NO!!!
I was making the thoroughly feminist point that, if SBF had acquired a harem of supermodels, that would've been strong evidence that his motivation to become a billionaire had been selfish and superficial. Somehow it got twisted into the anti-feminist idea that supermodels are "better" in my view than rationalist nerd girls (something that's unequivocally false, for the senses of "better" that I want myself and others to care about).

188. manorba Says:

"So, rationalist nerd girls are not as good as supermodels?! MEN *eyeroll*"

Quite the opposite. That's how the kids around here answered when I asked them.

189. fred Says:

Bah, it's the same old tale of smart 20-year-olds getting seduced by some easy-money scheme while lacking experience and wisdom about themselves and the real world.

190. Aspect Says:

Scott #186: Right, I didn't mean this in an 'all billionaires are evil' sense, or that it's impossible to be a billionaire and provide value to the world. I'm just extremely skeptical that the goal "I want to make billions to donate to charity" makes sense, because of the dependencies and tradeoffs one has to deal with in order to get there, which are not even fully known to someone who is not already in that position. It can reasonably be interpreted as a well-intentioned goal. At the same time, it betrays that the person going for it underestimates the complexity of the situation they are shooting for.

I don't think the same considerations exist when someone works hard to, e.g., come up with a clever new solution to a hard math problem. The physical/financial resources needed there can be modest. It's just difficult to do, and you can't cut ethical corners to get there. And that's why I generally respect people like you more than rich dudes like Musk. That doesn't mean that what they're doing is easy. It can take a ton of skill to navigate those waters as well.
However, there is a lot more room for progress on that path if one is willing to compromise their character and/or take advantage of other people. And it honestly becomes a bit more suspicious when someone clever shoots for that goal, because I would expect them to be able to foresee the potential issues I'm mentioning. The fact that they are fully on board with something like that suggests to me that they are either quite naive (even though presumably smart), or that they (perhaps subconsciously) care about something else more than their stated goal.

191. OhMyGoodness Says:

My goodness, will we ever move past defining people simply by their appearance? Logically it goes both ways, not just one way, on the spectrum of human appearance. I suspect that models like Iman and Bündchen have contributed far more to altruistic causes than anyone on this thread will contribute over their lifetime, but they aren't in public trumpeting their virtue at every opportunity. I see no reason to question the intelligence or sincerity of either of them. Sneer labels are too often used to trivialize those at the tails of distributions, presumably for reasons I can't consider positive.

192. MK Says:

Scott: what are your current percentages of belief in:

a) SBF started out as an honest effective altruist and got corrupted by fame, money, and bad judgments, vs.
b) SBF was a sociopathic-type guy using EA only as a cover to impress the right people, i.e., was totally cynical from the get-go?

Mine are 5% vs. 95%.

193. Scott Says:

MK #192: 70% vs. 30%, to the limited extent that one can even distinguish the two hypotheses empirically.

194. anonymous Says:

Apparently, according to some public bankruptcy documents, software was made specifically to conceal the misuse of customer funds, so I find it hard to believe he ever had good intentions.

And re: Scott #187, he could just as easily be a greedy sapiosexual.
As for all of his donations, they all just so happened to correspond to things that could also grant him a form of political persuasion. If one were truly invested in ending pandemics, they'd be outspoken about the insanity of the gain-of-function research going on right now, funded by the NIH. I wouldn't use puff pieces from Sequoia Capital et al., painting a very shallow aesthetic about bean bags and an aw-shucks type of personality, to try to decipher his true intentions. Even in the published conversation from Vox, he says he just says the right things so that people will like him, and that could have started earlier, even in childhood: misrepresenting some feigned interest in altruism, if that's not also made up. Exposing his lack of true interest in EA for its own sake also seems important for the reputation of the movement, just because people tend to lump others together in stereotype after such a cataclysmic event, whether or not that's rational.

195. manorba Says:

fred #189 Says: "Bah, it's same old tale of smart 20-year-olds getting seduced by some easy money scheme but lacking experience and wisdom about themselves and the real world."

It really does look like it, regardless of the details, doesn't it? What strikes me is how many people got fooled. The amount of money involved and the carelessness of everybody are hard to wrap your head around. If I hadn't followed the whole story from this thread and the links posted, I would have thought it was an early-2000s post-cyberpunk novella à la Stross (or even Gibson).

196. dankane Says:

Tu #159: If your utility function is "pretty straight", you can't end up with more money than you know what to do with. Having more money than you know what to do with is a clear sign of having reached a point of diminishing returns.

197.
dankane Says:

Ilya Zakharevich #182: If there were a million people each finding a strategy that had a 1/1,000,000 chance of saving 2,000,000 (presently existing) people, their strategies would almost have to be somewhat anti-correlated, since as you note, they *cannot* all succeed given that there aren't two trillion people.

How do you get a million strategies?

1) A bunch of people try to found startups. Most will probably fail, but the few that are wildly successful can donate a bunch of money to charities to help people. Multiple people trying to found different businesses probably have anti-correlated success conditions (especially if they are competing businesses).

2) A bunch of people investigate speculative new techniques (like trying to find a much cheaper malaria cure). Success could save a lot of people, but it might be a long shot. And you might be right here that we cannot realistically find 10^6 such longshot strategies (to make the exact numbers in my point hold, we might need to pull back to only 10^5 strategies saving 2*10^5 people), but you can probably find more than 10 truly distinct ones.

198. dankane Says:

Ilya Zakharevich #182: Actually, here's a method that nearly fits the requirements of my thought experiment: invest your charitable contributions instead in buying tickets for the $2B Powerball lottery that just happened. The odds of a $2 ticket winning are about 1/300,000,000, so in expectation you roughly triple your money (probably a bit less due to the chance of multiple winners, but still better than doubling). $2B is probably not *quite* enough to save 1,000,000 people, but it is at least the right order of magnitude. And there are *clearly* millions of anti-correlated strategies that can be employed.

199. 1Zer0 Says:

People giving away their private keys to an intransparent platform operating outside accountability, to people unknown to them, operating outside customer protections.
In this particular case, to Sam Bankman-Fried: a fraud, born a day after me, driven by greed, further accelerated by dopaminergic substances like Adderall and, let's not kid ourselves, quite possibly other stimulants like crystal meth. He can discuss the magnitude of his genius with whatever voices he hears in his stimulant-induced psychosis from the comfiness of his prison cell 🙂; at least a (developing) psychosis would explain some of his incoherent babbling.

Also some highlights from Caroline (thug CEO of Alameda). Anecdotally, I find such attitudes not very unusual for people working in the field.

In general, a great summary of what has occurred can be found on rekt.news; they often excellently analyze large crypto hacks and their aftermaths: https://rekt.news/sbf-mask-off/

The original idea behind the crypto ecosystem was a resilient and decentralized network where people would hold their own keys. I can't recall seeing FTX listed on Immunefi for a bug bounty program. Trusting closed-source software (whether DeFi, AI, or other software) is generally a bad idea, although open source doesn't imply that the software is trustworthy. In critical areas, I believe it's very advisable to exclusively trust open-source, formally verified software. Yet still, many will gladly sacrifice best practices and the pursuit of excellent software engineering for a short-lived dopamine rush.
What nature gave many lifeforms in order to reward productive behavior can, ironically, turn humans into irrational zombies aching for more dopamine.

The whole story is at least very entertaining, and probably good for the crypto ecosystem as a whole, since it can't survive as the current centralized perversion it is. Yet I am certain that in a few years, after a phase of consolidation, similar scandals will arise once more. Also, the current situation isn't over yet. There are tons of exchanges with insufficient auditing, broken business models, poorly audited libraries used by many projects, projects headed by shady people… More will fall.

200. Tech roundup 168: a journal published by a bot - Javi López G. Says:

[…] Sam Bankman-Fried and the geometry of conscienceUpdate (Nov. 16): Check out this new interview of SBF by my friend and leading Effective Altruist writer Kelsey Piper. Here Kelsey directly confronts SBF with some of the same moral and psychologic… […]

201. OhMyGoodness Says:

manorba #195

Yes. Philip K. Dick can likely be ruled out, since no DMT entities have been cited (yet) as serving in a strategic advisory role to executive management.

It was a fledgling organization, and so the HR department didn't have time to dot all the i's and cross all the t's with respect to translating the company's altruistic objectives into company policy. Policies like "Do not drain billions from customer accounts" fell through the cracks in the rush to do good.

202. Ilya Zakharevich Says:

dankane #197 (and #198)

If there were a million people each finding a strategy that had a 1/1,000,000 chance of saving 2,000,000 (presently existing) people, their strategies would almost have to be somewhat anti-correlated since as you note, they *cannot* all succeed given that there aren’t two trillion people.

In fact, Kolmogorov analysed this kind of reasoning: he would contrapose modus ponens (which is “logicians’ logic”)
If we know X⇒Y and know X, we can conclude that Y holds.
with what I would call modus desiderata (which is “humans’ logic”¹⁾):
If we know X⇒Y and know Y is pleasant, we can conclude that X holds.

¹⁾ In Kolmogorov’s time, he could call this «женская логика» ("women's logic") and survive.

Your claim above is completely along the second kind of logic! (In short: this is wishful thinking…)

Your argument in (2) is fully in line with “there are only very few independent random shots with a good hope of positive outcome”.

And your arguments in (1) and in #198 just leave me completely baffled: do you really think that there are millions of financial schemes with a 2:1 expectation of payoff — and which do not require enormous investments into preliminary investigations?!

(For example, my understanding about [typical?] lotteries in US is that if you want the funds up front you get only ∼50% — and there is going to be ≳40% tax on top of this…)

203. Vanessa Kosoy Says:

Scott #187

My comment originally contained the tag “mostly-joking” which was apparently eaten by Internet goblins. It was intended to be light humor, and I certainly didn’t mean to accuse you of antifeminism! The grain-of-truth in the joke is that a harem of rationalist nerd girls could also be viewed as (weak) evidence of selfishness.

Personally, it doesn’t bother me if SBF has a harem or lives in luxury (except for the part where he posed as the “humble” billionaire), and it doesn’t matter much to me what his “real motivation” was. The lies, manipulation and actual theft are the important accusations here.

204. Vanessa Kosoy Says:

(postscript to previous comment replying to Scott #187)

Btw, even if some man truly prefers (in the romantic or sexual sense) supermodels to rationalist nerd girls, I don’t see it as antifeminism. He is entitled to his personal preferences. It would certainly suck big time for women like me if this were true of the vast majority of men, but even then, a personal preference is not a moral flaw.

205. OhMyGoodness Says:

Vanessa Kosoy #’s 203 and 204

I agree with your comments but object to stereotyping based on appearance (another common divisive way of thinking that leads to nothing positive). You are a unique individual, just as models are unique individuals. To refer to you or a model with some simple categorical label demeans your uniqueness as a human being. Sorry if my interpretation was different from your intended meaning. I am not a goblin, and hope for nothing but the best for everyone (so long as they respect others), including you of course.

I agree my post wasn’t very good, but I have a sensitivity to simple labels being used a priori to categorize unique individuals as to intelligence and/or character. I understand now that you weren’t doing that — except when you called me a goblin. 🙂

206. Qwerty Says:

People make mistakes, sometimes huge ones. I feel some sympathy for this guy SBF. It appears his heart was in the right place. I hope he somehow gets a chance to make things right for the people whose money he lost although I don’t understand the details of what he did.

207. dankane Says:

Ilya Zakharevich #202:

My argument is not just wishful thinking. Suppose that there were 1,000,000 strategies *that could be simultaneously implemented* (which I suppose is an assumption that I didn’t state explicitly) that had a 1/1,000,000 chance of saving 2,000,000 (presently existing) people.

These success probabilities CANNOT be independent. Why? If they were, there would be a non-zero probability that simultaneously implementing them would save more presently existing people than presently exist. They also cannot be strongly positively correlated with each other by the same reasoning. I suppose it doesn’t necessarily follow that they are anti-correlated (in the sense that the pairwise covariances are negative), but it definitely cannot be the case that there are only 10 actually distinct strategies in the mix as you were suggesting.
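To see why independence fails here, here is a back-of-the-envelope check using the comment's assumed numbers (10^6 strategies, success probability 10^-6 each, 2x10^6 people saved per success; the ~8 billion world population is my added assumption). Under independence, the number of successes is Binomial(10^6, 10^-6), well approximated by Poisson(1), and saving more people than exist would require at least 4,000 successes — an event whose probability is astronomically small but strictly positive, which is the contradiction:

```python
from math import log, lgamma

# Assumed figures (from the comment, plus world population ~8e9):
# K ~ Binomial(1e6, 1e-6) ~ Poisson(lam=1) successes.
# Saving > 8e9 people needs K >= 8e9 / 2e6 = 4000 successes.
lam = 1.0
k = 4000

# log P(K = k) for a Poisson(lam): k*log(lam) - lam - log(k!)
log_p = k * log(lam) - lam - lgamma(k + 1)

print(log_p)  # hugely negative, yet finite: the probability is nonzero
```

Since a strictly positive probability of an impossible outcome is a contradiction, the success events cannot all be independent; they must be (at least weakly) negatively related.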

As for #198, I think that after a few hours of thinking about it, I found half a billion dollars worth of anti-correlated investments that roughly double in expectation.

As for your objections to this solution: if you are donating to charity, not getting the funds immediately is probably not terrible (though I suppose that, depending on interest rates, the delay might substantially cut into your present discounted value), and your lottery winnings aren’t taxable if you donate them all to charity immediately.

As for other high risk opportunities, there may well not be a million essentially different ones, but if you count things like “buy X lottery ticket” or “make this particular sequence of risky investments” you should be able to find that many more-or-less independent ones (not that they will necessarily be easy to find).
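For what it's worth, the Powerball arithmetic from #198 can be sketched in a few lines, taking the comment's figures at face value ($2 ticket, roughly 1-in-300,000,000 jackpot odds, $2B jackpot) and ignoring smaller prizes, jackpot splitting, lump-sum discounts, and taxes:

```python
# Assumed figures from the comment; this is a rough sketch, not
# an actual model of Powerball payouts.
ticket_price = 2.0
p_jackpot = 1 / 300_000_000
jackpot = 2_000_000_000.0

ev = p_jackpot * jackpot       # expected return per ticket, ~ $6.67
multiple = ev / ticket_price   # ~ 3.3x: "roughly triple your money"

print(round(ev, 2), round(multiple, 2))
```

Of course, as the surrounding discussion notes, the real expectation is lower once multiple winners, the lump-sum discount, and taxes are factored in.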
