{"id":2862,"date":"2016-07-22T02:50:01","date_gmt":"2016-07-22T06:50:01","guid":{"rendered":"https:\/\/scottaaronson.blog\/?p=2862"},"modified":"2016-12-10T04:39:44","modified_gmt":"2016-12-10T09:39:44","slug":"my-biology-paper-in-science-really","status":"publish","type":"post","link":"https:\/\/scottaaronson.blog\/?p=2862","title":{"rendered":"My biology paper in Science (really)"},"content":{"rendered":"<p>Think I&#8217;m pranking you, right?<\/p>\n<p>You can <a href=\"http:\/\/science.sciencemag.org\/content\/353\/6297\/aad8559\">see the paper right here<\/a>\u00a0(&#8220;Synthetic recombinase-based state machines in living cells,&#8221; by Nathaniel Roquet, Ava P. Soleimany, Alyssa C. Ferris, Scott Aaronson, and Timothy K. Lu). \u00a0[<span style=\"color: #ff0000;\"><strong>Update (Aug. 3):<\/strong><\/span>\u00a0The previous link takes you to a paywall, but you can now <a href=\"http:\/\/science.sciencemag.org\/content\/353\/6297\/aad8559.full?ijkey=wzroPPh1eIu9k&amp;keytype=ref&amp;siteid=sci\">access the full text of our paper here<\/a>. \u00a0See also the Supplementary Material <a href=\"http:\/\/science.sciencemag.org\/content\/suppl\/2016\/07\/20\/353.6297.aad8559.DC1\">here<\/a>.] \u00a0You can also\u00a0<a href=\"http:\/\/news.mit.edu\/2016\/biological-circuit-cells-remember-respond-stimuli-0721\">read the <em>MIT News<\/em> article<\/a>\u00a0(&#8220;Scientists program cells to remember and respond to series of stimuli&#8221;). \u00a0In any case, <em>my<\/em> little part of the paper will be fully explained in this post.<\/p>\n<p>A little over a year ago, two MIT synthetic\u00a0biologists&#8212;<a href=\"http:\/\/www.rle.mit.edu\/sbg\/people\/\">Timothy Lu<\/a> and his PhD student Nate Roquet&#8212;came to my office saying they had a problem they wanted help with. \u00a0<em>Why me?<\/em>\u00a0I wondered. 
\u00a0Didn&#8217;t they realize\u00a0I was a quantum complexity theorist, who so hated picking apart\u00a0<a href=\"https:\/\/en.wikipedia.org\/wiki\/Pellet_(ornithology)\">owl pellets<\/a> and memorizing the names of cell parts in junior-high\u00a0Life Science, that he hadn&#8217;t taken a single biology course since that time? \u00a0(Not counting computational biology, taught in a CS department by Richard Karp.)<\/p>\n<p>Nevertheless,\u00a0I listened to my biologist guests&#8212;which turned out to be an excellent\u00a0decision.<\/p>\n<p>Tim and Nate told me about\u00a0a DNA\u00a0system with surprisingly clear rules, which led them to\u00a0a strange but elegant combinatorial problem. \u00a0In this post, I first need to tell you the rules; then I can tell you the problem, and lastly its solution. \u00a0There are no mathematical prerequisites for this post, and <em>certainly<\/em> no biology prerequisites:\u00a0everything will be\u00a0completely elementary, like learning a card game. \u00a0Pen and paper might be helpful, though.<\/p>\n<p>As we all learn in kindergarten, DNA\u00a0is a\u00a0finite string over the 4-symbol alphabet\u00a0{A,C,G,T}. \u00a0We&#8217;ll find it more useful, though, to think in terms of entire <em>chunks<\/em>\u00a0of DNA bases, which we&#8217;ll label arbitrarily with letters like X, Y, and Z. \u00a0For example, we might have X=ACT, Y=TAG, and Z=GATTACA.<\/p>\n<p>We can also <em>invert<\/em>\u00a0one of these chunks, which means writing it backwards while also swapping the A&#8217;s with T&#8217;s and the G&#8217;s with C&#8217;s. \u00a0We&#8217;ll denote this operation by * (the technical name in biology is &#8220;reverse-complement&#8221;). 
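The * operation is a one-liner in code. Here's a minimal Python sketch (the function name is mine, not from the paper), using the post's own chunks:

```python
# Reverse-complement ("*"): write the string backwards while swapping
# A <-> T and C <-> G.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def star(chunk):
    return "".join(COMPLEMENT[base] for base in reversed(chunk))

print(star("ACT"))                  # X*     -> AGT
print(star("GATTACA"))              # Z*     -> TGTAATC
print(star(star("ACT")) == "ACT")   # (X*)* = X -> True
```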
\u00a0For example:<\/p>\n<p>X*=AGT, Y*=CTA, Z*=TGTAATC.<\/p>\n<p>Note that (X*)*=X.<\/p>\n<p>We can then combine our\u00a0chunks and their inverses\u00a0into a longer DNA string, like so:<\/p>\n<p>ZYX*Y* = GATTACA TAG AGT CTA.<\/p>\n<p>From now on, we&#8217;ll work exclusively with the chunks, and forget completely\u00a0about the underlying A&#8217;s, C&#8217;s, G&#8217;s, and T&#8217;s.<\/p>\n<p>Now, there are also certain special chunks of DNA bases, called <em>recognition sites<\/em>, which\u00a0tell the little machines that read the DNA when they should start doing something and when they should stop. \u00a0Recognition sites come in pairs, so we&#8217;ll label\u00a0them using various parenthesis symbols like ( ), [ ], { }. \u00a0To convert\u00a0a parenthesis into its\u00a0partner, you invert\u00a0it: thus\u00a0( = )*, [ = ]*, { = }*, etc. \u00a0Crucially, the parentheses in a DNA string\u00a0don&#8217;t need to &#8220;face the right ways&#8221; relative\u00a0to each other, and they also don&#8217;t need to nest properly. \u00a0Thus, both\u00a0of the following are valid DNA strings:<\/p>\n<p>X ( Y [ Z [ U ) V<\/p>\n<p>X { Y ] Z { U [ V<\/p>\n<p>Let&#8217;s refer to X, Y, Z, etc.&#8212;the chunks that aren&#8217;t recognition sites&#8212;as <em>letter-chunks<\/em>. \u00a0Then it will be convenient to make the following simplifying assumptions:<\/p>\n<ol>\n<li>Our DNA string consists of an alternating sequence of recognition sites and letter-chunks, beginning and ending with letter-chunks. \u00a0(If this weren&#8217;t true, then we could just\u00a0glom together adjacent recognition sites and\u00a0adjacent letter-chunks, and\/or add new dummy chunks, until it <em>was<\/em> true.)<\/li>\n<li>Every letter-chunk that appears in the DNA string appears exactly once (either inverted\u00a0or not), while every recognition site that appears, appears exactly twice. 
\u00a0Thus, if there are n distinct recognition sites, there are 2n+1 distinct letter-chunks.<\/li>\n<li>Our\u00a0DNA string can be decomposed into its constituent chunks <em>uniquely<\/em>&#8212;i.e., it&#8217;s always possible to tell which chunk we&#8217;re dealing with, and when one chunk stops and the next one starts. \u00a0In particular, the chunks and their reverse-complements are all distinct strings.<\/li>\n<\/ol>\n<p>The little machines that read the DNA string are called <em>recombinases<\/em>. \u00a0There&#8217;s one kind of recombinase for each kind\u00a0of recognition site: a (-recombinase, a [-recombinase, and so on. \u00a0When, let&#8217;s say, we\u00a0let a (-recombinase loose on our DNA string, it searches\u00a0for (&#8216;s and )&#8217;s and ignores everything else. \u00a0Here&#8217;s what it does:<\/p>\n<ul>\n<li>If there are no (&#8216;s or )&#8217;s in the string, or only one of them, it does nothing.<\/li>\n<li>If there are two (&#8216;s facing the same way&#8212;like ( ( or ) )&#8212;it deletes everything in between them, including the (&#8216;s themselves.<\/li>\n<li>If there are two (&#8216;s facing opposite ways&#8212;like ( ) or ) (&#8212;it deletes the (&#8216;s, and inverts\u00a0everything in between them.<\/li>\n<\/ul>\n<p>Let&#8217;s see\u00a0some examples. \u00a0When we apply [-recombinase to the string<\/p>\n<p>A ( B [ C [ D ) E,<\/p>\n<p>we get<\/p>\n<p>A\u00a0( B D ) E.<\/p>\n<p>When we apply (-recombinase to the same string, we get<\/p>\n<p>A D* ] C* ] B* E.<\/p>\n<p>When we apply <em>both<\/em> recombinases (in either order), we get<\/p>\n<p>A D* B* E.<\/p>\n<p>Another example: when we apply {-recombinase to<\/p>\n<p>A { B ] C { D [ E,<\/p>\n<p>we get<\/p>\n<p>A D [ E.<\/p>\n<p>When we apply [-recombinase to the same string, we get<\/p>\n<p>A { B D* } C* E.<\/p>\n<p>When we apply both recombinases&#8212;ah, but here the order matters! 
\u00a0If we apply { first and then [, we get<\/p>\n<p>A D [ E,<\/p>\n<p>since the [-recombinase now encounters only a single [, and has nothing to do. \u00a0On the other hand, if we apply [ first and then {, we get<\/p>\n<p>A D B* C* E.<\/p>\n<p>Notice that inverting\u00a0a substring can change the relative orientation of two recognition sites&#8212;e.g., it can change { { into { } or vice versa. \u00a0It can thereby change what happens (inversion or deletion) when some future recombinase is applied.<\/p>\n<p>One final rule: after we&#8217;re done applying recombinases, we remove the remaining recognition sites like so much scaffolding, leaving only the letter-chunks. \u00a0Thus, the final output<\/p>\n<p>A D [ E<\/p>\n<p>becomes simply A D E, and so on. \u00a0Notice also that,\u00a0if\u00a0we happen to delete one recognition site of a given type while leaving its partner, the remaining site will <em>necessarily<\/em> just bounce around inertly before getting deleted at the end&#8212;so we might as well &#8220;put it out of its misery,&#8221; and delete it right away.<\/p>\n<p>My coauthors have actually implemented all of this in a wet lab, which is what most of the <em>Science<\/em> paper is about (my part is mostly in a technical appendix). \u00a0They think of what they&#8217;re doing\u00a0as building a &#8220;biological state machine,&#8221; which could have applications (for example) to programming cells for medical purposes.<\/p>\n<p>But without further ado, let me\u00a0tell you the\u00a0math question they gave me. \u00a0For reasons that they\u00a0can explain better than I can, my coauthors were\u00a0interested in the <em>information storage\u00a0capacity<\/em> of their biological state machine. 
\u00a0That is, they wanted to know the answer to the following:<\/p>\n<p style=\"padding-left: 30px;\">Suppose we\u00a0have\u00a0a fixed initial\u00a0DNA string, with n pairs of recognition\u00a0sites and 2n+1 letter-chunks; and we also have\u00a0a recombinase for each type of recognition site. \u00a0Then by choosing which recombinases to apply, as well as which order to apply them in, how many different DNA strings can we generate\u00a0as output?<\/p>\n<p>It&#8217;s easy to construct an example where the answer is as large as 2<sup>n<\/sup>. \u00a0Thus, if we consider a starting string like<\/p>\n<p>A ( B ) C [ D ] E { F } G &lt; H &gt; I,<\/p>\n<p>we can clearly make 2<sup>4<\/sup>=16 different output strings by choosing which subset of recombinases to apply and which not. \u00a0For example, applying [, {, and &lt; (in any order) yields<\/p>\n<p>A B C D* E F* G H* I.<\/p>\n<p>There are also cases\u00a0where the number of distinct outputs is less than 2<sup>n<\/sup>. \u00a0For example,<\/p>\n<p>A ( B [ C [ D ( E<\/p>\n<p>can produce only 3\u00a0outputs&#8212;A B C D E, A B D E, and A E&#8212;rather than 4.<\/p>\n<p>What Tim and Nate wanted to know was: can the number of distinct outputs ever be <em>greater<\/em>\u00a0than 2<sup>n<\/sup>?<\/p>\n<p>Intuitively, it seems like the answer &#8220;has to be&#8221; yes. \u00a0After all, we already saw that the order in which recombinases are applied can matter enormously. \u00a0And given n recombinases, the number of possible permutations of them\u00a0is n!, not 2<sup>n<\/sup>. \u00a0(Furthermore, if we remember that any <em>subset<\/em> of the recombinases can be applied in any order, the number of possibilities is even\u00a0a bit greater&#8212;about\u00a0e\u00b7n!.)<\/p>\n<p>Despite this, when my coauthors\u00a0played around with examples, they found that the number of distinct output strings never exceeded 2<sup>n<\/sup>. 
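Their experiments are easy to reproduce by brute force: simulate the rules above and enumerate every subset of recombinases in every order. A Python sketch, under the post's assumptions (space-separated tokens, each recombinase applied at most once; the helper names are mine):

```python
from itertools import permutations

# Each recognition-site symbol belongs to a family and faces forward (1) or not (0).
SITES = {"(": ("()", 1), ")": ("()", 0), "[": ("[]", 1), "]": ("[]", 0),
         "{": ("{}", 1), "}": ("{}", 0), "<": ("<>", 1), ">": ("<>", 0)}

def flip(tok):
    """Invert one token: a letter-chunk gains/loses a *, a site reverses."""
    if tok in SITES:
        fam, fwd = SITES[tok]
        return fam[1] if fwd else fam[0]
    return tok[:-1] if tok.endswith("*") else tok + "*"

def apply_recombinase(tokens, fam):
    """Delete between same-facing sites; invert between opposite-facing ones."""
    idx = [i for i, t in enumerate(tokens) if t in SITES and SITES[t][0] == fam]
    if len(idx) < 2:
        return tokens                     # 0 or 1 sites: nothing to do
    i, j = idx
    if SITES[tokens[i]][1] == SITES[tokens[j]][1]:
        return tokens[:i] + tokens[j + 1:]                      # deletion
    middle = [flip(t) for t in reversed(tokens[i + 1:j])]       # inversion
    return tokens[:i] + middle + tokens[j + 1:]

def outputs(dna):
    """All distinct outputs over every subset of recombinases in every order."""
    tokens = dna.split()
    fams = {SITES[t][0] for t in tokens if t in SITES}
    results = set()
    for r in range(len(fams) + 1):
        for order in permutations(fams, r):
            cur = tokens
            for fam in order:
                cur = apply_recombinase(cur, fam)
            # at the end, strip the leftover recognition sites
            results.add(" ".join(t for t in cur if t not in SITES))
    return results

print(sorted(outputs("A ( B [ C [ D ( E")))   # ['A B C D E', 'A B D E', 'A E']
print(len(outputs("A ( B ) C [ D ] E { F } G < H > I")))   # 16 = 2^4
```

On the two examples from the post, the search finds exactly 3 and exactly 16 distinct outputs, never exceeding 2<sup>n</sup>.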
In other words, the number of output strings behaved <i>as if<\/i> the order didn&#8217;t matter, even though it does. \u00a0The problem they gave me was either to explain this pattern or to find a counterexample.<\/p>\n<p>I found that the pattern holds:<\/p>\n<p><strong>Theorem:<\/strong>\u00a0Given an initial DNA string with n pairs of recognition sites, we can generate at most 2<sup>n<\/sup> distinct output strings by choosing which recombinases to apply and in which order.<\/p>\n<p>Let a <em>recombinase sequence<\/em> be an ordered list of recombinases, each occurring at most once: for example, ([{ means to apply (-recombinase, then [-recombinase, then {-recombinase.<\/p>\n<p>The proof of the theorem hinges on one main\u00a0definition. \u00a0Given a recombinase sequence that acts on a given DNA string, let&#8217;s call the sequence <em>irreducible<\/em>\u00a0if every recombinase in the sequence actually finds two recognition sites\u00a0(and hence, inverts\u00a0or deletes a nonempty substring) when it&#8217;s applied. \u00a0Let&#8217;s call the sequence <em>reducible<\/em> otherwise. \u00a0For example, given<\/p>\n<p>A { B ] C { D [ E,<\/p>\n<p>the sequence [{ is irreducible, but {[ is reducible, since the [-recombinase does nothing.<\/p>\n<p>Clearly, for every reducible sequence, there&#8217;s a shorter sequence that produces\u00a0the same output string: just omit the recombinases that don&#8217;t do anything! \u00a0(On the other hand, I leave it as an exercise to show\u00a0that the converse is false. \u00a0That is, even if a sequence is <em>ir<\/em>reducible, there might be a shorter sequence that produces the same output string.)<\/p>\n<p><strong>Key Lemma:<\/strong> Given an initial DNA string, and given a subset of k recombinases, every irreducible sequence composed of all k of those recombinases produces the same output string.<\/p>\n<p>Assuming the Key Lemma, let&#8217;s see why the theorem follows. 
\u00a0Given an initial DNA string, suppose you want to specify\u00a0one of its possible output\u00a0strings. \u00a0I claim you can do this using only n bits of information. \u00a0For you just need to specify which subset of the n recombinases you want to apply, in <em>some<\/em> irreducible order. \u00a0Since every irreducible sequence\u00a0of those recombinases leads to the same output, you don&#8217;t need to specify an order on the subset. \u00a0Furthermore, for each\u00a0possible output string S, there must be <em>some<\/em> irreducible sequence that leads to S&#8212;given a reducible sequence for S, just keep deleting irrelevant recombinases until no more are left&#8212;and therefore some subset of recombinases you could pick that uniquely determines S. \u00a0OK, but\u00a0if you can specify each\u00a0S uniquely using n bits, then there are at most 2<sup>n<\/sup> possible S&#8217;s.<\/p>\n<p><strong>Proof of Key Lemma.<\/strong>\u00a0 Given an initial DNA string, let&#8217;s assume\u00a0for simplicity that we&#8217;re going to apply all n of the recombinases, in some irreducible order. \u00a0We claim\u00a0that the final output string doesn&#8217;t depend at all on <em>which<\/em> irreducible order we pick.<\/p>\n<p>If we can prove this claim, then the lemma follows, since given a proper subset of the recombinases, say of size k&lt;n,\u00a0we can simply glom together everything\u00a0between one relevant recognition site and the next one, treating them as 2k+1 giant letter-chunks, and then repeat the argument.<\/p>\n<p>Now to prove the claim. \u00a0Given two letter-chunks&#8212;say A and B&#8212;let&#8217;s call them <em>soulmates<\/em>\u00a0if either A and B or A* and B* will\u00a0necessarily end up next to each other, whenever all n recombinases are applied in some irreducible order, and whenever A or B appears at all in the output string. 
\u00a0Also, let&#8217;s call them <em>anti-soulmates<\/em> if either A and B* or A* and B will necessarily end up next to each other if either appears at all.<\/p>\n<p>To illustrate, given the initial DNA sequence,<\/p>\n<p>A [ B ( C ] D ( E,<\/p>\n<p>you can check that A and C are anti-soulmates. \u00a0Why? \u00a0Because if we apply all the recombinases in an irreducible sequence, then at some point, the [-recombinase needs to get applied, and it needs to find both [ recognition sites. \u00a0And one of these recognition sites will still be next to A, and the other will still be next to C (for what could have pried them apart? \u00a0nothing). \u00a0And when that happens, no matter where C has traveled\u00a0in the interim, C* must get brought next to A. \u00a0If the [-recombinase does an inversion, the transformation will look like<\/p>\n<p>A [ &#8230; C ]\u00a0\u2192 A C* &#8230;,<\/p>\n<p>while if it does a deletion, the transformation will look like<\/p>\n<p>A [ &#8230; [ C*\u00a0\u2192 A C*<\/p>\n<p>Note that C&#8217;s [ recognition site will be to its left, if and only if C has been flipped to C*. \u00a0In this particular example, A never moves, but if it did, we could repeat the analysis for A and <em>its<\/em> [ recognition site. \u00a0The conclusion would be the same: no matter what inversions or deletions we do\u00a0first, we&#8217;ll maintain the invariant that A and C* (or A* and C)\u00a0will immediately jump next to each other, as soon as the [ recombinase is applied. \u00a0And once they&#8217;re next to each other, nothing will ever separate them.<\/p>\n<p>Similarly, you can check that C and D are soulmates, connected by the ( recognition sites; D and B are anti-soulmates, connected by the [ sites; and B and E are soulmates, connected by the ( sites.<\/p>\n<p>More generally, let&#8217;s consider an arbitrary DNA sequence, with n pairs of recognition sites. 
\u00a0Then we can define a graph, called the <em>soulmate graph<\/em>, where the 2n+1 letter-chunks are the vertices, and where X and Y are connected by (say) a blue edge if they&#8217;re soulmates, and by a red edge if they&#8217;re anti-soulmates.<\/p>\n<p>When we construct this graph, we find that every\u00a0vertex has exactly\u00a02 neighbors, one for each recognition site that borders it&#8212;save the first and last vertices,\u00a0which border only one recognition site each and so\u00a0have only one neighbor each. \u00a0But these facts immediately determine the structure of the graph. \u00a0Namely, it must consist of a simple <em>path<\/em>, starting at the first letter-chunk and ending at the last one, together with possibly a disjoint union of cycles.<\/p>\n<p>But we know that the first and last letter-chunks can never\u00a0move anywhere.\u00a0 For that reason, a path of soulmates and anti-soulmates, starting at the first letter-chunk\u00a0and ending at the last one, <em>uniquely determines<\/em> the final output string, when the n recombinases are applied in any irreducible order. \u00a0We just follow it along, switching between inverted and non-inverted letter-chunks whenever we encounter a red edge. \u00a0The cycles contain the letter-chunks that necessarily get deleted along the way to that unique output string. \u00a0This completes the proof of the lemma, and hence the theorem.<\/p>\n<p>&nbsp;<\/p>\n<p>There are other results in the paper, like a generalization to the case where there can be k pairs of recognition sites of each type, rather than only one. In that case, we can prove that the number of distinct output strings is at most 2<sup>kn<\/sup>, and that it <i>can<\/i> be as large as ~(2k\/3e)<sup>n<\/sup>. We don&#8217;t know the truth between those two bounds.<\/p>\n<p>Why is this interesting? \u00a0As I said, my coauthors had their own reasons to care, involving the number of bits one can store using a certain kind of DNA state machine. 
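The soulmate graph can be read straight off the initial string: each pair of recognition sites contributes two edges among its four flanking letter-chunks, blue (soulmate) if the sites face the same way and red (anti-soulmate) if they face opposite ways. Here's a Python sketch, restricted for simplicity to initial strings with unstarred letter-chunks (the function names and the edge rule's encoding are mine):

```python
# Build the soulmate graph of an initial DNA string (space-separated tokens,
# one pair of recognition sites per type), then follow the path from the
# first letter-chunk to predict the unique irreducible-order output.
SITES = {"(": "()", ")": "()", "[": "[]", "]": "[]",
         "{": "{}", "}": "{}", "<": "<>", ">": "<>"}
FORWARD = set("([{<")

def soulmate_graph(dna):
    tokens = dna.split()
    edges = []  # (chunk, chunk, "soul" or "anti")
    for fam in sorted({SITES[t] for t in tokens if t in SITES}):
        i, j = [k for k, t in enumerate(tokens) if SITES.get(t) == fam]
        li, ri = tokens[i - 1], tokens[i + 1]   # flanks of the first site
        lj, rj = tokens[j - 1], tokens[j + 1]   # flanks of the second site
        if (tokens[i] in FORWARD) == (tokens[j] in FORWARD):
            edges += [(li, rj, "soul"), (ri, lj, "soul")]   # deletion case
        else:
            edges += [(li, lj, "anti"), (ri, rj, "anti")]   # inversion case
    return edges

def predicted_output(dna):
    tokens, adj = dna.split(), {}
    for u, v, color in soulmate_graph(dna):
        adj.setdefault(u, []).append((v, color))
        adj.setdefault(v, []).append((u, color))
    chunk, starred, seen = tokens[0], False, set()
    path = [chunk]
    while True:
        nxt = [(v, c) for v, c in adj.get(chunk, []) if v not in seen]
        if not nxt:
            break
        seen.add(chunk)
        chunk, color = nxt[0]
        starred ^= (color == "anti")   # a red edge flips orientation
        path.append(chunk + "*" if starred else chunk)
    return " ".join(path)

print(soulmate_graph("A [ B ( C ] D ( E"))
print(predicted_output("A [ B ( C ] D ( E"))   # A C* D* B E
```

On the post's example A [ B ( C ] D ( E, this recovers the stated edges (A&#8211;C and B&#8211;D red, B&#8211;E and C&#8211;D blue), and walking the path reproduces the output of applying both recombinases irreducibly.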
\u00a0I got interested for a different reason: because this is a case where biology threw up a bunch of rules that <em>look<\/em>\u00a0like a random mess&#8212;the parentheses don&#8217;t even need to nest correctly? \u00a0inversion can\u00a0<em>also<\/em> change the semantics of the recognition sites? \u00a0evolution never thought about what happens if you delete one recognition site while leaving the other one?&#8212;and yet, on analysis, all the rules work in\u00a0perfect harmony to produce a certain outcome. \u00a0Change a single one of them, and the &#8220;at most 2<sup>n<\/sup> distinct DNA sequences&#8221; theorem would be false. \u00a0Mind you, I&#8217;m still not sure what biological <em>purpose<\/em>\u00a0it serves for the rules to work in harmony this way, but they do.<\/p>\n<p>But the point goes further. \u00a0While working on this problem, I&#8217;d repeatedly encounter an aspect of the mathematical model that seemed weird and inexplicable&#8212;only to have Tim and Nate explain that the aspect made sense once you brought in additional facts from biology, facts not in the model they gave me. \u00a0As an example, we saw that in\u00a0the soulmate graph, the deleted\u00a0substrings appear as cycles. \u00a0But surely excised DNA fragments don&#8217;t literally form loops? \u00a0Why yes, apparently, they do. \u00a0As a second example, consider the DNA\u00a0string<\/p>\n<p>A ( B [ C ( D [ E.<\/p>\n<p>When we construct the soulmate graph for this string, we get the path<\/p>\n<p>A&#8211;D&#8211;C&#8211;B&#8211;E.<\/p>\n<p>Yet there&#8217;s no actual recombinase sequence that leads to A D C B E as an output string! \u00a0Thus, we see that it&#8217;s possible to have a &#8220;phantom output,&#8221; which the soulmate graph suggests should be reachable but that <em>isn&#8217;t<\/em> actually reachable. 
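Within the mathematical model, that unreachability is easy to confirm by exhaustive search. A self-contained Python sketch of the rules (helper names mine):

```python
from itertools import permutations

SITES = {"(": ("()", 1), ")": ("()", 0), "[": ("[]", 1), "]": ("[]", 0)}

def flip(tok):
    if tok in SITES:
        fam, fwd = SITES[tok]
        return fam[1] if fwd else fam[0]
    return tok[:-1] if tok.endswith("*") else tok + "*"

def apply_recombinase(tokens, fam):
    idx = [i for i, t in enumerate(tokens) if t in SITES and SITES[t][0] == fam]
    if len(idx) < 2:
        return tokens                     # 0 or 1 sites: does nothing
    i, j = idx
    if SITES[tokens[i]][1] == SITES[tokens[j]][1]:
        return tokens[:i] + tokens[j + 1:]                         # deletion
    middle = [flip(t) for t in reversed(tokens[i + 1:j])]          # inversion
    return tokens[:i] + middle + tokens[j + 1:]

def outputs(dna):
    tokens = dna.split()
    fams = {SITES[t][0] for t in tokens if t in SITES}
    res = set()
    for r in range(len(fams) + 1):
        for order in permutations(fams, r):
            cur = tokens
            for fam in order:
                cur = apply_recombinase(cur, fam)
            res.add(" ".join(t for t in cur if t not in SITES))
    return res

reachable = outputs("A ( B [ C ( D [ E")
print(sorted(reachable))              # ['A B C D E', 'A B E', 'A D E']
print("A D C B E" in reachable)       # False: the path is a phantom output
```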
\u00a0According to my coauthors, that&#8217;s because the &#8220;phantom outputs&#8221; <em>are<\/em> reachable, once you know that in real biology (as opposed to the mathematical model), excised DNA fragments can also reintegrate back into the long DNA string.<\/p>\n<p>Many of my favorite open problems about this model concern algorithms and complexity. For example: given as input an initial DNA string, does there <i>exist<\/i> an irreducible order in which the recombinases can be applied? Is the &#8220;utopian string&#8221;&#8212;the string suggested by the soulmate graph&#8212;actually reachable? If it <i>is<\/i> reachable, then what&#8217;s the shortest sequence of recombinases that reaches it? Are these problems solvable in polynomial time? Are they NP-hard? More broadly, if we consider all the subsets of recombinases that can be applied in an irreducible order, or all the irreducible orders themselves, what combinatorial conditions do they satisfy? \u00a0I don&#8217;t know&#8212;if you&#8217;d like to take a stab, feel free to share what you find in the comments!<\/p>\n<p>What I do know is this: I&#8217;m fortunate\u00a0that, before they publish\u00a0your first\u00a0biology paper, the editors\u00a0at\u00a0<em>Science<\/em> don&#8217;t call up your\u00a07<sup>th<\/sup>-grade Life Science teacher to ask how you did in the owl pellet unit.<\/p>\n<hr \/>\n<p><strong>More in the comments:<\/strong><\/p>\n<ul>\n<li>Some notes on the <a href=\"https:\/\/scottaaronson.blog\/?p=2862#comment-1203242\">generalization<\/a> to k pairs of recognition sites of each type<\/li>\n<li>My coauthor Nathaniel Roquet&#8217;s\u00a0<a href=\"https:\/\/scottaaronson.blog\/?p=2862#comment-1203346\">comments<\/a> on the biology<\/li>\n<\/ul>\n<hr \/>\n<p><b><span style=\"color: red;\">Unrelated Announcement from My Friend Julia Wise (July 24):<\/span><\/b> Do you like science and care about humanity\u2019s positive trajectory? 
July 25 is the final day to apply for <a href=\"http:\/\/eaglobal.org\">Effective Altruism Global 2016<\/a>. From August 5-7 at UC Berkeley, a network of founders, academics, policy-makers, and more will gather to apply economic and scientific thinking to the world\u2019s most important problems. Last year featured Elon Musk and the head of Google.org. This year will be headlined by Cass Sunstein, the co-author of Nudge. If you apply with this link, the organizers will give you a free copy of <a href=\"http:\/\/eaglobal.org\/application-form?group=dgb\">Doing Good Better<\/a> by Will MacAskill. Scholarships are available for those who can\u2019t afford the cost. \u00a0<a href=\"http:\/\/eaglobal.org\">Read more here<\/a>. \u00a0<a href=\"http:\/\/eaglobal.org\/application-form?group=dgb\">Apply here<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Think I&#8217;m pranking you, right? You can see the paper right here\u00a0(&#8220;Synthetic recombinase-based state machines in living cells,&#8221; by Nathaniel Roquet, Ava P. Soleimany, Alyssa C. Ferris, Scott Aaronson, and Timothy K. Lu). \u00a0[Update (Aug. 3):\u00a0The previous link takes you to a paywall, but you can now access the full text of our paper here. 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"advanced_seo_description":"","jetpack_seo_html_title":"","jetpack_seo_noindex":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2},"_wpas_customize_per_network":false},"categories":[5,11],"tags":[],"class_list":["post-2862","post","type-post","status-publish","format-standard","hentry","category-complexity","category-nerd-interest"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts\/2862","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2862"}],"version-history":[{"count":16,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts\/2862\/revisions"}],"predecessor-version":[{"id":2885,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts\/2862\/revisions\/2885"}],"wp:attachment":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2862"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2862"},{"taxonomy":"post_tag","embeddable":true,"href
":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2862"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}