I find this part extremely off-putting. While I sympathize with all the other points you’ve raised, I simply reject the idea that vaccinations are part of any political debate. While the 6th Jan “insurrection” narrative is driven by a false political agenda to de-legitimize the right, so that one socio-political group can gain political control for self-serving causes, the mass vaccination campaign, on the other hand, is an international effort to save lives, and has almost nothing to do with gaining political power. Is this effort going to succeed, or does it hold some risks? Maybe. But it’s simply not a political debate (or should not be one).

I think I should expand on that “:P” because, like — whenever one makes a claim of the form “We need to do even more X!”, one can expect a response along the lines of “Oh, and real Communism has never been tried, is that it?” Nobody’s actually said that yet, but, I think this is worth pre-empting.

So, first off, like Scott, I dispute the idea that real Communism would actually be a good idea. 😛 But that’s clearly not the main point here, so let’s grant that it would be.

The problem here is, essentially, monotonicity vs unimodality. That is to say: Did meritocracy create or otherwise increase these problems that I am saying would be solved with more meritocracy? Or has it instead already decreased them, and I am only saying that applying it more would decrease them even further? I would say it is the latter of these. That’s what makes it different from “real Communism has never been tried” — in those cases, the rhetorical Communist is claiming that real Communism would solve the problems that actually-implemented Communism has created or increased. I’m merely claiming that increasing meritocracy would further solve the problems that actually-implemented meritocracy has already done a lot to reduce.

OK, I guess I need to get deeper into things! And revise what I said somewhat.

I think your objections largely fall into a few categories:

1. Criticisms of imperfections in meritocracy’s implementation, the best remedy for which is to implement it better. Attempts to determine who’s best at a thing are still influenced by arbitrary social conventions? That’s a defect to be addressed! Gotta meritocracy harder! 😛

2. An overall sort of fallacy of gray — well, it’s all socially influenced, so the difference doesn’t matter, does it? No, it does! “Socially influenced” isn’t binary, something can be more so or less so; there is a very obvious difference between using a test that maybe has some cultural bias on the one hand, and just handing out government positions to your family on the other hand. Yes, there are a lot of complications and edge cases — hell, I can think of a bunch more you didn’t mention; I didn’t bring them up myself because I was just summarizing. But the existence of edge cases has no bearing on the evaluation of the obvious cases! We can still say what’s more meritocratic or less so, even if nothing is entirely so.

3. Nitpicking exact wording (or possibly injecting things I didn’t say?) — sorry, that’s kind of my fault, I am the one who phrased things the way I did. But, well, as I said, I wasn’t getting too deep into things; my descriptions were short summaries, not complete definitions. But like, no, you do not need to literally find the *best* person to do any given thing; there are obviously diminishing returns to search there. (Not to mention, if you need to pay them, price may be a factor.) The point is not to obtain the single optimum result, but rather to care primarily about the goodness of the result in the first place; and as such to pick people based on relevant criteria, not ethnicity or family connections or what have you. Similarly, you talk about a hell of a lot of things as being excluded from selection meritocracy that I would include and see no reason to exclude; they do indeed seem relevant, which is presumably why you mentioned them. I’m actually kind of confused about where you got this from; I’m not sure that’s so much nitpicking my exact wording anymore as just… assuming things I didn’t actually say, because I don’t think I said anything that implied such narrowness.

Anyway, I guess you’ve forced me to clarify my positions, and try to get the idea closer on the nose. I don’t see that any of your arguments get at the essential point, so let me try to bring things closer to that.

I would say — and these are still summaries rather than definitions, but I think these are much closer to the mark than my last attempt — that the “pro-meritocracy” faction is basically concerned with *getting things done*, or perhaps *doing things well* or *doing things properly*. By contrast, the “anti-meritocracy” faction is basically concerned with *distributing spoils while maintaining plausible deniability*. (Of course, I’ve complicated things somewhat here by now discussing the *people* with these ideas, rather than the ideas themselves, which means now I’m dragging in all sorts of related-but-logically-independent ideas. But, well, that’s easier here, so it’s what I’ll do.)

I don’t know that I have time to go into a full elaboration on that right now, but let me see if I can state the essentials quickly.

The “pro-meritocracy” faction would say that: Doing things well *matters*; making things, inventing things, etc, all this is how people’s lives become better, with the benefits primarily accruing to the *users* of these new ideas and technologies (but, y’know, also to the producers who are paid for them). Doing things poorly has a seriously detrimental effect on other people’s lives! And people are not interchangeable, so who you pick to do a thing can matter a lot. If this results in, say, income inequality, why is that a problem? Nearly everyone’s lives are getting better; hardly anyone is being made worse off. When you see an improvement like that, you take it! Even granting that negative effects do result from such inequality, they’re dominated by the overall improvement in quality of life.

The “anti-meritocracy” faction, by contrast, sees everything as zero-sum, or something like it (“limited good” may be more accurate). Everyone’s lives are getting better? Fundamentally impossible. If one person is profiting, it must be at others’ expense, and therefore illegitimate; there’s no such thing as a mutually beneficial exchange. The only important thing about jobs is that people get money from them — not that they actually create anything or do anything useful. As such, it doesn’t particularly matter whether you hire someone good or someone unqualified; that doesn’t affect anything that really matters, because what really matters is who that income stream is going to, and whether it’s going to your family or faction. What really matters is distributing the spoils.

Now, I say that what it’s really about is distributing the spoils *while maintaining plausible deniability*. Why is that second part so important? Well, because it’s not about openly demanding direct payments! It’s about getting payments by being placed in positions that “deserve” those payments, so as to maintain plausible deniability. This is important, because it’s exactly this need to maintain plausible deniability that makes the whole thing so destructive! They can’t openly demand payments because people wouldn’t go for that; but if such payments were to happen, it would actually be better (if we ignore the incentives that that would create). Like the Mafia demanding you hire them to build your building — if they just directly extorted you, you’d merely be out money; but now, you’re out the money *and* you’ve got a lower-quality building. Direct redistribution is not as destructive as redistribution with plausible deniability.

(All this has come up here before, really — Sarah Constantin wrote a good essay on it that got guest-posted here. TBH maybe I should have linked that first, because honestly that essay was a good part of what helped me crystallize these ideas in the first place; I’m pretty sure I took the “spoils” language from her, for one. I didn’t link it first because, um, I think I forgot how much it had influenced me until reskimming it just now.)

So I hope that clarifies the distinction I’m making. The pro-meritocracy faction doesn’t think much about ranking meritocracy because they think of it as mostly irrelevant to anything that matters; some may consider it an unfortunate side-effect, but either they consider it a side-effect that can be remedied, or just one that doesn’t go anywhere near shifting things into net negative. The anti-meritocracy faction doesn’t think about selection meritocracy much because they think of *that* as mostly irrelevant to anything that matters; they consider it basically just the mechanism by which ranking meritocracy is generated, and don’t see much downside to throwing it out.

Functions and graphs of functions should be taught around the same time as equations in an algebra class (and not in precalculus). That way, one has a visual representation of an equation (i.e. the intersection of two functions on a graph), as well as an understanding of the relationship between the inverses of linear functions and the solving of linear equations. Both linear functions and equations could then be introduced when rational numbers get introduced into the curriculum (i.e. in pre-algebra rather than in algebra/precalculus), as they provide yet another representation of the rational numbers through the slope and solution set of the function.
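As a minimal sketch of the “equation as intersection” idea above (my own illustration, not part of the proposal itself; the function names are made up), one can solve the same linear equation both by finding where two graphs meet and by applying the inverse function:

```python
# Solving 2x + 3 = 7 two ways, mirroring the pedagogy described above:
# (1) as the intersection of the graphs y = 2x + 3 and y = 7;
# (2) by applying the inverse of the linear function f(x) = 2x + 3.

def f(x):
    return 2 * x + 3

def f_inverse(y):
    # Inverse of f: subtract 3, then divide by 2.
    return (y - 3) / 2

# (1) Scan a grid of x-values for the point where the two graphs meet.
intersection = min((x / 100 for x in range(-1000, 1000)),
                   key=lambda x: abs(f(x) - 7))

# (2) The inverse function gives the solution directly.
solution = f_inverse(7)

print(intersection)  # 2.0
print(solution)      # 2.0
```

Both routes land on the same answer, which is exactly the connection between graphs, inverses, and equation-solving being advocated.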

Basic naive set theory also needs to be in the curriculum in an algebra class, so that students can understand what the symbols $\mathbb{N}$, $\mathbb{Z}$, and $\mathbb{Q}$ mean in mathematics and what a solution set of an equation is. It would also prepare students to later understand that the factorial function and the binomial coefficient are functions defined over only the natural numbers (avoiding thorny questions about the Gamma function), and, when defining polynomials and polynomial functions, one could sidestep the issue of defining fractional exponents by restricting exponents to natural numbers or integers for the time being.

Square roots, fractional powers, infinite decimals, e, pi/tau, and similar things usually introduced in an American pre-algebra class should be delayed until real numbers get introduced in a real analysis class, which should be an actual class instead of being largely split between pre-algebra, algebra, and pre-calculus. (I like the name “real analysis” better than “pre-calculus”, as it is a topic in and of itself, rather than just preparation for calculus.) One first introduces sequences and limits of sequences, and defines real numbers as infinite decimals, which are really limits of sequences of finite decimals. One could then extend the domains and ranges of functions to cover the entire set of real numbers. Since students should already understand the pointwise definition of functions from their algebra-with-functions class, one could likewise introduce sequences of polynomial functions and then define various analytic functions (the exponential function, trigonometric functions, inverse power functions and rational functions, logarithms, fractional powers) as limits of sequences of polynomial functions. The relationships between all the various functions could be established later. Various other functions such as the sign function, the absolute value function, and other piecewise (linear) functions would help establish what it means for a function to be continuous or discontinuous, and continuity and limits are the prerequisites for defining the derivative in a calculus class.
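The “analytic functions as limits of polynomial functions” point can be sketched concretely (my own illustration, under the usual Taylor-series definition of exp; the function name is made up): the degree-n partial sums of the exponential series are polynomials, and their values converge to exp(x).

```python
import math

# Sketch of "exp as a limit of polynomial functions": the degree-n Taylor
# partial sum p_n(x) = sum_{k=0}^{n} x^k / k! is a polynomial, and
# p_n(x) -> exp(x) as n grows.

def taylor_exp(x, n):
    # Value at x of the degree-n polynomial approximation to exp.
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

# Successive polynomial approximations at x = 1, approaching e ≈ 2.71828...
approximations = [taylor_exp(1.0, n) for n in (1, 2, 5, 10)]
print(approximations)
print(math.exp(1.0))
```

Each entry of `approximations` comes from an honest polynomial, so students see a sequence of polynomial functions converging pointwise to a non-polynomial limit.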

I suppose one could also introduce, in the real analysis class above, the quadratic formula to solve equations involving quadratic functions, after introducing square roots. However, in practice, most people end up using calculators or computer code/scripting to solve equations over the real numbers, and real-world answers tend to be represented approximately in decimal format rather than in exact algebraic form. So, more illuminating for the student than the quadratic formula would be the algorithms used in numerical real analysis (i.e. possibly in the calculator), such as the bisection method or the secant method, which apply to more than just quadratic equations. The fundamental theorem of algebra should be eliminated from the curriculum entirely; it is a theorem of complex analysis rather than anything fundamental to algebra.
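As a minimal sketch of the bisection method mentioned above (my own illustration; the function name and tolerance are made up), applied to a quadratic with no quadratic formula in sight:

```python
def bisect(f, lo, hi, tol=1e-12):
    # Repeatedly halve [lo, hi], keeping the half where f changes sign.
    assert f(lo) * f(hi) < 0, "f must change sign on [lo, hi]"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Solve x^2 = 2 numerically: the root of f(x) = x^2 - 2 on [1, 2].
root = bisect(lambda x: x * x - 2, 1.0, 2.0)
print(root)  # ≈ 1.4142135623...
```

The same dozen lines solve cubics, exponential equations, and anything else continuous with a sign change, which is the pedagogical point: the algorithm generalizes where the quadratic formula does not.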

Geometry should be delayed until after the real analysis class above, as many of the functions and constants used in geometry (trigonometric functions, pi/tau) are defined in that class first, and it should be taught not as formal geometry but as analytic geometry, for the benefit of those who enter some science or engineering field rather than pure mathematics, i.e. the vast majority of students who go through the school system. For the same reason, geometry should be an elective class rather than required for everybody, as not everybody will become a scientist or engineer. Also to include in geometry are trigonometry, coordinate systems, 2D and 3D vectors and their relation to translations, the various vector operations (dot product, outer product, cross product, geometric product) and their relation to trigonometric functions, and bivectors and their relation to rotations, to complex numbers in 2D space, and to quaternions in 3D space. All of these are far more important in physics and engineering, where the translations and rotations of geometry become continuous trajectories in a phase space rather than discrete operations, than any discussion of conic sections or the formal proofs of Ceva’s theorem that seem to prevail in the classes today.
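The dot-product/trigonometry and rotation/complex-number connections above can be sketched in a few lines (my own illustration, not part of the proposal; the helper name is made up):

```python
import cmath
import math

# (1) The dot product of 2D vectors recovers the cosine of the angle
#     between them: u·v = |u||v|cos(theta).

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

u, v = (1.0, 0.0), (1.0, 1.0)
cos_angle = dot(u, v) / (math.hypot(*u) * math.hypot(*v))
print(math.degrees(math.acos(cos_angle)))  # ≈ 45.0

# (2) Rotation in 2D is multiplication by a unit complex number:
#     rotating u by 90° means multiplying 1+0j by e^{i*pi/2}.
rotated = complex(*u) * cmath.exp(1j * math.pi / 2)
print(rotated.real, rotated.imag)  # ≈ 0.0, 1.0
```

This is exactly the viewpoint that scales up to quaternions in 3D: rotations become multiplications, not case-by-case constructions.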

For those who do not wish to take geometry, introductory classes in probability and statistics, data science, or computer science should be offered as alternatives after the real analysis class, as being able to understand and manipulate data and to write code and scripts are fairly important skills in today’s digital world. Since the real analysis class already provides the necessary prerequisites for differential calculus, calculus could also be offered as an elective, for those students who need it.

First, the kids I’m talking about — the ones I brought up several comments ago — aren’t for the most part talking about academia. Their horizon’s a little broader than that, and even those in grad school will mostly not become academics; most aren’t interested.

Second, I don’t know what could be airy-fairy about this much of hiring, in academia or out:

1. Whatever you’re hiring for will be modded powerfully by the interests/abilities/ambitions of the person doing the thing. A successful academic candidate will read the room, see what powerful hiring factions are interested in, and show them that, but you can say goodbye to any control over that once they arrive and set up, especially once they get tenure. If you’re not flexible here, you’re going to spend a lot of time furious at what you take for betrayals.

2. You need someone who actually wants to do the job. Not “thinks they’re supposed to want to do the job”, not “likes the idea of the job”, not “wants the title,” not “will feel like a colossal failure if they don’t go after this thing they’ve been trained to and have the cv for,” not “sees it as a useful stepping stone to the job they actually want,” not “their mother wants them to have the job title,” not “doesn’t see what else they’d do at this point.” Wants to do the job. As it happens, not a lot of people genuinely want to do research professor work. Parts of it, yes, especially for more money. All or even most of it, no.

3. You need someone who shows signs of being able to do the job well. That’s all you get to see in the average academic hiring interview process, which is part of why departments have to give themselves the out of spending five years or so testing and hazing people post-hire before granting tenure. Indeed part of the carnival of academia is people successfully pawning their duds off on other people, then laughing, laughing, laughing.

4. You need people who won’t be so miserable to everyone else that they skunk the whole project, and aren’t likely to be people who do fine on their own but will fight unproductively or unhappily when they have to work together. This is crucial in academic hiring, where two or three bad hires in this direction can hobble a department for decades.

Or I don’t know — maybe you figure academic hiring is part of that broad category, “low stakes workplace projects.”

To tell you the truth, I don’t think academic hiring’s that tremendously high-stakes apart from lecturer hiring. Most of the work that gets done in any academic department’s going to die there, even when you start collecting Nobelists. Of the work that makes it out the door, a tiny fraction becomes consequential; very little of it was meant to be consequential outside academia anyway. Mostly what you’re doing that’s of consequence, to my mind, is training people who’re going to leave and do other things. But it’s not really the main thing you hire for. Yes, people do have to show some evidence of being able to teach now, and student evaluations, for better or worse, become part of tenure decisions. But on the whole you’re looking for people who’ll go get money (for supporting yet more work that, again, beyond training people, will mostly die within the walls of the department) and the academic prestige. To most of the people who’ll have anything to do with a university — not to mention most who won’t — these things don’t register. In that sense the profs are playing weird tennis.

We do keep coming back around to social definitions of merit, don’t we. Look, why not talk about food. Everybody needs food, that’s an easier target. I’d suggest vaccine-making but you’re only going to wind up at that immigrant lady who got stuck as a soft-money lab assistant forever and a day.
