Opening for a summer student

I’m seeking a talented student for summer of 2009, to work with me in developing and experimenting with a new open-source web application.  I’m open to students from anywhere, though MIT students will receive special consideration for funding reasons.

The web app — tentatively called “Worldview Manager” — is intended to help people ferret out hidden contradictions in their worldviews.  Think of a kindly, patient teacher in a philosophy seminar who never directly accuses students of irrationality, but instead uses Socratic questioning to help them clarify their own beliefs.

The idea is extremely simple (as of course it has to be, if this app is to attract any significant number of users).  The user selects a topic from a list, which might include the following at the beginning:

Climate Change
The Singularity
Computational Complexity
Interpretation of Quantum Mechanics
Quantum Computing
Gay Rights
Gifted Education
Foundations of Mathematics
Strong AI and Philosophy of Mind
Utilitarian Ethics
Animal Rights
Art and Aesthetics

Users will also be able to contribute their own topic files.  (The above list is biased toward those topics about which I feel like I could write a topic file myself.)

After choosing a topic, the user will be presented with a sequence of statements, one at a time and in a random order.  For example, if the topic is Foundations of Mathematics, the statements might include the following:

Math is a cultural construct.
Math privileges male, linear thinking over female, intuitive thinking.
The Continuum Hypothesis is either true or false, even if humans will never know which.
There’s a sense in which integers, real numbers, and other mathematical objects “existed” before humans were around to name them, and will continue to exist after humans are gone.

The user can indicate her level of agreement with each statement by dragging the cursor.

Now the topic file, in addition to the statements themselves, will also contain lists of pairs or sometimes triples of statements that appear (at least to the writer of the topic file) to be in “tension” with one another.  From time to time, the program will search the user’s previous responses for beliefs that appear to be in tension, point out the tension, and give the user the opportunity to adjust one or more beliefs accordingly.  For example, the user might get a message like the following:

You indicated substantial agreement with the statement

If a scientific consensus on climate change existed, then society would have to act accordingly.

and also substantial agreement with the statement

The so-called “consensus” on climate change simply reflects scientists’ liberal beliefs, and therefore does not necessitate action.

These views would seem to be in tension with each other.  Would you like to adjust your belief in one or both statements accordingly?

That’s about all there is to it.  No Bayesianism, no advanced math of any kind (or at least none that the user sees).
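
For concreteness, here's a tiny sketch of what a topic file and the tension check might look like. The field names, the 0-to-1 agreement scale, and the agreement threshold are all placeholders, not design decisions:

```python
# Minimal sketch of a topic file and its tension check; the field names
# ("statements", "tensions"), the 0-to-1 agreement scale, and the
# threshold are hypothetical, not a committed format.
topic = {
    "statements": [
        "If a scientific consensus on climate change existed, "
        "then society would have to act accordingly.",
        "The so-called consensus on climate change simply reflects "
        "scientists' liberal beliefs, and therefore does not necessitate action.",
    ],
    # pairs (or sometimes triples) of statement indices that the
    # topic-file author judges to be in tension
    "tensions": [(0, 1)],
}

def flag_tensions(topic, responses, threshold=0.6):
    """Return the tension groups all of whose statements the user
    substantially agrees with; `responses` maps statement index to
    an agreement level in [0, 1]."""
    return [group for group in topic["tensions"]
            if all(responses.get(i, 0.0) >= threshold for i in group)]

print(flag_tensions(topic, {0: 0.9, 1: 0.8}))  # [(0, 1)]
print(flag_tensions(topic, {0: 0.9, 1: 0.2}))  # []
```

The point of the sketch is just that the core loop really is this simple: everything interesting lives in the topic files, not in the code.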

As you may have gathered, the writing of topic files is not a “value-neutral” activity: the choice of statements, and of which statements are in tension with which other ones, will necessarily reflect the writer’s interests and biases.  This seems completely unavoidable to me.  The goal, however, will be to adhere as closely as is practical to Wikipedia’s NPOV standard.  And thus, for example, any well-written topic file ought to admit “multiple equilibria”; that is, multiple points of view that are genuinely different from one another but all more-or-less internally consistent.

The student’s responsibilities for this project will be as follows:

  • Write, debug, and document the web app.  This sounds straightforward, but it’ll be important to get the details right.  I’m not even sure which development tools would be best—e.g., whether we should use Java or JavaScript, do all computation on the server side, etc.—and will rely on you to make implementation decisions.
  • Write topic files.  I can create many of the files myself, but it would be great if you could pitch in with your own ideas.
  • Help run experiments with real users.
  • Help write up a paper about the project.

If there’s time, we could also add more advanced functionality to Worldview Manager.  Your own ideas are more than welcome, but here are a few possibilities:

  • Present statements to the user in a non-random order that more rapidly uncovers tensions.
  • Allow users to register for accounts, and save their “worldviews” to work on later.
  • Give users the ability to compare worldviews against their friends’, with large disagreements flagged for special consideration.
  • Give users the ability to use a local search or backtrack algorithm to decrease the total “tension” in their worldviews, while changing their stated beliefs by the minimum possible amount.
  • Enable adaptive follow-up questions.  That is, once two beliefs in tension have been uncovered, the user can be queried more specifically on how she wants to resolve the apparent contradiction.
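
As a toy illustration of the local-search bullet above, here's one greedy way it might work. The soft tension measure, the step size, and the drift penalty are all made-up placeholders:

```python
import itertools
import math

def total_tension(beliefs, tensions):
    # Soft tension of a group: the product of its members' agreement
    # levels (high only when the user agrees with all of them at once).
    return sum(math.prod(beliefs[i] for i in g) for g in tensions)

def resolve(beliefs, tensions, step=0.1, penalty=0.2):
    """Greedy local search: repeatedly nudge one belief by `step` in the
    direction that most reduces (tension + penalty * total drift from the
    stated beliefs), so beliefs move as little as possible."""
    original = dict(beliefs)
    def cost(b):
        drift = sum(abs(b[i] - original[i]) for i in b)
        return total_tension(b, tensions) + penalty * drift
    beliefs = dict(beliefs)
    while True:
        best = None
        for i, delta in itertools.product(beliefs, (-step, step)):
            trial = dict(beliefs)
            trial[i] = min(1.0, max(0.0, trial[i] + delta))
            if cost(trial) < cost(beliefs) and (best is None or cost(trial) < cost(best)):
                best = trial
        if best is None:
            return beliefs
        beliefs = best
```

A greedy search like this can get stuck in local minima; the backtracking variant mentioned in the list would explore more of the space.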

I’m looking for someone smart, curious, enthusiastic, and hard-working, who has experience with the development of web applications (a work sample is requested).  Grad students, undergrads, high school students, nursery school students … it’s what you can do that interests me.

I expect the internship to last about three months, but am flexible with dates.  Note that in the year or so since I started at MIT, I’ve already worked with six undergraduate students, and three of these interactions have led or will lead to published papers.

If you’re interested, send a cover letter, cv, and link to a work sample to aaronson at csail mit edu.  If you want to tell me why the Worldview Manager idea is idiotic and misguided, use the comments section as usual.

Update (10/15): In a somewhat related spirit, Eric Schwitzgebel at UC Riverside points me to a study that he and a colleague are conducting, on whether professional philosophers respond differently than laypeople to ethical dilemmas.  Shtetl-Optimized readers are encouraged to participate.

68 Responses to “Opening for a summer student”

  1. Brian Gilbert Says:

    This idea is idiotic and misguided.

    Okay, got that out of the way. No, seriously, I’m looking forward to the results of this. It looks like it could be interesting and enlightening, though it will (of course) require that the author(s) of these “worldview files” fairly admit to the existence of multiple points of equilibrium. I know you already acknowledged that; I just had to reiterate because I know that you and I agree on so very little, and I’m interested to see what this project reveals about myself and others.

  2. Will Maier Says:

    I’m not a student (between undergrad and grad) at the moment, but
    this strikes me as a fascinating idea. I’d suggest a few tweaks,
    though. First, rather than impose a category structure, it might be
    more interesting to allow users (or an editor) to tag each
    statement. This would allow unforeseen relationships between
    statements to bubble up (even for statements that have no defined
    category).

    Second, I would allow users (or, again, an editor) to vote on the
    degree of tension between two statements, producing a weighted
    triple. Again, this would increase the amount of participation and,
    hopefully, make the result more dynamic.

    This structure lends itself to representation as a DAG. My
    expectation is that the graph would organize itself into two main
    clusters of densely connected vertices with little or no connection
    between the two clusters, though I may just be channeling the
    conventional wisdom about our bipolarized political and cultural
    landscape.

    Assuming you can come up with some incentive for participation, this
    sounds like a great project. Perhaps the interface would succeed
    best as a widget on some social networking hub (about which I know
    rather little)…

  3. Chris Says:

    So it’s a never-ending philosophy survey?

  4. g Says:

    You might want to take a look at these …

    … which do something along the lines you’re describing, on a small scale. They’re advertised as games rather than as philosophy seminars, which is probably a more effective way of getting people to take part.

  5. Scott Says:

    g: Thanks for the link! That is indeed extremely close to what I had in mind. I just took the checkup, and had a “tension score” of 7%, with exactly two answers in tension with each other:

    You agreed that:
    In certain circumstances, it might be desirable to discriminate positively in favour of a person as recompense for harms done to him/her in the past
    And disagreed that:
    It is not always right to judge individuals solely on their merits

    Exercise for the reader: define the word “merits” so as to reconcile these beliefs…

  6. Scott Says:

    Alright, I also completed the God test without biting any bullets. 🙂

  7. roland Says:

    If one defines the word “merits” more broadly, the answer tends to become meaningless.

  8. Mgccl Says:

    Formalize statements into first-order logic, but show them in normal language. Then link each statement with its logical form and check whether the forms contradict one another.

    For example.

    If a scientific consensus on climate change existed, then society would have to act accordingly.
    ∃(scientific consensus on climate change)→(society would have to act accordingly).

    The so-called “consensus” on climate change simply reflects scientists’ liberal beliefs, and therefore does not necessitate action. [this one has two parts]
    ∀(scientific consensus on climate change)→¬(society would have to act accordingly)
    ∀(scientific consensus on climate change)→(scientists’ liberal beliefs)
    ∀(liberal beliefs)→¬(society would have to act accordingly)

    Of course, instead of those words, the logic expressions would just use some global variables.
    In this case, implementing the entire system only requires implementing a first-order logic system, and of course a fuzzy logic system too, since people can agree partially.
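
    A propositional (rather than full first-order) toy version of this check could just brute-force truth assignments; the variable names below are of course made up:

```python
from itertools import product

def consistent(formulas, variables):
    """Brute-force propositional consistency check: is there any truth
    assignment under which every formula holds?"""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(f(assignment) for f in formulas):
            return True
    return False

# C = "a scientific consensus on climate change exists"
# A = "society would have to act accordingly"
formulas = [
    lambda v: (not v["C"]) or v["A"],      # statement 1: C implies A
    lambda v: (not v["C"]) or not v["A"],  # statement 2 (in part): C implies not-A
    lambda v: v["C"],                      # supposing the consensus does exist
]
print(consistent(formulas, ["C", "A"]))      # False: jointly unsatisfiable
print(consistent(formulas[:2], ["C", "A"]))  # True: compatible if C is false
```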

  9. Z Says:

    Consider this statement:

    >Math privileges male, linear thinking over female, intuitive thinking.

    Now suppose that I agree that “linear” (I don’t know exactly what that means, but I am interpreting it to mean “logical”) thinking is good for math, but I don’t agree that “male=linear” and “female=intuitive”. How should I rate the statement in this case? One should be very careful that the statements are well-formed, in the sense that they can be assigned a ‘true’ or ‘false’ value.

    Also, inconsistencies consisting of just two statements seem too simplistic – those two statements would just have to be the (rephrased) negations of each other. In any case, good luck with the project!

  10. Scott Says:

    Mgccl: Not worth the effort.

  11. Scott Says:

    Also, inconsistencies consisting of just two statements seem too simplistic – those two statements would just have to be the (rephrased) negations of each other.

    Z: Not true! For example, you could have a general claim and a specific counterexample to that claim.

    I predict that when we do the experiments, we’ll find no shortage of tensions involving 2 statements only.

  12. Mohammad Says:

    nice idea. i’d suggest implementing a facebook application to get more responses.

    thanks, g, for the links!

  13. roland Says:

    But a reasonably intelligent person notices logical inconsistencies in a pair of sentences anyway, so that situation is not very illuminating.

  14. Aspiring Vulcan Says:

    Great idea. Making it into a Facebook app is also a good idea, since it’ll get you lots of participants.

  15. Scott Says:

    Roland: Reasonably intelligent people notice inconsistencies once brought to their conscious awareness, though they often respond by rationalizing them. What interests me are the “latent” or “implicit” inconsistencies in people’s worldviews. I conjecture that not only is our mental landscape strewn with these latent inconsistencies, but “managing” them—that is, preventing them from being noticed at inconvenient times—is one of the central features of human cognition. Careful inconsistency-management seems like the key to accepting supernatural, self-serving, or politically correct beliefs (all of which can have enormous survival value) while simultaneously keeping some sort of grip on reality.

  16. Mohammad Says:

    ps. an additional advantage of implementing this on FB is that you’ll get additional data on how people’s world views are correlated with those of their friends (perhaps some of our “inconsistencies” can be explained by our desire to conform with our potentially diverse social circle). you might be able to publish another paper based on this!

    pls let us know of your results!

  17. Peter de Blanc Says:

    I’m worried that topic creators will write multi-part statements, and link them to things which contradict only part of the statement.

  18. Cody Says:

    Thanks for the links g, those were fun.
    The project sounds very interesting, I wish I had the programming background to apply. Also I second (third?) the facebook stuff, though obviously that could take a back seat to the bulk of the project.

    It might be interesting to add a feedback mechanism to refine the questioning and sort of document and flesh out the various equilibria of world views. I can imagine there could be a lot of interesting directions to move in with that project.
    Good luck!

  19. Lewis Powell Says:

    You might want to use any of a number of introductory philosophy readers as a good way to generate standard sets of statements for topic files, since many of the articles contained in those will address the sorts of issues you have in mind, and will already contain arguments relying on the relevant general principles.

    For example, Peter Singer’s article “Famine, Affluence, and Morality” can be a launching point for a more neutral presentation of the relationship among one’s moral intuitions, rather than as a means of arguing for a particular moral stance.

  20. michael vassar Says:

    Thanks Scott!
    I have actually had a project closely resembling this under development for SIAI for the last 4 months. It’s not nearly as ambitious as what you describe above, but should still be an interesting thing for you to look at when thinking this through. I will e-mail you contact info for its creators.

  21. asdf Says:

    I’m trying to remember, something very similar to that program was written a few years ago, but it was rather unfriendly. It quizzed people about their theological beliefs and tried to trap them in contradictions. I may surf around for a link later.

  22. asdf Says:

    Yeah, here:

  23. Jeremy Says:

    Worldview Manager should be made into a social news site. Think about that.

  24. James Says:

    Great idea! But it does bring an old line to mind: Do I contradict myself? Very well, then I contradict myself, I am large, I contain multitudes.

  25. Charles Says:

    I thought that the philosophersnet questions and the tension resolution were both very poor. The tensions detected seemed to be nothing more than explicitly programmed pairs, with no attempt at deeper interactions or fuzziness/nuances.

    I would hope that any project you do addresses these issues. One possibility would be to give several choices for each answer:

    1. Math privileges linear thinking over intuitive thinking.
    3. Math does not privilege linear thinking or intuitive thinking.
    5. Math privileges intuitive thinking over linear thinking.
    N. No knowledge/opinion

    Or even a triangle of feelings: top left = view #1, bottom left = view #2, right center = no opinion; middle left would be “I have an opinion between the two” and tending rightward would indicate increased uncertainty.

    I might mark a question about the privatization of Social Security toward the right to indicate that while I have feelings, I harbor significant uncertainty about the rightness of my understanding of the issue and about the ‘correctness’ of my preference.

  26. Job Says:

    Basically a node/edge graph then, where nodes are statements and edges are “builds-upon” and “opposes” relationships, right?

    You would disagree with statement X if you disagree with one or more statements X builds upon (maybe quantify the level of disagreement by the number of disagreeable built-upon nodes), or you could also disagree with X if you agree with Y and Y builds upon one of the nodes X opposes (directly or indirectly).

    To decide which statements to present to the user, have the user pick a starting statement and then display adjacent nodes within a given radius, expanding/moving the radius as the session progresses, letting the user know what else he or she agrees or disagrees with (and allowing the user to correct the result by entering statements and connections, thereby acquiring more content).

    It could get complicated (in a good way), but entering the statements with “builds-upon” and “opposes” relationships, instead of “tension-with” relationships, is less ambiguous and more conducive to NPOV, because statements can be deconstructed into fragments where the disagreement is close to evident.
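
    In code, the builds-upon propagation might look something like this (a toy sketch with made-up statement names):

```python
# Sketch of the "builds-upon" graph idea; the statement names and the
# simple propagation rule are illustrative only.
builds_on = {
    "X": ["P"],  # statement X builds upon statement P
    "Y": ["X"],  # Y builds upon X (and hence, indirectly, upon P)
}

def implied_disagreements(builds_on, disagreed):
    """Transitively propagate disagreement: rejecting a statement casts
    doubt on everything that directly or indirectly builds upon it."""
    doubted = set(disagreed)
    changed = True
    while changed:
        changed = False
        for stmt, deps in builds_on.items():
            if stmt not in doubted and any(d in doubted for d in deps):
                doubted.add(stmt)
                changed = True
    return doubted - set(disagreed)

print(implied_disagreements(builds_on, {"P"}))  # both X and Y become doubtful
```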

  27. math idiot Says:


    The Nobel Prizes for science and medicine have been announced for this year. So, what is the Nobel prize equivalent for Computer Science?

  28. Syvert Says:

    Is it contradictory?

    This is slightly off-topic, but I think a nice test case for the program would be the following. (Maybe it really is uninteresting because I don’t understand enough about how the program would work. In that case please excuse me (the same goes for the language (I am not a native user of English))).

    Take late 18th century conservatism (in the sense that for instance the result of the French Revolution was not bad in and of itself, but the way it happened was not ok either). Conservatism in the sense that one should be careful with too big changes in a short period of time, and that if one finds a complex thing one should not make too big changes before one knows more about the complex thing. Now, exclude the bigotry (of course). Merge it with a market-oriented free trade view, but allow for Keynesian sympathies and the belief that progressive taxation is in principle a fair system. Combine it with a non-materialistic (in the scientific sense) non- view, but still believing that science is the most important way forward for the good of the human species.

    How would the program go about finding contradictions? I guess this example includes a few ambiguities and some unclear statements, but is this kind of complexity (relatively speaking) in principle workable within the scope of the proposed program?

  29. Syvert Says:

    Regarding the entry above it should be: Combine it with a non-materialistic (in the scientific sense) and non-“everything-is-reductionistic”-view, but still believing that science is the most important way forward for the good of the human species.

  30. Scott Says:

    So, what is the Nobel prize equivalent for Computer Science?

    Turing award.

  31. Syvert Says:

    Ok. I get it. Nice try though?

  32. SnowLeopard Says:

    A major problem I see with this project is that contradictions and inconsistencies are much harder to pin down in a philosophical (or quotidian) context than they are in a mathematical one. I foresee the major frustrations of using such a program being (1) doubting there’s really a contradiction because I object to the wording of one of the statements; (2) doubting there’s really a contradiction because I object to the relevance of one of the statements; (3) doubting there’s really a contradiction because one of the statements may or may not have the meaning or logical implications the programmer said it did; and (4) doubting there’s really a contradiction because I believe one or more of the statements to be meaningless. I see this all the time as an attorney. There are deeper and richer problems, as well, such as the fact that people’s actual private opinions often depart radically (and heretically) from the orthodoxy of the party/religion they nominally espouse. And is the Christian Trinity supposed to be a contradiction or not? These sorts of issues raise the specter of the diagnostic procedure ending up reflecting the programmer’s preconceptions or prejudices in subtle ways. None of this is intended to discourage the project — I congratulate you on its ambition, which is entirely consistent with the scope of my own projects. But to get it right, I imagine it’ll take much, much longer than three months.

  33. John Sidles Says:

    Hmmm … it’s been mighty slow this past week in the old blogo-sphere … and darn you Scott for addicting me (and doubtless many other folks too) to Intrade market quotations … which I too now check “about 200,000 times per day.” 🙂

    One natural-but-unanswered question about this web-app has to do with utility: after the “tensions” in one’s “world-view” have been reconciled … what’s next?

    The answer cannot be: “A beautiful planet with six-going-on-ten billion people living on it slides inexorably into the dumpster, but at least folks feel better about it.”

    Is this one of the tensions that needs to be adjusted?

  34. Pat Cahalan Says:

    From the God quiz, which I’m mangling through to see if I agree with the quiz itself, I’ve already found one pretty major blooper:

    “The contradiction is that on the first occasion (Loch Ness monster) you agreed that the absence of evidence or argument is enough to rationally justify belief in the non-existence of the Loch Ness monster, but on this occasion (God), you do not.”

    There is only a contradiction here if you assume an equivalency between physical and metaphysical constructs, which is a pretty huge assumption.

    Absence of physical evidence regarding the existence of a physical entity over time is compelling evidence in and of itself that the physical object does not exist. (We must assume that physical objects have some sort of effect on their environment, and thus that their existence can be inferred by indirect observation, as with black holes; lacking any such indirect observations, in addition to the lack of direct observations, is evidence that the physical construct itself is unlikely to exist.)

    Absence of physical evidence regarding the existence of a metaphysical entity is neither here nor there; one cannot assume that a metaphysical entity will be observable through indirect observation.

    This is not a contradiction if you consider the term “evidence” to be by its very nature observable or indirectly observable physical phenomena.

  35. Pat Cahalan Says:

    “In saying that God has the freedom and power to do that which is logically impossible (like creating square circles), you are saying that any discussion of God and ultimate reality cannot be constrained by basic principles of rationality. This would seem to make rational discourse about God impossible.”

    This again is based upon a whole ’nuther bunch of huge assumptions; if one assumes instead that “God”, Itself/Himself/Herself/Themselves, is a rational being, then the “freedom and power” to violate rationality is largely irrelevant. God may have the power to act irrationally; this does not mean that God is necessarily irrational, perverse, or insane, and it therefore does not follow that reality itself is irrational.

    Rational discourse about an omniscient, omnipotent being is of course possible if one assumes that the omniscient, omnipotent being is itself rational.

    (certainly this is itself an assumption)

    “To reject rational constraints on religious discourse in this fashion requires accepting that religious convictions, including your religious convictions, are beyond any debate or rational discussion. This is to bite a bullet.”

    One is not necessarily rejecting rational constraints on religious discourse by saying that God has the *ability* to be irrational; this would only be true if one says that God *is* irrational. It’s perfectly possible to have a logical, rational discussion between two different theological constructs if one of the base principles in both constructs is that God is not batshit insane.

    This test was created by someone who hasn’t read much in the way of metaphysical philosophy 🙂

  36. Pat Cahalan Says:

    Oh, germane to the original post – I think this is a very, very interesting idea. I find myself often in the position of trying to dig through people’s contradictions in their philosophy (it is distressingly common for people to hold as base principles two concepts that are irreconcilable).

    I think it would be of a much higher value, intellectually speaking, if you can get some pretty high quality exams to start. You need some people who are already pretty well versed in some complex thinking to design tests that do a good job of maintaining consistency.

  37. John Sidles Says:

    Ok, that’s enough philosophical ramblings, Pat.

    It’s truth-or-dare time … Who scored better? You or your wife/significant other/your cat? 🙂

    I’ll go first. My wife creamed me on both tests.

  38. Pat Cahalan Says:

    I’ll bounce it to my wife and see. The cat(s) disdain such things, being above consideration of such trivial matters as omnipotent beings.

    I have a tension quotient of 7%. I took two hits on the God exam.

  39. Pat Cahalan Says:

    @ Scott

    > Reasonably intelligent people notice inconsistencies once
    > brought to their conscious awareness, though they often
    > respond by rationalizing them.

    This reminds me of a friend’s old .sig, that went something like this:

    I myself manage to hold large numbers of wholly irreconcilable views simultaneously, without the least difficulty. I do not believe others are less versatile.
    — Rushdie, “Shame”

  40. KaoriBlue Says:

    “Reasonably intelligent people notice inconsistencies once brought to their conscious awareness, though they often respond by rationalizing them.”

    Humans seem to use belief as a tool to motivate action (no, I’m not just talking about religion), and there’s positive feedback between the two. Believing in/supporting view A leads to action X; believing in/supporting view B leads to action Y. Even though A and B may be incompatible, X and Y can both be viewed as positive/favorable/etc, simultaneously reinforcing A & B. Rationalization seems like a good way to sidestep having to change one’s behavior as a result of mere logical inconsistencies.

    Perhaps this is evolutionarily advantageous for individuals and/or societies… just what sort of Pandora’s box is Scott trying to open?

  41. John Sidles Says:

    Consistency in moral reasoning?

    Perhaps it’s time for us to consider the words of Socrates … Socrates Fortlow, that is … yes, I was pleased to discover that everyone’s favorite hard-case hero of philosophy noir is back in action in Walter Mosley’s new book The Right Mistake: The Further Philosophical Investigations of Socrates Fortlow.

    Every new Walter Mosley book provides a rich vein of informatic quotes for my database. For example …

    Socrates is sitting in the Redondo Beach jail, considering whether to take the stand to testify in his own defense. The charge is murder.

    Socrates’ friend Chaim Zetel (who is a rag-picker, and much more) visits him. The following dialog ensues:

    Chaim: The one thing, the only thing, they cannot ask of us is silence, my friend. We all die. We are all dragged away. But we do not have to go quietly. We owe it to our children and to our friends and even to our enemies to speak out.

    Socrates: Why the enemy?

    Chaim: He needs to hear the truth too.

    Socrates: But what if I’m not sure about what is true?

    Chaim: Even if you are not, the truth is still there.

    How is what Chaim and Socrates are talking about relevant to this thread?

    Perhaps part of the relevance is that philosophical truth—for us mortal mammals—demands broader foundations than logical axioms can easily capture … and is richer for it?

    Or maybe, part of the relevance is that taking philosophy seriously can incur larger-than-expected informatic costs? ’Cuz hey, “Denken ist schwer!” (“Thinking is hard!”) … a sentiment that Socrates Fortlow proves to be very familiar with!

    Almost certainly, part of the relevance of Socrates Fortlow is that we are all of us very lucky to be alive, in a generation in which thoughtful, erudite, wonderfully transgressive, and enormously entertaining authors like Walter Mosley are writing. 🙂

  42. Tim Says:

    I think pointing out contradictions that the author sees is asking to debilitate the program. It would be much more interesting (and much more robust) to allow users to identify for themselves opinions which might be in conflict. All that would be required is the database of assertions. Give the users random pairings of 2 or 3 assertions. For each pairing, allow the user to mark which assertions are in conflict. If the number of assertions is very small (say 3 or 4) then it is not too much to ask the user to judge the consistency or inconsistency of all assertions. Then the survey of agreement or disagreement is done as above and readjustment is allowed, this time also allowing a person to alter which assertions are inconsistent with each other.

    The trouble with this is of course that if you want to consider a reasonable number of assertions, then asking the user to judge the consistency of every pair is simply not doable. The problem is mitigated by asking each user to judge for consistency only some subset of assertions and to carry over judgements of inconsistency from one user to another, constructing some sort of inconsistency ratio: (number of times judged inconsistent)/(number of times judged).

    Even this, of course, cannot hope to give a complete sampling of the consistency of the assertions. But the level of detail of inconsistency data which could be recovered this way is much greater than one could possibly hope to generate by allowing the author control of what is supposed to be logically consistent.

    Additionally, this has the benefit that the app comes off as more a vehicle for the beliefs of the user(s) than a didactic teaching tool.
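
    The bookkeeping for that pooled ratio is simple; something like the following (all names are illustrative):

```python
from collections import defaultdict

# Sketch of the pooled "inconsistency ratio": each user judges only a small
# subset of statement pairs, and judgements are carried over between users.
judged = defaultdict(int)  # pair -> number of times judged
marked = defaultdict(int)  # pair -> number of times judged inconsistent

def record_judgement(a, b, is_inconsistent):
    pair = tuple(sorted((a, b)))  # normalize so (a, b) and (b, a) pool together
    judged[pair] += 1
    if is_inconsistent:
        marked[pair] += 1

def inconsistency_ratio(a, b):
    pair = tuple(sorted((a, b)))
    return marked[pair] / judged[pair] if judged[pair] else 0.0

record_judgement("S1", "S2", True)
record_judgement("S2", "S1", False)
print(inconsistency_ratio("S1", "S2"))  # 0.5
```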

  43. Mark Says:

    Have you considered the possibility that, in the same way a logical deduction is being equated with truth, understanding a thing is just an illusion? If a thing is logical, that only means that it appeals to the reasoning facility of the brain, not that it’s the truth.

    It was perfectly logical to a shepherd tending his flock thousands of years ago, to deduce that the earth on which he stood was stationary. Of course, it was not the truth.

    And have you noticed in history all the things that were considered, at the time, to have been the arrival of an understanding? The “aha!” following a period of learned inquiry, was never anything BUT a sensation.

    It plain amazes me how, today more than ever, the reproducibility of various phenomena is being bundled together as truth. It’s no such thing. And the illusion is more than that: so many people really think that man has arrived somewhere, as if the knowledge we have acquired by now is really somewhere. I mean, a hundred years ago we were still groping for answers, but NOW, oh yeah NOW, we are really there. It’s an illusion. Even your own individual sensation that you understand something is an illusion.

    When you “feel” you understand something, your brain is merely delivering to your emotions the sensation that it is satisfied that what you previously “understood” has now agreed to find a comfortable place for the new thing you have just acquired.

    And everyone is the same, from babysitter to physician, in that, a new piece of information that doesn’t “fit” is either ignored or “put on the shelf for a while”. If we were so smart, why couldn’t we make more of the things that don’t fit instead of taking only some of what we observe and simply rearranging them until the most pieces fit? We’re not so smart.

  44. Mark Says:

    The god test seems like nothing more than a switch statement somewhere that, depending upon your choices, responds with the same old canned arguments. I hope your project does one better than that.

    It is an inconsistency to conclude there is no god, or that god is not good, because there is evil (if god is good then why doesn’t he stop pain, blah blah blah). Since pain and suffering are a human experience, I can ask just as well: “If there is no god, then where do GOOD things come from?” If you leap from there to declare that only if there were no god, or if god were not good, could both good and evil exist, then you have created a bigger inconsistency. Because don’t forget that the first question proceeded from the premise that pain and suffering are indeed a human experience.

    And it is an inconsistency to conclude that our observations of the universe do not provide any evidence there is a god. If the universe is essentially beyond our finding out, I mean if we hardly understand spit about the visible universe, and there is such wide disagreement among the learned about many things, and there is NO hope we can ever know the unknowable, how can you conclude that therefore there is no god? If for the sake of argument there is a god who created it all and sustains it all, and if he did not want to be found out, it would be impossible to find god out, in every sense of the word. The only way anyone could possibly know god, then, would be if he were to reveal himself to you.
    And then you would have to consider the possibility that he has indeed done so, and that to those he has not revealed himself, the believers appear irregular, having no objective basis for their belief in god.

    Be sure to consider these arguments in your switch statement, if it is going to perform better than the kindergarten variety linked to above.

  45. Peter Says:

    In Plato’s Meno, Socrates uses something like the “Socratic Method” to lead the slave to knowledge he did not have before. And what!? Socrates uses his “method” to conclude that the slave must have had a life before this one.
    It seemed to Meno that this was very rational…

    Socrates was against the Sophists: for them, winning by any means was the ultimate aim in a debate, not thinking through a problem to get to knowledge.

    To conquer an irrational belief with the Socratic Method? Impossible, if the belief is irrational! Ever tried to convince a neurotic?

    Socrates himself makes it clear that his method only works for knowledge, not for virtues/opinions (just as the great statesman Themistocles could not teach his virtue to his son: virtue does not belong to the field of knowledge).

    Suppose now by way of a miracle that the young Themistocles used your final version of this program and indeed, became clear in his mind that his opinions were irrational: gone was Themistocles!

    Would the Spartans have defended Thermopylae? Who could have argued for the consequences of their belief or virtue?

    The desire for such a program is irrational. So, be irrational (:-))

  46. John Sidles Says:

    Peter says: “The desire for such a program is irrational …”

    With respect, Peter, the desire definitely is not irrational!

    Because, from the viewpoint of simulation theory, Scott’s program is attempting to simulate the behavior of a wise philosopher (or counselor). And surely the desire to construct such a simulation is rational, for reasons (reaching into my database) we find articulated in Feynman’s Simulating Physics with Computers:

    “The discovery of computers and thinking about computers has turned out to be extremely useful in many branches of human reasoning. For instance, we never really understood how lousy our understanding of languages was, the theory of grammar and all that stuff, until we tried to make a computer which would be able to understand language. We tried to learn a great deal about psychology by trying to understand how computers work. There are interesting philosophical questions about reasoning, and relation, observation, and measurement, and so on, which computers have stimulated us to think about anew, with new types of thinking.”

    IMHO, Feynman’s observations are (in themselves) sufficient to justify Scott’s project as being wholly logical and admirably consonant with a scientific tradition that began with Spinoza and Leibniz and continues to the present day.

  47. Peter Says:

    There is no wise philosopher – as one can observe in Plato’s Meno, when Socrates concludes there was a life before this one.

    Mathematics is just a place where it becomes clear how a human may think. Computers only go for the calculable. And the mathematical truths a computer can produce are at most countably infinite. But there are uncountably many truths.
    Humans have access, at least by way of mathematics, to something like the uncountably infinite. How is that possible?

    Spinoza and Leibniz have a pre-Cantor way of thinking: 1, 2, … and oops: there is (only one) infinity. Philosophy after Cantor has challenges other than those that can be formulated in a computer. Although Spinoza, Leibniz, Kant and Hegel are seminal for new possibilities.

    So, there your student comes again. Let us say his name is Paul, Paul Cohen. He claims that the Axiom of Choice and ZFC are two different matters. You look into your computer and what? No more Paul Cohen?!

    I stay with my point and will focus: such a program is the desire of a Prussian teacher, who wants to control everything (see the impact of Wilhelm von Humboldt and what became of it; he preferred an open way of education: the gymnasium).

    It is about a “program”, but the desire is to tell the student that he is irrational. I cite: “Think of a kindly, patient teacher in a philosophy seminar who never directly accuses students of irrationality…”. Education with a hammer!
    Besides: Feynman says “…which computers have stimulated US to think about anew…”

    The question is as old as Socrates story with the slave in the Meno: how can a student find ways of thinking he never dreamed of?
    Last remark: Socrates proves a mathematical truth, counterintuitive for his era (read the Meno). How would you do that in teaching?

  48. Pat Cahalan Says:

    > I stay with my point and will focus: such a program is
    > the desire of a Preussian teacher, who wants to
    > control everything

    This presupposes that the program is assumed to be able to lead the student to enlightenment by itself, as opposed to merely pointing students in a direction for further investigation.

    Pointing out that someone has a potential conflict due to two (or more) philosophical axioms being difficult to reconcile is different from assuming that those philosophical axioms are always irreconcilable *or* that one of the two axioms is in fact “the Truth”, and attempting to shoehorn the student into following a predetermined path.

    If the program merely points out inconsistencies, without passing judgment on which side of the inconsistency is the “right” one, it is up to the student to figure out how to reconcile the inconsistency.
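    That neutral behaviour is simple to sketch: flag a pair the topic file marks as “in tension” whenever the user substantially agrees with both statements, and say nothing about which side is right. All names and the 0.7 threshold below are hypothetical, assuming agreement levels are normalized to [0, 1]:

```python
# Hypothetical names throughout; the post itself specifies no API.
TENSIONS = [("s1", "s2")]  # pairs the topic file marks as "in tension"

def flag_tensions(responses, tensions, threshold=0.7):
    """Return the marked pairs where the user substantially agrees with
    both statements -- without judging which side is 'right'."""
    flagged = []
    for a, b in tensions:
        if responses.get(a, 0.0) >= threshold and responses.get(b, 0.0) >= threshold:
            flagged.append((a, b))
    return flagged

user = {"s1": 0.9, "s2": 0.8, "s3": 0.2}
print(flag_tensions(user, TENSIONS))  # [('s1', 's2')]
```

    Whether the user then lowers one agreement level, lowers both, or leaves them alone is entirely up to her; the function only surfaces the pair.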

  49. Peter Says:

    Merely pointing out inconsistencies was not what Socrates did. He was after knowledge. In the case of my example, the axiom of choice and ZFC, the only thing the program could say is that ZFC already includes the axiom of choice. So the program can only state: you are inconsistent. It can never express the point that the choice axiom is “separate” from ZF.
    The program is at best only like a book with all the truths indexed in it, like a table which proves the countability of the integers. And the worst part is that it dismisses inconsistent but fruitful ideas.
    To state it differently: it belongs to the creativity of humans that they again and again find truths which are not in such a table. I am really amazed that such a program is endorsed by people who teach about (n-dimensional) Turing machines.
    Happily enough, such a machine does not always halt. As a metaphor: the table really can go to countable infinity.

  50. Oh'Common Says:

    I regard the idea of the system as quite populist, and the whole project meaningless.
    But the real issue here is: what does this system have to do with computer science?

    I think the answer is outright “nothing”. But if someone really insists on implementing this project, then it should perhaps be considered in the social sciences, not CS.

  51. John Sidles Says:

    Oh’Common asks: “What does this system have to do with computer science?”

    We can answer by borrowing a phrase from Dave Bacon’s Quantum Pontiff blog: “This is a battle for the soul of future computer scientists.”

    That battle being: is computer science still seriously grappling with the broad but ill-defined challenges of artificial intelligence? Or is it narrowing down to become a rigorous theorem-proving discipline whose main focus is complexity and computation?

    One mark of Scott’s blog being a Bohr-style “Great Blog” is that it can be read either way. 🙂

  52. Simina Says:

    This is awesome, now I wish I had more recent experience with web development.

  53. Andrew Marshall Says:

    Sounds like the multiple-choice-quiz-from-hell.

    You might try to get people on to participate. Those people endure sometimes hundreds of these kinds of questions.

    The major obstruction to the usefulness of this program: usually intelligent people who have spent any time thinking about these philosophical questions will have nuanced ways of understanding language and context, different from both yours and your program’s. In this case, you need a system that can understand the user’s meaning. That requires intelligence.

    On the other hand there are people who have not spent much time thinking about these philosophical questions. I doubt many of them care about making their world view consistent.

  54. Gil Kalai Says:

    Dear Scott,

    I do not understand your example:

    “You indicated substantial agreement with the statement

    1) If a scientific consensus on climate change existed, then society would have to act accordingly.

    and also substantial agreement with the statement

    2) The so-called “consensus” on climate change simply reflects scientists’ liberal beliefs, and therefore does not necessitate action.

    These views would seem to be in tension with each other. Would you like to adjust your belief in one or both statements accordingly?”

    Why do these two statements seem to be in tension with each other?

    Agreeing with 2) suggests that the person does not think there is a consensus on the matter; point 1) asserts that had a consensus existed, society would have to act. Why is there a tension here?

    best, Gil

  55. Job Says:

    Oh’Common: “What does this system have to do with computer science?
    I think the answer is outright ‘nothing’”

    Really, nothing at all? It’s a software system; by definition its implementation, design, capabilities, and limitations are well within the field of Computer Science.

    “Uhm, say what does this ‘computerized’ system of yours implementating a solution to a common problem in the science of computarizability, i say what does it have to do with the field of study and analysis of problems in, uhhm, in uhhm, the science of computerizationality and technotetrics or Mahcrosoft Werd or the installing of, uhm, RAM, say?”

  56. Scott Says:

    Gil: I don’t read (2) as asserting that there’s not a consensus, but rather that a consensus exists and is irrelevant.

  57. Gil Kalai Says:

    I see. Well, I suppose that the main difficulty would be in the interpretation of the statements themselves. Actually, all the statements you gave as examples are quite ambiguous and can be understood in different ways (or not at all). This may be the main part of the project which is not “value-neutral”.

    The notion of “level of agreement” is quite problematic by itself, and especially so since your statements are usually compound. Anyway, it is interesting.

  58. KaoriBlue Says:

    I actually agree with Gil – I think there’s a significant difference between a “scientific consensus” and a “consensus of scientists”. This project sounds like a neat idea, but one should probably take great pains to formalize the statements (in some manner) if the hope is to make it more than an interesting curiosity. Sorry if that sounds harsh, but I believe it to be true.

    And just to be clear, I’d say that there is a “scientific consensus” that the climate is changing. However, considering how poor the existing models are, I wouldn’t make a stronger statement than that (especially in regards to what needs to be done).

  59. rrtucci Says:

    This problem screams out for a Bayesian treatment. Here is how I would model it.

    Consider a naive Bayesian network with two rows of nodes:
    a top row of N hypothesis nodes given by h=(h_1,h_2,…,h_N) and a bottom row of M evidence nodes given by e=(e_1,e_2,…,e_M). Assume the arrows of this Bayesian net point from the h to the e nodes. Assume all nodes can have only two states: true or false. Assume a priori, the probability of all h nodes is true. Let each e node correspond to one of your questionnaire questions. Define tension in terms of the posterior probability P(h|e), as follows

    tension = \sum_i P(h_i=false | e)

    Since, according to complexity theorists, PAC learning is so far superior to Bayesianism, I would like to see how the PAC formalism would handle this 🙂
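    For concreteness, here is a toy brute-force version of this tension score with N = M = 2. The priors and CPTs are entirely made up, and non-degenerate priors are used for illustration (with a priori probability 1 of each h being true, every posterior P(h_i = false | e) would vanish and the tension would be identically zero):

```python
import itertools

# Toy two-layer network: hypotheses h1, h2 -> evidence e1, e2 (all binary).
PRIOR = {"h1": 0.8, "h2": 0.6}  # P(h_i = True); made-up numbers

def p_evidence(e_name, e_val, h):
    # Made-up CPTs: e1 leans on h1, e2 leans on h2.
    p_true = {"e1": 0.9 if h["h1"] else 0.2,
              "e2": 0.7 if h["h2"] else 0.3}[e_name]
    return p_true if e_val else 1.0 - p_true

def tension(evidence):
    """tension = sum_i P(h_i = False | e), by brute-force enumeration."""
    posterior_false = {"h1": 0.0, "h2": 0.0}
    z = 0.0  # normalizing constant P(e)
    for v1, v2 in itertools.product([True, False], repeat=2):
        h = {"h1": v1, "h2": v2}
        weight = ((PRIOR["h1"] if v1 else 1 - PRIOR["h1"]) *
                  (PRIOR["h2"] if v2 else 1 - PRIOR["h2"]))
        for name, val in evidence.items():
            weight *= p_evidence(name, val, h)
        z += weight
        for name, v in h.items():
            if not v:
                posterior_false[name] += weight
    return sum(w / z for w in posterior_false.values())

print(tension({"e1": True, "e2": False}))
```

    With no evidence at all, the tension is just the summed prior probabilities of falsehood (here 0.2 + 0.4 = 0.6); answering the questionnaire moves it up or down. Enumeration is exponential in N, so a real implementation would need approximate inference.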

  60. rrtucci Says:

    “Assume a priori, the probability of all h nodes is true.”
    I meant: Assume all h nodes have a priori probability 1 of being true

  61. KaoriBlue Says:

    Well ok, I apologize if my last comment was offensive. I just really agree with the folks that are saying there’s a need to somehow formalize the statements being made.

  62. John Armstrong Says:

    Gil, Serge would be proud of your fine parsing. We have indeed been misled.

  63. Gil Says:

    Three remarks,

    1) I simply did not understand the English (and the use of the term ‘so called’): I thought that when you say “The ‘so called’ consensus on X” rather than “The consensus on X” or “The scientific consensus on X,” this means that you disagree that there is a consensus on X.

    2) I think it is a nice project and it resembles some experiments in social science about people’s preferences and ideas (on more mundane matters).

    3) I think that often the main difficulties with “worldviews” or with views on even much more limited issues is to get the factual matters right and not so much to have a profound conceptual understanding of matters which is free from “tensions” and “contradictions.” (This is a little related to our conceptual/technical old discussion.)

  64. Raghavendra Says:

    Your proposal is interesting to me but for a different reason. An implicit assumption that you make is that people have world views, i.e. ‘world view’ is a cultural universal. This is a contested claim. Researchers from Ghent University, Belgium have done scientific research on the universality of ‘world view’ and convincingly argue that ‘world views’ are found only in traditions that belong to semitic religions. Here is a research thesis that dissects the very notion of ‘world view’:

    S. B. Balagangadhara, “The Heathen In His Blindness: Asia, The West and The Dynamic of Religion”.

  65. Scott Says:

    Raghavendra, by “worldview” I meant nothing more than a person’s beliefs about various questions—beliefs that (one might hope) are mostly consistent with each other. And I emphatically reject the idea that logical consistency matters only in “traditions that belong to semitic religions.”

  66. John Sidles Says:

    Stimulated by Raghavendra’s fine post, my wife and I just spent a pleasant early morning hour discussing our experiences at the University of Chicago in light of Prof. Balagangadhara’s two fundamental questions (p. 458):

    Q1. How can one describe the other?
    Q2. How can one accommodate such descriptions in one’s experiential world?

    Here’s our shared executive summary of our graduate school experiences: (1) studying physics was like climbing an infinitely high mountain, (2) studying Egyptology was like walking into a vast and intricate jungle, (3) studying sociology and anthropology was like paddling into a dense fog bank.

    None of these three graduate experiences inculcated anything that approximated a “world view”, nor were they designed to accomplish this.

    The bottom line is, when it comes to synthesizing a world view, it’s not realistic to expect that graduate school will help all that much.

    The practical reality is, you’re pretty much on your own. Would you really want it any other way?

    At best, graduate study prepares you to enter the world with your eyes open.

  67. William R. Baron Says:

    Hey, sounds like an interesting program. I don’t know if you are still interested in finding an architect for this project or not. If you are, drop me a line.


  68. James P. H. Fuller Says:

    Two thoughts, neither of them my own:

    “The test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the ability to function. One should, for example, be able to see that things are hopeless and yet be determined to make them otherwise.” – F. Scott Fitzgerald

    “Do I contradict myself? Very well, then, I contradict myself. I am large, I contain multitudes.” – Walt Whitman