Grad students, postdocs, and faculty sought
I’m eagerly seeking PhD students and postdocs to join our Quantum Information Center at UT Austin, starting in Fall 2018. We’re open to any theoretical aspects of quantum information, although if you wanted to work with me personally, then areas close to computer science would be the best fit. I’m also able to supervise PhD students in physics, but am not directly involved with admissions to the physics department: that’s a discussion we would have after you were already admitted to UT.
I, along with my theoretical computer science colleagues at UT Austin, am also open to outstanding students and postdocs in classical complexity theory. My wife, Dana Moshkovitz, tells me that she and David Zuckerman in particular are looking for a postdoc in the areas of pseudorandomness and derandomization (and for PhD students as well).
If you want to apply to the UTCS PhD program, please visit here. The deadline is December 15. If you specify that you want to work on quantum computing and information, and/or with me, then I’ll be sure to see your application. Emailing faculty at this stage doesn’t help; we won’t “estimate your chances” or even look at your qualifications until we can see all the applications together.
If you want to apply for a postdoc with me, here’s what to do:
- Email me introducing yourself (if I don’t already know you), and include your CV, your thesis (if you already have one), and up to 3 representative papers. Do this even if you already emailed me before.
- Arrange for two recommendation letters to be emailed to me.
Let’s set a deadline for postdoc applications of, I dunno, December 15?
In addition to the above, I’m happy to announce that the UT CS department is looking to hire a new faculty member in quantum computing and information—most likely a junior person. The UT physics department is also looking to hire quantum information faculty members, with a focus on a senior-level experimentalist right now. If you’re interested in these opportunities, just email me; I can put you in touch with the relevant people.
All in all, this is shaping up to be the most exciting era for quantum computing and information in Austin since the late 1970s and early 1980s, when a group of UT students, postdocs, and faculty, including David Deutsch, John Wheeler, Wojciech Zurek, Bill Wootters, and Ben Schumacher, laid much of the intellectual foundation of the field. We hope you’ll join us. Hook ’em Hadamards!
Unrelated Announcements: Avi Wigderson has released a remarkable 368-page book, Mathematics and Computation, for free on the web. This document surveys pretty much the entire current scope of theoretical computer science, in a way only Avi, our field’s consummate generalist, could do. It also sets out Avi’s vision for the future and his sociological thoughts about TCS and its interactions with neighboring fields. I was a reviewer on the manuscript, and I recommend it to anyone looking for a panoramic view of TCS.
In other news, my UT friend and colleague Adam Klivans, and his student Surbhi Goel, have put out a preprint entitled Learning Depth-Three Neural Networks in Polynomial Time. (Beware: what the machine learning community calls “depth three” is what the TCS community would call “depth two.”) The paper shows how to learn real-valued neural networks in the so-called p-concept model of Kearns and Schapire, and thereby evades a 2006 impossibility theorem of Klivans and Sherstov, which showed that efficiently learning depth-2 threshold circuits would require breaking cryptographic assumptions. More broadly, there’s been a surge of work in the past couple of years on explaining the success of deep learning methods (methods whose most recent high-profile victory was, of course, AlphaGo Zero). I’m really hoping to learn more about this direction during my sabbatical this year—though I’ll try to take care not to become another deep learning zombie, chanting “artificial BRAINSSSS…” with outstretched arms.
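For readers who haven’t encountered the p-concept model, here’s a quick sketch, paraphrased from my memory of Kearns and Schapire rather than from the Goel-Klivans paper itself. The target is a function c : X → [0,1]; each labeled example (x, y) has x drawn from an unknown distribution D, with Pr[y = 1 | x] = c(x); and the learner’s job is to output a hypothesis h with small squared loss, E_{x~D}[(h(x) − c(x))^2] ≤ ε. Because the learner only has to approximate a conditional mean, rather than recover a Boolean function exactly, the cryptographic hardness results for learning threshold circuits in the PAC model don’t directly apply.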
Comment #1 October 28th, 2017 at 9:44 am
In another life, I would have loved to do this. Was happily jealous for all those in the Bay Area for your recent talk and happily jealous for all those who will get to work with you on these topics 🙂
Comment #2 October 28th, 2017 at 3:07 pm
Are you willing and able to supervise Philosophy grad students?
Comment #3 October 28th, 2017 at 3:36 pm
Do you have any recommendations for a TCS person who wants to know more about deep learning? Let’s assume the TCS person knows basic computational learning theory, like the PAC model and VC dimension.
Comment #4 October 28th, 2017 at 4:44 pm
Haribo Freak #2: No, I don’t have such an arrangement with UT’s philosophy department! However, if a philosophy PhD student wanted to do something related to TCS, and it made sense to all involved, I could co-supervise such a student with a primary adviser in philosophy (same as with any department at UT, I think). Again, this would be something to get in touch with me about after one had already been admitted to philosophy or whichever other department.
Comment #5 October 28th, 2017 at 4:51 pm
Anon #3: I’m a TCS person in exactly the situation you describe—one who hasn’t yet advanced to the next stage, of “knowing more”! So I might be ill-placed to give you advice. FWIW, though, what I’m planning to do, as soon as I have time, is simply to read some of the papers that already exist, giving (or purporting to give) theoretical explanations for deep learning’s success! When I do that, I’ll be extremely lucky to have Adam Klivans as a trusted guide—someone who not only knows the relevant learning theory literature, but is also a native speaker of complexityese. Besides Adam’s own papers, I certainly have it on my stack to read Tali Tishby et al.’s papers about the “information bottleneck principle,” and am open to other suggestions. (But no, I’m not going to offer evaluative comments about any of these papers before I’ve actually read them. 🙂 )
As with anything else, I’m guessing that I’ll have an affinity for work that states and proves clear theorems that I can understand, or that at least formulates clear open problems. That doesn’t mean such work is “objectively” the most important—it only means that I, personally, am likelier to be able to understand, and ideally even contribute to, it than to more empirically oriented work.
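(For context, and with the caveat that I haven’t yet studied these papers: my rough understanding is that the information bottleneck framework seeks a compressed representation T of the input X that retains as much information as possible about the label Y. Formally, one minimizes I(X;T) − β·I(T;Y) over stochastic maps from X to T, where I(·;·) denotes mutual information and the parameter β trades off compression against prediction.)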
Comment #6 October 28th, 2017 at 6:05 pm
Anon #3,
One useful link you may enjoy:
https://github.com/songrotek/Deep-Learning-Papers-Reading-Roadmap
Comment #7 October 28th, 2017 at 7:22 pm
For the Anon asking about deep learning: I found the freely available recent book by Goodfellow, Bengio, and Courville useful. http://www.deeplearningbook.org/
Comment #8 October 28th, 2017 at 8:30 pm
Scott #5:
What about the Wasserstein Generative Adversarial Networks paper from ICML 2017? I found the theory a bit hard to process, but you would be able to follow it.
Comment #9 October 28th, 2017 at 9:27 pm
I think Philipp Krähenbühl’s classes are excellent. You can find links to them at the bottom of his webpage:
http://www.philkr.net/
Comment #10 October 28th, 2017 at 9:43 pm
Regarding machine learning,
There’s a definite deep connection between 9 different knowledge domains:
Algebra&Number Theory–>Thermodynamics–>Machine Learning–>Probability&Statistics–>Neuroscience–>Decision Theory–>Programming–>Networking&Electronics–>Economics
If you followed the arrows across all 9 levels of abstraction and understood the connections between all 9 domains, I think you’d probably have penetrated to the heart of deep learning.
Neuroscientist Anil Seth just recently co-published a fantastical paper that explains the ‘free energy principle’, a virtuoso display in which they follow my recommended approach of integrating multiple knowledge-domains to punch through to deep understanding.
http://www.sciencedirect.com/science/article/pii/S0022249617300962
I strongly urge everyone to read this, highly highly recommend!
Comment #11 October 29th, 2017 at 12:48 am
Scott,
Talking about deep learning, how would you advise a beginner (with a mathematical background in calculus, probability, and some linear algebra) to get up to speed with this theoretical research and to read the papers? I ask because you have taught it to yourself, though your math background would be different.
Comment #12 October 29th, 2017 at 1:50 am
Ashley #11: I was just explaining above that I haven’t yet taught it to myself.
You could start with the links helpfully provided by Jay, CC, and Adam Klivans, all of whom probably (or in Adam’s case, certainly) know more about it than me!
Comment #13 October 29th, 2017 at 2:51 am
Sorry, I saw only comment #1 when I asked the question. I don’t know how that happened; maybe I had this page open for some time on my computer and didn’t refresh it before asking.
Comment #14 October 29th, 2017 at 1:37 pm
I also found an introduction to deep learning by Michael Nielsen (co-author of the standard textbook on quantum computing):
http://neuralnetworksanddeeplearning.com/
Comment #15 October 29th, 2017 at 6:17 pm
I know you often bunch together multiple unrelated things in blog posts, so I was hoping you’d talk about something other than advertisements for positions at your university. And indeed you did. Wow! A book by Avi Wigderson is great news!
I won’t be able to read it very soon, because I just found out that a third volume of Antal Iványi’s edited book “Algorithms of Informatics” was published back in 2013. I totally missed that news, so now I’ll have to catch up and read at least some parts of that third volume, and probably also re-read parts of the first two volumes with a more mature view. But I’ll definitely find the time for Avi’s book after that.
Comment #16 October 29th, 2017 at 11:38 pm
The phrase “I are” makes me unhappy despite the “along with”. I’d say “am”, unless this is a subtle allusion to the many-worlds interpretation.
Comment #17 October 30th, 2017 at 1:14 am
John #16: Thanks for that important correction! It’s fixed now.
I wish to state for the record that, at UT’s Quantum Information Center, we welcome applicants of every race, gender, sexual orientation, religious orientation, disability status, and preferred interpretation of quantum mechanics.
Comment #18 October 30th, 2017 at 5:54 am
@scott 17 did you purposely miss out age?
Comment #19 October 30th, 2017 at 8:36 am
AJ #18: Age, veteran status, hair color, Mac/Windows/Linux, whiteboard/blackboard/Beamer/PowerPoint… (must use TeX though)
Comment #20 October 30th, 2017 at 9:56 am
“deep learning zombie, chanting ‘artificial BRAINSSSS…’ with outstretched arms”
I so wish I had seen this earlier; there is a Halloween costume in there somewhere.
Comment #21 October 30th, 2017 at 10:40 am
I’ve been wanting to work through “Foundations of Data Science” by Avrim Blum, John Hopcroft, and Ravi Kannan as a way to get more solid with the math:
https://www.cs.cornell.edu/jeh/book.pdf
Comment #22 October 30th, 2017 at 2:29 pm
Scott #17, “and preferred interpretation of quantum mechanics.” –> really? you’d have no qualms with a postdoc who believed in superdeterminism?? 😉
Comment #23 October 30th, 2017 at 7:11 pm
Atreat #22: Well, that’s not really an “interpretation”; it’s just a confusion… 😀
Comment #24 October 30th, 2017 at 8:58 pm
Here’s my wikibook on ‘Machine Learning’ (Links: 150)
https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Machine_Learning
Generally, for any given well-developed knowledge domain, there are at most a couple of hundred (~200) key concepts that need to be mastered: find these key concepts, then break them down, drawing a concept map with lines radiating outwards from the ~200 core concepts into sub-concepts.
Rather than trying to learn knowledge in tiny sequential pieces, it’s best to get hit with the big picture right away (the concept-mapping or mind-mapping approach). Get thrown right in the deep end. ‘Super-clicking’: many concepts have to be integrated into a coherent whole.
I devised a map of all explanatory (deep) knowledge, with 27 core knowledge domains, each of around 200 core concepts, so that’s ~5,400 concepts in total to sum up all human deep knowledge.
Comment #25 October 31st, 2017 at 9:39 am
Scott @19: emacs vs. vi?
Comment #26 October 31st, 2017 at 9:50 am
Jon #25: Even Notepad. We are large. We contain multitudes.
Comment #27 October 31st, 2017 at 4:39 pm
Could you bring yourself to take on a hard-core Trump supporter who wore his MAGA cap to your office?
Comment #28 October 31st, 2017 at 9:40 pm
Wilhelm #27: Interesting question. For me, the specific issue with wearing a MAGA cap to the office is that—unless the wearer were to disclaim this meaning—it basically amounts to a statement that other participants in our group, for example those from Iran, ought to be deported from the US. So it’s not just a political expression unrelated to science, but a rejection of one of the preconditions for our doing science.
This, of course, is also the argument people made for firing James Damore, but in the case of MAGA, it strikes me as a hundred times truer. Damore never said or insinuated that women as a class, or any specific woman, had no place at Google (indeed he said the opposite). Wearing a MAGA cap, by contrast, does say or insinuate that many current members of the American scientific community, who know exactly who they are, have no place in the US, because that’s what Trump has actually tried to implement, and continues to fight in the courts to be able to implement.
Even then, though, I wouldn’t just summarily expel a MAGA hat wearer from our group—especially not if they were doing great research! I would try to engage them in a dialogue about the issue above.
Comment #29 November 1st, 2017 at 8:12 am
#13, Ashley:
Deep learning is a branch of the rich fields of recurrent neural networks and machine learning (information theory and stochastic processes are related, more traditional fields). You might want to go to some of the IEEE talks in these areas to get a sense of current work and new directions, and of who is working on them.
If you are a student, the registration fee at IEEE talks is usually pretty inexpensive.
(FYI: The traditional progression toward machine learning, after calculus, is to take classes in stochastic processes, information theory, neural networks, and machine learning, in one’s third or fourth year and in graduate school.)
Comment #30 November 2nd, 2017 at 10:46 am
“Make Classical Computers Great Again”
Comment #31 November 11th, 2017 at 8:30 pm
Scott #28: As far as I know, Trump has never called for the expulsion of Iranians who are already in the country. In any case, and even if he had, I think someone should be allowed to wear a MAGA cap without fear of anything.
True tolerance means tolerating the intolerant as long as they aren’t being actively disruptive. If they are shouting down speakers at a conference because of their nationality, or actively telling people to go back to wherever, that’s one thing. But a political statement worn on clothing?
If anything will change such a person’s mind about the ban, it will be Iranian theoretical computer scientists who are willing to work with them even while they are wearing a Trump cap. My guess is that most Iranian theoretical computer scientists are already used to dealing with people with odious opinions, being from a country whose government loves to chant “Death to America” and “Death to Israel.” Recall that Noether once laughed about one of her students wearing Hitler Youth apparel.
Unfortunately, almost half of the American population supports this ban. I wonder how many would support it if they had the opportunity to interact with Iranians in the United States. I do not think insisting that these people remain closeted helps in any way.
Comment #32 July 6th, 2018 at 11:08 pm
hey doc aaronson, did you get around to reading tali tishby’s paper, and the rebuttal and the counter-rebuttal, about the information bottleneck?
My colleagues and I (unnamed theoretically oriented machine learning people) were sorta skeptical about the interpretations in the paper, and I’m curious what you thought, since you seem willing to debunk things that are currently popular.