Haven't posted in a while, again. Here's an interview with Mike Blume, a long-time Visiting Fellow. I'll shortly be making a more personal post, as well as describing the new points system we've begun using recently.

--------------

Kaj:
The description of Mike on the 2009 Visiting Fellows page says that he 'is a Ph.D. student in experimental particle physics at the University of California at Santa Barbara. He holds a B.S. in Physics from the University of California at Irvine.' As you can probably guess, he's been around here for a while, as he was already in last summer's Visiting Fellows program. When I got here he was taking care of SIAI's sysadmin needs while at the same time interning with Rolf Nelson and his startup. Recently he's been moving some of the sysadmin duties over to Louie Helm, as Mike is currently seeking to test his l33t coding skillz in the greater Silicon Valley economy.

Mike, would you say that this description is accurate, or is there something you'd like to add to it?

Mike Blume:
I think it's fairly accurate, though obviously I'm not really a PhD student anymore.

Kaj: I could ask about something else, but I don't think our readers would appreciate being left hanging right there. So, a few words on why you stopped?

Mike: Hmm, that's a good question. One answer would be that upon reflection, it didn't seem like the best way to reduce existential risk. I'd gotten into physics with a sort of "knowledge for its own sake" ethos. And I still love that ethos, and find it beautiful, but it turned out there were things that needed to be taken care of first.

Kaj: So how have you been reorienting yourself to actually reduce existential risk better?

Mike: First of all by re-orienting myself to get *anything* done better, and that's still an ongoing process. I've always had difficulty with organization, with self-discipline, with Getting Things Done, and I've been working very hard over the past year at overcoming those, at learning strategies and mindsets that help me perform better in those areas. Anna Salamon and Rolf Nelson have both been very helpful to me in that regard.

Aside from that, I've tried a few different roles at the institute to see where I'd be able to make a strong contribution. Over the summer I kibitzed on a paper with Steven Kaas and Andrew Hay. I got to learn some interesting economics and do some nifty math modeling, but it turned out it probably didn't make sense to have me specialize in paper-writing, as there are people (like Steven) who are much more productive at it than I am.

But I've always really liked computers, so I wound up taking on a lot of tech support jobs in the house. I set up a copy of Less Wrong that could run internally, so we could discuss projects, and then set up a virtual private network so that folks working off-site could connect. Now that some of that's set up, there probably won't be as much for me to do in that capacity -- it's mostly maintenance.

But we can always use money. I figure if I can get a steady software development job, I can probably take care of food and housing for two or three full-time interns, so that's what I'm working on right now. I'm interning temporarily with Rolf Nelson, who's building a startup called Honest AI. Honest AI is going to be a site where people can see excerpts from reviews of popular science books, both positive and negative, and hopefully have the most persuasive ones on either side brought to their attention. By bringing good arguments to the fore, we hope to improve rationality, and help people educate themselves.

So I'm designing the front-end for that, and that's given me a lot of experience, and I'm using that to try to get a more permanent job now, with a lot of help from Rolf.

Kaj: So, how'd you initially hear about SIAI, and how did you end up taking part in the Visiting Fellow program?

Mike: Well, my roommate told me to read this comic called XKCD -- some of your readers may have heard of it ;) XKCD mentioned this nifty site called reddit, and reddit started linking to these really fascinating, thoughtful articles on human cognition and rationality, by this guy named Eliezer Yudkowsky. This was back when Eliezer was blogging on Overcoming Bias.

His articles made *very very* uncomfortable reading for me at the time because I still considered myself a Christian. I'd had the theism v. naturalism debate in my head already, and all the necessary arguments had been made, and then I'd basically shut it off, because atheism was scary, and, y'know, maybe an argument would come along, and Christianity would seem plausible again, so let's just wait for that to happen. Reading Eliezer's writing on biases and self-deception made me realize that I was fooling myself, that I basically already knew the right answer, and I should grow up and accept it. So that was helpful.

I kept reading OB, and recommending cool articles to my friends, and learning lots and lots and was vaguely aware on the side that Eliezer was an AI researcher, and worked for this Singularity Institute thing. Eventually Eliezer split off to start Less Wrong, and I started posting, mostly with questions I had (how do we raise rationalist children? how honest should we actually be, esp. with people who wish us ill?). I wound up becoming a fairly highly ranked author on the site, which felt good.

People started organizing Less Wrong meetups, and Anna proposed one when she and Steve Rayhawk were passing through Santa Barbara. And the next day Anna sat down with me and talked about existential risk, about how with billions of lives in the balance, even having a one in a thousand chance of changing the outcome was like saving millions of lives with certainty. So I went up to the grad physics office and asked for a leave of absence, moved up to Santa Clara to help out with the Visiting Fellows program for the summer and, well, wound up sticking around =)

Kaj: Cool. =) There are a lot of people living here in the house. In general, how do you find the practical sides of being here and living together with all these other folks?

Mike: Let's see. By and large it's surprisingly easy. I mean, yeah, sometimes people make a mess, or the dishes don't get done, or whatever, but by and large I think I'd definitely prefer this to living alone. The company's great, and there are interesting conversations at all hours. Whatever you want to know about, there's probably someone around who can point you in the right direction. And as daunting as the shopping trips are, I like the food situation: it's fun to put some food in a pot and be able to feed ten people for a night. And conversely it's nice to have people making stuff and not have to worry about dinner.

Also, living in a group like this has pushed me to take on responsibilities I hadn't before, and I think that's made me stronger. When I first got here, there was one car, it was a stick, and only Anna could drive it. Justin had a license, and I don't think anyone else did. So I wound up learning to drive and finally getting my license (at 24) so I could help out there, and also stepping up my tech knowledge quite a ways so I could pitch in with the house network.

Kaj: Having been here for a while, how do you see the Visiting Fellow program? What's the best thing that it's achieving, anything where it'd have potential it hasn't quite lived up to yet? What do you see as its purpose?

Mike: Michael Vassar has compared this house to a university, and I don't think he's wrong. I think its primary purpose is to make people more capable, more rational, and more aware that existential risks are a very, very hard problem. I think all the strategies we have involve delegating the problem in one way or another. Eliezer wants to delegate the problem to a smarter being of our creation, and maybe he'll manage that -- I hope he does.

Another strategy is to delegate the problem to a community of our creation -- a large, connected group of smart, rational, capable people who keep attacking the problem and thinking of solutions we haven't yet. I think one of the primary goals of the Visiting Fellows program is and should be to seed that community. I think by and large it does well at that, but there are definitely areas where we can improve: developing specific, technical knowledge of what makes humans most effective, i.e. Intelligence Amplification and things in that area; developing better strategies for communicating knowledge; and especially developing better strategies for making people more personally effective -- helping people to be more disciplined, more organized, more productive. I guess those are the main things I'd want to improve on.

Kaj: The slightly unstructured nature of the program, with people basically being free to do whatever they want, is something that some of us (including me - though I've grown to prefer it this way) have had a bit of difficulty adapting to. Do you have any thoughts about that?

Mike: Hmm, working with Rolf has been a bit more structured than that, so I'm not really sure. I think the points system is pretty nifty, since it lets people pick their own projects, and gives them heuristics about which will be most valuable. But again, it's not something I've interacted with directly.

Kaj: Oh yeah, I should tell my readers more about the points system. It's a recent development, so I haven't really covered it yet. I'll make a post covering that, as well as my most recent personal feelings, shortly.

Moving from the Visiting Fellow program to the Singularity in general, what are your thoughts about the likely timeframes and our chances of making it? Will we have a Singularity in 10, 20, 50 or 500 years, and will we transform ourselves into digital sentiences eternally running within Jupiter Brains or will we end up welcoming our robotic overlords before being soundly squashed and turned into paperclips?

Mike: This seems like a good place for aumanncy. Folks smarter than me seem to think our chances of extinction are quite high -- something like 60% -- and that if there aren't smarter-than-human intelligences by the end of the century, it's probably because we had some sort of massive depression or war that upset the progress of science.

Kaj: Alright. Mm, what else to ask... you mentioned the Honest AI startup before. Are you free to tell us more about it, or are the details non-public for now?

Mike: Free to tell you a fair bit, I think =) Just asked Rolf if I know anything confidential and he doesn't think so.

Kaj: Cool. Well, go ahead and tell us all the juicy details, then. =)

Mike: lol =) So right now it's Rolf and me with our laptops in the sunroom. We're coding mostly in Python, using the Django framework, and hosting on Google App Engine. Rolf's designing the backend -- the part that goes out and scrapes the internet, finds cool reviews of books (either praising them or panning them), and loads all the information into a database. I'm working on the front-end, which reads that information from the database and presents a pretty website that the user can interact with, leave comments on, and so forth. If you (or your readers) like, you can check it out here: http://honestai-host.appspot.com/demo.
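For the curious, here's a rough sketch of what a Django front-end view in this spirit might look like. The Book and ReviewExcerpt models and their fields are made up for illustration -- this isn't Honest AI's actual schema -- and it uses a plain current Django ORM setup rather than whatever App Engine storage the real site uses.

    # Hypothetical models and view: a front-end that reads review excerpts the
    # backend has already loaded into a database. Not Honest AI's actual code.
    from django.db import models
    from django.shortcuts import render, get_object_or_404

    class Book(models.Model):
        title = models.CharField(max_length=200)

    class ReviewExcerpt(models.Model):
        book = models.ForeignKey(Book, on_delete=models.CASCADE)
        source = models.CharField(max_length=200)   # e.g. the newspaper's name
        stance = models.CharField(max_length=8)     # "pro" or "con"
        text = models.TextField()
        votes = models.IntegerField(default=0)      # readers' "persuasive" votes

    def book_page(request, book_id):
        """Render a book's page with the most persuasive excerpts on each side."""
        book = get_object_or_404(Book, pk=book_id)
        pro = book.reviewexcerpt_set.filter(stance="pro").order_by("-votes")[:5]
        con = book.reviewexcerpt_set.filter(stance="con").order_by("-votes")[:5]
        return render(request, "book.html", {"book": book, "pro": pro, "con": con})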

So today I'm working on profiling -- trying to figure out which requests users might make that are likely to take up a lot of time and resources on the server, and then figuring out how to streamline those, either by making the code more efficient, or making better use of caching.
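To make the caching half of that concrete, here's a minimal, hypothetical sketch using Django's cache framework, reusing the made-up ReviewExcerpt model from the sketch above; the key format and ten-minute timeout are just illustrative.

    # Cache the result of an expensive per-book query so repeated requests can
    # skip the database. Purely illustrative; not Honest AI's actual code.
    from django.core.cache import cache

    def top_excerpts(book_id, stance, limit=5):
        """Return the most persuasive excerpts for one side, cached for 10 minutes."""
        key = "excerpts:%s:%s:%s" % (book_id, stance, limit)
        excerpts = cache.get(key)
        if excerpts is None:
            excerpts = list(
                ReviewExcerpt.objects
                .filter(book_id=book_id, stance=stance)
                .order_by("-votes")[:limit]
            )
            cache.set(key, excerpts, timeout=600)
        return excerpts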

Kaj: Neat. Tell us a bit more about the actual functionality of the site: you said it would allow people to see the most persuasive reviews of different books? How does that differ from, say, Amazon, where I can see all the positive and negative reviews that different people found most useful?

Mike: So the goal here is to provide editorial reviews -- excerpts from newspaper articles, things like that -- which we figure are likely to be higher quality than most Amazon reviews. Amazon currently excerpts newspaper reviews, but those are always positive, and then the user reviews are for-or-against. If a respected science writer writes an article in the New York Times claiming that a book's thesis is bogus, we want to see that front and center.

Kaj: Well, I think I'm almost done with my questions. Anything you want to bring up before we conclude?

Mike: Hmm, don't think anything comes to mind.

Kaj: Alright. In that case, thanks for the interview!

Mike: Any time, thanks for asking me =)