Apr. 16th, 2010

Yesterday, I:

- Wrote a blog post ( http://lesswrong.com/lw/23r/the_concepts_problem/ )
- Wrote some more personal e-mails
- Got my intro speech from Justin

Today, I:

- Will either make a new LW post or research one

From today onwards, I will attempt to put up a new LW post on average once every two days. We'll see how long that lasts.


Yesterday, I realized one cause for my insecurities here. I hadn't been given any guidelines for how much I should be achieving, so to make up for that, I was imposing requirements on myself that were possibly way too strict. So during our daily ten-minute meeting, I asked Anna about it. She was reluctant to give a direct answer ("you should do the best you can - what use would it be to place a minimum requirement on you?"), so I reworded it as "well, the Singularity Institute did have some goal in mind when the Visiting Fellows program was instituted, right?" That got me a longer answer (disclaimer: I've probably forgotten some parts of it already). Part of the purpose is simply to bring together people with potential and interest in SIAI/existential risk and improve their skills so that they can benefit the organization's cause even after having left the program. On the other hand, it would also be good if we "did stuff". She specifically mentioned that some people thought Less Wrong was dying (a claim which surprised me, personally) and that it'd be good to get more quality posts up there, especially ones of a slightly more technical nature. Furthermore, we should try to look productive so as to inspire new visitors to be productive as well, and to build a growth atmosphere in general.

Justin also explained to me his conception of what SIAI's long-term strategy should look like. Briefly: growth -> (global) intelligence enhancement -> uploads -> Friendly AI. Right now, the organization should concentrate on outreach and teaching activities and seek to grow, then attempt to leverage its size and resources for a general raising of the sanity waterline as well as for global intelligence enhancement. Eventually, we should get the technology for uploads and for uploading FAI programmers, who could then hopefully build FAI. That's a rather ambitious plan, but one I found myself mostly agreeing with. I do think that IA (intelligence amplification) methods are sorely needed in general, and that a partially upload-mediated Singularity would be the safest kind, if it's possible. Notably, the plan rests on the operational assumption that real AI is still several decades away. That may or may not be true, but if somebody does have an almost-finished AI project in their basement, there probably isn't very much we can do in any case. Justin's going to discuss his plans with more of the SIAI leadership.

People are optimistic about SIAI's growth prospects. Michael Vassar was here some days back, and he mentioned 40% yearly growth as a realistic goal. On the downside, the rapid growth SIAI has had so far has also left things in a somewhat chaotic state, without a cohesive large-scale strategy and with different members of the organization somewhat out of touch with what the others are doing. Justin is hoping to get that fixed.

We finally shared our mindmaps. The instructions for those had been rather loose, so everyone's looked quite different. (Alicorn didn't have a mindmap at all, but rather a long outline and a list of questions.) Jasen's was probably the scariest. He discussed the threat of bioterrorism, and thought it possible that within some decades, synthetic biology might allow for the creation of diseases that no human immune system can defend against. Furthermore, combining e.g. a disease that is very good at infecting people with a disease that has a long incubation period might become possible and easy even before that. Until now, biowarfare has been relatively hard, but future breakthroughs could make it possible even for your basic graduate student to create such nightmare combinations. Also, there apparently are real death cults (or at least individuals) out there, which doesn't exactly help me feel safer.

I thought the presentations were good, though, and got a bunch of ideas for various things I could write about. For now, I've set myself the goal of just writing a lot here. We'll see how that goes.

Before I came here, I was feeling rather burnt out on my studies. I figured I'd spend several months abroad, concentrate purely on whatever I'd be doing there, and not think about my studies. Then I'd come back home, spend one month doing nothing much but relaxing, and then return to my studies filled with energy. Unfortunately, as good as that plan sounded, it doesn't seem to be working so far. I'm spending a lot of time worrying about the courses I should be taking after I get home, wondering which ones are the right ones, whether I should switch majors to CS after my bachelor's or stay in cognitive science, and whether it was a mistake to come here, forgoing the chance to finish some more courses and maybe land a summer job... meh.

Previously (like two paragraphs ago - this entry was composed over a period of several hours), I was thinking that I'd be spending most of my time here trying to get some academic writing done, in the hope of putting together enough publications to pass off as my Master's thesis in a year or two. But now I'm increasingly getting the feeling that I really don't want to do a Master's degree after finishing the Bachelor's. Unfortunately, the Master's is the norm in Finland, so getting any kind of job with just a Bachelor's is going to be tricky. So maybe I should concentrate more on deepening my programming skills, and maybe contribute to some open source project while here, to get something to show on a resume...
