What is this post about?
Not all of my writing can be found here on my blog, and recently I have been writing more and more on other platforms. Therefore, I decided to publish this summary of posts with links and short teasers.
I intend to write such an overview post whenever there is a decent number of posts to talk about.
- The AI safety starter pack is a 5-minute overview for everyone who wants to start thinking about or working on AI safety/alignment.
- In What success looks like, we describe possible future scenarios in which AI leads to good outcomes, and we collect and distill many factors that might causally decrease risks from advanced AI. I think the post is a decent overview for people interested in high-level AI strategy or policy.
- I have been working a lot with EPOCH, a new organization that models high-level trends in ML. Most prominently, we have investigated how much compute is necessary to train large AI models and how the price-performance of GPUs has changed over time. There were also some smaller investigations into compute that you can find on my LW profile.
- For our AI safety camp project, we surveyed a lot of ordinary people on moral questions related to AI alignment. Our results and their implications can be found in our post Reflection mechanisms as an alignment target: a Survey.
- I think Eliciting Latent Knowledge is a promising approach to alignment, so I wrote a summary/distillation of it.
- Tom Lieberum and I have been playing around with GPT-3 to investigate how good its causal understanding is.
- I wrote some fortified essays on Metaculus for a challenge on transformative AI. The first essay looks at take-off speeds of transformative AI, the second at how accurate our predictions were over the last year, and the third at predictions that will resolve in the next five years.
- We compiled a list of many possible EA megaprojects.
- We looked into the implications of the new German government’s plans for EA and EA goals.
- I wrote a brief post on an EA analogy called “the train to crazy town”, originally coined by Ajeya Cotra in her 80K podcast appearance.
- I asked a couple of questions (and sometimes suggested some answers), e.g. where we would set up a new EA hub, what the right ratio between mentorship and direct work is, or how many EAs have failed in high-risk, high-reward projects.
One last note
If you want to be notified about new posts, you can follow me on Twitter.
If you have any feedback on anything (e.g. layout or opinions), please share it constructively via your preferred means of communication.