crazy-pages:
thejaymaniac:
imsobadatnicknames2:
despazito:
I was going to say it’s amazing this manages to annihilate both roko’s basilisk and pascal’s wager in one fell swoop, but on second thought, that’s just because roko’s basilisk is literally just pascal’s wager reskinned for tech guys.
Roko’s Basilisk actually does a really good job of demonstrating how ridiculous Pascal’s Wager is in my experience
For a community of practice supposedly about dwelling on how easy it is to be wrong about things, and the inherent uncertainties you need to apply to your beliefs because of the intractable nature of our own biases, a lot of the rationalists really fell hook, line, and sinker for an idea which only possibly makes sense if they assume their beliefs about the future of AI are absolutely accurate.
And specifically that a literally unfathomably intelligent entity will definitely parse morality and ethical decision-making in exactly the way they do, and make the same ethical calls that a very specific subset of them would, because their ethical frameworks are for sure the most rational form of ethics and therefore anything sufficiently intelligent would see things through their lens.
Related: The moment I dropped out of the rationalist community was when I realized Yudkowsky was claiming that sufficiently “rational” people don’t need the scientific peer review process, or similar collective error correction systems, because that’s for handling the mistakes caused by people’s biases. And anybody who’s practiced rationality enough clearly wouldn’t need that anymore, because they’re fully aware of and capable of compensating for all of their own biases.
So there I was, walking home while reading one of his essays, going “…no? I thought the whole point of learning about our own biases and fallible cognition was that this is why, the whole reason why, we need to emphasize collective error correction. Why we need to make decisions collectively, why we need to empower others to check ourselves, precisely because these irrational parts of human cognition can’t ever be fully excised and believing otherwise is the greatest failure mode of rationality you can possibly fall into?”
Then I reread that part of the essay a few times to make sure I was reading it right, sighed deeply, closed it, and resolved to reread a bunch of Rationalism stuff with that in mind to check if I’d picked up any beliefs associated with that idiocy, and to talk about this stuff with a friend for a while. Because trying to clean out your brain all on your own is a fool’s game for suckers.
(via crystallinehorror)