"Rationalism" is up there with "Objectivism" in terms of "definitionally funny things to call your own belief system".
"Yeah man I've been doing some thinking and philosophy and I've come up with a framework called Being Right"
it's a very funny thing to call rationalism in particular: a pseudo-religious belief system that hinges mostly on thought experiments and the eventual existence of a god-AI we can totally predict in advance.
I was going to say it's amazing this manages to annihilate both roko's basilisk and pascal's wager in one fell swoop, but on second thought, that's just because roko's basilisk is literally just pascal's wager reskinned for tech guys.
Roko's Basilisk actually does a really good job of demonstrating how ridiculous Pascal's Wager is in my experience
For a community of practice supposedly about dwelling on how easy it is to be wrong about things, and the inherent uncertainty you need to apply to your beliefs because of the intractable nature of our own biases, a lot of the rationalists really fell hook, line, and sinker for an idea that only makes sense if their beliefs about the future of AI are absolutely accurate.
And specifically that a literally unfathomably intelligent entity will definitely parse morality and ethical decision making in exactly the way they do, and make the same ethical calls that a very specific subset of them would, because their ethical frameworks are for sure the most rational form of ethics and therefore anything sufficiently intelligent would see things through their lens.
Related: The moment I dropped out of the rationalist community was when I realized Yudkowsky was claiming that sufficiently "rational" people don't need the scientific peer review process, or similar collective error correction systems, because those exist to handle the mistakes caused by people's biases. And anybody who's practiced rationality enough clearly wouldn't need that anymore, because they're fully aware of and capable of compensating for all of their own biases.
So there I was, walking home while reading one of his essays, going "...no? I thought the whole point of learning about our own biases and fallible cognition was that this is why, the whole reason why, we need to emphasize collective error correction. Why we need to make decisions collectively, why we need to empower others to check us, precisely because these irrational parts of human cognition can't ever be fully excised, and believing otherwise is the greatest failure mode of rationality you can possibly fall into?"
Then I reread that part of the essay a few times to make sure I was reading it right, sighed deeply, closed it, and resolved to reread a bunch of Rationalism stuff with that in mind to check whether I'd picked up any beliefs associated with that idiocy, and to talk about this stuff with a friend for a while. Because trying to clean out your brain all on your own is a fool's game for suckers.
ethics of making AI images aside, I do find it a bit amusing, the kinds of sob stories and mental gymnastics people make up to pretend that drawing is this super technical skill with an impossibly high barrier to entry, when it's like one of the first hobbies toddlers pick up
suddenly a lot of people think they've got the next Lord of the Rings in their head, but they were never able to turn their stories into anything tangible because the evil elitist artists are hogging all the talent and skill, and they'd need a bajillion years of training or something, as if one of the most popular manga and anime of the past decade wasn't made by a guy who draws like this
thinking about the time i saw some cool butch4femme art and was about to reblog it when i checked op's tags and they read #genderbend #wincest
tumblr users would never survive in the real world
sam and dean are doing lesbian incest in the real world?