Untilted


If you held a gun to my head and forced me to have an opinion on “AI art”* it would be this:

The way most people** are using it currently, the experience of generating images with Stable Diffusion and similar things is closer to being art curation than to being an artist as such.

Given this, it becomes clear - judging by what people will post in, e.g., a forum thread where AI images are not explicitly disallowed - that many of the people doing this have not developed the primary skill of the curator: discernment.

Which is all to say: if you are going to post this stuff, it ought to be good. You need some standards, tighter ones than you’d apply to images made the old-fashioned way, because the flaws are that much less forgivable, and a lot of people just don’t seem to have these yet.



(*not the algorithms themselves, which are honestly much more interesting. You’ve found a way to perturb white noise in the direction of being Goku? There’s like a Gokuward vector field on the space of bitmaps or something? Something like an RG flow whose fixed points are Goku off of Dragonball Z?!)

(**not everyone! some people are using it as part of, like, doing art. This isn’t about them. Not a hard-and-fast distinction, I know.)

yaldabaothadeez

So, putting some things I've been talking about recently together: this is what I understand is going on with Kelly bets and the whole "ergodicity economics" thing:

  • You have some system with randomness and a dynamic - what happens next depends on what went before. We'll model this as a discrete stochastic process X_n for n = 1, 2, ... but you could make it continuous instead. For Kelly, this dynamic was compound interest: X_n = X_{n-1} * [random variable]
  • Actually, you have a choice of several such systems, and you want to know which one is the "best" in some sense - assume bigger values mean better - but obviously X_n is a random variable, so it looks like you have to make some decision about what to measure (e.g. a "utility function" in the sense of von Neumann-Morgenstern)
  • But wait! If you're really lucky, X_n might converge to some deterministic f(n) (converge in probability? almost surely?) and then you can just choose the option whose f(n) is growing fastest.
  • This absolves you of having to find a vNM utility function, as the fastest-growing option will always be the best, eventually, so you can stop thinking about probability (assuming you care about what happens asymptotically, or at least after enough rounds that it's a good approximation)
  • The easy way to show X_n converges seems to be to find an invertible monotonic function u such that Y_n = u(X_n) is a sum of n i.i.d. random variables (i.e. Y_n = Y_{n-1} + A_n, with the A_n independent copies of the same random variable A), and then apply the law of large numbers: as n gets large Y_n -> nE(A) and so X_n -> u^-1(n E(A)) - so, assuming u is increasing, you pick the option with the biggest E(A).
  • (The continuous versions of this I've seen seem to want to map the variable into Brownian motion with a constant drift and variance)
  • This feels like overkill to me? We don't need i.i.d. variables for the law of large numbers, just suitably bounded growth in the variance, and that's not the only way things can converge.
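The recipe in the bullets above can be sketched numerically for the classic Kelly setup: repeated even-odds bets where you stake a fraction f of your wealth each round. Taking u = log turns compound interest into a sum of i.i.d. terms A, so the fraction with the biggest E(A) eventually pulls ahead. (The 60% win probability and the candidate fractions below are just illustrative numbers, not from the post.)

```python
import math
import random

def log_growth_rate(p, b, f):
    # E(A), where A = log of the per-round wealth multiplier:
    # with prob p you win and multiply by 1 + f*b; else multiply by 1 - f
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

def simulate(p, b, f, n, rng):
    # X_n = X_{n-1} * [random multiplier]; track Y_n = log X_n,
    # which is a sum of i.i.d. terms, so Y_n / n -> E(A) by the LLN
    log_wealth = 0.0
    for _ in range(n):
        if rng.random() < p:
            log_wealth += math.log(1 + f * b)
        else:
            log_wealth += math.log(1 - f)
    return log_wealth

p, b = 0.6, 1.0          # 60% chance to win an even-odds bet
kelly = p - (1 - p) / b  # Kelly fraction, here 0.2

rng = random.Random(0)
n = 100_000
for f in (0.05, kelly, 0.5):
    empirical = simulate(p, b, f, n, rng) / n
    print(f"f={f:.2f}  E(A)={log_growth_rate(p, b, f):+.5f}  "
          f"empirical={empirical:+.5f}")
```

Note that overbetting at f = 0.5 has a negative log growth rate even though each individual bet has positive expected value - which is exactly the gap between averaging over parallel worlds and following one trajectory through time that "ergodicity economics" keeps pointing at.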
yaldabaothadeez

So it’s not so much that the dynamics give you a utility function, it’s that the dynamics mean it (eventually) doesn’t matter what your function is

argumate

I keep chewing on this topic but people really seem to struggle to accept that intelligence is computable (in the technical sense of the term), presumably for the same reason they struggle to accept that humans are made of atoms (or "humans are complex assemblages of molecules").

but the consequence of this worldview is that anything that can be computed is not intelligence, so we end up with this god of the gaps situation where the definition of intelligence is constantly shrinking to exclude things that computers can now do.

a lot of religious people just say intelligence = soul and end the discussion there, which is obviously crazy but actually more defensible than the muddled middle who theoretically accept how the universe works but in practice retreat into vague platitudes that make no sense at all.

argumate

"that's not intelligence, that's just an algorithm! applied statistics!" well what did you think intelligence was, the divine spark?

argumate

there is a common equivocation between intelligence and agency, where a roomba can seem more alive than ChatGPT despite being less intelligent, because it appears to have a more obvious purpose and moves in a way that expresses some sort of desire: it is literally trying to get somewhere and do something. and of course a cat or mouse has far more intelligent agency than any AI system despite not understanding verbal language at all.

our sense of humanity rests on the trinity of intelligence, agency, and ineffable moral worth, and as computation consumes intelligence we shift our attention ever more to what we believe remains unique to us.

yaldabaothadeez

Synthesis: humans are not intelligent