Untilted

@yaldabaothadeez / yaldabaothadeez.tumblr.com

Should probably put something here now, huh?

If you held a gun to my head and forced me to have an opinion on "AI art"* it would be this:

The way most people** are using it currently, the experience of generating images with Stable Diffusion and similar things is closer to being art curation than to being an artist as such.

Given this, it becomes clear - judging by what people will post in, e.g., a forum thread where AI images are not explicitly disallowed - that many of the people doing this have not developed the primary skill of the curator: discernment.

Which is all to say: if you are going to post this stuff, it ought to be good. You need some standards, tighter ones than you'd apply to images made the old-fashioned way, because the flaws are that much less forgivable, and a lot of people just don't seem to have these yet.

(*not the algorithms themselves, which are honestly much more interesting. You've found a way to perturb white noise in the direction of being Goku? There's like a Gokuward vector field on the space of bitmaps or something? Something like an RG flow whose fixed points are Goku off of Dragonball Z?!)

(**not everyone! some people are using it as part of, like, doing art. This isn't about them. Not a hard-and-fast distinction, I know.)

So, putting some things I've been talking about recently together: this is what I understand is going on with Kelly bets and the whole "ergodicity economics" thing:

  • You have some system with randomness and a dynamic - what happens next depends on what went before. We'll model this as a discrete stochastic process X_n for n = 1, 2, ... but you could make it continuous instead. For Kelly, this dynamic was compound interest: X_n = X_{n-1} * [random variable]
  • Actually, you have a choice of several such systems, and you want to know which one is the "best" in some sense - assume bigger values mean better - but obviously X_n is a random variable, so it looks like you have to make some decision about what to measure (e.g. a "utility function" in the sense of von Neumann-Morgenstern)
  • But wait! If you're really lucky, X_n might converge to some deterministic f(n) (converge in probability? almost surely?) and then you can just choose the option whose f(n) is growing fastest.
  • This absolves you of having to find a vNM utility function, as the fastest-growing option will always be the best, eventually, so you can stop thinking about probability (Assuming you care about what happens asymptotically, or at least after enough rounds that it's a good approximation)
  • The easy way to show X_n converges seems to be to find an invertible monotonic function u such that Y_n = u(X_n) is a sum of n i.i.d. random variables (i.e. Y_n = Y_{n-1} + A, for the same random variable A each time), and then apply the law of large numbers: as n gets large, Y_n/n -> E(A), so Y_n ≈ n E(A) and X_n ≈ u^-1(n E(A)) - so, assuming u is increasing, you pick the option with the biggest E(A).
  • (The continuous versions of this I've seen seem to want to map the variable into Brownian motion with a constant drift and variance)
  • This feels like overkill to me? We don't need i.i.d. variables for the law of large numbers, just suitably bounded growth in the variance, and that's not the only way things can converge.

So it's not so much that the dynamics give you a utility function, it's that the dynamics mean it (eventually) doesn't matter what your function is
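The u = log trick is easy to see numerically. Here's a minimal sketch with made-up numbers (mine, not from the argument above): a gamble that multiplies your wealth by 1.5 or 0.6 with equal probability each round. E[R] = 1.05 > 1, but E[log R] < 0, so (1/n) log X_n converges to a negative growth rate - exactly the kind of deterministic limit that lets you stop worrying about utility functions.

```python
import math
import random

random.seed(0)

# Multiplicative dynamic: each round, wealth is multiplied by 1.5 or 0.6
# with equal probability (illustrative values, not from the post).
# E[R] = 1.05 > 1, but E[log R] = 0.5*log(1.5*0.6) = 0.5*log(0.9) < 0,
# so u(x) = log x turns X_n into a sum of i.i.d. terms whose average
# converges (law of large numbers) to a *negative* growth rate.
def growth_rate(n_steps: int) -> float:
    log_x = 0.0  # log X_0, starting from X_0 = 1
    for _ in range(n_steps):
        r = 1.5 if random.random() < 0.5 else 0.6
        log_x += math.log(r)
    return log_x / n_steps  # (1/n) log X_n -> E[log R]

expected = 0.5 * math.log(1.5 * 0.6)   # the LLN limit, about -0.0527
simulated = growth_rate(200_000)
print(expected, simulated)
```

The point being: almost every sample path of this process shrinks to zero, even though the arithmetic expectation E[X_n] = 1.05^n grows without bound.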

I keep chewing on this topic but people really seem to struggle to accept that intelligence is computable (in the technical sense of the term), presumably for the same reason they struggle to accept that humans are made of atoms (or "humans are complex assemblages of molecules").

but the consequence of this worldview is that anything that can be computed is not intelligence, so we end up with this god of the gaps situation where the definition of intelligence is constantly shrinking to exclude things that computers can now do.

a lot of religious people just say intelligence = soul and end the discussion there, which is obviously crazy but actually more defensible than the muddled middle who theoretically accept how the universe works but in practice retreat into vague platitudes that make no sense at all.

"that's not intelligence, that's just an algorithm! applied statistics!" well what did you think intelligence was, the divine spark?

there is a common equivocation between intelligence and agency: a Roomba can seem more alive than ChatGPT, despite being less intelligent, because it appears to have a more obvious purpose and moves in a way that expresses some sort of desire - it is literally trying to get somewhere and do something. And of course a cat or mouse has far more intelligent agency than any AI system despite not understanding verbal language at all.

our sense of humanity rests on the trinity of intelligence, agency, and ineffable moral worth, and as computation consumes intelligence we shift our attention ever more to what we believe remains unique to us.

Synthesis: humans are not intelligent

Imma keep it real with you, Tumblr ceo Matthew Charles Mullenweg, this ain't gonna solve this website's declining revenue stream

Intellectually, I understand that it's just another kids' media franchise designed to shift toys, but still: walking around a supermarket on the other side of the world, in the year 2025, and happening to meet the eyes, gazing out from a box of cereal, of the internet's own Twilight Sparkle - one does have to suppress a twitch

A robot may not injure a human being nor, through inaction, allow a human being to come to harm.

You might think this is Asimov's first law of robotics, and it's definitely how I remember it. But for whatever reason he actually consistently wrote "or" in the middle there, which, uh... doesn't mean the same thing.

So this is obvious, but: maximizing expected log wealth isn't enough.

If I offered you some kind of super-St.-Petersburg lottery with a 2^(-n) chance of winning 1.000000001^(2^n) dollars, you'd still be an idiot for paying your entire billion-dollar fortune for a ticket, despite what Kelly says.
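For what it's worth, the divergence is easy to check: the expected log payout of that lottery is a sum whose every term is the same positive constant, log(1.000000001), so the partial sums grow without bound. A quick sketch:

```python
import math

c = math.log(1.000000001)  # log of the base payout factor, roughly 1e-9

# E[log payout] = sum over n of 2^-n * log(1.000000001^(2^n))
#               = sum over n of 2^-n * 2^n * c = c + c + c + ...
# Every term is the same positive constant, so the series diverges.
def partial_expected_log(n_terms: int) -> float:
    return sum(2**-n * (2**n * c) for n in range(1, n_terms + 1))

print(partial_expected_log(10))    # 10 * c
print(partial_expected_log(1000))  # 1000 * c -- grows linearly, forever
```

So expected log wealth is infinite, and a naive "maximize expected log" rule prices the ticket at any finite fortune you own.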

Posting more as confusion than as an argument:

Kelly criterion says f* = p - q/b, where f* is the fraction of our money we should bet, p is the chance of winning, q = 1 - p is the chance of losing, and b is the payout odds (all non-negative).

So we never bet more of our wealth than p, which in the given scenario is 2^(-n). So Kelly already never bets our entire fortune on a minuscule chance of winning ungodly amounts.

On the other hand, you're right that expected log wealth increases to infinity the higher we raise the reward, which means that there's a high enough reward that would convince us to bet 99.999% of our fortune if that's what we're maximizing.

So Kelly criterion is not plain "maximize expected log wealth"?

Kinda? That's what it's doing implicitly: Kelly assumes you're making the same bet over and over again, with a fixed fraction of your wealth, and you're trying to maximize the growth rate in the t -> infinity limit, which will generally mean picking the option with the largest geometric mean (i.e. exp of the expected log). I think the particular version you've got is for when there's a p chance of getting b > 1 times the money you put in, and a q = 1 - p chance of losing it.
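As a sanity check on that formula (with illustrative numbers I've picked, not from the thread, and assuming b is the net payout per unit staked): brute-force maximizing the expected log growth per round does recover f* = p - q/b.

```python
import math

# For a bet that pays b-to-1 with win probability p, betting a fraction f
# of wealth each round gives expected log growth per round
#   g(f) = p*log(1 + f*b) + (1-p)*log(1 - f)
# and calculus gives the maximizer f* = p - q/b with q = 1 - p.
p, b = 0.6, 1.0   # illustrative: 60% chance of winning at even odds
q = 1.0 - p

def g(f: float) -> float:
    return p * math.log(1 + f * b) + q * math.log(1 - f)

# Brute-force the maximum over a fine grid of fractions f in (0, 1):
grid = [i / 100_000 for i in range(1, 100_000)]
f_best = max(grid, key=g)

print(f_best, p - q / b)  # both come out around 0.2
```

Note that f* = p - q/b is always below p when q and b are positive, which is the "Kelly never bets everything on a long shot" point from upthread.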

Working under the cut, in case you haven't seen it before and so that I can check I'm not spouting nonsense

Nerdsniped myself. So, sanity check: can I come up with a plausible answer to "how much should you pay for a ticket to the silly game in the OP?"?

Under the cut is stuff I wrote as I thought about it. There's a punchline at the end, but I might just be bad at this

Exasperated superhero having to explain for the third time this week that, no, he doesn't have fire powers, just generic glowy energy powers that do beams and forcefields. The glow just happens to be orange. Yes, it can be any color. You wouldn't be asking this if it were blue or pink but that's just as arbitrary.

Anonymous asked:

Spaceships ARE boats, and that's why they SHOULD be full of ladders, cords and rigging you have to clamber over.

Science fiction is one long argument between people who think spaceships are like 19th century sailing ships and people who think they’re like ww2 aircraft carriers and battleships.
