
I've lost control of my life 🌸🌸

@curlicuecal / curlicuecal.tumblr.com

just an ordinary ouija board operated by a colony of ants 🐜
-AO3-

Back when the netflix Daredevil first came out, I remember someone in some online discussion (forum?) exasperatedly trying to explain orientalism to people and ending up at 'It's always the Japanese mob that's secretly occult ninjas with a nefarious, centuries-long scheme. Why can't they just be the dumb assholes, and then the Russians are secretly on the run from Koschei the Deathless or escaped the Old Country after Baba Yaga ate their father or something?'

And, totally missing the point here, but ever since then I've kind of thought that 'urban fantasy where Koschei the Deathless is a mob boss' is a kickass idea.

Thinking about how wild it is that enshittification starts as a way for the rich to squeeze the populace for more money but ends up infecting everything so even luxury products decline in quality. They’ve got more money than fucking God now and for what? Literally they can’t even buy fun nice stuff for themselves because they killed craft.

Anyway this post is about Dhaka muslin but it’s also about everything.

guess it's time to post agha shahid ali's poem about dhaka muslin

Fun fact! Revival of Dhaka Muslin has been ongoing for quite some time. The headline of the above article is very misleading: we know exactly how Dhaka Muslin was made, and the process was very well documented. Colonialism ruined the fabric's production area and devalued the skills needed to make it until they no longer existed, but the process itself was never lost.

That being said, efforts to bring it back are underway, they have been making amazing progress, and they have succeeded in creating Dhaka Muslin yet again.

This is a pretty good updated article; it has a lot of the same info as the BBC one (which also discusses some of the revival efforts) but with more of a focus on that process, an update to the story, and details of some of the other ongoing projects working on the revival!

Here's the first weaver to manage to produce a finished piece in nearly 200 years, Al Amin.

His first piece was 300 threads; according to the article, they have now been able to get thread counts into the 700s, which is absolutely incredible.

Several projects are actually underway now each with different weavers and slightly different methods, producing fabric intended to meet or best the original!

And if you're curious, "okay, but can it pass through a ring?" Yes! Yes it can!

All three of these photos are of pieces made in the modern day, photos by Wasiul Bahar!

It's a very time consuming process, and a very expensive fabric to purchase, but love and passion for it have been steadily bringing it back!

There’s also a large grey area between an Offensive Stereotype and “thing that can be misconstrued as a stereotype if one uses a particularly reductive lens of interpretation that the text itself is not endorsing”, and while I believe that creators should hold some level of responsibility to look out for potential unfortunate optics in their work, intentional or not, I also think that placing the entire onus of anticipating every single bad angle someone somewhere might take when reading the text upon the shoulders of the writers – instead of conceding that there should also be a level of responsibility on the part of the audience not to project whatever biases they might carry onto the text – is the kind of thing that will only end up reducing the range of stories that can be told about marginalized people.

A japanese-american Beth Harmon would be pigeonholed as another nerdy asian stock character. Baby Driver with a black lead would be accused of perpetuating stereotypes about black youth and crime. Phantom Of The Opera with a female Phantom would be accused of playing into the predatory lesbian stereotype. Romeo & Juliet with a gay couple would be accused of pulling the bury-your-gays trope – and no, you can’t just rewrite it into having a happy ending; the final tragedy of the tale is the rock on which the entire central thesis of the play stands. Remove that one element and you change the whole point of the story from a “look at what senseless hatred does to our youth” cautionary tale to a “love conquers all” inspiration piece, and it may not be the story the author wants to tell.

Sometimes, in order for a given story to function (and keep in mind, by function I don’t mean just logistically, but also thematically) it is necessary that your protagonist has specific personality traits that will play out in significant ways in the story. Or that they come from a specific background that will be an important element of the narrative. Or that they go through a particular experience that will constitute a crucial plot point. All those narrative tools and building blocks are considered completely harmless and neutral when telling stories about straight/white people but, when applied to marginalized characters, they can be difficult to navigate, as, depending on the type of story you might want to tell, you may be steering dangerously close to falling into Unfortunate Implications™. And trying to find alternatives to avoid potentially iffy subtext is not always easy: depending on how central the “problematic” element is to your plot, removing it could alter the very foundation of the story you’re trying to tell beyond recognition. See the point above about Romeo & Juliet.

Like, I once saw a woman (a gringa, obviously) accuse the movie Knives Out of racism because the one latina character in the otherwise consistently white and wealthy cast is the nurse, when everyone who watched the movie with their eyes and not their ass can see that the entire tension of the plot hinges not only upon the power imbalance between Martha and the Thrombeys, but also on her isolation as the one latina immigrant navigating a world of white rich people. I’ve seen people paint Rosa Diaz as an example of the Hothead Latina stereotype, when Rosa was originally written as a white woman (named Megan) and only turned latina later when Stephanie Beatriz was cast – and it’s not like they could write out Rosa’s anger issues to avoid bad optics when they’re such a defining trait of her character. I’ve seen people say Mulholland Drive is a lesbophobic movie when its story couldn’t even exist in the first place if the fatally toxic lesbian relationship that moves the plot was healthy, or if it was straight.

That’s not to say we can’t ever question the larger patterns in stories about certain demographics, or that we shouldn’t draw lines between artistic liberty and social responsibility – much less that I know where such lines should be drawn. I made this post precisely to raise a discussion, not to silence people. But one thing I think is important to keep in mind in such discussions is that stereotypes, after all, are all about oversimplification. It is more productive, I believe, to evaluate the quality of the representation in any given piece of fiction by looking first into how much its minority characters are a) deep, complex, and well-rounded, b) treated with care by the narrative, with plenty of focus and insight into their inner life, and c) characters in their own right who can carry their own storylines and don’t just exist to prop up other characters’ stories. And only then, yes, look into their particular characterization, but without ever overlooking aspects such as context and how much nuance goes into handling that characterization. Much like we’ve moved on from the simplistic mindset that a good female character is necessarily one that punches good otherwise she’s useless, I really do believe that it is time for us to move on from the idea that there’s a one-size-fits-all model of good representation and start looking into the core of representation issues (meaning: how painfully flat it is, not to mention scarce) rather than the window dressing.

I know I am starting to sound like a broken record here, but it feels like being a latina author writing about latine characters is a losing game when there’s extra pressure on minority authors to avoid ~problematic~ optics in their work on the basis of the “you should know better” argument. And this “lowest common denominator” approach to representation, which bars people from exploring otherwise interesting and meaningful concepts in stories because the most narrow-minded people in the audience will get their biases confirmed, in many ways sounds like a new form of respectability politics. If it was gringos that created and imposed those stereotypes onto my ethnicity, why should it be my responsibility as a latina creator to dispel such stereotypes by curbing my artistic expression? Instead of asking them to take responsibility for the lenses and biases they bring onto the text? Why is it too much to ask people to wrap their minds around the ridiculously basic concept that no story they consume about a marginalized person should be taken as a blanket representation of their entire community?

It’s ridiculous. Gringos at some point came up with the idea that latinos are all naturally inclined to crime, so now I, a latina who loves heist movies, can’t write a latino character who’s a cool car thief. Gentiles created antisemitic propaganda claiming that the jews are all blood-drinking monsters, so now jewish authors who love vampires can’t write jewish vampires. Straights made up the idea that lesbian relationships tend to be unhealthy, so now sapphics who are into Brontë-ish gothic romance don’t get to read this type of story with lesbian protagonists. I want to scream.

And at the end of the day it all boils down to how people see marginalized characters as Representation™ first and as narrative tools created to tell good stories later, if at all. White/straight characters get to be evaluated on how entertaining and three-dimensional they are, whereas minority characters get to be evaluated on how well they’d fit into an after-school special. Fuck this shit.


Ok, this house is weird. Firstly, I was wondering what was up w/the garage door.

Turns out it's a mirror. Built in 1955 in Palm Springs, CA, it's been remodeled and you must see the choices. 3bds, 3ba, 2,319 sq ft, $1,499,999.

have they considered converting it into a children’s hospital?

that master bedroom is a nightmare

they keep saying tumblr only has the one joke, but maybe we could come up with more if it stopped being so fucking relevant all the time.

i appreciate that i saw both of these on my dash within about five posts of each other. we’re gonna need both moods going forward, tbh.

my favorite genre of fictional character is like "i am terrifying to almost everyone, i'm very good at killing, i can endure anything, i've become exceptionally good at playing into my reputation, and if you try to give me positive social interaction i will react with confusion and cower in a corner like an abused animal. and i may try to shoot you. but there is also a chance i may imprint on you like a feral dog receiving its first loving touch! good luck."

Ad art for Tentacl.com

Gaia Interactive

they asked me to draw a business dude havin some consensual fun with tentacles~ also glasses and no glasses versions :OOO

One of the best pieces of writing advice I have gotten in all the months I have been writing is "if you can't go anywhere from a sentence, the problem isn't in you, it's in the last sentence," and I'm mad because it works so well and barely anyone talks about it. If you're stuck at a line, go back. Backspace those last two lines and write it from another angle or take it down some other route. You're stuck because you thought up to that exact sentence and nothing after that. Well, delete that sentence and make your brain think, because the dead end is gone. It has worked wonders for me for so long it's unreal

I don't remember where I heard this now, but I absorbed the advice, "if you're stuck, count ten sentences back and start again from there". It's not always ten sentences back, for me, but it does force me to look at the last handful of lines I've actually written on a sentence level instead of a story level, and that is eminently helpful in unsticking myself most of the time.

I recently resolved a point where I'd been stuck for months not by changing anything in the scene I was currently writing, but by realizing I needed to add another scene before that one to establish key information I couldn't work into the current one

HEY WRITER MUTUALS COME GET YOUR WRITER JUICE

Kinda loved how Sonic characters would have their own music styles. This was very noticeable in the adventure games especially.

Sonic mostly gets pretty upbeat punk rock, sometimes with some pop flavor, whereas Shadow gets a much heavier sound, verging on metal. Knuckles gets rap music, while Rouge gets jazz. Amy gets pop music with some swing, Big gets a surfer tune, and meanwhile Eggman's theme is straight-up hard rock.

All this is to say that Surge the Tenrec would love industrial rock. Get that girl some Nine Inch Nails.

I will make a fancam don't you test me

“He’s killed hundreds, you know,” Kakashi tried again. Iruka stared at him for a long time. “All right, fine,” Kakashi sulked, “I suppose that was a little bit hypocritical.”

Kakashi’s objections to Gaara are a bit nitpicky (chp. 1)

White Wedding by rageprufrock (AO3) Naruto – Teen – Hatake Kakashi/Umino Iruka, Uzumaki Naruto/Gaara #Canon Divergent #Relationships #Iruka POV

Sandaime never had to put up with any of this crap.


i'm testing the capabilities of a Large Language Model That Shall Not Be Named and. dear god it's really bad at summarizing this novel

They're terrible at most longform stuff.

There was a certain point when context windows were very small, and then people started making advancements that allowed them to have huge context windows, big enough to fit a novel into. This happened without any additional training, and we went from a context window of 2K tokens to 70K tokens overnight.

This was mostly done with mathematical tricks, and they all seem to have in common that they're approximating attention.

So what you get, when you throw a novel into one of these LLMs, is a severe degradation in performance on any task that depends on actually having the entire novel "in context". Summarizing a novel is something it's going to be much worse at than summarizing a chapter, and the longer the book, the worse it gets.

(This is my understanding from talking to people at Anthropic and Google. The massive increase in token limits was really puzzling to me, especially because they never explained in their press releases that touted these increases that performance does take a hit. My information might be out of date now, and it's possible they found other methods that don't degrade the results as much.)
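(To make "approximating attention" a bit more concrete, here's a minimal sketch of one member of that family of tricks: local-window attention, where each token can only attend to a fixed number of recent tokens. This is purely illustrative; it's not a claim about what any particular provider actually ships, and real systems use more sophisticated schemes.)

```python
import numpy as np

def windowed_attention(Q, K, V, window=512):
    """Local-window attention: token i only attends to the previous `window`
    tokens. Cost is O(n * window) rather than O(n^2), but anything that fell
    out of the window is simply invisible to that token -- one way information
    about early chapters can quietly drop out of the picture."""
    n, d = Q.shape
    out = np.zeros_like(V)
    for i in range(n):
        lo = max(0, i - window + 1)
        scores = Q[i] @ K[lo:i + 1].T / np.sqrt(d)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        out[i] = weights @ V[lo:i + 1]
    return out

# Toy demo: 1,000 "tokens" of 16-dim noise, each attending to at most 64 neighbors.
rng = np.random.default_rng(0)
x = rng.normal(size=(1_000, 16))
print(windowed_attention(x, x, x, window=64).shape)  # (1000, 16)
```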

(I work at Google)

Yes, this is basically correct. The short answer is that when Google says Gemini has a 2M-token context window, what they mean is that it "actually" has a 32K-token context window, and that for inputs bigger than that they have a way to "digest" sequential chunks of it in such a way that you can feed forward through repeated invocations of the model and it usually-kinda-sorta keeps all the relevant bits all the way to the end. But it's not that hard to trip up this process and have it miss something, which is what probably happened here.

Actually increasing the context window to those sizes isn't really feasible as long as we're using transformers with attention, because attention is quadratic in context window length. But the whole industry is painfully aware of this and so there's a furious race to figure out what to replace it with.
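(For concreteness, here's a minimal sketch of where the quadratic cost comes from: vanilla scaled dot-product attention has to materialize an n-by-n score matrix, so doubling the context length quadruples the work. Toy NumPy, not anyone's production code.)

```python
import numpy as np

def naive_attention(Q, K, V):
    """Vanilla scaled dot-product attention. Q, K, V are (n, d); the score
    matrix is (n, n), so compute and memory grow quadratically with n."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # (n, n) <- the quadratic part
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # (n, d)

# How the score matrix scales with context length:
for n in (2_000, 70_000, 2_000_000):
    print(f"{n:>9,} tokens -> {n * n:>19,} attention scores per head, per layer")
```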

Even setting aside the need to do quality-degrading tricks to get around the quadratic bottleneck[^1]...

...there is also the fact that long-context LLM stuff exposes a key difference between the way transformers "read" text and the way humans do.

--------

With a human, it simply takes a lot longer to read a 400-page book than to read a street sign. And all of that time can be used to think about what one is reading, ask oneself questions about it, flip back to earlier pages to check something, etc. etc.

On average, a text that is long will require a greater quantity of thought to understand than one that is short. This is not just a matter of the text having "more things" in it to understand one by one, just like it has more words in it that you read one by one; length creates the potential for the expression of more complicated ideas, and denser webs of interconnections between elements of the text (ideas, characters, themes, etc).

But if you're a human, this "greater quantity of thought" can just happen concurrently with the greater quantity of time spent reading the text. You read a few pages, you pause to think for a moment, you read some more, you pause to think... and the more pages there are, the more pauses-for-thought you get, just by default.

(Obviously that portrayal is sort of a cartoon of how reading works, but the basic principle – you get more thinking-time automatically when you're investing more reading-time – holds up.)

--------

However, if you're a long-context transformer LLM, thinking-time and reading-time are not coupled together like this.

To be more precise, there are 3 different things that one could analogize to "thinking-time" for a transformer, but the claim I just made is true for all of them (with a caveat in one case). I'm talking about:

  1. Layers: The sequential layer-by-layer processing that happens within a single forward pass of the model
  2. Attention: The parallel key-value lookups over the context window that happen inside the attention step of each model layer
  3. CoT: The act of sequentially sampling tokens from the model in a way that resembles the model producing a verbal monologue that sounds like thinking (AKA "Chain of Thought" or CoT for short)

#1, sequential layer-by-layer processing, is the kind of "thinking" (if we want to call it that) which the model does internally, to figure out what to predict for the next token.

Crucially, the "length" of this thinking-about-the-next-token process is a fixed constant, always equal to the model's number of layers. It doesn't vary with the length of the input. If the model has (say) 80 layers, then it's always going to do exactly 80 "steps" of this type of "thinking," no matter whether those steps are processing a single word or a million words.
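(A toy illustration of that point, with made-up stand-in "layers": the number of sequential steps below is pinned to the layer count, and feeding in a thousand times more tokens doesn't buy a single extra step, it just widens the per-layer work.)

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_layers = 32, 80

# Stand-in "layers": each is just a random linear map plus a nonlinearity.
layers = [rng.normal(scale=0.05, size=(d_model, d_model)) for _ in range(n_layers)]

def forward(x):
    """Schematic forward pass over x of shape (seq_len, d_model). The loop runs
    exactly n_layers times regardless of seq_len; longer inputs only add
    parallelizable per-layer work, never more sequential "thinking" steps."""
    steps = 0
    for W in layers:
        x = np.tanh(x @ W)
        steps += 1
    return x, steps

for seq_len in (10, 10_000):
    _, steps = forward(rng.normal(size=(seq_len, d_model)))
    print(f"{seq_len:>6} tokens -> {steps} sequential layer steps")
```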

#2, attention, is the one that needs a caveat. Because it is true that transformers do more computation in their attention layers when given longer inputs.

But all of this extra computation has to be the kind of computation that's parallelizable, meaning it can't be leveraged for stuff like "check earlier pages for mentions of this character name, and then if I find it, do X, whereas if I don't, then think about Y," or whatever.

Everything that has that structure, where you have to finish having some thought before having the next (because the latter depends on the result of the former), has to happen across multiple layers (#1); you can't use the extra computation in long-context attention to do it.

This is the price you pay for parallelism, which is the whole reason that LLMs can be as fast as they are. That is, when an LLM looks like it's "reading a book" in 30 seconds, its ability to do this depends completely on the fact that what it's doing is very different in this particular way from what you and I would think of as "reading a book."

You and I would read for a long time, and also have a long time's worth of thoughts that form a logically connected sequence, while we're reading. But the LLM just sort of ingests the whole text at once in this weird, impoverished, parallelized way in its attention layers, while also doing its usual layer-by-layer sequential processing, which only gets to go on for as long as the model's layer count permits.

(See my post here for more details on this stuff)

#3, CoT, is probably the most recognizably similar of the three to human thought, of the sort that one might do while reading a book.

However, it's slow, and also it's sort of a separate thing that these models don't do by default the way they do #1 and #2. And in particular, when it does happen, it usually happens after reading the whole text, at which point it's "too late" in the sense that the model's computations on intermediate parts of the text can't benefit from anything it "figured out" in the CoT; they've already happened by the time the CoT starts.

(A fairly obvious [?] idea – or at least one that has occurred to me before – is to do long-context LLM reading by splitting the text into smaller chunks like chapters, and having it write some CoT "notes to self" in between reading each chunk and the next. In this setup the LLM would be doing something a lot closer to humanlike reading. I can't remember if I've actually tried this, but it's certainly something you could do without any special tooling.

However, it'd be slow because CoT is slow, and hence no LLM provider is going to do it by default for long-text processing. Instead, by default you get the thing that's fast, but also subtly bad in a hard-to-explain way. Caveat emptor!)
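(If you wanted to try that chunked "notes to self" idea yourself, the scaffolding is about this simple. `llm` below is a stand-in for whatever completion call you have handy, with a hypothetical prompt-in, text-out signature; it is not any specific provider's API.)

```python
def read_with_notes(chapters, llm, question):
    """Sketch of reading a book chunk by chunk, keeping running CoT-style notes.
    `llm` is a hypothetical callable: prompt string in, completion string out."""
    notes = ""
    for i, chapter in enumerate(chapters, start=1):
        notes = llm(
            f"Running notes on the book so far:\n{notes}\n\n"
            f"Chapter {i}:\n{chapter}\n\n"
            "Rewrite the notes, carrying forward whatever is needed to follow "
            "the characters, plot threads, and open questions."
        )
    # Only after the slow, sequential reading pass do we ask the actual question.
    return llm(f"Notes on the whole book:\n{notes}\n\nQuestion: {question}")

# Toy stand-in so the sketch runs end to end; a real `llm` would call a model.
fake_llm = lambda prompt: prompt[-300:]
print(read_with_notes(["Chapter one text...", "Chapter two text..."], fake_llm,
                      "Who is the narrator?"))
```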

--------

When I've tried to come up with an analogy in human terms for what these long-context LLMs are doing, I've ended up with this:

Imagine you are given a superpower that lets you glance at any book, and immediately "know" (and be able to recall from memory) every single word of it. Not only will you remember the words, you will also "know what they mean"... but only with the most knee-jerk, surface-level, unreflective sort of understanding of which you are capable.

Like skimming taken to an extreme, albeit without any actual skipping: you really will have the whole thing in your head, after the glance. But you'll have it in a nearly undigested form.

It'll be like you've somehow read the whole thing cover to cover, while somehow not expending any mental effort whatsoever to follow what it is you're reading.

Or more precisely: consider the immediate, unreflective response you might have to a single sentence or paragraph, after you've understood it on a verbal level but haven't spared any time to ponder it. Then, imagine that you could somehow have an equally superficial reaction to a whole book at once. That's what we're talking about here.

So, like, you could glance at a novel, and know all of the things that happened in it... insofar as those things were presented 100% transparently, without requiring any effort on the reader's part to "connect the dots" (even in a fairly trivial way).

But wherever there are "dots" that require "connecting," you won't have connected them; obvious questions like "wait, who is that guy? have I seen his name before?" will go unasked, and your intuitive sense of the later parts of the book will become more and more distorted as these unthinking surface-level takes on the early stuff get reused to (badly) interpret the slightly-less-early stuff, and so on.

Now, after the glance, you do in fact remember all the words. And you may, if you wish and at your leisure, begin to actually think about what you read. You can ask yourself "really, though, who was that guy? that one character? what was his deal?", and begin to piece together a real answer by tying together the words (and superficial gut-level interpretations) that you now remember. You might have to do a lot of this, though, to "get" a long or difficult book; there may just be a lot of things that need thinking-time, and for you, that thinking-time can only begin when the actual reading ends.

(What I describe in the last paragraph is analogous to an LLM doing CoT generation after taking in a book as input, where the CoT is just trying to help the LLM understand what it read rather than doing some additional task on top of that.

As I indicated, such a CoT might have to go on for a very long time – much longer than the sorts of CoTs people are used to eliciting from LLMs – in order to reach a deep understanding of the book.

And if you don't include this step at all, and just start asking the LLM about the book right away, what you're getting is what the glance superpower guy would say right after a glance. Unreflective takes about a text that he's ingested, but not digested.)

--------

[^1] on that topic I am pretty bullish about this recent effort, though time will tell how good it really is... (I tried uploading some novels into its free web interface yesterday but it timed out, so I couldn't do a vibe check myself)
