Every Night I Dream Deeply

[three images]

Questionable Wiring

I’m sure it’s fine. If it were a problem, someone would have inspected it.

The image(s) above in this post were made using an autogenerated prompt and/or have not been modified/iterated extensively. As such, they do not meet the minimum expression threshold, and are in the public domain. Prompt misplaced.

public domain art public domain free art auto-generated prompt unreality nijijourney v6 generative art ai artwork midjourney nijijourney niji
therobotmonster

If you've enjoyed a big swig of my Gatorade Sports Broth...

Or any of the other stuff I hurl into the void to baffle and amuse, whether through here, Melinoë Labs, or DeepDreamNights: money is never not-tight, and tips/gifts are welcome.

Thank you for enjoying, and remember:

[image]

Doom Toots as He Pleases.

therobotmonster

I need help with rent and with my phone bill: $300 for the former, $50 for the latter.

Please help. I'm doing the best clowning I can under the circumstances.

therobotmonster

Literally anything helps.

[image]

My Dye Was Not Color-Safe

And the rain could not be avoided.

The image(s) above in this post were made using an autogenerated prompt and/or have not been modified/iterated extensively beyond cleanup and color-correction. As such, they do not meet the minimum expression threshold, and are in the public domain. Prompt misplaced.

public domain art public domain free art auto-generated prompt unreality nijijourney v6 generative art ai artwork midjourney nijijourney niji

In a hilarious irony…

A robot mistook me for one of its own, and I caught a ban from Midjourney, of all places, for using third-party access/automation, when it was just me, in pain and with insomnia, stimming with MJ and sorting my recent project’s raw gens.

I’ve filed an appeal, but because a ban blocks you from the Discord server, and all the tech support goes through Discord, it could take anywhere up to two weeks(!?!) to be processed.

Normally, I’m very positive on Midjourney as an easy-access AI art system, but this customer-service snafu is a poor oversight and/or user-hostile design.

If you catch a ban in error you should be able to contact the support team.

midjourney my life ai art

Assembling your Cast in Vidu: “My Reference” First Impressions and Tutorial

(Disclaimer: I am in the Vidu Creative Partner Program)

For a long time, one of the major barriers to any kind of generative video has been character consistency. The solutions were either image-to-video with start frames or a general reference option, usually the former, sometimes both.

[image]

Vidu uses both, but has recently updated its reference system. We’re starting to get to the level of interface complexity I’ve been talking about in previous blogs.

Vidu’s reference-to-video and similar features were a good start, but with important limitations: you only had three images to reference per generation, and the AI had to guess who was what based on your text prompt. Combine that with some quirks around how it handles model-sheet-type images (more on that later), and you had a tool that worked but required a lot of finagling.

[image]

Vidu has now made a major improvement to the reference tool, one that lets you build profiles for each person or location you’re using, consisting of up to three images, a short style prompt (about one line), and a description prompt. These are saved to your account so you can call them up at any time, saving a lot of redundant prompting of character details.

The robot will fill in the description for you, but I suggest editing it to your needs.

[two images]

You can use up to three references per shot (scale between references is a little tricky), and, in a very helpful feature, you reference those characters by tag in the prompt body.

In the old reference system you’d generally need both a single shot of a character and a model sheet, because with just the model sheet the characters would tend to “twirl” while animating in chaotic and unnatural ways. That meant losing two of your three slots to one character; here, you can load everything in one place.

I’ve only played with it a little bit thus far, but here are my basic recommendations:

  • The first image should be a single shot of the character, and the second and third images can be model-sheet character assemblages. If you’re doing a reference specifically for talking head shots, you’ll want the face-shot in the first slot, and full-body reference in the latter ones.
  • Try to use multiple-angle reference wherever possible. This increases the stability of the character in general, even if the alternate angles are never shown. I have a tutorial for using Vidu to produce this kind of reference here.
  • Use the extra prompt space for specific visual instruction. The more direction you give, the less chaos and weirdness you get. (Assuming that’s your goal.)
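To make the recommendations above concrete, here is a minimal sketch of how you might organize these reference profiles in your own notes or batch scripts. This is purely illustrative, not Vidu’s actual API; the class names, the `@tag` prompt syntax, and the helper functions are my own assumptions, with only the numeric limits (three images per profile, three references per shot) taken from the tool as described.

```python
# Hypothetical sketch (NOT Vidu's API): organizing character reference
# profiles the way the updated "My Reference" tool frames them -- up to
# three images (single shot first, model sheets after), a one-line style
# prompt, and an editable description prompt.

from dataclasses import dataclass, field

MAX_IMAGES = 3          # Vidu allows up to three images per profile
MAX_REFS_PER_SHOT = 3   # and up to three references per generation

@dataclass
class ReferenceProfile:
    tag: str                                     # name used to call the character up
    images: list = field(default_factory=list)   # single shot first, then model sheets
    style_prompt: str = ""                       # about one line
    description: str = ""                        # auto-filled by the tool; edit to taste

    def add_image(self, path: str) -> bool:
        """Add a reference image, enforcing the three-image cap."""
        if len(self.images) >= MAX_IMAGES:
            return False
        self.images.append(path)
        return True

def build_shot_prompt(action: str, profiles: list) -> str:
    """Compose a shot prompt that references each character by tag."""
    if len(profiles) > MAX_REFS_PER_SHOT:
        raise ValueError("at most three references per shot")
    tags = ", ".join("@" + p.tag for p in profiles)
    return f"{tags}: {action}"
```

The point of the structure is the ordering convention from the bullet list: slot one is the clean single shot (or face shot, for talking heads), and the later slots hold the model-sheet assemblages.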
vidu ai vidu vidu CCP ai tutorial ai video ai animation reference prompting ai assisted art tyrannomax kitty concolor
[image]

Making a Monster - A Midjourney/Photoshop Tutorial

[two images]

Today, I’m going to be breaking down how I use Midjourney for character design.

I’ve recently figured out what I want to do with my joust at the Poke/Digi/Rancher-of-Mon type concept. A lot will be coming from that soon. I don’t have a name yet, but the mathematical formula is:

“oops, all Gardevoirs” multiplied by LadyDeviMon + the square root of MOTU over Bluth.

A couple of BioCritters are getting ported over to this new concept, specifically the Waifusaurus evolution branch:

[three images]

Which was itself a parody of Pokémon that are essentially just ladies anyhow, so it’s probably more accurate to say Waifusaurus spawned the unnamed LadyMon Project.

I made most of the first BioCritters in DALL-E 3 through Bing Image Creator, and Bing makes finding old prompts a pain. While Straifu, the Lois Griffin parody form, is the subject of today’s process, it followed the same prompt format I used with the Flintstones-inspired base Waifusaurus form:

vintage animation cell, a slender dinosaur-anthro housewife on flinstones, resembles humanoid dino the dinosaur, blue dinosaur-lady, purple tigerskin housedress, holding rolling-pin-made-of-rock 1963, in the style of 1960s hanna-barbera TV animation, character cel on white background, posed in a determined ready fighting stance

However, I have a specific look in mind for this new project, so it’s time to evolve the design.

Step one was to start with basic prompting. I built a new prompt that described what I wanted:

fullbody original production cel, white border all around, vintage animation cel, lavender humanoid woman-creature with large t-rex legs and tail, fan of feathers at the end, wearing teal button up blouse, bob haircut, clawed hands, lois griffin as a pokemon, female character design vintage cartoon screen capture (1993) by AKOM and TOEI , white background, beautiful variable-width black line art with cel shaded vintage cartoon color, painted backdrop, official media, UHQ 1996, official media, UHQ

I ran this in Niji 6, using the style moodboard I’d made for the purpose: --p m7298241701452185637, largely a mix of full-body character designs I’d generated in the style I wanted and 1980s animation model sheets. Moodboards are an expanded version of style prompting, which I outline here.
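If you batch your gens, the body-plus-parameters pattern above is easy to templatize. Here is a minimal, hypothetical helper (not anything Midjourney provides) for gluing a prompt body to its flags so the same moodboard code can be reused across a run; the function name and structure are my own.

```python
# Hypothetical helper (not a Midjourney API): assemble a prompt string
# from a descriptive body plus parameter flags like the --p moodboard /
# personalization code, so one style profile can be reused across a batch.

def assemble_prompt(body, moodboard="", extra_params=None):
    """Join a prompt body with Midjourney-style --flag parameters."""
    parts = [body.strip()]
    if moodboard:
        parts.append(f"--p {moodboard}")
    for flag, value in (extra_params or {}).items():
        parts.append(f"--{flag} {value}")
    return " ".join(parts)
```

This keeps the descriptive prompt and the style machinery separate, which matters once you start swapping character prompts over a series of variations.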

[image]

Examples from runs that produced nothing remotely like what I wanted. I could spend some time tinkering with the prompt to get closer, but character prompting is right there, so why not just load the original design in?

[image]

Yeah: as of v6, Midjourney/NijiJourney cannot grok abstract cartoony art styles, and character reference always pulls in at least a bit of art style as one of its limitations, so the general grotesqueness of its interpretations of these designs leaks through.

Now, there are still several things I could do here. The easiest would be to take the Straifu on the right and use a combination of in-painting and gradually swapping out the character prompt for a mix of “closer” options over a series of variations until it became something relatively close.

But right around now, I started getting ideas about how I wanted her to look. I liked the idea of a sort of “dinotaur,” so I rendered up a regular rex and, in Photoshop, combined it with one of the first-wave prompt-only failures and a recolored head in the style I was going for.

[three images]

This was a quick-and-dirty mockup, intended only for prompting purposes.

[three images]

Using a combination of the mockups, the original design, and various results from their iteration chain, I was able to get very close to the basic concept, only to run into two major issues: one, the huge lower-body effect wasn’t coming across as intentional (either disappearing into standard thicc-cartoon milfness or looking like AI screwups); and two, the design was boring when divorced from its bug-eyed cartoon aesthetics.

Now, you can do a lot with a flawed design, but a boring one means you need to start re-conceptualizing.

Which begins under the fold:


ai tutorial midjourney midjourney v6 nijijourney character design dinosaur dinosaur-anthro fakemon waifu lois griffin family guy parody dinosaurs parasaurolophus vidu vidu CCP vidu AI ai art ai assisted art multimedia art art tutorial photoshop long post do you like the color of the AI