The Third Attention Animation in 720p

Unless someone finds a mistake in the final rendering, this is the finished cartoon. I'll upload the 4K to https://archive.org/details/@danl999 when I'm sure this one didn't end up with any broken parts.

There's nothing in here you shouldn't be doing and seeing daily!

I suppose moving the assemblage point that fast is a bad thing to do until it's not possible to keep going in your flesh body.

But you could move it 2 levels without any serious harm, and you'd get to see what it must be like when you can move it the entire distance.

/media/1mujrbn/7midrfg3jzjf1.png

31 Comments

u/TechnoMagical_Intent 6 points 2025-08-19 15:16

I'll upload the 4K...when I'm sure this one didn't end up with any broken parts.

And a YouTube upload, as well.

A minor thing I noticed is the six seconds of silence at the very start, which makes you question whether your speakers or headphones are muted.

But if you're that impatient, who cares if you're watching it anyway.

The Full Animations List

u/danl999 3 points 2025-08-19 16:02

I'll remove some of that in the 4K version.

It's the "prerendering" time to let the wind blow some dust across the screen.

u/TechnoMagical_Intent 3 points 2025-08-19 21:28

The only other thing I would suggest is using a gender neutral pronoun @11:53

“Their life” instead of “his life.” So as not to alienate the ladies.

u/danl999 2 points 2025-08-19 21:36

Ah, good point.

I might edit the 4K version and do that, but still leave the 720 alone since I just posted it all over.

u/pumpkinjumper1210 1 points 2025-08-19 16:37

The YouTube is here:

u/Resident-Kangaroo-85 1 points 2025-08-19 16:54

Do the violet swirls actually appear when things form?

u/danl999 7 points 2025-08-19 18:03

I'm not really sure what part of the cartoon you're referring to.

Give me the video minutes and seconds and I'll look.

But I tend to be as accurate as I can, with the "assets" I have (animation tools).

And in general, when something forms from the darkness, there's often a flow of purple puffs just before the thing appears. Usually slightly lower than your stomach, flowing out and up towards what's going to manifest.

That's not surprising. It's your energy body.

You REALLY DO "restore" your energy body using Tensegrity!

Who could have guessed!?!!

Carlos wasn't just making up colorful descriptions of pretend magic, the way the Chinese do. You should see some of the whoppers Chi Gung masters tell. They're so famous for their lies that they have them on morning shows in Taiwan, as a bit of a joke. It's sort of like interviewing Santa Claus as if he were real.

It's a pity Carlos was gone before we had fun stuff to ask him about, based on direct experiences.

I've been exploring going directly from awake into dreaming, using Silent Knowledge.

I have a ton of questions, and no one to ask.

You almost find, once you start to practice such things, that you're already in dreaming and had simply blocked it out of your mind.

I suspect that's really just a "history" of you having already been inside a dream, popping up because of the technique.

And you weren't actually "already in there". It's just part of the waking dream. The history of that place.

But it sure feels like you were already there, and had refused to notice it.

u/Resident-Kangaroo-85 1 points 2025-08-19 18:06

5:25

u/danl999 4 points 2025-08-19 18:18

No. You can't make a rule on what happens going in and out of the IOB world.

Fairy used to set it up for me so that I could take 1 long step right inside it, through my solid bedroom wall.

For real. She did that more than a few times.

At the time, I suppose I didn't appreciate how amazing that really was!

I'd gotten used to an IOB producing red zone magic for me.

And to get back from her world, to my own, all I had to do was take a single step back and I was again in my room.

From my room, I could look inside it like there was a glass barrier there where my wall used to be.

But from the inside, I couldn't see back into my own world. I just had to assume that taking a step back would return me to my room.

That pink swirl is just the "Whirl.iParticle" asset of iClone 8, which is normally green.

/media/1mujrbn/ghwsuecpp0kf1.png

u/millirahmstrudel 4 points 2025-08-19 17:00

Thanks for the video.
I would have a suggestion: at 0m49s, an illustration of Chalcatzingo is shown. When I saw it for the first time, it seemed a little too brief. I then watched it again using the pause button, and now that I'm familiar with it, I don't feel that way anymore. But on a first viewing, I would have preferred to see it for 1-2 seconds longer.

At 9m57s, a sound can be heard illustrating the burning with the fire from within. The noise comes as a surprise and startled me. Was that the intention, or is there a reason for that specific sound? If not, maybe some kind of fire sound would be a possibility (e.g., a flamethrower sound). I don't know if there would be any sound at all in reality if a sorcerer chooses to go that way.

u/danl999 5 points 2025-08-19 18:06

That's the sound I heard once, when I moved my assemblage point 3 levels in one second.

I'd wager a small bet you'd hear the same sound!

u/millirahmstrudel 1 points 2025-08-19 20:49

"That's the sound I heard once, when I moved my assemblage point 3 levels in one second."

Wow. I would never have imagined that.

u/danl999 7 points 2025-08-19 21:12

It sort of makes sense. That sound is what you might hear when electrocuted.

And there's no actual sound involved in that.

It's just what the ears perceive when the nerves are all overly excited.

Or is it the 60Hz flowing through you?

At any rate, that sound isn't exactly right.

What I heard was closer to large gongs all going off at the same time, in a small space, rapidly.

Cholita has a gong...

I never got around to asking her why.

Maybe she summons her stray cats using it?

At least one of them (more likely 2) seems worried about where she's gone off to.

I find them hanging out in the driveway at 3AM lately.

Mewing like they're calling out to a lover.

u/isthisasobot 1 points 2025-08-19 17:50

Awesome, beautiful, and crystal clear!

u/More-Thing-1158 1 points 2025-08-20 08:06

Awesome. And if you still have the script accessible, please share it too!

u/danl999 4 points 2025-08-20 11:45

I just write those in Eleven Labs, when I need one. That's based on what's happening in the cartoon, and what I "see" in the air when I practice that evening.

So there's no script.

I think Carlos did that too, as he was figuring out how to give us a path.

He started out with single movement Tensegrity, then realized it wasn't working so he came up with the long forms.

I have no doubt he didn't really design all that himself. He just "saw" what to do each day, and did it.

He alludes to it in that final interview for a Spanish-language publication.

I believe he elaborated more than he ever had, making that publication important to us.

He was setting the record straight on what Tensegrity is, because he had to go.

u/xi8t 1 points 2025-08-22 05:56

It would be cool if in the future you could reimagine the composition with some AI Video Model that could enhance all the effects and quality.

u/danl999 2 points 2025-08-22 11:41

That's inevitable.

I love the YouTube AI videos, like this retro sci-fi video.

I'm hoping to put a projector in my talking teddy bear's eye, so that it can project pictures when the child asks.

But I haven't given up on running gigantic AIs inside it, using a trick I've been trying to understand.

Most AIs are currently stateless, but I've run across a way to make them store past calculations so that they don't have to do those again, for the current situation.
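For what it's worth, this sounds like what transformer runtimes call a KV cache: the key/value projections for tokens already processed are stored, so extending the prompt only costs compute for the new tokens. A toy numpy sketch of the idea (the dimensions and weight matrices here are invented purely for illustration, not taken from any real model):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                  # toy embedding size
Wq, Wk, Wv = (rng.standard_normal((D, D)) for _ in range(3))

class KVCache:
    """Stores key/value projections of tokens already seen, so a
    follow-up call only has to project the newly added tokens."""
    def __init__(self):
        self.keys, self.values = [], []

    def attend(self, new_embeddings):
        # Project ONLY the new tokens; earlier projections are reused.
        for x in new_embeddings:
            self.keys.append(Wk @ x)
            self.values.append(Wv @ x)
        K = np.stack(self.keys)        # (tokens_seen_so_far, D)
        V = np.stack(self.values)
        q = Wq @ new_embeddings[-1]    # query for the latest token
        scores = K @ q / np.sqrt(D)
        w = np.exp(scores - scores.max())
        w /= w.sum()                   # softmax attention weights
        return w @ V                   # attention output for latest token

cache = KVCache()
out1 = cache.attend(rng.standard_normal((5, D)))  # 5 tokens projected
out2 = cache.attend(rng.standard_normal((1, D)))  # only 1 more projected
```

The second `attend` call projects only the one new token; the five projections from the first call come straight out of the cache, which is the "don't redo past calculations" trick.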

u/TechnoMagical_Intent 1 points 2025-08-22 14:49

It's an hour and a half long! 🤯 Composed of 10-second generated segments, from the looks of it.

I've never seen someone put together one of that length before...

u/danl999 4 points 2025-08-22 16:11

And there's the limitation of AI animation creation for now.

It loses track of what it was told to make, if it goes too long.

So someone had to stitch together 10-second clips. The AI couldn't keep generating something with a consistent plot.

However, if you remove the randomizing factors imposed on AIs, such as "temperature", an AI will repeat precisely the same output for the same input.

We just don't have access to that level.

I feel that's one of the big flaws of drawing AIs so far. They randomize them, so you can't "fine tune" a request and get the AI to change just the things you need changed.

Each request gets perturbed by random numbers.
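That determinism point is easy to demonstrate: with the temperature knob at zero, sampling collapses to an argmax, and the same logits always produce the same token. A small sketch (the logits array is invented for illustration):

```python
import numpy as np

def sample_token(logits, temperature, rng):
    """Pick a token id from raw logits. temperature == 0 means pure
    argmax (deterministic); higher temperatures mix in randomness."""
    if temperature == 0:
        return int(np.argmax(logits))
    scaled = logits / temperature
    p = np.exp(scaled - scaled.max())
    p /= p.sum()                       # softmax over scaled logits
    return int(rng.choice(len(logits), p=p))

logits = np.array([1.0, 3.5, 0.2, 3.4])

# Temperature 0: identical result on every run, regardless of seed.
greedy = [sample_token(logits, 0, np.random.default_rng(i)) for i in range(5)]

# Nonzero temperature: the random draw can pick different tokens.
warm = [sample_token(logits, 1.5, np.random.default_rng(i)) for i in range(5)]
```

Public APIs generally expose the temperature setting but not the underlying random seed, which is the "level we don't have access to."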

u/xi8t 3 points 2025-08-24 02:11

I think 10 seconds could be enough to generate one magical pass, and then many of them could be concatenated into a long form. It can be done with ComfyUI, by applying ControlNet to a short video of a magical pass, using a depth map or Canny edges for example. ComfyUI is a good tool for avoiding that "randomization" effect, and you can actually create workflows that work pretty well for particular tasks.
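To make the ControlNet part concrete: the model never sees the reference video directly, only a per-frame conditioning image such as a depth map or an edge map. Below is a crude numpy stand-in for the edge-map step, run on a synthetic frame; a real workflow would use OpenCV's Canny or a depth estimator like DepthAnythingV2 as described, and the threshold here is arbitrary:

```python
import numpy as np

def edge_map(frame, threshold=0.2):
    """Crude per-frame edge detector: normalized gradient magnitude,
    thresholded to a binary conditioning image (a stand-in for a
    real Canny pass in a ControlNet workflow)."""
    gy, gx = np.gradient(frame.astype(float))  # per-axis derivatives
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag = mag / mag.max()
    return (mag > threshold).astype(np.uint8) * 255

# Synthetic 64x64 "frame" with a bright square: edges at its border.
frame = np.zeros((64, 64))
frame[16:48, 16:48] = 1.0
control = edge_map(frame)
```

Running one such pass per frame of the reference video yields the conditioning sequence that keeps the generated figure locked to the original pose, which is the point of using ControlNet here.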

u/danl999 3 points 2025-08-24 13:42

Certainly AIs will be able to do that eventually, but I doubt they can right now.

Even mocap from video is currently impossible with AI.

It's just hype by people selling mocap software.

I even bought a $4000 mocap suit, and found it also doesn't work well enough to be worth bothering with.

I also discovered, no one wants to buy it from me. I couldn't even give it away.

I guess I was the only one who didn't realize the mocap industry is still largely a fake.

Why not try what you say?

u/xi8t 3 points 2025-08-26 01:16

This is what I got from trying. I found the following workflow image and tried to use it as it is by default, but it didn't work as expected. I changed the node for generating depth images to DepthAnythingV2, and the quality is much better. The ControlNet strength also has to be calibrated appropriately.

The sampler settings seem to be a problem too: the defaults were 8 CFG, the "flowmath_pusa" scheduler, and 30 steps, but those don't work properly for this model (or maybe the issue is something else?). I played with different values for CFG, the scheduler, and steps, and it got a bit better, but it's still not how it's supposed to work. I couldn't find the recommended sampler settings for the WAN model that was in this workflow, which is why I had to experiment. The cloud server costs $0.94/hour (an RTX 5090 on RunPod), and I just ran out of funds to keep it working.

I think it is possible to make it work with more trials and by adjusting the workflow (maybe trying different WAN models). There is also another ControlNet that can work with Canny edges; sometimes it produces more accurate results for copying a position (the one in this workflow uses a depth map instead). I am new at this and still learning, but I've seen good results with these techniques.

Here is the Google Drive; I included 2 result videos, the script for downloading the models, the JSON file with the workflow, and videos of the depth map and the reference.

I also found this GIF here, and it looks kind of smooth. I opened the workflow it was made with, and it looks much scarier than the one I was experimenting with.

/media/1mujrbn/01hz12tll9lf1.gif

u/xi8t 1 points 2025-08-26 01:21

/media/1mujrbn/z10004ktm9lf1.gif

u/danl999 2 points 2025-08-26 11:18

She's doing the pass wrong, and puffs don't flow like that.

u/xi8t 1 points 2025-08-26 17:59

Sure thing. I mentioned the sampler settings; the picture looks weird because of them.

u/danl999 2 points 2025-08-26 11:17

It's a transgendered wonder "woman"?

u/xi8t 1 points 2025-08-26 18:07

I think it's because it's trying to fill the mask where the masculine character was, and it fails to get the proportions right. That happens a lot with Stable Diffusion models.

u/danl999 3 points 2025-08-26 19:11

I plan to put one inside my teddy bear, for the "LCD eye projector".

Those are only around $3!

And putting that AI in there is free. Doesn't cost anything to use it.

So maybe I'll have a better understanding of what they're capable of, when I have to test one to see what side instructions I can give it.

/media/1mujrbn/aizwy9fqxelf1.jpeg

u/xi8t 1 points 2025-08-26 19:32

I've seen many of your posts where you explain what you're working on. It is still hard to imagine heavy models running on this setup, as they require a lot of computational power, and even one good CPU still won't produce the desired results.

u/danl999 3 points 2025-08-26 20:53

The typical FPGA I've used in the past is 1 million times faster than a "good" CPU.

I calculated that in the past for other designs I made using a much smaller FPGA device, and my competitor came up with the same number.

This FPGA I'm using for the teddy bear is 10 times the size and double the speed.

So it should be around 20 million times faster than the fastest Pentium on specific jobs, like AI "inference".

Thus I'm confident I can run it in real time.
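The back-of-envelope behind that figure, using only the numbers stated above (rough estimates from the comment, not measurements):

```python
# Claimed speedup of a past, much smaller FPGA design over a "good"
# CPU on its specific job (per the comment, and independently
# estimated by a competitor):
past_speedup = 1_000_000

size_factor = 10   # new part is ~10x the size -> ~10x the parallel logic
clock_factor = 2   # and ~double the speed

new_speedup = past_speedup * size_factor * clock_factor
print(new_speedup)  # 20,000,000 -> the "20 million times" figure
```

The size factor assumes throughput scales roughly linearly with available logic, which holds only for embarrassingly parallel work like inference, not general-purpose code.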

However, if it's a little too slow, there are tricks you can use.

The same tricks humans use when they respond, "Uh..." before answering.

For example, the child asks a question, and if the number of tokens in the question is too high, you activate the text-to-speech generator that makes the teddy bear speak and say, "That's a really interesting question. Let me think about it..."

Using canned phrases selected by a tiny AI "expert on stalling tactics".

You can infer an answer from one of those, in milliseconds.

Saying that phrase gives you 3 more seconds to answer and no one will complain it's too slow.
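That stalling tactic could be sketched like this; the helper, the token budget, and the canned phrases are all hypothetical, invented for illustration:

```python
def stall_if_needed(question, token_budget=12):
    """If the question is long enough that inference will be slow,
    return a canned stalling phrase to speak while the real answer
    is computed; short questions get no stall (None)."""
    stalls = [
        "That's a really interesting question. Let me think about it...",
        "Hmm, good one. Give me a second...",
    ]
    tokens = question.split()  # crude whitespace token count
    if len(tokens) > token_budget:
        # Deterministic pick so the same question stalls the same way.
        return stalls[len(tokens) % len(stalls)]
    return None

short = stall_if_needed("Why is the sky blue?")          # fast: no stall
long_q = ("Can you explain to me why the moon sometimes looks so "
          "much bigger when it is near the horizon at night?")
stall = stall_if_needed(long_q)                          # slow: stall first
```

In the real device, speaking the stall phrase would run concurrently with the actual inference, buying the few seconds described above.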

The truth is, AIs that are finished and not subject to further development, such as one you'd embed in a robot, are best run on custom hardware rather than expensive and wasteful GPU cards.

I predict $20 chips that run something more helpful than ChatGPT, in toys or appliances.

How about an AI in your fridge that can take a list of what you have and what kind of food you like, and give you a recipe for dinner? And draw a picture of it on a little LCD screen.

Or a microwave oven that, right out of the box, explains how to use it and answers anything you ask about how long to cook something.

Even a talking doorknob that can identify your face and tell you who entered or left through that door?

It's coming.