
LoKr trained with use_tucker=True throws ERROR on some output_blocks layers #11245

@saltysalrua

Description


Custom Node Testing

Expected Behavior

LoKr models trained with use_tucker=True should load without errors.

Actual Behavior

When loading a LoKr model trained with the use_tucker=True parameter, multiple ERROR messages appear indicating a tensor view incompatibility. The model loads without crashing, but the errors suggest the affected output_blocks layers are skipped, so the LoKr weights may only be partially applied.
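The "view size is not compatible with input tensor's size and stride" error is PyTorch refusing to reinterpret a non-contiguous tensor in place, exactly as the message says; it can be reproduced independently of ComfyUI with a minimal sketch like this (the tensor here is illustrative, not the actual LoKr weight):

```python
import torch

# A transposed tensor is non-contiguous: its strides no longer describe
# a row-major layout, so view() cannot reinterpret the memory in place.
t = torch.arange(6).reshape(2, 3).t()  # shape (3, 2), non-contiguous

try:
    t.view(6)  # raises RuntimeError with the same message as the log
except RuntimeError as e:
    print("view failed:", e)

# reshape() falls back to a copy when needed, so it always succeeds.
flat = t.reshape(6)
print(flat.tolist())  # [0, 3, 1, 4, 2, 5]
```

This matches the error text's own suggestion ("Use .reshape(...) instead"), which is presumably where a fix in the LoKr weight-patching path would go.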

Steps to Reproduce

testflow.json

1. Train a LoKr model using kohya-ss with use_tucker=True in network_args, or download this example file from Google Drive: https://drive.google.com/file/d/1FbsDsmfl9nIveEbalX_TXr0ixvR6QEl_/view?usp=sharing
2. Load the model in ComfyUI using the LoraLoader node
3. Observe the ERROR messages in the console

Debug Logs

PS D:\ComfyUI-aki-v1.7> .\python\python.exe -u comfyui\main.py --disable-all-custom-nodes
D:\ComfyUI-aki-v1.7\python\Lib\site-packages\torch\cuda\__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
  import pynvml  # type: ignore[import]
Checkpoint files will always be loaded safely.
Total VRAM 8151 MB, total RAM 31968 MB
pytorch version: 2.9.1+cu130
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 5070 Laptop GPU : cudaMallocAsync
Using async weight offloading with 2 streams
Enabled pinned memory 14385.0
working around nvidia conv3d memory bug.
Using pytorch attention
Python version: 3.13.11 (tags/v3.13.11:6278944, Dec  5 2025, 16:26:58) [MSC v.1944 64 bit (AMD64)]
ComfyUI version: 0.4.0
ComfyUI frontend version: 1.33.13
[Prompt Server] web root: D:\ComfyUI-aki-v1.7\python\Lib\site-packages\comfyui_frontend_package\static
Total VRAM 8151 MB, total RAM 31968 MB
pytorch version: 2.9.1+cu130
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 5070 Laptop GPU : cudaMallocAsync
Using async weight offloading with 2 streams
Enabled pinned memory 14385.0
Skipping loading of custom nodes
Context impl SQLiteImpl.
Will assume non-transactional DDL.
No target revision found.
Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model weight dtype torch.float16, manual cast: None
model_type V_PREDICTION
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Requested to load SDXLClipModel
loaded completely; 95367431640625005117571072.00 MB usable, 1560.80 MB loaded, full load: True
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16
Requested to load SDXLClipModel
loaded completely; 5478.80 MB usable, 1560.80 MB loaded, full load: True
Requested to load SDXL
Unloaded partially: 1560.80 MB freed, 0.00 MB remains loaded, 446.76 MB buffer reserved, lowvram patches: 0
ERROR lokr diffusion_model.output_blocks.1.0.in_layers.2.weight view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
ERROR lokr diffusion_model.output_blocks.0.0.in_layers.2.weight view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
ERROR lokr diffusion_model.output_blocks.2.0.in_layers.2.weight view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
ERROR lokr diffusion_model.output_blocks.3.0.in_layers.2.weight view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
ERROR lokr diffusion_model.output_blocks.4.0.in_layers.2.weight view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
ERROR lokr diffusion_model.output_blocks.5.0.in_layers.2.weight view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
ERROR lokr diffusion_model.output_blocks.6.0.in_layers.2.weight view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
ERROR lokr diffusion_model.output_blocks.8.0.in_layers.2.weight view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
ERROR lokr diffusion_model.output_blocks.7.0.in_layers.2.weight view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
loaded completely; 5441.67 MB usable, 4897.05 MB loaded, full load: True
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:04<00:00,  6.47it/s]
Requested to load AutoencoderKL
Unloaded partially: 454.01 MB freed, 4443.08 MB remains loaded, 16.72 MB buffer reserved, lowvram patches: 212
loaded completely; 726.79 MB usable, 159.56 MB loaded, full load: True
Prompt executed in 12.67 seconds

Stopped server
[W1210 23:27:39.000000000 AllocatorConfig.cpp:28] Warning: PYTORCH_CUDA_ALLOC_CONF is deprecated, use PYTORCH_ALLOC_CONF instead (function operator ())
PS D:\ComfyUI-aki-v1.7>

Other

No response

Metadata

Assignees

No one assigned

    Labels

    Potential Bug: User is reporting a bug. This should be tested.
