training/lora/example/redpajama-incite-chat-3b_merge.py (new file, 18 additions)
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM

peft_model_path = 'outputs/redpajama-incite-chat-3b-sample-lowrank'

config = PeftConfig.from_pretrained(peft_model_path)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    device_map='auto')

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, peft_model_path)

# Fold the adapter weights into the base model so it can be used without PEFT
model = model.merge_and_unload()

model.save_pretrained('outputs/redpajama-incite-chat-3b-sample-lowrank-merged')
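What `merge_and_unload()` does under the hood is fold each low-rank update `B @ A`, scaled by `alpha / r`, into the corresponding frozen base weight, so the merged checkpoint needs no PEFT wrapper at inference time. A minimal sketch of that arithmetic on a single toy weight matrix (the dimensions and random values here are illustrative, not taken from the actual model):

```python
import torch

# Toy dimensions: output dim, input dim, LoRA rank, LoRA alpha.
d_out, d_in, r, alpha = 6, 4, 2, 16

W = torch.randn(d_out, d_in)   # frozen base weight
A = torch.randn(r, d_in)       # LoRA down-projection
B = torch.randn(d_out, r)      # LoRA up-projection (pretend it was trained)

scale = alpha / r
W_merged = W + (B @ A) * scale  # what merging folds into the base weight

x = torch.randn(d_in)
# Base + adapter applied separately must equal the single merged matmul.
y_adapter = W @ x + (B @ (A @ x)) * scale
y_merged = W_merged @ x
assert torch.allclose(y_adapter, y_merged, atol=1e-5)
```

The merged matrix has the same shape as the original, which is why the saved checkpoint can be reloaded with plain `AutoModelForCausalLM.from_pretrained` and no `peft` dependency.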