Configure a parameter-efficient fine-tuning run end-to-end: pick a base model, supply training data, and tune your LoRA hyperparameters.
Dataset (Hugging Face Hub ID): user/dataset_name (e.g. tatsu-lab/alpaca)
Split: train / validation / test
Or upload a local file: click to upload or drag a file here.
Accepted formats: .jsonl, .json, .csv, .parquet, .txt (max 5 GB)
Text column: text / prompt
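The accepted upload formats can be illustrated with a minimal sketch that writes a tiny `.jsonl` training file whose rows carry a `text` column and checks that every line parses back. The filename and row contents here are hypothetical examples, not values from the form.

```python
import json
from pathlib import Path

# Hypothetical rows: one JSON object per line, each with a "text" column,
# matching the .jsonl upload format described above.
rows = [
    {"text": "Instruction: Summarize LoRA.\nResponse: LoRA adds low-rank adapters."},
    {"text": "Instruction: Define PEFT.\nResponse: Parameter-efficient fine-tuning."},
]

path = Path("train.jsonl")
with path.open("w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")  # one record per line

# Read the file back and confirm each line is valid JSON with the expected column.
parsed = [json.loads(line) for line in path.read_text(encoding="utf-8").splitlines()]
assert all("text" in r for r in parsed)
```

The same rows could equally use a `prompt` column; the column selector above just tells the trainer which field holds the training text.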
LoRA target modules: q_proj / v_proj / gate_proj / up_proj / down_proj
LoRA bias: none / all / lora_only
Task type: CAUSAL_LM / SEQ_CLS / SEQ_2_SEQ_LM / TOKEN_CLS
Optimizer: adamw_torch / adamw_8bit / adafactor
Learning-rate scheduler: cosine / linear / constant
Precision: bf16 / fp16 / fp32
Quantization: 4-bit (nf4) / 8-bit
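Taken together, the choices above can be collected into a single configuration object and validated before a run is launched. This is a stdlib-only sketch: the class and field names are assumptions, while the allowed values mirror the form's options (which follow the identifiers used by the PEFT and transformers libraries).

```python
from dataclasses import dataclass, field

# Allowed values, copied from the form's option groups above.
ALLOWED = {
    "target_modules": {"q_proj", "v_proj", "gate_proj", "up_proj", "down_proj"},
    "bias": {"none", "all", "lora_only"},
    "task_type": {"CAUSAL_LM", "SEQ_CLS", "SEQ_2_SEQ_LM", "TOKEN_CLS"},
    "optimizer": {"adamw_torch", "adamw_8bit", "adafactor"},
    "lr_scheduler": {"cosine", "linear", "constant"},
    "precision": {"bf16", "fp16", "fp32"},
    "quantization": {"4-bit (nf4)", "8-bit"},
}

@dataclass
class FinetuneConfig:
    # Hypothetical field names; defaults use the form's example values.
    dataset: str = "tatsu-lab/alpaca"
    split: str = "train"
    target_modules: list = field(default_factory=lambda: ["q_proj", "v_proj"])
    bias: str = "none"
    task_type: str = "CAUSAL_LM"
    optimizer: str = "adamw_torch"
    lr_scheduler: str = "cosine"
    precision: str = "bf16"
    quantization: str = "4-bit (nf4)"

    def validate(self):
        # Reject any module outside the dropdown's choices.
        for m in self.target_modules:
            if m not in ALLOWED["target_modules"]:
                raise ValueError(f"unknown target module: {m}")
        # Every single-choice field must be one of the listed options.
        for name in ("bias", "task_type", "optimizer",
                     "lr_scheduler", "precision", "quantization"):
            if getattr(self, name) not in ALLOWED[name]:
                raise ValueError(f"invalid {name}: {getattr(self, name)!r}")
        return self

cfg = FinetuneConfig().validate()
```

A validated object like this would then be translated into the actual library calls (e.g. a PEFT `LoraConfig` and trainer arguments); keeping validation separate surfaces a bad dropdown combination before any GPU time is spent.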