<!-- breakout-style: instruction-rails -->
<!-- step-style: underline -->
<!-- objective-style: divider -->
# Lab 5 - Dataset Generation and Fine Tuning
In this lab, we will:
* Explore public datasets
* Generate a dataset with Kiln.ai
* Fine-tune Gemma3 with LLaMA Factory
## Objective 1: Explore Public Datasets
While fine-tunes may not have the same level of impact as in the early days of LLMs, they can still provide hyper-specialized capabilities, enabling small LLMs such as those we've used throughout the course to compete with large, closed LLMs such as ChatGPT and Gemini. They shine in use cases where data needs to stay private, where the costs of a closed model are too high, or where we want a model focused on a specific RAG dataset.
There are multiple ways to generate a useful dataset, including but not limited to:
| # | Method | Typical use case | Key advantage |
|---|--------|------------------|----------------|
| 1 | **Manual data collection** | Surveys, interviews, domain-expert annotation | Highest specificity; fully controlled quality |
| 2 | **Web scraping** | Harvesting public articles, forum posts, code snippets | Scalable; leverages existing web content |
| 3 | **APIs & databases** | Accessing structured resources (e.g., Wikipedia API, PubMed) | Structured data; often well-documented |
| 4 | **Crowdsourcing** | Large-scale labeling (e.g., image bounding boxes) | Cost-effective for repetitive tasks |
| 5 | **Data augmentation** | Expanding a small set of images or text | Improves diversity without new collection |
| 6 | **Public datasets** | Ready-made corpora from repositories like HuggingFace | Immediate availability; often preprocessed |
| 7 | **Synthetic data generation** | Simulated sensor readings, procedurally generated text | Useful when real data is scarce or sensitive |
Let's at least quickly touch on option 6, **Public Datasets**. While they may vary in quality, they're a great way to jumpstart a particular focus for a fine-tune. Many are found on https://huggingface.co/datasets, where there are over 400k datasets readily accessible for many different tasks, from many different providers, including [OpenAI](https://huggingface.co/datasets/openai/gsm8k), [Nvidia](https://huggingface.co/datasets/nvidia/Nemotron-CrossThink), and more. Much like with models, there are numerous tools we can use to filter these datasets, such as by format, modality, or license.
<figure style="text-align: center;">
<img
src="https://i.imgur.com/kdnBCyL.png"
width="600"
style="display: block; margin-left: auto; margin-right: auto; border: 5px solid black;">
<figcaption style="margin-top: 8px; font-size: 1.1em; ">
Example Datasets.
</figcaption>
</figure>
#### Explore a dataset (GSM8K)
Navigate to [GSM8K](https://huggingface.co/datasets/openai/gsm8k). Much like how models have **model cards**, datasets have **dataset cards**. These perform a similar job, providing:
1. Tags
2. Example data & a *Data Studio* button for interacting with the dataset on **HuggingFace** directly.
3. Easy Download Links (although we can also use `git clone`)
4. The Description
<figure style="text-align: center;">
<img
src="https://huggingface.co/datasets/openai/gsm8k/resolve/main/docs/assets/gsm8k-card.png"
width="600"
style="display: block; margin-left: auto; margin-right: auto; border: 5px solid black;">
<figcaption style="margin-top: 8px; font-size: 1.1em; ">
Dataset Card Contents.
</figcaption>
</figure>
At the heart of each dataset is the pairing of *input* and *result*. In the case of math, this is relatively easy, as these are quite literally *question* and *answer* pairs for math problems.
Larger datasets, such as [Fineweb](https://huggingface.co/datasets/HuggingFaceFW/fineweb), utilize more complicated structures, but all still fundamentally follow this same principle. In the case of [Fineweb](https://huggingface.co/datasets/HuggingFaceFW/fineweb), the inputs are titles and summaries of web pages, with links to the precise web page as scraped from the internet. Feel free to explore a subset of this **15 Trillion Token** dataset below:
<div style="text-align: center; width: 100%;">
<iframe
src="https://huggingface.co/datasets/HuggingFaceFW/fineweb/embed/viewer/sample-10BT/train"
frameborder="0"
width="100%"
height="600px"
style="max-width: 100%; border: 1px solid #ddd; border-radius: 4px;"
></iframe>
</div>
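Under the hood, each record is just a structured input-result pair. As a minimal sketch (the record below is illustrative, but GSM8K really does mark each solution's final answer with a `####` prefix), here is how you might pull the final answer out of a GSM8K-style record:

```python
import re

# Illustrative GSM8K-style record; real records can be loaded with
# datasets.load_dataset("openai/gsm8k", "main") from HuggingFace.
record = {
    "question": "Natalia sold 4 packs of 12 clips each. How many clips did she sell?",
    "answer": "She sold 4 * 12 = 48 clips.\n#### 48",
}

def extract_final_answer(answer_text):
    """Return the final answer after GSM8K's '#### ' marker, or None."""
    match = re.search(r"####\s*(-?[\d,]+)", answer_text)
    return match.group(1).replace(",", "") if match else None

print(extract_final_answer(record["answer"]))  # prints: 48
```

This `####` convention is exactly why GSM8K is convenient for automated evaluation: a grader can compare final answers without parsing the free-text reasoning.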
#### Open-weight vs. open-source
One last note on public datasets. A common misconception is that *open-weight* models are **open source**.
- **Open-weight** models (e.g., Gemma, DeepSeek-R1, Qwen) provide publicly released checkpoints but **do not** release their training code and data under permissive open-source licenses.
- **True open-source** LLMs remain rare; the only notable example at time of writing is **INTELLECT-2**, which was built via a distributed "SETI@Home-style" effort.
Unfortunately, **INTELLECT-2** does not compare favorably with existing *open-weight* models such as **Gemma**, **DeepSeek-R1**, **Qwen**, or other bleeding-edge models. When using these *open-weight* models for corporate purposes, review the license!
<br>
---
## Objective 2: Synthetic Dataset Generation
If you can, I strongly encourage you to find ready-made, or easily massaged, datasets that do not require synthetic data; you'll often obtain better results with less effort this way. After all, the original frontier ChatGPT family of models simply scraped the entire internet (books, scientific papers, and other "pre-made" raw data) to help generate their first dataset. However, this is often unrealistic: at minimum, we need around **1000** input-output pairs in order to begin fine-tuning, so...
### Why Use Synthetic Data?
| Reason | Explanation |
|--------|-------------|
| **Data scarcity** | Niche domains (e.g., MITRE ATT&CK classification) often lack ≥ 1000 labeled examples. |
| **Scalability** | A single large model can produce thousands of examples in minutes, saving manual effort. |
| **Quality control** | By generating with a *larger* model than the target (e.g., Gemma 3 12B QAT → Gemma 3 4B), you can distill richer responses within specific domains. |
| **Iterative refinement** | Kiln lets you rate or repair each pair, turning noisy outputs into a clean training set. |
<div class="lab-callout lab-callout--warning">
<strong>Rule of Thumb:</strong> Never generate data with a model that is smaller than the model you plan to fine-tune.
</div>
---
### Execute: Generate a Dataset with Kiln AI
### 1. Install & Launch Kiln AI
If you haven't yet, download [Kiln AI](https://github.com/Kiln-AI/Kiln) and run the installer for your OS.
1. **Open Kiln**. It should automatically go to `http://localhost:3000` in your browser.
2. Click **`Get Started`**.
<figure style="text-align:center;">
<img src="https://i.imgur.com/hJNehuE.png" width="400"
style="display:block; margin-left:auto; margin-right:auto; border:5px solid black;">
<figcaption>Welcome screen: click "Get Started".</figcaption>
</figure>
3. Choose **`Continue`** (or **`Skip Tour`** if you prefer).
4. Dismiss the newsletter prompt (optional).
Kiln is now ready for configuration.
### 2. Connect Kiln to Ollama
1. In Kiln's left-hand **Providers** panel, click **`Connect`** under the Ollama entry.
<figure style="text-align:center;">
<img src="https://i.imgur.com/vEwUszl.png" width="600"
style="display:block; margin-left:auto; margin-right:auto; border:5px solid black;">
<figcaption>Connect to a local or remote Ollama instance.</figcaption>
</figure>
2. Click **`Continue`** to confirm the connection.
<div class="lab-callout lab-callout--info">
<strong>Tip:</strong> If you have access to a commercial LLM (for example, OpenAI GPT-4o), you can point Kiln to that endpoint for higher-quality synthetic data by replacing the Ollama URL in <strong>Providers → Connect</strong>.
</div>
---
### 3. Create a Kiln Project
1. Kiln will prompt you to **Create a Project**. Enter any descriptive name (e.g., `MITRE-ATTACK-Fine-Tune`).
<figure style="text-align:center;">
<img src="https://i.imgur.com/8CLEp9s.png" width="400"
style="display:block; margin-left:auto; margin-right:auto; border:5px solid black;">
<figcaption>Name your project.</figcaption>
</figure>
2. Press **`Create`**. You are now inside the project workspace.
---
### 4. Define the Fine-Tuning Task
1. Click **`Add Task`** and fill out the form with the details below.
* **Task name:** `ATT&CK Classification`
* **Goal:** "Fine-tune Gemma-3-4B so it can map a textual scenario to the correct MITRE ATT&CK technique."
* **System prompt (auto-filled):** Kiln will prepend this text to every generation request.
<figure style="text-align:center;">
<img src="https://i.imgur.com/43o2s0Y.png" width="400"
style="display:block; margin-left:auto; margin-right:auto; border:5px solid black;">
<figcaption>Task definition screen.</figcaption>
</figure>
2. Click **`Save Task`**. The task now appears in the left-hand **Tasks** list.
---
### 5. Kiln Main Interface Overview
| Sidebar item | Primary use |
|--------------|------------|
| **Run** | Manually generate one input-output pair at a time (useful for quick checks). |
| **Dataset** | View, edit, export, or import the entire collection of pairs. |
| **Synthetic Data** | Bulk-generate pairs using a model of your choice. |
| **Evals** | Run automatic evaluation against a held-out test set. |
| **Settings** | Project-level configuration (e.g., default model, output format). |
When you first open a project, Kiln lands on the **Run** page.
---
### 6. Manual Generation (Run Page)
1. In the **Run** view, set the parameters as shown below (you may substitute a larger model if your hardware permits).
<figure style="text-align:center;">
<img src="https://i.imgur.com/vvW0wjk.png" width="600"
style="display:block; margin-left:auto; margin-right:auto; border:5px solid black;">
<figcaption>Configure the Run settings.</figcaption>
</figure>
2. Type a **scenario description** (e.g., "An attacker dumps LSASS memory using Mimikatz") and click **`Run`**.
3. Kiln sends the prompt to the selected Ollama model (by default `gemma3:12b-it-qat`).
4. When the model returns an answer, you can **rate** it from 1 ★ to 5 ★.
   * *5 ★* → Accept and click **`Next`**.
   * *< 5 ★* → Click **`Attempt Repair`**, edit the response, then **`Accept Repair`** or **`Reject`**.
<figure style="text-align:center;">
<img src="https://i.imgur.com/wqVsYMk.png" width="600"
style="display:block; margin-left:auto; margin-right:auto; border:5px solid black;">
<figcaption>Rate a correct response with 5 ★.</figcaption>
</figure>
5. Repeat until you have a handful of high-quality pairs. This manual step is optional but useful for seeding the dataset with "gold-standard" examples.
---
### 7. Bulk Synthetic Data Generation
#### 7.1 Open the Generator
1. In the sidebar, click **`Synthetic Data`** → **`Generate Fine-Tuning Data`**.
<figure style="text-align:center;">
<img src="https://i.imgur.com/l6OiUeP.png" width="600"
style="display:block; margin-left:auto; margin-right:auto; border:5px solid black;">
<figcaption>Enter the bulk-generation workflow.</figcaption>
</figure>
#### 7.2 Generate Top-Level Topics
1. Click **`Add Topics`**. This will generate top-level topics that follow broad MITRE ATT&CK categories.
2. Choose **`gemma3:12b-it-qat`** (or any larger model you prefer).
3. Set **Number of topics** to **8** and click **`Generate`**.
<figure style="text-align:center;">
<img src="https://i.imgur.com/e6MvhSj.png" width="400"
style="display:block; margin-left:auto; margin-right:auto; border:5px solid black;">
<figcaption>Select model & number of topics.</figcaption>
</figure>
4. Review the generated list. Delete any unsatisfactory topics (hover → click the trash icon) or click **`Add Topics`** again to generate more. Alternatively, if additional depth is required, click **`Add Subtopics`** to drill down deeper into any of the high-level topics Gemma created initially.
<figure style="text-align:center;">
<img src="https://i.imgur.com/wHNv3Om.png" width="800"
style="display:block; margin-left:auto; margin-right:auto; border:5px solid black;">
<figcaption>Final set of 8 topics.</figcaption>
</figure>
#### 7.3 Create Input Scenarios for All Topics
1. With the topics selected, click **`Generate Model Inputs`**. Ensure **`gemma3:12b-it-qat`** is still chosen, and then confirm your selection.
Kiln now asks the model to produce a short *scenario description* for each topic.
2. After the model finishes, review the generated inputs. You may edit any that look off.
#### 7.4 Generate Corresponding Outputs
1. Click **`Save All Model Outputs`**. Kiln now runs the model a second time—this time using each generated input as the prompt—to produce the *output* (the ATT&CK technique label).
<figure style="text-align:center;">
<img src="https://i.imgur.com/A47GRVr.png" width="800"
style="display:block; margin-left:auto; margin-right:auto; border:5px solid black;">
<figcaption>Produce the "output" side and store the pair.</figcaption>
</figure>
2. The full input-output pairs are automatically added to the project's dataset.
#### 7.5 Review the Completed Dataset
1. Switch to the **`Dataset`** tab.
2. You should see a table of 64 (8 topics × 8 samples) pairs. Clicking any row opens the same **Run** view, where you can **rate**, **repair**, or **delete** the pair.
<figure style="text-align:center;">
<img src="https://i.imgur.com/DnyXYJO.png" width="800"
style="display:block; margin-left:auto; margin-right:auto; border:5px solid black;">
<figcaption>Dataset overview with generated pairs.</figcaption>
</figure>
---
### 8. Dataset Export (Create a Fine-Tune)
1. Once you are satisfied with the dataset, you can export it to numerous forms of JSONL via the **Fine Tune → Create a Fine Tune** button.
2. Kiln will first ask what format we'd like our data exported in. We can leave the default setting of *Download: OpenAI chat format (JSONL)*. Next, select *Create a New Fine-Tuning Dataset*.
3. Kiln supports splitting our generated data into a number of buckets, including *`Training`*, *`Test`*, and *`Validation`*. Each of these dataset segments is critical to a great fine-tune, but at our generated 64 examples, we don't have the luxury of creating a split. As such, under **`Advanced Options`**, select *100% training*, and click *Create Dataset*.
<figure style="text-align:center;">
<img src="https://i.imgur.com/vp6jobS.png" width="400"
style="display:block; margin-left:auto; margin-right:auto; border:5px solid black;">
<figcaption>Dataset split options.</figcaption>
</figure>
4. We can ignore all further options and select *Download Split*. A new `.jsonl` file will be saved!
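The exported file is plain JSONL: one JSON object per line, each holding a `messages` list in the OpenAI chat format. A minimal sketch of one such pair and a quick sanity check you could run over the file (the record content below is illustrative; the field names follow the OpenAI chat JSONL convention):

```python
import json

# An illustrative exported line (OpenAI chat format: system/user/assistant turns).
sample_line = json.dumps({
    "messages": [
        {"role": "system", "content": "Return only a MITRE ATTACK ID and Name."},
        {"role": "user", "content": "Credential dumping is performed using Mimikatz."},
        {"role": "assistant", "content": "T1003.001 LSASS Memory"},
    ]
})

def count_pairs(jsonl_text):
    """Count records and sanity-check that every record has an assistant turn."""
    records = [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]
    assert all(any(m["role"] == "assistant" for m in r["messages"]) for r in records)
    return len(records)

print(count_pairs(sample_line))  # prints: 1
```

A check like this is a cheap way to catch truncated lines or missing labels before handing the file to a training tool.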
---
## Objective 3: Fine Tuning with LLaMA Factory
There are many popular options for performing fine-tunes, although many have their drawbacks:
* [Unsloth](https://unsloth.ai) is the most popular solution, but currently does not support multi-GPU setups without a commercial license.
* [Axolotl](https://axolotl.ai) does support multi-GPU setups, but often lags behind Unsloth in features and capability.
* Both of these options are also CLI-only. While not the end of the world, it does mean we need to learn how these tools work before we can use them.
While I encourage you to explore both of these tools, they are unfortunately out of scope for this lab. Instead, we're going to use a project that tries to make these tools easier to use: [LLaMA Factory](https://github.com/hiyouga/LLaMA-Factory). To do so, we'll need to perform some additional setup of our lab environment.
### Explore: Touring LLaMA Factory
Although LLaMA Factory does its best to simplify the fine-tuning process, there are still many dials and knobs to turn! Let's take a brief tour of the most important options:
1. Model Selection - This area allows us to select any model that we're interested in fine-tuning. LLaMA Factory will handle downloading the FP16 version of the model from **HuggingFace** for us. Note that while you can fine-tune an already-quantized model, you'll often obtain a better result, as measured by perplexity, by starting with the "raw" model.
2. Quantization Selection - Without much better hardware, we will usually be training **LoRA**s (Low-Rank Adapters). These slightly nudge the parameters of the model in the direction we're interested in. If we need additional headroom, we can instead **quantize the base model** (e.g., reduce its precision from 16-bit to 4-bit) and then apply **LoRA** to the quantized model, producing a **QLoRA** (Quantized LoRA). This approach combines the memory savings of quantization with the parameter-efficiency of LoRA.
3. Dataset Selection - This is where we can use our custom-made dataset. Unfortunately, adding new datasets is a rather manual effort (each dataset must be registered in LLaMA Factory's dataset configuration file). This lab has already pre-loaded our dataset for us.
4. Train Settings - This is where we configure exactly how our model will be trained. The majority of these settings can stay at their defaults until you have a specific need that pushes you down the rabbit hole. In particular, we'll be interested in:
* **Learning Rate** - Controls how large an adjustment is made to the model's weights during each step.
* **Epochs** - The number of times the training algorithm iterates over the entire dataset (by default, training repeats 3 times). Critical for avoiding under- or over-fitting.
* **Cutoff Length** - Equivalent to Ollama's context length. As always, training with a larger context requires more memory.
* **Batch Size** - Can speed up training, as long as we have the hardware to support it.
* **Warmup Steps** - The number of initial training steps during which the learning rate gradually increases to the set target. Helps with stability.
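To make the warmup idea concrete, here is a minimal sketch of a linear warmup schedule using the values we'll set later in this lab (target rate `5e-6`, 100 warmup steps). Real trainers typically also decay the rate after warmup (e.g., cosine decay), which this sketch omits for clarity:

```python
def lr_at_step(step, target_lr=5e-6, warmup_steps=100):
    """Linear warmup: ramp from 0 up to target_lr, then hold steady."""
    if step < warmup_steps:
        return target_lr * step / warmup_steps
    return target_lr

print(lr_at_step(50))    # halfway through warmup: prints 2.5e-06
print(lr_at_step(5000))  # long after warmup: prints 5e-06
```

Starting near zero keeps the very first gradient updates, which are computed before the optimizer has any momentum statistics, from violently disturbing the pretrained weights.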
<figure style="text-align: center;">
<img
src="https://i.imgur.com/zbQ17cp.png"
width="800"
style="display: block; margin-left: auto; margin-right: auto; border: 5px solid black;">
<figcaption style="margin-top: 8px; font-size: 1.1em; ">
Fine Tuning Settings
</figcaption>
</figure>
### Execute: LLaMa Factory Fine Tuning
Set the following before we start to fine tune Gemma:
1. **Model**: `Gemma-3-4B`
2. **Chat template**: `Gemma3`
3. **Learning Rate**: `5e-6`
4. **Dataset**: `mitre`
5. **Warmup Steps**: `100`
* Scroll to the bottom of the page, and click `Preview command`. The WebUI is merely a front end for constructing `llamafactory-cli` commands, and this shows exactly what will be run.
* When done reviewing, next click `Start`. It will take some time for LLaMa Factory to start its process, as it will first need to download the full `FP16` raw `Gemma-3-4B` model files.
<figure style="text-align: center;">
<img
src="https://i.imgur.com/r7dfG2k.png"
width="600"
style="display: block; margin-left: auto; margin-right: auto; border: 5px solid black;">
<figcaption style="margin-top: 8px; font-size: 1.1em; ">
LLaMa Factory CLI Generated Command & Start
</figcaption>
</figure>
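For reference, the previewed command corresponds to a config roughly like the sketch below. The field names follow LLaMA Factory's published example configs, but treat the exact keys and the model repo id as assumptions; check the `examples/` directory of your installed version rather than copying this verbatim:

```yaml
### Illustrative LoRA SFT config sketch for llamafactory-cli (assumed field names)
stage: sft
do_train: true
model_name_or_path: google/gemma-3-4b-it   # assumed HuggingFace repo id
template: gemma3
finetuning_type: lora
dataset: mitre                             # our pre-loaded lab dataset
learning_rate: 5.0e-6
num_train_epochs: 3.0
warmup_steps: 100
output_dir: saves/gemma3-4b/lora/sft
```

Saving settings as a config file like this makes a run reproducible in a way that clicking through the WebUI is not.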
**Monitor the loss graph** | The graph measures **Loss** per **Training step** (roughly 8k steps: ~2.5k examples × 3 epochs), or put simply, how different the model's predicted answer is from our data. This should gradually, logarithmically slope downwards if training is working.
#### What to Look for in the Loss Curve
- **Steady decline** → model is learning.
- **Rapid flattening early** → learning rate may be too low or the model is under-parameterized.
- **Very flat near the end** → possible overfitting; consider reducing the number of epochs or adding regularization.
If the curve behaves unexpectedly, you can stop the job, adjust the **learning rate** or **warmup steps**, and restart from the latest checkpoint.
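Raw per-step loss is noisy, so training dashboards typically plot a smoothed curve alongside it. A minimal sketch of the exponential-moving-average smoothing involved (the `alpha` value here is an illustrative assumption, not any tool's exact setting):

```python
def smooth_losses(losses, alpha=0.9):
    """Exponential moving average, like the smoothed curve training UIs draw.

    Each point is alpha * previous_ema + (1 - alpha) * current_loss,
    so spikes are damped while the overall trend remains visible.
    """
    smoothed, ema = [], None
    for loss in losses:
        ema = loss if ema is None else alpha * ema + (1 - alpha) * loss
        smoothed.append(ema)
    return smoothed

raw = [2.0, 1.8, 2.1, 1.5, 1.4, 1.6, 1.2]
print(smooth_losses(raw))  # damped version of the raw sequence
```

When judging whether training is working, read the smoothed curve for the trend and the raw curve for instability (sudden spikes can indicate a bad batch or too high a learning rate).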
<div style="
display: flex;
justify-content: center;
align-items: flex-start;
gap: 32px;
width: 100%;
max-width: 1200px;
margin: 0 auto;
padding: 10px;
box-sizing: border-box;
">
<div style="text-align: center; flex: 0 0 auto;">
<img
src="https://i.imgur.com/4n6G3Db.png"
width="700px"
style="display: block; margin-left: auto; margin-right: auto; border: 5px solid black;">
<div style="margin-top: 8px; font-size: 1.1em; text-align: center;">
LLaMa Factory Fine Tuning View
</div>
</div>
<div style="text-align: center; flex: 0 0 auto;">
<img
src="https://i.imgur.com/9NYEjpA.png"
width="400px"
style="display: block; margin-left: auto; margin-right: auto; border: 5px solid black;">
<div style="margin-top: 8px; font-size: 1.1em; text-align: center;">
Loss Curve Up Close
</div>
</div>
</div>
Once completed, we can scroll back up and:
1. Select Chat
2. Select our newly trained **LoRA** checkpoint. The name of this checkpoint will match the date that you performed the lab.
3. Click `Load Model`
<figure style="text-align: center;">
<img
src="https://i.imgur.com/Z2Hpa2S.png"
width="600"
style="display: block; margin-left: auto; margin-right: auto; border: 5px solid black;">
<figcaption style="margin-top: 8px; font-size: 1.1em; ">
Load Model for Chat
</figcaption>
</figure>
Scrolling down will show all the options for interacting with the model, as we'd expect in most other interfaces. We have options for changing inference parameters, such as Top-P or Temperature, as well as a place to input our system prompt. If we're looking to test the model's accuracy with our fine-tune, we would normally ensure these values match the desired end-state values as closely as possible, but we're only going to set the system prompt, as that is most critical for our fine-tune.
Set the system prompt to the one we selected when using **Kiln.ai** - "Given a description of an attack technique, tactic, or procedure, the model should return only a MITRE ATTACK ID and Name."
| Test Prompt | Expected Output Format |
|------------|------------------------|
| "A malicious actor uses PowerShell to download a file from a remote server." | `T1059.001 PowerShell` |
| "The adversary exfiltrates data via a compressed archive sent over HTTP." | `T1567 Exfiltration Over Web Service` |
| "Credential dumping is performed using Mimikatz." | `T1003.001 LSASS Memory` |
<figure style="text-align: center;">
<img
src="https://i.imgur.com/ArMfy4j.png"
width="600"
style="display: block; margin-left: auto; margin-right: auto; border: 5px solid black;">
<figcaption style="margin-top: 8px; font-size: 1.1em; ">
Test prompt
</figcaption>
</figure>
If we're happy with our final model, lastly we can export the model for easy import into Ollama.
### Export the Fine-Tuned Model
1. Switch to the **Export** tab.
2. Choose a directory on your local machine (or a mounted drive) where you want the exported files to live.
3. Select one of the following output formats:
   - **FP16 Safetensors** - a high-quality checkpoint you can load again with LLaMA Factory or HuggingFace.
   - **GGUF (4-bit)** - a compact file ready for import into **Ollama** or other GGUF-compatible runtimes.
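As a sketch of the Ollama import step, you would write a small `Modelfile` next to the exported GGUF and register it with `ollama create`. The GGUF filename below is hypothetical; use whatever LLaMA Factory actually wrote to your export directory:

```
# Modelfile (the GGUF filename is an assumption; substitute your exported file)
FROM ./gemma-3-4b-mitre.gguf
SYSTEM "Given a description of an attack technique, tactic, or procedure, the model should return only a MITRE ATTACK ID and Name."
```

Then run `ollama create mitre-gemma -f Modelfile` followed by `ollama run mitre-gemma` to chat with the fine-tuned model locally. Baking the system prompt into the `Modelfile` means every client that uses this model gets the same fine-tune-matched behavior.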
<div style="
display: flex;
justify-content: center;
align-items: flex-start;
gap: 32px;
width: 100%;
max-width: 1200px;
margin: 0 auto;
padding: 10px;
box-sizing: border-box;
">
<div style="text-align: center; flex: 0 0 auto;">
<img
src="https://i.imgur.com/7rAbX33.png"
width="700px"
style="display: block; margin-left: auto; margin-right: auto; border: 5px solid black;">
<div style="margin-top: 8px; font-size: 1.1em; text-align: center;">
Export Model
</div>
</div>
<div style="text-align: center; flex: 0 0 auto;">
<img
src="https://i.imgur.com/5GBXu0i.png"
width="400px"
style="display: block; margin-left: auto; margin-right: auto; border: 5px solid black;">
<div style="margin-top: 8px; font-size: 1.1em; text-align: center;">
Local File Location
</div>
</div>
</div>
<br>
---
## Conclusion
In this lab, we completed a full fine-tuning workflow:
1. **Dataset Generation** - We explored public datasets on HuggingFace and used Kiln AI to generate a synthetic dataset for MITRE ATT&CK classification.
2. **Fine Tuning** - We used LLaMA Factory to fine-tune Gemma-3-4B on our generated dataset.
3. **Validation & Export** - We tested the model with sample prompts and exported the fine-tuned model in both FP16 and GGUF formats.
If all has gone well, the model should now be much more accurate at identifying MITRE ATT&CK codes from user-input scenarios. If not, additional experimentation may be necessary to produce a good fine-tune: playing with the parameters we've discussed, improving and expanding our dataset, or even fine-tuning a larger or better base model can all improve our odds of success.