Lab 5 Moved to Unsloth
In this lab, we will:

* Explore public datasets
* Generate a dataset with Kiln.ai
* Fine-tune Gemma3 with Unsloth Studio

## Objective 1: Explore Public Datasets

Navigate to [GSM8K](https://huggingface.co/datasets/openai/gsm8k).
<figure style="text-align: center;">
<img
src="https://i.imgur.com/Y55FAPV.png"
width="600"
style="display: block; margin-left: auto; margin-right: auto; border: 5px solid black;">
<figcaption style="margin-top: 8px; font-size: 1.1em; ">
The GSM8K dataset card on Hugging Face.
</figcaption>
</figure>
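If you'd like to poke at GSM8K programmatically rather than through the web UI, a minimal, optional sketch using the Hugging Face `datasets` library looks like this (not part of the lab's required steps):

```python
# pip install datasets
from datasets import load_dataset

# GSM8K ships with a "main" config containing train/test splits of
# grade-school math word problems and their worked answers.
gsm8k = load_dataset("openai/gsm8k", "main")

print(gsm8k)                  # split names and row counts
example = gsm8k["train"][0]
print(example["question"])    # the word problem
print(example["answer"])      # step-by-step solution ending in "#### <number>"
```

Browsing a dataset this way is a quick check that its structure actually matches what your fine-tuning task needs.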
#### Open-weight vs. open-source

One last note on public datasets. A common misconception is that *open weight* models are **open source**.

- *Open-weight* models (e.g., Gemma, DeepSeek R1, Qwen) provide publicly released checkpoints but **do not** include permissive source-code licenses.
- True **open-source** LLMs remain rare; very few models freely share their dataset and training pipeline. Examples include **INTELLECT-2**, which was built via a distributed "SETI@Home-style" effort, and Nvidia's **Nemotron 3** family of models.

<br>

Unfortunately, **INTELLECT-2** does not compare favorably to existing *open weight* models such as **Gemma**, **DeepSeek R1**, **Qwen**, or other bleeding-edge models. **Nemotron 3** also trails the state-of-the-art (SOTA) models, but instead serves as a showcase of how anyone can train models using Nvidia hardware.

Regardless of model type, when using any *open weight* model for corporate purposes, review the license for allowed use!

<br>
## Objective 2: Synthetic Dataset Generation

If you can, I strongly encourage you to try to find ready-made, or easily massaged, datasets that do not require synthetic data. You'll often obtain better results with less effort this way. After all, the original frontier ChatGPT family of models merely scraped the entire internet, every book, scientific paper, and other "pre-made" raw data to help generate their first dataset. However, this is often unrealistic, as, at minimum, we need **1000** input-output pairs in order to begin fine-tuning, so...

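For context, an "input-output pair" is usually nothing more exotic than a prompt and its expected completion stored as one record. A minimal sketch of what a handful of such pairs might look like on disk (the field names here are illustrative; every tool has its own expected schema):

```python
import json

# Two hypothetical training pairs in the spirit of this lab's task.
pairs = [
    {
        "instruction": "Given a description of an attack technique, return the MITRE ATT&CK ID and Name.",
        "input": "A malicious actor uses PowerShell to download a file from a remote server.",
        "output": "T1059.001 - PowerShell",
    },
    {
        "instruction": "Given a description of an attack technique, return the MITRE ATT&CK ID and Name.",
        "input": "Credential dumping is performed using Mimikatz.",
        "output": "T1003.001 - LSASS Memory",
    },
]

# JSON Lines (one JSON object per line) is a common on-disk format for such pairs.
with open("pairs.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")
```

Reaching roughly a thousand records like these, by hand or by generation, is the practical entry bar for fine-tuning.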
### Why Use Synthetic Data?
### 1. Install & Launch Kiln AI

If you haven't yet, download [Kiln AI](https://github.com/Kiln-AI/Kiln/releases/tag/v0.18.1) and run the installer for your OS.

<div class="lab-callout lab-callout--info">
<strong>Tip:</strong> These steps were designed for <strong>Kiln v0.18</strong>. While compatible with newer versions, v0.18 features a polished, simplified UI ideal for this lab. Note that Kiln undergoes active development with frequent UI changes across versions.
</div>

1. **Open Kiln**. It should automatically go to `http://localhost:3000` in your machine's browser.
2. Click **`Get Started`**.

<figure style="text-align:center;">
<img src="https://i.imgur.com/hJNehuE.png" width="400"
style="display:block; margin-left:auto; margin-right:auto; border:5px solid black;">
</figure>

Kiln is now ready for configuration.

1. In Kiln's left-hand **Providers** panel, click **`Connect`** under the Ollama entry.

<div class="lab-callout lab-callout--warning">
Use your Ollama instance IP to connect (i.e., http://<STUDENT IP>:11434). You must be connected to the VPN for this to work.
</div>

<figure style="text-align:center;">
<img src="https://i.imgur.com/vEwUszl.png" width="600"
style="display:block; margin-left:auto; margin-right:auto; border:5px solid black;">
<figcaption>Connect to a local or remote Ollama instance.</figcaption>
</figure>

2. Click **`Continue`** to confirm the connection.
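If the connection fails, it can help to first confirm the Ollama endpoint is reachable from your machine. A quick, optional sanity check (substitute your `<STUDENT IP>`):

```python
# pip install requests
import requests

OLLAMA_URL = "http://<STUDENT IP>:11434"   # same address you give Kiln

# Ollama's /api/tags endpoint lists the models the server has pulled.
resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
resp.raise_for_status()

for model in resp.json().get("models", []):
    print(model["name"])
```

If this times out, check the VPN connection before troubleshooting Kiln itself.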
1. Click **`Add Task`** and fill out the form with the details below.

* **Task name:** `ATT&CK Classification`
* **Goal:** "Given a description of an attack technique, tactic, or procedure, return only an accurate MITRE ATT&CK ID and Name in the format: 'ID# - Technique'."
* **System prompt (auto-filled):** Kiln will prepend this text to every generation request.

#### 7.2 Generate Top‑Level Topics

1. Click **`Add Topics`**. This will generate top-level topics that follow broad MITRE ATT&CK categories.
2. Choose **`Gemma-3n-2B`**.
3. Set **Number of topics** to **8** and click **`Generate`**.

<figure style="text-align:center;">
<img src="https://i.imgur.com/SHh8v0y.png" width="400"
style="display:block; margin-left:auto; margin-right:auto; border:5px solid black;">
<figcaption>Select model & number of topics.</figcaption>
</figure>

#### 7.3 Create Input Scenarios for All Topics

1. With the topics selected, click **`Generate Model Inputs`**. Ensure **`Gemma-3n-2B`** is still chosen, and then affirm your selection.
   Kiln now asks the model to produce a short *scenario description* for each topic.
2. After the model finishes, review the generated inputs. You may edit any that look off.

---

## Objective 3: Fine Tuning with Unsloth Studio

There are many popular options for performing fine-tunes, although many have their drawbacks:

* [Unsloth](https://unsloth.ai) is the most popular solution, but currently does not support multi-GPU setups without a commercial license.
* [Axolotl](https://axolotl.ai) is built on top of Unsloth and does support multi-GPU setups, but often lags behind Unsloth in features and capability, and does not offer a web UI.
* [LLaMA Factory](https://github.com/hiyouga/LLaMA-Factory) is the most flexible of these options, supporting both Unsloth and Axolotl as well as additional backends. However, this tool is daunting for a beginner approaching fine-tuning, and is best left for later experimentation.

<br>

While I encourage you to explore all of these tools, they are unfortunately out of scope for this lab. Instead, we're going to focus on **Unsloth**, as it provides the best web UI for easily navigating the fine-tuning process.

### Explore: Touring Unsloth Studio

Although Unsloth Studio does its best to simplify the fine-tuning process, there are still many dials and knobs to turn! Let's take a brief tour of the most important options:

1. Model Selection - This area allows us to select any model that we're interested in fine-tuning. Unsloth Studio will handle downloading the FP16 version of the model from **HuggingFace** for us.
2. Quantization Selection - Without much better hardware, we will usually be training **LoRA**s (Low-Rank Adapters). These will slightly nudge the parameters of the model in the direction we're interested in. If we need additional headroom, we can instead **quantize the base model** (e.g., reduce its precision from 16-bit to 4-bit) and then apply **LoRA** to the quantized model, generating a **QLoRA** (Quantized LoRA). This approach combines the efficiency of quantization with the parameter-efficiency of LoRA. Unsloth will conveniently estimate how well a given combination of *model* and **QLoRA** will fit in our system's available VRAM.

<figure style="text-align: center;">
<img
src="https://i.imgur.com/XwAdaKJ.png"
width="800"
style="display: block; margin-left: auto; margin-right: auto; border: 5px solid black;">
<figcaption style="margin-top: 8px; font-size: 1.1em; ">
Model & LoRA Type Selections. Note how models are labeled "OOM" or "Tight" based on hardware.
</figcaption>
</figure>

3. Dataset Selection - This is where we can utilize our custom-made dataset. Unfortunately, while we've gone through the process of making a dataset, we had to use a very small model to simulate the process. Conveniently, Unsloth allows us to search for any dataset available publicly on HuggingFace. We can select `sarahwei/cyber_MITRE_CTI_dataset_v15` for our purposes. You can select "View Dataset" if you'd like to see some of the raw contents of this data.

<figure style="text-align: center;">
<img
src="https://i.imgur.com/8xBdcnd.png"
width="400"
style="display: block; margin-left: auto; margin-right: auto; border: 5px solid black;">
<figcaption style="margin-top: 8px; font-size: 1.1em; ">
Dataset Selection
</figcaption>
</figure>

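If you'd rather inspect this dataset outside the UI before moving on to the training settings, a quick optional peek with the `datasets` library looks like this (print the column names rather than assuming them, since they're whatever the dataset author chose):

```python
from datasets import load_dataset

ds = load_dataset("sarahwei/cyber_MITRE_CTI_dataset_v15")

print(ds)                               # splits and row counts
first_split = next(iter(ds.values()))   # grab whichever split exists
print(first_split.column_names)         # the dataset author's field names
print(first_split[0])                   # one raw training example
```
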
4. Train Settings - This is where we can configure exactly how our model will be trained. The majority of these settings can stay at their defaults until you have a specific need that pushes you down the rabbit hole. In particular, we'll be interested in:
    * **Learning Rate** - Controls how large an adjustment to the model's weights is made during each step.
    * **Epoch** - Determines the number of times the training algorithm will iterate over the entire dataset (by default, training repeats for 3 epochs). Critical to help avoid under- or over-fitting.

<figure style="text-align: center;">
<img
src="https://i.imgur.com/fzSvggY.png"
width="400"
style="display: block; margin-left: auto; margin-right: auto; border: 5px solid black;">
<figcaption style="margin-top: 8px; font-size: 1.1em; ">
Fine Tuning Settings
</figcaption>
</figure>

### Execute: Unsloth Studio Fine Tuning

Set the following before we start to fine-tune Gemma:

1. **Model**: `unsloth/gemma-3-270m-it`
2. **Max Steps**: `100` (NOTE: For real fine-tuning, use Epochs, not Steps.)
3. **Learning Rate**: `0.00005`
4. **Dataset**: `sarahwei/cyber_MITRE_CTI_dataset_v15`
5. **Warmup Steps**: `100`

* When you're ready, click `Start`. It will take some time for Unsloth Studio to start its process, as it will first need to download the full `FP16` model files from HuggingFace.

<figure style="text-align: center;">
<img
src="https://i.imgur.com/fzSvggY.png"
width="400"
style="display: block; margin-left: auto; margin-right: auto; border: 5px solid black;">
<figcaption style="margin-top: 8px; font-size: 1.1em; ">
Setting Max Steps, Learning Rate, and Warmup Steps
</figcaption>
</figure>

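Purely for reference, here is a rough sketch of what Unsloth Studio is effectively driving through Unsloth's Python API for a run like this one. It is an assumption of the equivalent script, not something the lab asks you to run: the dataset column names (`question`/`answer`) are hypothetical, and exact trainer arguments differ across unsloth/trl versions.

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the base model in 4-bit so the adapter we train is effectively a QLoRA.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-270m-it",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach small trainable LoRA matrices to the attention and MLP projections.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("sarahwei/cyber_MITRE_CTI_dataset_v15", split="train")

def to_text(example):
    # NOTE: "question"/"answer" are hypothetical column names; check
    # dataset.column_names for the real ones. A real run would also apply
    # the Gemma chat template here instead of plain concatenation.
    return {"text": f"{example['question']}\n{example['answer']}"}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=100,        # demo-length run; prefer num_train_epochs for real fine-tunes
        warmup_steps=100,
        learning_rate=5e-5,   # 0.00005, as set in the UI
        logging_steps=1,
        output_dir="outputs",
    ),
)

trainer.train()
```

Note how the knobs we set in the UI (max steps, warmup steps, learning rate, dataset) map onto trainer arguments one-to-one.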
**Monitor the loss graph**: The graph measures **Loss** per **Training step** (roughly 8k steps for a full run: ~2.5k examples * 3 epochs), or, put simply, how different the model's predicted answer is from our data. It should slope downwards gradually and logarithmically if training is stable.

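That step count is just dataset size, effective batch size, and epochs multiplied out; as a rough illustration (the effective batch size of 1 is an assumption for the arithmetic):

```python
import math

num_examples = 2500       # roughly the size of the MITRE dataset
effective_batch_size = 1  # per-device batch size * gradient accumulation (assumed)
epochs = 3                # default number of passes over the data

steps_per_epoch = math.ceil(num_examples / effective_batch_size)
total_steps = steps_per_epoch * epochs
print(total_steps)        # 7500, i.e. "roughly 8k" steps for a full run

# Our demo run instead caps training at max_steps = 100, which is why it
# finishes quickly but only sees a small slice of the data.
```
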
#### What to Look For

- **Steady decline** → model is learning.
- **Rapid flattening early** → learning rate may be too low or the model is under-parameterized.
- **Very flat near the end** → possible over-fitting; consider reducing the number of epochs or adding regularization.

If the curve behaves unexpectedly, you can stop the job, adjust the **learning rate** or **warm-up steps**, and restart from the latest checkpoint.

- **Training Loss:** Decreasing smoothly → model is learning effectively and training is stable.
- **Gradient Norm:** Drops then stabilizes → gradients are well-behaved (no major spikes).
- **Learning Rate:** Gradually increasing, then eventually decreasing → expected warmup behavior helping stabilize early training.

<figure style="text-align: center;">
<img
src="https://i.imgur.com/Cue7afQ.png"
width="600"
style="display: block; margin-left: auto; margin-right: auto; border: 5px solid black;">
<figcaption style="margin-top: 8px; font-size: 1.1em; ">
Typical Training Run
</figcaption>
</figure>

Unfortunately, due to the time constraints of a live classroom, we'll be unable to pursue this training run to completion. On the lab-provided GPUs, a full epoch could take up to two hours! Feel free to cancel it at your leisure.

We can, however, chat with a version of Gemma 3 4B that was trained before this class. It was trained against roughly 60,000 examples, partially generated using Kiln, partially harvested from various datasets throughout HuggingFace. While not perfect, we can see that the model is significantly better than the default.

<figure style="text-align: center;">
<img
src="https://i.imgur.com/FKZXaV3.png"
width="600"
style="display: block; margin-left: auto; margin-right: auto; border: 5px solid black;">
<figcaption style="margin-top: 8px; font-size: 1.1em; ">
Chatting with a previously fine-tuned Gemma 3 4B
</figcaption>
</figure>

To test this ourselves, select:

1. The chat button at the very top of the screen.
2. Download our model. It's under my personal HuggingFace account name, c4ch3c4d3.
3. Set the system prompt to the one we selected when using **Kiln.ai** - "Given a description of an attack technique, tactic, or procedure, return only an accurate MITRE ATT&CK ID and Name in the format: 'ID# - Technique'."

<figure style="text-align: center;">
<img
src="https://i.imgur.com/GHExjE3.png"
width="600"
style="display: block; margin-left: auto; margin-right: auto; border: 5px solid black;">
<figcaption style="margin-top: 8px; font-size: 1.1em; ">
</figcaption>
</figure>

| Test Prompt | Expected Output Format |
|------------|------------------------|
| "A malicious actor uses PowerShell to download a file from a remote server." | `T1059.001 – PowerShell` |
| "The adversary exfiltrates data via a compressed archive sent over HTTP." | `T1567.001 – Exfiltration Over Web Services` |
| "Credential dumping is performed using Mimikatz." | `T1003.001 – LSASS Memory` |

The Unsloth chat view is relatively simplistic, but it does provide options for changing inference parameters, such as Top-P or Temperature, as well as a place to input our system prompt. If we're looking to test the model's accuracy with our fine-tune, we normally need to ensure these values match the desired end-state values as closely as possible.

### Export the Fine‑Tuned Model

<div class="lab-callout lab-callout--warning">
<strong>Skippable:</strong> These steps are provided for reference, as we never successfully finished a fine-tune within the lab time period.
</div>

1. Switch to the **Export** tab.
2. Select the training run you've performed.
3. Select the latest checkpoint, or, if you'd like to explore an alternative, the desired checkpoint.
4. We can export in a number of formats:

- **Merged Model** – A BF16 `.safetensors` export of the full model, which can be utilized in other projects.
- **LoRA** – Only export the LoRA adapter layers generated during training. Useful if we wish to share only our new files with users who already have the base model downloaded, but not our fine-tune.
- **GGUF** – A compact file ready for import into **Ollama** or other GGUF-compatible runtimes.

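Once you have a GGUF export, getting it into Ollama is just a small Modelfile plus `ollama create`. A minimal sketch, where the GGUF file name and model tag are placeholders and the system prompt is the one from our Kiln task:

```python
import subprocess
from pathlib import Path

GGUF_PATH = "gemma3-mitre.gguf"   # placeholder: whatever file name your export produced
SYSTEM_PROMPT = (
    "Given a description of an attack technique, tactic, or procedure, "
    "return only an accurate MITRE ATT&CK ID and Name in the format: 'ID# - Technique'."
)

# A Modelfile tells Ollama which weights to load and which system prompt to bake in.
Path("Modelfile").write_text(
    f'FROM ./{GGUF_PATH}\nSYSTEM """{SYSTEM_PROMPT}"""\n'
)

# Register the model with the local Ollama instance, then try it out.
subprocess.run(["ollama", "create", "gemma3-mitre", "-f", "Modelfile"], check=True)
subprocess.run(["ollama", "run", "gemma3-mitre",
                "A malicious actor uses PowerShell to download a file from a remote server."],
               check=True)
```
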
<br>

---

## Conclusion

In this lab, we completed a LoRA fine-tuning workflow:

1. **Dataset Generation** - We explored public datasets on HuggingFace and used Kiln AI to generate a synthetic dataset for MITRE ATT&CK classification.
2. **Fine Tuning** - We used Unsloth Studio to fine-tune Gemma-3-4B on our generated dataset.
3. **Validation & Export** - We tested the model with sample prompts and exported the fine-tuned model in both safetensors and GGUF formats.

If all has gone well, the model should be much more accurate at identifying MITRE ATT&CK codes from user-input scenarios. If not, additional experimentation may be necessary to produce a good fine-tune. Playing with the parameters we've discussed, improving and expanding our dataset, or even fine-tuning a larger or better base model can also help improve our success rate.