Remove managed student assumptions from Lab 3 terminal

2026-04-13 19:39:51 -06:00
parent ca6a966ad6
commit a97c8a7694
6 changed files with 25 additions and 45 deletions
@@ -24,14 +24,14 @@ In this lab, we will:
<strong>Execute</strong> sections require running commands and producing output.
</div>
To start this lab, use the embedded terminal below. It connects to the same lab machine in your browser and should prompt you to log in with the managed `student` account.
To start this lab, use the embedded terminal below. It connects to the same lab machine in your browser and should prompt you for any local username and password that already work on that host.
<div data-lab3-terminal></div>
If the embedded terminal is unavailable, you can still fall back to:
- SSH: `<IP>:22` (example below)
- The lab workspace is rooted at `/home/student/lab3`
- A regular terminal session on the lab host
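For example, a fallback SSH session from your own machine might look like the following; `<IP>` is the lab host's address, and the username is whatever local account works on that host:

```bash
# Connect to the lab host over SSH (replace <username> and <IP> with your own values)
ssh <username>@<IP> -p 22
```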
## Objective 1: HuggingFace & LLaMa.cpp
@@ -99,7 +99,7 @@ The project's original goal was to make LLaMA models accessible on systems wit
For this lab we will work with **WhiteRabbitNeo-V3-7B**, a cybersecurity-oriented fine-tune of Qwen2.5-Coder-7B. This model is less popular than LLaMA-3.2, and if we'd like to run it in `llama.cpp` or Ollama, we first need to convert it into a usable GGUF artifact.
<div class="lab-callout lab-callout--warning">
<strong>Warning:</strong> Although the next two steps show how to find and download this model so you can replicate the process, any course-provided WhiteRabbitNeo support files will be staged under <code>/home/student/lab3/WhiteRabbitNeo</code> when they are available in the deployment.
<strong>Warning:</strong> The commands below assume you are working from <code>~/lab3</code>. If you prefer another path, adjust the examples consistently as you go.
</div>
### 1. Locate & download the model
@@ -140,6 +140,8 @@ git lfs install
2. Clone the model:
```bash
mkdir -p ~/lab3/WhiteRabbitNeo
cd ~/lab3/WhiteRabbitNeo
git clone https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-V3-7B
```
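After the clone completes, it's worth confirming that the large weight files actually downloaded rather than remaining as Git LFS pointer stubs; a quick check, assuming the usual safetensors layout for this model:

```bash
# Multi-gigabyte shard files indicate a successful LFS pull; pointer stubs are only a few bytes
ls -lh ~/lab3/WhiteRabbitNeo/WhiteRabbitNeo-V3-7B/*.safetensors
```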
@@ -148,7 +150,7 @@ git clone https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-V3-7B
**LLaMa.cpp** makes it easy to convert models downloaded in SafeTensors format to GGUF. We can convert the model with the project's official conversion script:
```bash
convert_hf_to_gguf.py /home/student/lab3/WhiteRabbitNeo/WhiteRabbitNeo-V3-7B/WhiteRabbitNeo-V3-7B --outfile /home/student/lab3/WhiteRabbitNeo/WhiteRabbitNeo-V3-7B.gguf
convert_hf_to_gguf.py ~/lab3/WhiteRabbitNeo/WhiteRabbitNeo-V3-7B/WhiteRabbitNeo-V3-7B --outfile ~/lab3/WhiteRabbitNeo/WhiteRabbitNeo-V3-7B.gguf
```
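By default the script writes an unquantized (full- or half-precision) GGUF. If disk space or memory is tight, recent versions of the script also accept an `--outtype` flag to quantize during conversion; treat the flag and value below as a sketch to verify against your installed llama.cpp:

```bash
# Convert and quantize to 8-bit in a single pass (flag availability depends on llama.cpp version)
convert_hf_to_gguf.py ~/lab3/WhiteRabbitNeo/WhiteRabbitNeo-V3-7B/WhiteRabbitNeo-V3-7B \
  --outfile ~/lab3/WhiteRabbitNeo/WhiteRabbitNeo-V3-7B-q8.gguf \
  --outtype q8_0
```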
### 4. Execute: Review Model Metadata
@@ -157,7 +159,7 @@ When these steps have completed, you should see a new WhiteRabbitNeo-V3-7B.gguf
Run the following command:
```bash
gguf-dump /home/student/lab3/WhiteRabbitNeo/WhiteRabbitNeo-V3-7B.gguf
gguf-dump ~/lab3/WhiteRabbitNeo/WhiteRabbitNeo-V3-7B.gguf
```
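If the full dump is overwhelming, you can optionally narrow it to the header fields first; the `general.` prefix used below is the standard GGUF metadata namespace:

```bash
# Optional: show only the high-level metadata keys, skipping the long per-tensor table
gguf-dump ~/lab3/WhiteRabbitNeo/WhiteRabbitNeo-V3-7B.gguf | grep -i 'general\.'
```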
We should then see:
@@ -190,7 +192,7 @@ A text listing of all of the model's tensors, and the precision of each. Because
Run our newly created **.GGUF** file as-is with the following command:
```bash
llama-cli -m /home/student/lab3/WhiteRabbitNeo/WhiteRabbitNeo-V3-7B.gguf
llama-cli -m ~/lab3/WhiteRabbitNeo/WhiteRabbitNeo-V3-7B.gguf
```
Once loaded, interact with the model. Note the interesting parameters that were selected by default, such as **Top K**, **Top P**, and **Temperature**, which we'll discuss in the next section. In this raw state the model may be overly chatty; you can stop its output with `Ctrl+C` at any time.
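As a preview of that section, those sampling parameters can also be set explicitly on the command line; the values here are illustrative, not recommendations:

```bash
# Run with explicit sampling settings and a one-shot prompt, capped at 128 tokens
llama-cli -m ~/lab3/WhiteRabbitNeo/WhiteRabbitNeo-V3-7B.gguf \
  --temp 0.7 --top-k 40 --top-p 0.9 \
  -p "Briefly explain what a GGUF file is." -n 128
```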
@@ -319,7 +321,7 @@ We can also import our WhiteRabbitNeo **.GGUF** model into Ollama, without havin
1. **Create a simple Modelfile.** This tells Ollama where the model lives.
```bash
echo "FROM /home/student/lab3/WhiteRabbitNeo/WhiteRabbitNeo-V3-7B.gguf" > Modelfile
echo "FROM $HOME/lab3/WhiteRabbitNeo/WhiteRabbitNeo-V3-7B.gguf" > Modelfile
```
2. **Register the model with Ollama**
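A typical registration and smoke test might look like the following; the local model name `whiterabbitneo-v3` is an assumption, and you can pick any name you like:

```bash
# Register the GGUF under a local name (the name is arbitrary), then verify and run it
ollama create whiterabbitneo-v3 -f Modelfile
ollama list
ollama run whiterabbitneo-v3
```

If you later want defaults baked into the model, the Modelfile format also supports directives such as `PARAMETER` and `SYSTEM` alongside `FROM`.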