Add runtime-configured Lab 3 browser terminal

@@ -24,14 +24,14 @@ In this lab, we will:
<strong>Execute</strong> sections require running commands and producing output.
</div>

-To start this lab, use the embedded terminal below. It connects to the same lab machine in your browser and should prompt you to log in with the default `student` account.
+To start this lab, use the embedded terminal below. It connects to the same lab machine in your browser and should prompt you to log in with the managed `student` account.

<div data-lab3-terminal></div>

If the embedded terminal is unavailable, you can still fall back to the options below; a minimal SSH sketch follows the list:

- SSH - <IP>:22
- All necessary artifacts are in the `lab3` folder
- The lab workspace is rooted at `/home/student/lab3`
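
If you use the SSH fallback, the session behaves the same as the embedded terminal. A minimal sketch, assuming the default port from the list above and substituting the lab machine's address for `<IP>` (kept as a placeholder here):

```bash
# Connect to the lab machine with the managed student account
# (replace <IP> with the address shown in your lab environment)
ssh -p 22 student@<IP>

# Once logged in, move into the lab workspace
cd /home/student/lab3
ls
```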

## Objective 1: HuggingFace & llama.cpp

@@ -99,7 +99,7 @@ The project’s original goal was to make LLaMA models accessible on systems wit
For this lab we will work with **WhiteRabbitNeo‑V3‑7B**, a cybersecurity‑oriented fine‑tune of Qwen2.5‑Coder‑7B. This model is less popular than LLaMA-3.2, so a ready-made GGUF build may not be available; to run it in `llama.cpp` or Ollama, we first need to convert it into a usable GGUF artifact ourselves.
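
As a preview of that conversion, here is a minimal sketch using the standard `llama.cpp` tooling. It assumes the `llama.cpp` repository is cloned with its Python requirements installed, and that the downloaded HuggingFace checkpoint sits in `./WhiteRabbitNeo`; the paths and file names are illustrative, not the graded lab steps:

```bash
# Convert the HuggingFace checkpoint to a 16-bit GGUF file
# (./WhiteRabbitNeo is an assumed local checkpoint directory)
python convert_hf_to_gguf.py ./WhiteRabbitNeo \
  --outfile whiterabbitneo-f16.gguf --outtype f16

# Optionally quantize to 4-bit to cut memory use on the lab machine
./llama-quantize whiterabbitneo-f16.gguf whiterabbitneo-Q4_K_M.gguf Q4_K_M
```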

<div class="lab-callout lab-callout--warning">
-<strong>Warning:</strong> Although the next two steps show how to find and download this model so you can replicate the process, support files are already provided in <code>/home/student/lab3/WhiteRabbitNeo</code> to speed up lab execution.
+<strong>Warning:</strong> Although the next two steps show how to find and download this model so you can replicate the process, any course-provided WhiteRabbitNeo support files will be staged under <code>/home/student/lab3/WhiteRabbitNeo</code> when they are available in the deployment.
</div>

### 1. Locate & download the model
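
Before following the full step, here is a hedged sketch of the typical locate-and-download flow with the official `huggingface-cli` tool. The repository id below is an assumption for illustration; confirm the exact id by searching the HuggingFace Hub for "WhiteRabbitNeo":

```bash
# Download a model snapshot from the Hub into a local folder
# (WhiteRabbitNeo/WhiteRabbitNeo-V3-7B is an assumed repo id; verify on the Hub)
huggingface-cli download WhiteRabbitNeo/WhiteRabbitNeo-V3-7B \
  --local-dir ./WhiteRabbitNeo
```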