Add runtime-configured Lab 3 browser terminal

This commit is contained in:
2026-04-13 16:40:14 -06:00
parent 7e4d35b6a3
commit ca6a966ad6
6 changed files with 270 additions and 96 deletions
@@ -24,14 +24,14 @@ In this lab, we will:
<strong>Execute</strong> sections require running commands and producing output.
</div>
-To start this lab, use the embedded terminal below. It connects to the same lab machine in your browser and should prompt you to log in with the default `student` account.
+To start this lab, use the embedded terminal below. It connects to the same lab machine in your browser and should prompt you to log in with the managed `student` account.
<div data-lab3-terminal></div>
If the embedded terminal is unavailable, you can still fall back to:
- SSH - `<IP>`:22
- All necessary artifacts are in the `lab3` folder
- The lab workspace is rooted at `/home/student/lab3`
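The SSH fallback amounts to one command plus a change of directory. The sketch below prints the commands rather than running them, since it cannot know the lab machine's address; `<IP>` is the same placeholder as in the list above and must be substituted with the real value from your lab sheet.

```shell
# Fallback path: SSH into the lab machine directly.
# <IP> is the placeholder from the lab sheet; export LAB_IP to substitute it.
LAB_IP="${LAB_IP:-<IP>}"

# Dry run: print the connection steps instead of executing them.
cmd_connect="ssh -p 22 student@${LAB_IP}"
cmd_workdir="cd /home/student/lab3"
printf '%s\n' "$cmd_connect" "$cmd_workdir"
```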
## Objective 1: HuggingFace & llama.cpp
@@ -99,7 +99,7 @@ The project's original goal was to make LLaMA models accessible on systems wit
For this lab we will work with **WhiteRabbitNeo-V3-7B**, a cybersecurity-oriented fine-tune of Qwen2.5-Coder-7B. This model is less popular than LLaMA-3.2, and if we'd like to run it in `llama.cpp` or Ollama, we first need to convert it into a usable GGUF artifact.
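The conversion mentioned above can be sketched as a two-step pipeline. The script and binary names (`convert_hf_to_gguf.py`, `llama-quantize`) come from an upstream `llama.cpp` checkout; the directory layout, output filenames, and the Q4_K_M quantization type are assumptions for illustration, and the commands are printed rather than executed so the sketch stays side-effect free.

```shell
# Sketch of the HF safetensors -> GGUF pipeline (dry run: commands are printed,
# not executed). Paths and quantization type are assumptions; the tools come
# from an upstream llama.cpp checkout and build.
MODEL_DIR=/home/student/lab3/WhiteRabbitNeo
LLAMA_CPP="$HOME/llama.cpp"

# 1. Convert the downloaded checkpoint to a full-precision (f16) GGUF.
step1="python3 $LLAMA_CPP/convert_hf_to_gguf.py $MODEL_DIR \
  --outfile $MODEL_DIR/whiterabbitneo-f16.gguf --outtype f16"

# 2. Quantize the f16 GGUF (here Q4_K_M) so it fits comfortably in CPU RAM.
step2="$LLAMA_CPP/build/bin/llama-quantize \
  $MODEL_DIR/whiterabbitneo-f16.gguf \
  $MODEL_DIR/whiterabbitneo-Q4_K_M.gguf Q4_K_M"

printf '%s\n' "$step1" "$step2"
```

Either output file can then be loaded with `llama.cpp` directly or imported into Ollama via a Modelfile.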
<div class="lab-callout lab-callout--warning">
-<strong>Warning:</strong> Although the next two steps show how to find and download this model so you can replicate the process, support files are already provided in <code>/home/student/lab3/WhiteRabbitNeo</code> to speed up lab execution.
+<strong>Warning:</strong> Although the next two steps show how to find and download this model so you can replicate the process, any course-provided WhiteRabbitNeo support files will be staged under <code>/home/student/lab3/WhiteRabbitNeo</code> when they are available in the deployment.
</div>
### 1. Locate & download the model