# Local Courseware Deployment

This project builds a student-friendly local lab environment for the courseware with a small control surface:

- `./deploy-courseware.sh` installs and configures the environment, then starts every managed service.
- `./destroy-courseware.sh` stops the managed services, uninstalls courseware-managed Ollama, and removes the project-owned lab state.
- `./labctl` provides day-two controls such as `start`, `stop`, `status`, `urls`, `logs`, and `open kiln`.
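
A typical lifecycle, using only the commands named above (the ordering is illustrative):

```bash
# Install everything and start all managed services.
./deploy-courseware.sh

# Day-two checks: confirm services are running and list their endpoints.
./labctl status
./labctl urls

# Tear down: stop services, uninstall courseware-managed Ollama,
# and remove the project-owned lab state.
./destroy-courseware.sh
```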
## What It Installs

- Ollama
- `llama.cpp`
- TransformerLab, pinned to the classic single-user `v0.28.2` release
- Open WebUI
- ChunkViz
- Embedding Atlas
- Promptfoo
- Unsloth Studio
- Kiln Desktop
- Course-specific support assets for lab 2 and lab 4

## Supported Baselines

This build intentionally avoids the reference VM's hardware workarounds.

- macOS: Apple Silicon only, with at least 16 GB unified memory.
- Linux: Debian/Ubuntu-family only, with an NVIDIA GPU visible to `nvidia-smi` and at least 8 GB VRAM.
- WSL: treated as Linux, so the NVIDIA GPU must be exposed into WSL.

## WSL Check

If you run this inside WSL, the launcher checks GPU readiness before Ansible starts.

If that check fails, fix WSL first:

- Install or update the NVIDIA Windows driver with WSL/CUDA support.
- Run `wsl --update` in Windows PowerShell.
- Run `wsl --shutdown`.
- Reopen WSL and confirm `nvidia-smi` works.
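
A minimal readiness probe to run inside the distro (the query flags are standard `nvidia-smi` options; the fields shown are just a suggestion):

```bash
# Run inside WSL: succeeds only if the Windows driver is exposed to Linux.
nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv

# A non-zero exit status means WSL cannot see the GPU yet;
# revisit the driver install and `wsl --update` steps above.
```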

Important: `nvidia-smi` only verifies the driver. Building CUDA-enabled `llama.cpp` also requires the Linux-side CUDA toolkit inside the distro.

On Linux and WSL, the first `./labctl up` or `./labctl preflight` run may prompt once for your sudo password so Ansible can install system packages.

On Ubuntu WSL x86_64, preflight now installs the Linux-side CUDA toolkit automatically if it is missing. It first tries the distro package:

- `sudo apt install -y nvidia-cuda-toolkit`

If that package is unavailable or still does not expose `nvcc`, the installer falls back to NVIDIA's WSL-Ubuntu repository bootstrap for the toolkit only, not a Linux GPU driver.
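
The fallback decision flow is roughly the sketch below; the keyring package name and URL follow NVIDIA's published WSL-Ubuntu repository instructions and may have changed since this was written:

```bash
# Sketch of the preflight fallback: prefer the distro package, then
# bootstrap NVIDIA's WSL-Ubuntu repository for the toolkit only.
if ! command -v nvcc >/dev/null 2>&1; then
    sudo apt install -y nvidia-cuda-toolkit || true
fi
if ! command -v nvcc >/dev/null 2>&1; then
    # Keyring filename per NVIDIA's instructions at the time of writing.
    wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.1-1_all.deb
    sudo dpkg -i cuda-keyring_1.1-1_all.deb
    sudo apt-get update
    sudo apt-get install -y cuda-toolkit   # toolkit only, no Linux GPU driver
fi
```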

If the automatic bootstrap still fails, verify:

- `nvcc --version`
- `ls /usr/local/cuda/include/cuda_runtime.h`

For non-Ubuntu WSL distros, install the CUDA toolkit manually before running the deploy script.
## Standard Assumptions

- The host-side install path assumes modern local tooling, but TransformerLab itself is provisioned from a pinned classic single-user layout.
- TransformerLab is intentionally pinned to the older single-user `v0.28.2` release because newer upstream releases changed the project structure and behavior in ways that break this courseware.
- This project does not rely on TransformerLab's upstream `install.sh`; the Ansible role provisions the pinned release directly so web assets, env layout, and runtime behavior stay reproducible.
- The scripts do not patch TransformerLab plugins or preserve the VM's special-case fixes.
- No Ollama models are pulled during `./labctl up`; students pull models manually as part of the courseware.
- WhiteRabbitNeo GGUFs are no longer pulled during `./labctl up`. After base setup, run `state/lab2/download_whiterabbitneo-gguf.sh` to fetch only the `BF16`, `Q8_0`, `Q4_K_M`, and `Q2_K` files from `bartowski/WhiteRabbitNeo_WhiteRabbitNeo-V3-7B-GGUF` and register the local Ollama models `WhiteRabbitNeo`, `WhiteRabbitNeo-BF16`, `WhiteRabbitNeo-Q8`, `WhiteRabbitNeo-Q4`, and `WhiteRabbitNeo-Q2` (see the sketch after this list).
- TransformerLab and Unsloth homes are redirected into this project's `state/` tree via symlinks.
- Managed web services bind for access from both Linux and the Windows side of WSL, while `./labctl urls` still reports localhost-friendly URLs.
- The local Ansible bootstrap in `.venv-ansible/` is machine-specific and will be recreated automatically if the folder is copied between hosts.
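
A typical post-setup WhiteRabbitNeo flow, using the script named above (`ollama list` is just one way to confirm the registrations):

```bash
# Fetch the supported GGUF quants and register them as local Ollama models.
state/lab2/download_whiterabbitneo-gguf.sh

# Confirm the model registrations created by the script.
ollama list | grep -i whiterabbitneo
```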
## Lab URLs

After `./deploy-courseware.sh`, run `./labctl urls`.

Default endpoints:

- Ollama API: `http://127.0.0.1:11434`
- Open WebUI: `http://127.0.0.1:8080`
- TransformerLab: `http://127.0.0.1:8338`
- ChunkViz: `http://127.0.0.1:3001`
- Embedding Atlas: `http://127.0.0.1:5055`
- Unsloth Studio: `http://127.0.0.1:8888`
- Promptfoo UI: `http://127.0.0.1:15500`
- Wiki: `http://127.0.0.1:80`
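
A quick way to probe every endpoint at once (ports copied from the list above; any HTTP response counts as "up"):

```bash
# Probe each managed endpoint; "up" means the port answered an HTTP request.
for port in 11434 8080 8338 3001 5055 8888 15500 80; do
    if curl -sS -o /dev/null --max-time 2 "http://127.0.0.1:${port}/"; then
        echo "port ${port}: up"
    else
        echo "port ${port}: down"
    fi
done
```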
## Notes

- `./labctl up` installs the environment and then starts every managed service.
- `./labctl versions` shows the pinned TransformerLab and Ansible runtime versions used by this workspace.
- TransformerLab is installed as a pinned single-user app, and no default courseware-managed TransformerLab user is created automatically.
- `./labctl start core` starts only `ollama` and `open-webui`.
- `./labctl start all` starts every managed web service (a day-two session is sketched after this list).
- `./labctl open kiln` launches the Kiln desktop app installed into the project state.
- The scripted Promptfoo install drops a starter config at `state/lab6/promptfoo.yaml`.
- `./labctl start all` now includes Promptfoo via `promptfoo view` and the cloned wiki app.
- Lab 2 includes `state/lab2/download_whiterabbitneo-gguf.sh`, which uses `git` + `git lfs` to pull only the supported WhiteRabbitNeo quants. Add `--download-only` if you want the files without Ollama registration.
- The wiki is cloned from `https://git.zuccaro.me/bzuccaro/LLM-Labs.git` into `state/repos/LLM-Labs` and started with `npm`.
- `./labctl down` now uninstalls Ollama entirely when this project installed it, instead of only stopping the service.
- Unsloth Studio currently supports chat and data workflows on macOS; Linux/WSL remains the standard path for NVIDIA-backed training.
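
A compact day-two session using only the subcommands documented above:

```bash
# Start just the core services for a lightweight session.
./labctl start core
./labctl status
./labctl urls

# Bring up everything, including Promptfoo and the wiki.
./labctl start all

# Inspect service output, then stop everything when finished.
./labctl logs
./labctl stop
```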