Version 1 checkpoint
@@ -61,7 +61,7 @@ For non-Ubuntu WSL distros, install the CUDA toolkit manually before running the
- The host-side install path assumes modern local tooling, but TransformerLab itself is provisioned from a pinned classic single-user layout.
- TransformerLab is intentionally pinned to the older single-user `v0.28.2` release because newer upstream releases changed the project structure and behavior in ways that break this courseware.
- This project does not rely on TransformerLab's upstream `install.sh`; the Ansible role provisions the pinned release directly so web assets, env layout, and runtime behavior stay reproducible.
- The scripts do not patch TransformerLab plugins or preserve the VM's special-case fixes.
- The courseware repairs installed TransformerLab Fastchat plugin manifests so Fastchat-gated features such as Model Architecture and Visualize Logprobs stay available on pinned installs.
- No Ollama models are pulled during `./labctl up`; students pull models manually as part of the courseware.
- WhiteRabbitNeo GGUFs are no longer pulled during `./labctl up`. After base setup, run `state/lab2/download_whiterabbitneo-gguf.sh` to fetch only the `BF16`, `Q8_0`, `Q4_K_M`, and `Q2_K` files from `bartowski/WhiteRabbitNeo_WhiteRabbitNeo-V3-7B-GGUF` and register local Ollama models `WhiteRabbitNeo`, `WhiteRabbitNeo-BF16`, `WhiteRabbitNeo-Q8`, `WhiteRabbitNeo-Q4`, and `WhiteRabbitNeo-Q2`.
- TransformerLab and Unsloth homes are redirected into this project's `state/` tree via symlinks.
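
As a rough illustration of the "provision the pinned release directly" approach above: the core step amounts to cloning the tagged release once and making re-runs idempotent. The repo URL, target directory, and helper names here are assumptions for the sketch, not the Ansible role's actual tasks, and network commands are only printed unless `DRY_RUN=0`.

```shell
#!/usr/bin/env bash
# Sketch of provisioning a pinned TransformerLab checkout without upstream
# install.sh. Repo URL and target dir are assumptions; in this project the
# equivalent work is done by an Ansible role, not a standalone script.
set -euo pipefail

PIN="v0.28.2"
APP_DIR="${APP_DIR:-state/transformerlab/src}"
REPO_URL="${REPO_URL:-https://github.com/transformerlab/transformerlab-app.git}"

# Print network commands by default; set DRY_RUN=0 to actually execute them.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "[dry-run] $*"; else "$@"; fi; }

# Only clone when no checkout exists yet, so repeated provisioning runs
# leave an existing pinned checkout untouched.
needs_clone() { [ ! -e "$1/.git" ]; }

if needs_clone "$APP_DIR"; then
  run git clone --depth 1 --branch "$PIN" "$REPO_URL" "$APP_DIR"
fi
```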
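
The fetch-and-register flow described for `state/lab2/download_whiterabbitneo-gguf.sh` might look roughly like the sketch below — not the actual script. The GGUF filename pattern and which quant backs the plain `WhiteRabbitNeo` alias are assumptions, and the network/`ollama` commands are only printed unless `DRY_RUN=0`.

```shell
#!/usr/bin/env bash
# Hedged sketch of the GGUF fetch + Ollama registration step. The filename
# scheme and the quant behind the bare "WhiteRabbitNeo" alias are assumptions.
set -euo pipefail

REPO="bartowski/WhiteRabbitNeo_WhiteRabbitNeo-V3-7B-GGUF"
DEST="${DEST:-state/lab2/gguf}"
DRY_RUN="${DRY_RUN:-1}"   # default: print network/ollama commands only

run() { if [ "$DRY_RUN" = "1" ]; then echo "[dry-run] $*"; else "$@"; fi; }

# Quant tag -> local Ollama model name, per the changelog entry above.
name_for() {
  case "$1" in
    BF16)   echo WhiteRabbitNeo-BF16 ;;
    Q8_0)   echo WhiteRabbitNeo-Q8 ;;
    Q4_K_M) echo WhiteRabbitNeo-Q4 ;;
    Q2_K)   echo WhiteRabbitNeo-Q2 ;;
    *)      return 1 ;;
  esac
}

mkdir -p "$DEST"
for quant in BF16 Q8_0 Q4_K_M Q2_K; do
  file="WhiteRabbitNeo_WhiteRabbitNeo-V3-7B-${quant}.gguf"   # assumed naming
  run huggingface-cli download "$REPO" "$file" --local-dir "$DEST"
  # One Modelfile per quant, pointing Ollama at the downloaded GGUF.
  printf 'FROM %s\n' "$DEST/$file" > "$DEST/Modelfile.$quant"
  run ollama create "$(name_for "$quant")" -f "$DEST/Modelfile.$quant"
done

# Bare "WhiteRabbitNeo" alias; which quant it points at is an assumption.
run ollama create WhiteRabbitNeo -f "$DEST/Modelfile.Q4_K_M"
```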
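
The home-redirection entry can be illustrated with a small shell sketch. The default home paths (`~/.transformerlab`, `~/.unsloth`) and the `state/` subdirectory names are assumptions here, not necessarily the role's actual targets.

```shell
#!/usr/bin/env bash
# Sketch of redirecting tool homes into the project's state/ tree: each
# default home directory becomes a symlink to a directory under state/,
# so all mutable data lives inside the repo checkout.
set -euo pipefail

STATE_DIR="${STATE_DIR:-$PWD/state}"

redirect_home() {
  local home_path="$1" state_path="$2"
  mkdir -p "$state_path"
  if [ -e "$home_path" ] && [ ! -L "$home_path" ]; then
    # Preserve any pre-existing data by copying it into the state tree first.
    cp -a "$home_path/." "$state_path/" 2>/dev/null || true
    rm -rf "$home_path"
  fi
  # -n replaces an existing symlink rather than descending into its target.
  ln -sfn "$state_path" "$home_path"
}

redirect_home "$HOME/.transformerlab" "$STATE_DIR/transformerlab"
redirect_home "$HOME/.unsloth"        "$STATE_DIR/unsloth"
```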