This project builds a student-friendly local lab environment for the courseware.

- Kiln Desktop
- Course-specific support assets for lab 2 and lab 4
## Supported Host Profiles
This build intentionally avoids the reference VM's hardware workarounds.

- macOS: Apple Silicon only, with at least 16 GB unified memory.
- Native Debian/Ubuntu: Debian-family Linux with an NVIDIA GPU visible to `nvidia-smi` and at least 8 GB VRAM.
- WSL: Debian/Ubuntu-family Linux running under WSL, with the NVIDIA GPU exposed into the distro.
The launcher and Ansible preflight now classify the host dynamically and apply different setup behavior for:

- `macos`
- `native-debian-ubuntu`
- `wsl`
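The dynamic classification above can be sketched as a small shell probe. This is a hedged illustration, not the project's actual preflight code: the function name `classify_host` and the `unsupported` fallback label are assumptions.

```shell
#!/bin/sh
# Hypothetical sketch of the host classification the launcher performs.
classify_host() {
  if [ "$(uname -s)" = "Darwin" ]; then
    # macOS host (the docs above additionally require Apple Silicon)
    echo "macos"
  elif grep -qi microsoft /proc/version 2>/dev/null; then
    # WSL kernels report "microsoft" in /proc/version
    echo "wsl"
  elif grep -Eqs 'debian|ubuntu' /etc/os-release 2>/dev/null; then
    # Debian/Ubuntu-family per os-release ID / ID_LIKE
    echo "native-debian-ubuntu"
  else
    echo "unsupported"
  fi
}

classify_host
```

A real preflight would presumably also verify the per-profile hardware requirements (unified memory, `nvidia-smi`, VRAM) after classifying.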
## WSL Check
If the automatic bootstrap still fails, verify:

For non-Ubuntu WSL distros, install the CUDA toolkit manually before running the deploy script.
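Before a manual CUDA install, it helps to confirm the GPU is actually exposed into the WSL distro at all. A minimal sketch, assuming the standard WSL GPU paravirtualization device `/dev/dxg` and an `nvidia-smi` on `PATH`; the function name is hypothetical:

```shell
#!/bin/sh
# Hedged sketch: report whether the NVIDIA GPU is visible inside a WSL distro.
check_wsl_gpu() {
  if ! grep -qi microsoft /proc/version 2>/dev/null; then
    # Not a WSL kernel at all
    echo "not-wsl"
  elif [ -e /dev/dxg ] && command -v nvidia-smi >/dev/null 2>&1; then
    # WSL GPU device present and the NVIDIA userland is reachable
    echo "gpu-exposed"
  else
    echo "gpu-missing"
  fi
}

check_wsl_gpu
```

If this reports `gpu-missing`, the Windows-side NVIDIA driver and WSL GPU support need attention before any in-distro CUDA toolkit install will help.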
## Native Debian/Ubuntu CUDA Behavior
On native Debian/Ubuntu hosts, the installer now handles three CUDA-toolkit cases:

- If the toolkit is already usable, it reuses the existing install instead of forcing a reinstall.
- If the distro exposes `nvidia-cuda-toolkit`, it installs that package.
- If the distro package is unavailable, it bootstraps NVIDIA's official CUDA network repository for supported native Debian/Ubuntu releases and installs the toolkit from there.
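The three-case decision order can be sketched as a dry-run probe that only reports which path would be taken, without installing anything. `cuda_plan` and the three labels are hypothetical names, not the installer's actual output:

```shell
#!/bin/sh
# Dry-run sketch of the three-case CUDA toolkit decision described above.
cuda_plan() {
  if command -v nvcc >/dev/null 2>&1; then
    # Case 1: a working toolkit is already on PATH -- reuse it
    echo "reuse-existing-toolkit"
  elif apt-cache show nvidia-cuda-toolkit >/dev/null 2>&1; then
    # Case 2: the distro ships nvidia-cuda-toolkit -- install that
    echo "install-distro-package"
  else
    # Case 3: fall back to NVIDIA's official CUDA network repository
    echo "bootstrap-nvidia-network-repo"
  fi
}

cuda_plan
```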
If `apt` starts in a broken dependency state, the installer now attempts `dpkg --configure -a` and `apt-get --fix-broken install` before retrying package installation.

If CUDA is already mounted or preinstalled outside `PATH`, the installer now detects standard locations such as `/usr/local/cuda/bin/nvcc` and `/usr/local/cuda-*/bin/nvcc`.
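That PATH-independent probe can be sketched as follows. The optional prefix argument is an addition of this sketch (so the probe can be exercised against a fake install tree); the real installer presumably checks `/usr/local` directly:

```shell
#!/bin/sh
# Sketch of a PATH-independent nvcc probe over the standard CUDA locations.
find_nvcc() {
  prefix="${1:-/usr/local}"
  # Prefer an nvcc already on PATH
  if command -v nvcc >/dev/null 2>&1; then
    command -v nvcc
    return 0
  fi
  # Fall back to the conventional install locations
  for cand in "$prefix/cuda/bin/nvcc" "$prefix"/cuda-*/bin/nvcc; do
    if [ -x "$cand" ]; then
      echo "$cand"
      return 0
    fi
  done
  return 1
}

find_nvcc || echo "nvcc not found"
```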
## Standard Assumptions
- The host-side install path assumes modern local tooling, but TransformerLab itself is provisioned from a pinned classic single-user layout.
- TransformerLab and Unsloth homes are redirected into this project's `state/` tree via symlinks.
- Managed web services bind for access from both Linux and the Windows side of WSL, while `labctl urls` still reports localhost-friendly URLs.
- The local Ansible bootstrap in `.venv-ansible/` is machine-specific and will be recreated automatically if the folder is copied between hosts.
- `llama.cpp` now uses a conservative, memory-aware build parallelism setting instead of an unbounded `-j` build, which avoids OOM failures on smaller Linux and WSL hosts.
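A memory-aware parallelism setting like the `llama.cpp` one above might look like the following sketch. The specific heuristic (min of CPU count and available-RAM-in-GB divided by 2, floor 1) is an assumption for illustration, not the project's actual formula:

```shell
#!/bin/sh
# Illustrative memory-aware build-job heuristic: min(nproc, avail_GB / 2), >= 1.
build_jobs() {
  cpus=$(nproc 2>/dev/null || echo 1)
  # MemAvailable is reported in kB on Linux; default to 2 GB if unreadable
  mem_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo 2>/dev/null || true)
  mem_jobs=$(( ${mem_kb:-2097152} / 1024 / 1024 / 2 ))
  if [ "$mem_jobs" -lt 1 ]; then mem_jobs=1; fi
  if [ "$cpus" -lt "$mem_jobs" ]; then echo "$cpus"; else echo "$mem_jobs"; fi
}

build_jobs
```

A build would then invoke something like `make -j "$(build_jobs)"` instead of a bare `-j`, so low-memory hosts never spawn more compile jobs than their RAM can back.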
## Lab URLs