Focus local lab deployment on Linux and WSL
@@ -40,15 +40,13 @@ If you want to re-pull just those managed Ollama models later, run `./labctl oll
## Supported Host Profiles

This build intentionally avoids the reference VM's hardware workarounds.

This build is the Linux/WSL variant of LLM Labs Local. If you are deploying on Apple Silicon macOS, use the sibling `LLM-Labs-Local-Mac` project instead.
- macOS: Apple Silicon only, with at least 16 GB unified memory.
- Native Debian/Ubuntu: Debian-family Linux with an NVIDIA GPU visible to `nvidia-smi` and at least 8 GB VRAM.
- WSL: Debian/Ubuntu-family Linux running under WSL, with the NVIDIA GPU exposed into the distro.
The launcher and Ansible preflight classify the host dynamically and apply different setup behavior for each of the following profiles (a minimal classification sketch follows the list):

- `macos`
- `native-debian-ubuntu`
- `wsl`
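For illustration only, a host classifier along these lines could look like the sketch below. The function name and the exact checks (`/proc/version` for WSL, `/etc/debian_version` plus `nvidia-smi` for native Debian/Ubuntu) are assumptions about the approach, not the project's actual preflight code:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of host-profile classification; the real launcher and
# Ansible preflight may use different checks. Profile names match the list above.
classify_host() {
  # macOS: uname reports Darwin (the project additionally requires Apple Silicon).
  if [ "$(uname -s)" = "Darwin" ]; then
    echo "macos"
    return
  fi
  # WSL kernels identify themselves in /proc/version (e.g. "microsoft").
  if grep -qi microsoft /proc/version 2>/dev/null; then
    echo "wsl"
    return
  fi
  # Native Debian/Ubuntu: Debian-family with an NVIDIA GPU visible to nvidia-smi.
  if [ -f /etc/debian_version ] && command -v nvidia-smi >/dev/null 2>&1; then
    echo "native-debian-ubuntu"
    return
  fi
  echo "unsupported"
}

classify_host
```

Ordering matters in a sketch like this: the WSL check must run before the native Debian/Ubuntu check, since a WSL Ubuntu distro also has `/etc/debian_version`.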
@@ -146,4 +144,4 @@ The deployment will:
- Lab 2 includes `state/lab2/download_whiterabbitneo-gguf.sh`, which uses `git` + `git lfs` to pull only the supported WhiteRabbitNeo quants. Add `--download-only` if you want the files without Ollama registration. (A sketch of this selective-pull pattern appears after this list.)
- The wiki is cloned from `https://git.zuccaro.me/bzuccaro/LLM-Labs.git` into `state/repos/LLM-Labs` and started with `npm`. (See the clone-and-start sketch below.)
- `./labctl down` uninstalls Ollama entirely when this project installed it, instead of only stopping the service. (See the teardown sketch below.)
- Unsloth Studio currently supports chat and data workflows on macOS; Linux/WSL remains the standard path for NVIDIA-backed training.
- This variant is intended for NVIDIA-backed Linux/WSL training and lab workflows.
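As a sketch of the selective-pull pattern the Lab 2 script describes, the following shows how `git` + `git lfs` can fetch only chosen quants. The repository URL, destination path, and `--include` pattern here are placeholders; the real script pins the supported WhiteRabbitNeo quants itself:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of pulling only selected GGUF quants with git-lfs.
# URL, destination, and quant pattern below are placeholders.
set -euo pipefail

REPO_URL="https://example.com/placeholder/WhiteRabbitNeo-GGUF.git"  # placeholder
DEST="state/lab2/whiterabbitneo-gguf"                               # placeholder

# Clone without smudging LFS objects, so no large files download yet.
GIT_LFS_SKIP_SMUDGE=1 git clone "$REPO_URL" "$DEST"
cd "$DEST"

# Fetch only the quants we want; everything else stays an LFS pointer file.
git lfs pull --include "*Q4_K_M*.gguf"
```

`GIT_LFS_SKIP_SMUDGE=1` keeps the clone small by leaving pointer files in place until `git lfs pull --include` fetches only the matching objects.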
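The wiki step amounts to a clone followed by an npm start. A minimal hand-run equivalent, assuming the repo defines a standard npm `start` script (the project automates this itself), would be:

```bash
# Minimal sketch of the wiki step; assumes a standard npm "start" script.
git clone https://git.zuccaro.me/bzuccaro/LLM-Labs.git state/repos/LLM-Labs
cd state/repos/LLM-Labs
npm install
npm start
```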
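A common way to implement "uninstall only if we installed it" is a marker file written at install time. This is a hypothetical sketch of that pattern, not `labctl`'s actual bookkeeping; the marker path, service name, and binary location are all assumptions:

```bash
#!/usr/bin/env bash
# Hypothetical marker-file pattern for conditional teardown; labctl's real
# mechanism may differ. MARKER path and install locations are assumptions.
MARKER="state/.ollama-installed-by-lab"

down_ollama() {
  if [ -f "$MARKER" ]; then
    # This project installed Ollama, so remove it entirely.
    # (A complete uninstall would also remove the ollama user and data dirs.)
    sudo systemctl disable --now ollama 2>/dev/null || true
    sudo rm -f /usr/local/bin/ollama
    rm -f "$MARKER"
  else
    # Pre-existing install: only stop the service.
    sudo systemctl stop ollama
  fi
}

down_ollama
```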