Initial snapshot before transformerlab recovery
@@ -0,0 +1,7 @@
.DS_Store
.venv-ansible/
state/
*.retry
.webui_secret_key
__pycache__/
*.pyc
@@ -0,0 +1,99 @@
# Local Courseware Deployment

This project builds a student-friendly local lab environment for the courseware with a small control surface:

- `./deploy-courseware.sh` installs and configures the environment, then starts every managed service.
- `./destroy-courseware.sh` stops the managed services, uninstalls courseware-managed Ollama, and removes the project-owned lab state.
- `./labctl` provides day-two controls such as `start`, `stop`, `status`, `urls`, `logs`, and `open kiln`.
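
For orientation, a typical first session might look like this; every command is one of the three entry points above, and only the ordering is illustrative:

```bash
./deploy-courseware.sh    # install everything and start all managed services
./labctl status           # confirm the services came up
./labctl urls             # print localhost-friendly endpoints
./labctl logs             # tail service logs if something looks wrong
./destroy-courseware.sh   # tear the lab back down when finished
```
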
## What It Installs

- Ollama
- `llama.cpp`
- TransformerLab, pinned to the classic single-user `v0.28.2` release
- Open WebUI
- ChunkViz
- Embedding Atlas
- Promptfoo
- Unsloth Studio
- Kiln Desktop
- Course-specific support assets for labs 2 and 4

## Supported Baselines

This build intentionally avoids the reference VM's hardware workarounds.

- macOS: Apple Silicon only, with at least 16 GB of unified memory.
- Linux: Debian/Ubuntu-family only, with an NVIDIA GPU visible to `nvidia-smi` and at least 8 GB of VRAM.
- WSL: treated as Linux, so the NVIDIA GPU must be exposed into WSL.

## WSL Check

If you run this inside WSL, the launcher checks GPU readiness before Ansible starts.

If that check fails, fix WSL first:

- Install or update the NVIDIA Windows driver with WSL/CUDA support.
- Run `wsl --update` in Windows PowerShell.
- Run `wsl --shutdown`.
- Reopen WSL and confirm `nvidia-smi` works.

Important: `nvidia-smi` only verifies the driver. Building CUDA-enabled `llama.cpp` also requires the Linux-side CUDA toolkit inside the distro.

On Linux and WSL, the first `./labctl up` or `./labctl preflight` run may prompt once for your sudo password so Ansible can install system packages.

On Ubuntu WSL x86_64, preflight now installs the Linux-side CUDA toolkit automatically if it is missing.

It first tries the distro package:

- `sudo apt install -y nvidia-cuda-toolkit`

If that package is unavailable or still does not expose `nvcc`, the installer falls back to NVIDIA's WSL-Ubuntu repository bootstrap for the toolkit only, not a Linux GPU driver.

If the automatic bootstrap still fails, verify:

- `nvcc --version`
- `ls /usr/local/cuda/include/cuda_runtime.h`

For non-Ubuntu WSL distros, install the CUDA toolkit manually before running the deploy script; a sketch of the Ubuntu-style bootstrap follows.
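
For reference, a hedged sketch of the bootstrap the installer automates. The URLs and the `13.2` pin mirror the `courseware_wsl_cuda_*` variables later in this snapshot; adapt them to your distro. This installs the toolkit only, never a Linux GPU driver:

```bash
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
sudo mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/13.2.0/local_installers/cuda-repo-wsl-ubuntu-13-2-local_13.2.0-1_amd64.deb
sudo dpkg -i cuda-repo-wsl-ubuntu-13-2-local_13.2.0-1_amd64.deb
sudo cp /var/cuda-repo-wsl-ubuntu-13-2-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get install -y cuda-toolkit-13-2
nvcc --version   # should now print the CUDA release
```
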
## Standard Assumptions

- The host-side install path assumes modern local tooling, but TransformerLab itself is provisioned from a pinned classic single-user layout.
- TransformerLab is intentionally pinned to the older single-user `v0.28.2` release because newer upstream releases changed the project structure and behavior in ways that break this courseware.
- This project does not rely on TransformerLab's upstream `install.sh`; the Ansible role provisions the pinned release directly so web assets, env layout, and runtime behavior stay reproducible.
- The scripts do not patch TransformerLab plugins or preserve the VM's special-case fixes.
- No Ollama models are pulled during `./labctl up`; students pull models manually as part of the courseware.
- WhiteRabbitNeo GGUFs are no longer pulled during `./labctl up`. After base setup, run `state/lab2/download_whiterabbitneo-gguf.sh` to fetch only the `BF16`, `Q8_0`, `Q4_K_M`, and `Q2_K` files from `bartowski/WhiteRabbitNeo_WhiteRabbitNeo-V3-7B-GGUF` and register the local Ollama models `WhiteRabbitNeo`, `WhiteRabbitNeo-BF16`, `WhiteRabbitNeo-Q8`, `WhiteRabbitNeo-Q4`, and `WhiteRabbitNeo-Q2` (see the usage sketch after this list).
- TransformerLab and Unsloth homes are redirected into this project's `state/` tree via symlinks.
- Managed web services bind for access from both Linux and the Windows side of WSL, while `labctl urls` still reports localhost-friendly URLs.
- The local Ansible bootstrap in `.venv-ansible/` is machine-specific and will be recreated automatically if the folder is copied between hosts.

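A minimal sketch of that lab 2 model fetch, using the script and flag named above:

```bash
# After ./labctl up has finished:
state/lab2/download_whiterabbitneo-gguf.sh                  # fetch quants and register Ollama models
state/lab2/download_whiterabbitneo-gguf.sh --download-only  # fetch the GGUF files only
```
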
## Lab URLs

After `./deploy-courseware.sh`, run `./labctl urls`.

Default endpoints:

- Ollama API: `http://127.0.0.1:11434`
- Open WebUI: `http://127.0.0.1:8080`
- TransformerLab: `http://127.0.0.1:8338`
- ChunkViz: `http://127.0.0.1:3001`
- Embedding Atlas: `http://127.0.0.1:5055`
- Unsloth Studio: `http://127.0.0.1:8888`
- Promptfoo UI: `http://127.0.0.1:15500`
- Wiki: `http://127.0.0.1:80`
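
A quick smoke test against two of these endpoints (hedged: `/api/version` is the standard Ollama version route; adjust ports if you changed the defaults):

```bash
curl -s http://127.0.0.1:11434/api/version   # Ollama should answer with a version JSON
curl -sI http://127.0.0.1:8080 | head -n 1   # Open WebUI should answer with an HTTP status line
```
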
## Notes

- `./labctl up` installs the environment and then starts every managed service.
- `./labctl versions` shows the pinned TransformerLab and Ansible runtime versions used by this workspace.
- TransformerLab is installed as a pinned single-user app; no default courseware-managed TransformerLab user is created automatically.
- `./labctl start core` starts only `ollama` and `open-webui`.
- `./labctl start all` starts every managed web service, and now includes Promptfoo via `promptfoo view` and the cloned wiki app.
- `./labctl open kiln` launches the Kiln desktop app installed into the project state.
- The scripted Promptfoo install drops a starter config at `state/lab6/promptfoo.yaml`.
- Lab 2 includes `state/lab2/download_whiterabbitneo-gguf.sh`, which uses `git` + `git lfs` to pull only the supported WhiteRabbitNeo quants. Add `--download-only` if you want the files without Ollama registration.
- The wiki is cloned from `https://git.zuccaro.me/bzuccaro/LLM-Labs.git` into `state/repos/LLM-Labs` and started with `npm`.
- `./labctl down` now uninstalls Ollama entirely when this project installed it, instead of only stopping the service.
- Unsloth Studio currently supports chat and data workflows on macOS; Linux/WSL remains the standard path for NVIDIA-backed training.
@@ -0,0 +1,6 @@
[defaults]
inventory = inventory/localhost.yml
roles_path = roles
host_key_checking = False
interpreter_python = auto_silent
retry_files_enabled = False
@@ -0,0 +1,94 @@
courseware_state_dir: "{{ courseware_root }}/state"
courseware_markers_dir: "{{ courseware_state_dir }}/markers"
courseware_logs_dir: "{{ courseware_state_dir }}/logs"
courseware_run_dir: "{{ courseware_state_dir }}/run"
courseware_repos_dir: "{{ courseware_state_dir }}/repos"
courseware_venvs_dir: "{{ courseware_state_dir }}/venvs"
courseware_models_dir: "{{ courseware_state_dir }}/models"
courseware_datasets_dir: "{{ courseware_state_dir }}/datasets"
courseware_tools_dir: "{{ courseware_state_dir }}/tools"
courseware_apps_dir: "{{ courseware_state_dir }}/apps"
courseware_downloads_dir: "{{ courseware_state_dir }}/downloads"
courseware_lab2_dir: "{{ courseware_state_dir }}/lab2"
courseware_lab6_dir: "{{ courseware_state_dir }}/lab6"
courseware_transformerlab_home: "{{ courseware_state_dir }}/transformerlab-home"
courseware_unsloth_home: "{{ courseware_state_dir }}/unsloth-home"
courseware_ollama_models_dir: "{{ courseware_models_dir }}/ollama"
courseware_node_runtime_dir: "{{ courseware_tools_dir }}/node-runtime"
courseware_node_runtime_bin_dir: "{{ courseware_node_runtime_dir }}/node_modules/node/bin"
courseware_promptfoo_dir: "{{ courseware_lab6_dir }}"
courseware_wiki_repo_dir: "{{ courseware_repos_dir }}/LLM-Labs"
courseware_llama_cpp_bin_dir: "{{ courseware_repos_dir }}/llama.cpp/build/bin"

courseware_bind_host: "0.0.0.0"
courseware_url_host: "127.0.0.1"
courseware_ports:
  ollama: 11434
  open_webui: 8080
  transformerlab: 8338
  chunkviz: 3001
  embedding_atlas: 5055
  unsloth: 8888
  promptfoo: 15500
  wiki: 80

courseware_transformerlab_version: "v0.28.2"
courseware_transformerlab_version_dir: "{{ courseware_transformerlab_version | regex_replace('^v', '') }}"
courseware_llama_cpp_commit: "51fa458a92d6a3f305f8fd76fc8f702e3e87ddb5"
courseware_chunkviz_commit: "a891eacafda1f28a12373ad3b00102e68f07c57f"
courseware_promptfoo_version: "0.119.0"
courseware_kiln_release_tag: "v0.18.1"
courseware_node_runtime_version: "20.20.2"
courseware_wiki_repo: "https://git.zuccaro.me/bzuccaro/LLM-Labs.git"

courseware_open_webui_spec: "open-webui"
courseware_embedding_atlas_spec: "embedding-atlas"

courseware_white_rabbit_repo: "bartowski/WhiteRabbitNeo_WhiteRabbitNeo-V3-7B-GGUF"
courseware_white_rabbit_variants:
  - ollama_model: "WhiteRabbitNeo"
    quant: "Q4_K_M"
    filename: "WhiteRabbitNeo-V3-7B-Q4_K_M.gguf"
    alias_of_default: true
  - ollama_model: "WhiteRabbitNeo-BF16"
    quant: "BF16"
    filename: "WhiteRabbitNeo-V3-7B-bf16.gguf"
  - ollama_model: "WhiteRabbitNeo-Q8"
    quant: "Q8_0"
    filename: "WhiteRabbitNeo-V3-7B-Q8_0.gguf"
  - ollama_model: "WhiteRabbitNeo-Q4"
    quant: "Q4_K_M"
    filename: "WhiteRabbitNeo-V3-7B-Q4_K_M.gguf"
  - ollama_model: "WhiteRabbitNeo-Q2"
    quant: "Q2_K"
    filename: "WhiteRabbitNeo-V3-7B-Q2_K.gguf"

courseware_ollama_models:
  - "llama3.2"
  - "qwen3.5:4b"
  - "gemma3n:e2b"
courseware_optional_ollama_models:
  - "gemma3:12b-it-qat"
courseware_install_optional_heavy_models: false

courseware_wsl_cuda_pin_url: "https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin"
courseware_wsl_cuda_pin_dest: "/etc/apt/preferences.d/cuda-repository-pin-600"
courseware_wsl_cuda_installer_url: "https://developer.download.nvidia.com/compute/cuda/13.2.0/local_installers/cuda-repo-wsl-ubuntu-13-2-local_13.2.0-1_amd64.deb"
courseware_wsl_cuda_installer_filename: "cuda-repo-wsl-ubuntu-13-2-local_13.2.0-1_amd64.deb"
courseware_wsl_cuda_installer_local_path: "/tmp/cuda-repo-wsl-ubuntu-13-2-local_13.2.0-1_amd64.deb"
courseware_wsl_cuda_repo_dir: "/var/cuda-repo-wsl-ubuntu-13-2-local"
courseware_wsl_cuda_toolkit_package: "cuda-toolkit-13-2"
courseware_unsloth_installer_url: "https://unsloth.ai/install.sh"
courseware_unsloth_installer_path: "{{ courseware_downloads_dir }}/unsloth-install.sh"
courseware_unsloth_python_version: "3.12"
courseware_unsloth_install_timeout_seconds: 1800
courseware_ollama_install_marker: "{{ courseware_markers_dir }}/ollama-installed-by-courseware"

courseware_services:
  - "ollama"
  - "open-webui"
  - "transformerlab"
  - "chunkviz"
  - "embedding-atlas"
  - "unsloth"
  - "promptfoo"
  - "wiki"
@@ -0,0 +1 @@
localhost ansible_connection=local
@@ -0,0 +1,271 @@
- hosts: localhost
  gather_facts: true
  vars_files:
    - ../group_vars/all.yml
  tasks:
    - name: Stop managed services
      command:
        argv:
          - "{{ courseware_root }}/scripts/service_manager.sh"
          - stop
          - all
      changed_when: false
      failed_when: false

    - name: Stat managed TransformerLab symlink
      stat:
        path: "{{ ansible_env.HOME }}/.transformerlab"
        follow: false
      register: courseware_down_transformerlab

    - name: Stat managed TransformerLab marker
      stat:
        path: "{{ ansible_env.HOME }}/.transformerlab/.courseware-managed"
      register: courseware_down_transformerlab_marker

    - name: Stat managed Unsloth symlink
      stat:
        path: "{{ ansible_env.HOME }}/.unsloth"
        follow: false
      register: courseware_down_unsloth

    - name: Stat managed Unsloth marker
      stat:
        path: "{{ ansible_env.HOME }}/.unsloth/.courseware-managed"
      register: courseware_down_unsloth_marker

    - name: Stat conda environments file
      stat:
        path: "{{ ansible_env.HOME }}/.conda/environments.txt"
      register: courseware_down_conda_envs

    - name: Stat courseware-managed Ollama install marker
      stat:
        path: "{{ courseware_ollama_install_marker }}"
      register: courseware_down_ollama_marker

    - name: Stat Ollama systemd unit
      stat:
        path: /etc/systemd/system/ollama.service
      register: courseware_down_ollama_systemd_unit

    - name: Stat managed Unsloth launcher symlink
      stat:
        path: "{{ ansible_env.HOME }}/.local/bin/unsloth"
        follow: false
      register: courseware_down_unsloth_launcher

    - name: Stat managed Unsloth desktop entry
      stat:
        path: "{{ ansible_env.HOME }}/.local/share/applications/unsloth-studio.desktop"
      register: courseware_down_unsloth_desktop

    - name: Stat managed Unsloth desktop shortcut
      stat:
        path: "{{ ansible_env.HOME }}/Desktop/unsloth-studio.desktop"
      register: courseware_down_unsloth_desktop_shortcut

    - name: Stat managed Unsloth share config
      stat:
        path: "{{ ansible_env.HOME }}/.local/share/unsloth/studio.conf"
      register: courseware_down_unsloth_share_conf

    - name: Stat managed llama.cpp PATH symlinks
      stat:
        path: "/usr/local/bin/{{ item }}"
        follow: false
      loop:
        - llama-cli
        - llama-quantize
        - llama-perplexity
        - llama-server
      register: courseware_down_llama_path_slots
      when: ansible_system == "Linux"

    - name: Stop and disable courseware-managed Ollama systemd service
      become: true
      systemd:
        name: ollama
        state: stopped
        enabled: false
      when:
        - ansible_service_mgr == "systemd"
        - courseware_down_ollama_marker.stat.exists
        - courseware_down_ollama_systemd_unit.stat.exists
      failed_when: false

    - name: Remove courseware-managed Ollama systemd unit
      become: true
      file:
        path: /etc/systemd/system/ollama.service
        state: absent
      when:
        - ansible_system == "Linux"
        - courseware_down_ollama_marker.stat.exists
        - courseware_down_ollama_systemd_unit.stat.exists
      failed_when: false

    - name: Reload systemd after Ollama unit removal
      become: true
      command: systemctl daemon-reload
      when:
        - ansible_system == "Linux"
        - ansible_service_mgr == "systemd"
        - courseware_down_ollama_marker.stat.exists
        - courseware_down_ollama_systemd_unit.stat.exists
      changed_when: false
      failed_when: false

    - name: Remove courseware-managed Ollama Linux install paths
      become: true
      file:
        path: "{{ item }}"
        state: absent
      loop:
        - /usr/local/bin/ollama
        - /usr/bin/ollama
        - /bin/ollama
        - /usr/local/lib/ollama
        - /usr/lib/ollama
        - /lib/ollama
        - /usr/share/ollama
        - /var/lib/ollama
      when:
        - ansible_system == "Linux"
        - courseware_down_ollama_marker.stat.exists
      failed_when: false

    - name: Remove courseware-managed Ollama Linux user
      become: true
      user:
        name: ollama
        state: absent
        remove: true
      when:
        - ansible_system == "Linux"
        - courseware_down_ollama_marker.stat.exists
      failed_when: false

    - name: Remove courseware-managed Ollama Linux group
      become: true
      group:
        name: ollama
        state: absent
      when:
        - ansible_system == "Linux"
        - courseware_down_ollama_marker.stat.exists
      failed_when: false

    - name: Stop courseware-managed Ollama macOS app if running
      command: pkill -x Ollama
      when:
        - ansible_system == "Darwin"
        - courseware_down_ollama_marker.stat.exists
      changed_when: false
      failed_when: false

    - name: Uninstall courseware-managed Ollama Homebrew formula
      command: brew uninstall ollama
      when:
        - ansible_system == "Darwin"
        - courseware_down_ollama_marker.stat.exists
      changed_when: false
      failed_when: false

    - name: Remove managed TransformerLab conda environment entry
      lineinfile:
        path: "{{ ansible_env.HOME }}/.conda/environments.txt"
        regexp: "^{{ (courseware_transformerlab_home ~ '/envs/transformerlab') | regex_escape() }}$"
        state: absent
      when: courseware_down_conda_envs.stat.exists
      failed_when: false

    - name: Remove managed TransformerLab path
      file:
        path: "{{ ansible_env.HOME }}/.transformerlab"
        state: absent
      when:
        - courseware_down_transformerlab.stat.exists
        - >
          (courseware_down_transformerlab.stat.islnk and
           courseware_down_transformerlab.stat.lnk_source == courseware_transformerlab_home)
          or courseware_down_transformerlab_marker.stat.exists
      failed_when: false

    - name: Remove managed Unsloth path
      file:
        path: "{{ ansible_env.HOME }}/.unsloth"
        state: absent
      when:
        - courseware_down_unsloth.stat.exists
        - >
          (courseware_down_unsloth.stat.islnk and
           courseware_down_unsloth.stat.lnk_source == courseware_unsloth_home)
          or courseware_down_unsloth_marker.stat.exists
      failed_when: false

    - name: Remove managed Unsloth launcher symlink
      file:
        path: "{{ ansible_env.HOME }}/.local/bin/unsloth"
        state: absent
      when:
        - courseware_down_unsloth_launcher.stat.exists
        - courseware_down_unsloth_launcher.stat.islnk
        - courseware_down_unsloth_launcher.stat.lnk_source == (courseware_unsloth_home ~ "/studio/unsloth_studio/bin/unsloth")
      failed_when: false

    - name: Remove managed Unsloth desktop entry
      file:
        path: "{{ ansible_env.HOME }}/.local/share/applications/unsloth-studio.desktop"
        state: absent
      when: courseware_down_unsloth_desktop.stat.exists
      failed_when: false

    - name: Remove managed Unsloth desktop shortcut
      file:
        path: "{{ ansible_env.HOME }}/Desktop/unsloth-studio.desktop"
        state: absent
      when: courseware_down_unsloth_desktop_shortcut.stat.exists
      failed_when: false

    - name: Remove managed Unsloth share directory
      file:
        path: "{{ ansible_env.HOME }}/.local/share/unsloth"
        state: absent
      when:
        - courseware_down_unsloth_share_conf.stat.exists
        - lookup('file', ansible_env.HOME ~ '/.local/share/unsloth/studio.conf', errors='ignore') is search(courseware_unsloth_home ~ '/studio/unsloth_studio/bin/unsloth')
      failed_when: false

    - name: Remove managed Unsloth Windows shortcuts
      command: >
        powershell.exe -NoProfile -Command
        "$locations = @([Environment]::GetFolderPath('Desktop'), (Join-Path $env:APPDATA 'Microsoft\Windows\Start Menu\Programs'));
        foreach ($dir in $locations) {
          if (-not $dir -or -not (Test-Path $dir)) { continue }
          $linkPath = Join-Path $dir 'Unsloth Studio.lnk';
          if (Test-Path $linkPath) { Remove-Item -LiteralPath $linkPath -Force -ErrorAction SilentlyContinue }
        }"
      when:
        - ansible_system == "Linux"
        - "'microsoft' in ansible_kernel | lower or 'wsl' in ansible_kernel | lower"
      changed_when: false
      failed_when: false

    - name: Remove managed llama.cpp PATH symlinks
      become: true
      file:
        path: "/usr/local/bin/{{ item.item }}"
        state: absent
      loop: "{{ courseware_down_llama_path_slots.results | default([]) }}"
      when:
        - ansible_system == "Linux"
        - item.stat.exists
        - item.stat.islnk
        - item.stat.lnk_source == (courseware_llama_cpp_bin_dir ~ '/' ~ item.item)
      failed_when: false

    - name: Remove project-owned state directory
      file:
        path: "{{ courseware_state_dir }}"
        state: absent
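
The ownership guard those teardown tasks apply before deleting `~/.transformerlab` (and, identically, `~/.unsloth`) restated as a hedged shell sketch; paths assume the default vars, where `state/transformerlab-home` is `courseware_transformerlab_home`:

```bash
# Only remove the path if it is our symlink or carries our marker file.
TLAB_HOME="$HOME/.transformerlab"
MANAGED_TARGET="$PWD/state/transformerlab-home"
if [ -L "$TLAB_HOME" ] && [ "$(readlink "$TLAB_HOME")" = "$MANAGED_TARGET" ]; then
  rm -rf "$TLAB_HOME"                       # our symlink: safe to remove
elif [ -e "$TLAB_HOME/.courseware-managed" ]; then
  rm -rf "$TLAB_HOME"                       # our marker: safe to remove
fi                                          # anything else is left alone
```
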
@@ -0,0 +1,18 @@
- hosts: localhost
  gather_facts: true
  vars_files:
    - ../group_vars/all.yml
  roles:
    - { role: preflight, tags: ["preflight"] }
    - directories
    - packages
    - lab_assets
    - node_runtime
    - llama_cpp
    - transformerlab
    - open_webui
    - chunkviz
    - promptfoo
    - wiki
    - kiln
    - unsloth
@@ -0,0 +1,19 @@
- name: Clone ChunkViz
  git:
    repo: "https://github.com/gkamradt/ChunkViz"
    dest: "{{ courseware_repos_dir }}/ChunkViz"
    version: "{{ courseware_chunkviz_commit }}"
    update: false

- name: Install ChunkViz dependencies
  command: npm install
  args:
    chdir: "{{ courseware_repos_dir }}/ChunkViz"
    creates: "{{ courseware_repos_dir }}/ChunkViz/node_modules"

- name: Build ChunkViz
  command: npm run build
  args:
    chdir: "{{ courseware_repos_dir }}/ChunkViz"
    creates: "{{ courseware_repos_dir }}/ChunkViz/build/index.html"
@@ -0,0 +1,21 @@
---
# Default variables for common role

common_packages_debian:
  - python3
  - python3-pip
  - git
  - curl
  - wget
  - build-essential
  - cmake
  - ninja-build
  - libssl-dev
  - pkg-config

common_packages_macos:
  - python3
  - git
  - curl
  - cmake
  - ninja
@@ -0,0 +1,9 @@
#!/bin/sh
# Add the user-level bin directory to PATH if it is not already present.
case ":${PATH}:" in
    *:"$HOME/.local/bin":*)
        ;;
    *)
        export PATH="$HOME/.local/bin:$PATH"
        ;;
esac
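
Intended use is to dot-source this helper from a shell profile; a minimal sketch:

```bash
. "$HOME/.local/bin/env"   # idempotent: re-sourcing never duplicates the PATH entry
echo "$PATH"               # ~/.local/bin should now appear exactly once
```
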
@@ -0,0 +1,10 @@
---
# Handlers for common role

- name: Update PATH in shell config
  ansible.builtin.debug:
    msg: "PATH will be updated in the next shell session"

- name: Shell updated
  ansible.builtin.debug:
    msg: "Shell configuration updated. Please restart your shell for changes to take effect."
@@ -0,0 +1,92 @@
---
# Common setup tasks - runs on all platforms

- name: Ensure required system packages are installed (Debian/Ubuntu)
  ansible.builtin.apt:
    name:
      - python3
      - python3-pip
      - git
      - curl
      - wget
      - build-essential
      - cmake
      - ninja-build
      - libssl-dev
      - pkg-config
      - zstd
    state: present
    update_cache: no
  when: ansible_os_family == "Debian"
  become: yes

- name: Ensure base packages are installed (macOS via Homebrew)
  ansible.builtin.homebrew:
    name:
      - python3
      - git
      - curl
      - cmake
      - ninja
    state: present
  when: ansible_os_family == "Darwin"

- name: Install Python virtual environment module (user space)
  ansible.builtin.pip:
    name: virtualenv
    state: present
    executable: pip3
    extra_args: "--break-system-packages"
  become: no
  when: ansible_os_family == "Debian"

- name: Create lab base directory structure
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    mode: '0755'
  loop:
    - "{{ llmlab_base }}/lab1"
    - "{{ llmlab_base }}/lab2"
    - "{{ llmlab_base }}/lab3"
    - "{{ llmlab_base }}/lab4"
    - "{{ llmlab_base }}/lab5"
    - "{{ llmlab_base }}/lab6"
    - "{{ llmlab_base }}/.llmlab"
    - "{{ llmlab_base }}/.llmlab/logs"
  become: no

- name: Create .local/bin directory
  ansible.builtin.file:
    path: "{{ llmlab_base }}/.local/bin"
    state: directory
    mode: '0755'
  become: no

- name: Copy common environment script
  ansible.builtin.copy:
    src: "{{ playbook_dir }}/roles/common/files/env"
    dest: "{{ llmlab_base }}/.local/bin/env"
    mode: '0755'
    force: yes
  notify: Update PATH in shell config

- name: Ensure .local/bin is in PATH
  ansible.builtin.lineinfile:
    path: "{{ llmlab_base }}/.bashrc"
    line: 'export PATH="$HOME/.local/bin:$PATH"'
    state: present
    insertafter: EOF
  notify: Shell updated

- name: Create logs directory
  ansible.builtin.file:
    path: "{{ llmlab_base }}/.llmlab/logs"
    state: directory
    mode: '0755'

- name: Create setup log file
  ansible.builtin.file:
    path: "{{ llmlab_base }}/.llmlab/logs/setup.log"
    state: touch
    mode: '0644'
@@ -0,0 +1,90 @@
- name: Create managed state directories
  file:
    path: "{{ item }}"
    state: directory
    mode: "0755"
  loop:
    - "{{ courseware_state_dir }}"
    - "{{ courseware_markers_dir }}"
    - "{{ courseware_logs_dir }}"
    - "{{ courseware_run_dir }}"
    - "{{ courseware_repos_dir }}"
    - "{{ courseware_venvs_dir }}"
    - "{{ courseware_models_dir }}"
    - "{{ courseware_datasets_dir }}"
    - "{{ courseware_tools_dir }}"
    - "{{ courseware_apps_dir }}"
    - "{{ courseware_downloads_dir }}"
    - "{{ courseware_lab2_dir }}"
    - "{{ courseware_transformerlab_home }}"
    - "{{ courseware_unsloth_home }}"
    - "{{ courseware_ollama_models_dir }}"

- name: Seed managed ownership markers
  file:
    path: "{{ item }}"
    state: touch
    mode: "0644"
  loop:
    - "{{ courseware_transformerlab_home }}/.courseware-managed"
    - "{{ courseware_unsloth_home }}/.courseware-managed"

- name: Check existing TransformerLab path
  stat:
    path: "{{ ansible_env.HOME }}/.transformerlab"
    follow: false
  register: courseware_transformerlab_link

- name: Check existing TransformerLab ownership marker
  stat:
    path: "{{ ansible_env.HOME }}/.transformerlab/.courseware-managed"
  register: courseware_transformerlab_marker

- name: Fail if TransformerLab path is already occupied
  fail:
    msg: "{{ ansible_env.HOME }}/.transformerlab already exists and is not managed by this project."
  when:
    - courseware_transformerlab_link.stat.exists
    - >
      (
        (not courseware_transformerlab_link.stat.islnk) or
        (courseware_transformerlab_link.stat.islnk and
         courseware_transformerlab_link.stat.lnk_source != courseware_transformerlab_home)
      ) and
      (not courseware_transformerlab_marker.stat.exists)

- name: Link TransformerLab home into project state
  file:
    src: "{{ courseware_transformerlab_home }}"
    dest: "{{ ansible_env.HOME }}/.transformerlab"
    state: link
    force: true

- name: Check existing Unsloth path
  stat:
    path: "{{ ansible_env.HOME }}/.unsloth"
    follow: false
  register: courseware_unsloth_link

- name: Fail if Unsloth path is already occupied
  fail:
    msg: "{{ ansible_env.HOME }}/.unsloth already exists and is not managed by this project."
  when:
    - courseware_unsloth_link.stat.exists
    - >
      (not courseware_unsloth_link.stat.islnk) or
      (courseware_unsloth_link.stat.islnk and
       courseware_unsloth_link.stat.lnk_source != courseware_unsloth_home)

- name: Link Unsloth home into project state
  file:
    src: "{{ courseware_unsloth_home }}"
    dest: "{{ ansible_env.HOME }}/.unsloth"
    state: link
    force: true

- name: Write runtime environment file
  template:
    src: "{{ playbook_dir }}/../templates/runtime.env.j2"
    dest: "{{ courseware_state_dir }}/runtime.env"
    mode: "0644"
@@ -0,0 +1,25 @@
- name: Download Kiln Linux archive
  get_url:
    url: "https://github.com/Kiln-AI/Kiln/releases/download/{{ courseware_kiln_release_tag }}/Kiln.Linux.x64.zip"
    dest: "{{ courseware_downloads_dir }}/Kiln.Linux.x64.zip"
    mode: "0644"

- name: Create Kiln Linux directory
  file:
    path: "{{ courseware_apps_dir }}/kiln"
    state: directory
    mode: "0755"

- name: Unpack Kiln Linux binary
  unarchive:
    src: "{{ courseware_downloads_dir }}/Kiln.Linux.x64.zip"
    dest: "{{ courseware_apps_dir }}/kiln"
    remote_src: true
    creates: "{{ courseware_apps_dir }}/kiln/Kiln"

- name: Ensure Kiln Linux binary is executable
  file:
    path: "{{ courseware_apps_dir }}/kiln/Kiln"
    mode: "0755"
    state: file
@@ -0,0 +1,19 @@
- name: Download Kiln macOS disk image
  get_url:
    url: "https://github.com/Kiln-AI/Kiln/releases/download/{{ courseware_kiln_release_tag }}/Kiln.MacOS.AppleSilicon.M-Processor.dmg"
    dest: "{{ courseware_downloads_dir }}/Kiln.MacOS.AppleSilicon.M-Processor.dmg"
    mode: "0644"

- name: Install Kiln.app into project state
  shell: |
    set -euo pipefail
    mount_point=$(mktemp -d /tmp/kiln.XXXXXX)
    hdiutil attach "{{ courseware_downloads_dir }}/Kiln.MacOS.AppleSilicon.M-Processor.dmg" -mountpoint "$mount_point" -nobrowse -quiet
    app_path=$(find "$mount_point" -maxdepth 1 -name '*.app' | head -n 1)
    rm -rf "{{ courseware_apps_dir }}/Kiln.app"
    cp -R "$app_path" "{{ courseware_apps_dir }}/Kiln.app"
    hdiutil detach "$mount_point" -quiet
    rmdir "$mount_point"
  args:
    executable: /bin/bash
    creates: "{{ courseware_apps_dir }}/Kiln.app"
@@ -0,0 +1,8 @@
- name: Install Kiln on Linux
  include_tasks: linux.yml
  when: ansible_system == "Linux"

- name: Install Kiln on macOS
  include_tasks: macos.yml
  when: ansible_system == "Darwin"
@@ -0,0 +1,46 @@
#!/usr/bin/env bash
set -euo pipefail

# Wiki update script - runs as the student user.
# Clones the wiki repository on first run, fast-forwards it afterwards.

REPO_URL="${WIKI_REPO_URL:-https://git.zuccaro.me/bzuccaro/LLM-Labs.git}"
WIKI_DIR="${HOME}/wiki"
STUDENT_USER="${SUDO_USER:-student}"

run_as_student() {
    sudo -u "$STUDENT_USER" -- "$@"
}

if [ -d "$WIKI_DIR/.git" ]; then
    echo "Updating existing wiki..."
    run_as_student git -C "$WIKI_DIR" pull --ff-only
else
    echo "Cloning wiki repository..."
    # Reserve a unique path (-d so rmdir works), then remove it so git can
    # recreate it as the student user.
    tmp_dir=$(mktemp -d /tmp/wiki.clone.XXXXXX)
    chown "$STUDENT_USER:$STUDENT_USER" "$tmp_dir"
    rmdir "$tmp_dir"

    run_as_student git clone "$REPO_URL" "$tmp_dir"

    # Preserve node_modules and .next if they exist
    if [ -d "$WIKI_DIR/node_modules" ] && [ ! -e "$tmp_dir/node_modules" ]; then
        mv "$WIKI_DIR/node_modules" "$tmp_dir/node_modules"
    fi

    if [ -d "$WIKI_DIR/.next" ] && [ ! -e "$tmp_dir/.next" ]; then
        mv "$WIKI_DIR/.next" "$tmp_dir/.next"
    fi

    rm -rf "$WIKI_DIR"
    mv "$tmp_dir" "$WIKI_DIR"
    chown -R "$STUDENT_USER:$STUDENT_USER" "$WIKI_DIR"
fi

# Install dependencies if needed
if [ ! -d "$WIKI_DIR/node_modules" ]; then
    echo "Installing wiki dependencies..."
    run_as_student bash -lc "cd '$WIKI_DIR' && npm install --no-fund --no-audit"
fi

echo "Wiki updated successfully!"
@@ -0,0 +1,135 @@
---
# Lab start scripts setup

- name: Create lab1 start script (Transformer Lab)
  ansible.builtin.copy:
    dest: "{{ llmlab_base }}/lab1/start.sh"
    content: |
      #!/bin/bash
      set -e

      export NUMPY_DISABLE_OPTIMIZATION_CHECK=1
      source "{{ llmlab_base }}/.transformerlab/miniforge3/etc/profile.d/conda.sh"
      conda activate transformerlab

      cd "{{ llmlab_base }}/.transformerlab/src"
      ./run.sh
    mode: '0755'
    force: no

- name: Create lab2 start script (Ollama)
  ansible.builtin.copy:
    dest: "{{ llmlab_base }}/lab2/start.sh"
    content: |
      #!/bin/bash
      set -e

      echo "Starting Ollama..."

      # Check if already running
      if pgrep -f "ollama serve" > /dev/null; then
          echo "Ollama is already running."
          exit 0
      fi

      # Start Ollama
      nohup ollama serve > "{{ llmlab_base }}/.llmlab/logs/ollama.log" 2>&1 &
      echo "Ollama started (PID: $!)"
      echo "Ollama is available at http://localhost:11434"
    mode: '0755'
    force: no

- name: Create lab3 start script (Open WebUI)
  ansible.builtin.copy:
    dest: "{{ llmlab_base }}/lab3/start.sh"
    content: |
      #!/bin/bash
      set -e

      export OPEN_WEBUI_PORT=8080
      export OPEN_WEBUI_HOST=0.0.0.0

      # Check if already running
      if pgrep -f "open-webui serve" > /dev/null; then
          echo "Open WebUI is already running."
          exit 0
      fi

      # Start Open WebUI
      nohup open-webui serve \
          --port ${OPEN_WEBUI_PORT} \
          --host ${OPEN_WEBUI_HOST} \
          > "{{ llmlab_base }}/.llmlab/logs/open-webui.log" 2>&1 &

      echo "Open WebUI started on http://${OPEN_WEBUI_HOST}:${OPEN_WEBUI_PORT}"
      echo "PID: $!"
    mode: '0755'
    force: no

- name: Create lab4 start script (ChunkViz)
  ansible.builtin.copy:
    dest: "{{ llmlab_base }}/lab4/start.sh"
    content: |
      #!/bin/bash
      set -e

      CHUNKVIZ_PORT=${PORT:-3001}

      # Start ChunkViz in background
      cd "{{ llmlab_base }}/lab4/ChunkViz"
      nohup npm start > "{{ llmlab_base }}/.llmlab/logs/chunkviz.log" 2>&1 &
      CHUNKVIZ_PID=$!

      echo "ChunkViz started on http://0.0.0.0:${CHUNKVIZ_PORT}"
      echo "PID: ${CHUNKVIZ_PID}"
    mode: '0755'
    force: no

- name: Create lab6 start script (Promptfoo)
  ansible.builtin.copy:
    dest: "{{ llmlab_base }}/lab6/start.sh"
    content: |
      #!/bin/bash
      set -e

      cd "{{ llmlab_base }}/lab6"

      # Run Promptfoo evaluation
      npx promptfoo eval -c promptfoo.yaml
    mode: '0755'
    force: no

- name: Create lab stop scripts
  ansible.builtin.copy:
    dest: "{{ llmlab_base }}/lab{{ item }}/stop.sh"
    content: |
      #!/bin/bash
      set -e

      echo "Stopping Lab {{ item }}..."

      case "{{ item }}" in
        1)
          pkill -f "transformerlab.*run.sh" 2>/dev/null || true
          ;;
        2)
          pkill -f "ollama serve" 2>/dev/null || true
          ;;
        3)
          pkill -f "open-webui" 2>/dev/null || true
          ;;
        4)
          pkill -f "ChunkViz" 2>/dev/null || true
          ;;
        5)
          pkill -f "promptfoo" 2>/dev/null || true
          ;;
      esac

      echo "Lab {{ item }} stopped."
    mode: '0755'
  loop: [1, 2, 3, 4, 5]

- name: Display lab scripts creation
  ansible.builtin.debug:
    msg: "All lab start/stop scripts created in {{ llmlab_base }}/"
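
A usage sketch for the generated helpers; `LLMLAB_BASE` here is a hypothetical stand-in for whatever `llmlab_base` the play was run with:

```bash
"$LLMLAB_BASE/lab2/start.sh"   # boots Ollama and reports the API URL
"$LLMLAB_BASE/lab2/stop.sh"    # pkills the matching service
```
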
@@ -0,0 +1,44 @@
- name: Set lab asset source paths
  set_fact:
    courseware_lab2_asset_src: "{{ playbook_dir }}/../../assets/lab2/wiki.test.raw"
    courseware_lab4_asset_src: "{{ playbook_dir }}/../../assets/lab4/ttps_dataset.parquet"

- name: Check lab 2 asset presence in repo
  stat:
    path: "{{ courseware_lab2_asset_src }}"
  register: courseware_lab2_asset

- name: Check lab 4 asset presence in repo
  stat:
    path: "{{ courseware_lab4_asset_src }}"
  register: courseware_lab4_asset

- name: Fail if required lab assets are missing from this checkout
  fail:
    msg: >-
      Required lab assets were not found in this repo checkout.
      Expected:
      {{ courseware_lab2_asset_src }}
      and
      {{ courseware_lab4_asset_src }}.
      Make sure the full project was copied, including the assets/ directory.
  when:
    - not courseware_lab2_asset.stat.exists or not courseware_lab4_asset.stat.exists

- name: Copy lab 2 wiki test corpus
  copy:
    src: "{{ courseware_lab2_asset_src }}"
    dest: "{{ courseware_datasets_dir }}/wiki.test.raw"
    mode: "0644"

- name: Render lab 2 WhiteRabbitNeo download helper
  template:
    src: "{{ playbook_dir }}/../templates/download_whiterabbitneo-gguf.sh.j2"
    dest: "{{ courseware_lab2_dir }}/download_whiterabbitneo-gguf.sh"
    mode: "0755"

- name: Copy lab 4 parquet dataset
  copy:
    src: "{{ courseware_lab4_asset_src }}"
    dest: "{{ courseware_datasets_dir }}/ttps_dataset.parquet"
    mode: "0644"
@@ -0,0 +1,49 @@
---
# Linux GPU detection and validation tasks

- name: Check for NVIDIA GPU
  ansible.builtin.command: nvidia-smi
  register: nvidia_smi_output
  changed_when: false
  failed_when: false

- name: Display NVIDIA GPU information
  ansible.builtin.debug:
    msg: "NVIDIA GPU detected: {{ nvidia_smi_output.stdout_lines | join(', ') }}"
  when: nvidia_smi_output.rc == 0

- name: Validate NVIDIA VRAM
  ansible.builtin.set_fact:
    gpu_valid: true
    # Parenthesized so the division applies to the parsed MiB value, not just the literal 1024.
    gpu_vram_gb: "{{ (((nvidia_smi_output.stdout | regex_findall('(\\d+)MiB')) | first | default(0) | int) / 1024) | int }}"
  when: nvidia_smi_output.rc == 0

- name: Check VRAM requirement (8 GB minimum)
  ansible.builtin.debug:
    msg: "GPU VRAM: {{ gpu_vram_gb | default(0) }} GB - Requirement met: {{ (gpu_vram_gb | default(0) | int) >= 8 }}"
  when: gpu_valid is defined

- name: Warn about insufficient VRAM
  ansible.builtin.debug:
    msg: "WARNING: NVIDIA GPU has less than 8 GB VRAM. Some labs may not function correctly."
  when: gpu_vram_gb is defined and (gpu_vram_gb | int) < 8

- name: Check for AMD GPU (ROCm)
  ansible.builtin.command: rocminfo
  register: rocm_output
  changed_when: false
  failed_when: false
  when: nvidia_smi_output.rc != 0

- name: Display AMD GPU information
  ansible.builtin.debug:
    msg: "AMD GPU detected"
  when: rocm_output.rc is defined and rocm_output.rc == 0

- name: Set GPU type fact
  ansible.builtin.set_fact:
    gpu_type: "{{ 'nvidia' if nvidia_smi_output.rc == 0 else ('amd' if (rocm_output.rc | default(1)) == 0 else 'none') }}"

- name: Display GPU summary
  ansible.builtin.debug:
    msg: "GPU Type: {{ gpu_type | default('none') }}"
@@ -0,0 +1,172 @@
---
# llama.cpp installation and build

- name: Check if running on WSL
  ansible.builtin.command: grep -qi microsoft /proc/version
  register: wsl_check
  changed_when: false
  failed_when: false

- name: Set WSL fact
  ansible.builtin.set_fact:
    is_wsl: "{{ wsl_check.rc == 0 }}"

- name: Detect GPU on Linux/WSL
  ansible.builtin.command: nvidia-smi
  register: nvidia_smi_output
  changed_when: false
  failed_when: false
  when: ansible_os_family == "Debian" or is_wsl | default(false)

- name: Set GPU type for WSL/Linux
  ansible.builtin.set_fact:
    gpu_type: "{{ 'nvidia' if nvidia_smi_output.rc == 0 else 'none' }}"
  when: is_wsl | default(false) or ansible_os_family == "Debian"

- name: Check for Metal GPU on macOS
  ansible.builtin.command: system_profiler SPDisplaysDataType
  register: metal_check
  changed_when: false
  failed_when: false
  when: ansible_os_family == "Darwin"

- name: Set GPU type for macOS
  ansible.builtin.set_fact:
    gpu_type: "metal"
  when: ansible_os_family == "Darwin" and metal_check.rc == 0

- name: Display detected GPU type
  ansible.builtin.debug:
    msg: "llama.cpp GPU type: {{ gpu_type | default('none') }}"

- name: Check if llama.cpp already exists
  ansible.builtin.stat:
    path: "{{ llmlab_base }}/lab2/llama.cpp"
  register: llama_cpp_stat

- name: Check existing build config
  # shell (not command) because the detection relies on grep exit codes and redirects;
  # it reports which backend the current CMake cache was configured with.
  ansible.builtin.shell: |
    cache="{{ llmlab_base }}/lab2/llama.cpp/build/CMakeCache.txt"
    if grep -q "^GGML_CUDA:BOOL=ON" "$cache" 2>/dev/null; then echo cuda
    elif grep -q "^GGML_METAL:BOOL=ON" "$cache" 2>/dev/null; then echo metal
    elif grep -q "^GGML_ROCM:BOOL=ON" "$cache" 2>/dev/null; then echo amd
    else echo none
    fi
  register: existing_gpu_check
  changed_when: false
  failed_when: false
  when: llama_cpp_stat.stat.exists

- name: Determine if rebuild needed
  ansible.builtin.set_fact:
    needs_rebuild: >-
      {{
        not llama_cpp_stat.stat.exists or
        (gpu_type | default('none') == 'nvidia' and existing_gpu_check.stdout != 'cuda') or
        (gpu_type | default('none') == 'metal' and existing_gpu_check.stdout != 'metal') or
        (gpu_type | default('none') == 'amd' and existing_gpu_check.stdout != 'amd')
      }}

- name: Clean build directory for rebuild
  ansible.builtin.file:
    path: "{{ llmlab_base }}/lab2/llama.cpp/build"
    state: absent
  become: no
  when: needs_rebuild | default(false) | bool

- name: Clone llama.cpp repository
  ansible.builtin.git:
    repo: https://github.com/ggerganov/llama.cpp
    dest: "{{ llmlab_base }}/lab2/llama.cpp"
    version: master
    update: no
  become: no
  when: not llama_cpp_stat.stat.exists

- name: Create build directory
  ansible.builtin.file:
    path: "{{ llmlab_base }}/lab2/llama.cpp/build"
    state: directory
    mode: '0755'
  become: no

- name: Check whether llama.cpp is already configured
  ansible.builtin.command:
    cmd: test -f CMakeCache.txt
  args:
    chdir: "{{ llmlab_base }}/lab2/llama.cpp/build"
  register: cmake_configured
  changed_when: false
  failed_when: false

- name: Configure llama.cpp with CUDA (NVIDIA GPU)
  ansible.builtin.command:
    argv:
      - cmake
      - ..
      - -G
      - Ninja
      - -DCMAKE_BUILD_TYPE=Release
      - -DGGML_CUDA=on
  args:
    chdir: "{{ llmlab_base }}/lab2/llama.cpp/build"
  when: gpu_type | default('none') == 'nvidia' and cmake_configured.rc != 0
  become: no

- name: Configure llama.cpp for AMD (ROCm)
  ansible.builtin.command:
    argv:
      - cmake
      - ..
      - -G
      - Ninja
      - -DCMAKE_BUILD_TYPE=Release
      - -DGGML_ROCM=on
  args:
    chdir: "{{ llmlab_base }}/lab2/llama.cpp/build"
  when: gpu_type | default('none') == 'amd' and cmake_configured.rc != 0
  become: no

- name: Configure llama.cpp for Metal (macOS)
  ansible.builtin.command:
    argv:
      - cmake
      - ..
      - -G
      - Ninja
      - -DCMAKE_BUILD_TYPE=Release
      - -DGGML_METAL=on
  args:
    chdir: "{{ llmlab_base }}/lab2/llama.cpp/build"
  when: gpu_type | default('none') == 'metal' and cmake_configured.rc != 0
  become: no

- name: Configure llama.cpp for CPU only
  ansible.builtin.command:
    argv:
      - cmake
      - ..
      - -G
      - Ninja
      - -DCMAKE_BUILD_TYPE=Release
  args:
    chdir: "{{ llmlab_base }}/lab2/llama.cpp/build"
  when: gpu_type | default('none') == 'none' and cmake_configured.rc != 0
  become: no

- name: Build llama.cpp
  ansible.builtin.command:
    argv:
      - ninja
  args:
    chdir: "{{ llmlab_base }}/lab2/llama.cpp/build"
  become: no
  register: build_output

- name: Display build output
  ansible.builtin.debug:
    msg: "{{ build_output.stdout_lines[-10:] }}"
  when: build_output.stdout_lines is defined

- name: Add llama.cpp to user PATH
  ansible.builtin.lineinfile:
    path: "{{ llmlab_base }}/.bashrc"
    line: 'export PATH="{{ llmlab_base }}/lab2/llama.cpp/build/bin:$PATH"'
    state: present
    insertafter: EOF
  notify: Shell updated

- name: Display llama.cpp installation
  ansible.builtin.debug:
    msg: "llama.cpp installed to {{ llmlab_base }}/lab2/llama.cpp"
@@ -0,0 +1,106 @@
- name: Clone llama.cpp
  git:
    repo: "https://github.com/ggml-org/llama.cpp.git"
    dest: "{{ courseware_repos_dir }}/llama.cpp"
    version: "{{ courseware_llama_cpp_commit }}"
    update: false

- name: Check for CUDA compiler on Linux
  command: which nvcc
  register: courseware_llama_nvcc
  changed_when: false
  failed_when: false
  when: ansible_system == "Linux"

- name: Check for CUDA runtime header on Linux
  stat:
    path: "{{ item }}"
  loop:
    - /usr/local/cuda/include/cuda_runtime.h
    - /usr/include/cuda_runtime.h
  register: courseware_llama_cuda_headers
  when: ansible_system == "Linux"

- name: Fail early when CUDA toolkit is missing on Linux/WSL
  fail:
    msg: |
      CUDA Toolkit is not installed inside this Linux environment.

      `nvidia-smi` only proves that the NVIDIA driver is visible. It does not provide the Linux-side CUDA development toolkit needed to build CUDA-enabled llama.cpp.

      If you are using WSL, this is the common split:
        - Windows side: NVIDIA driver exposes the GPU to WSL
        - Linux side: CUDA toolkit still must exist inside the distro

      Fix it, then rerun:
        bash deploy-courseware.sh

      First try:
        sudo apt update
        sudo apt install -y nvidia-cuda-toolkit

      If that package is unavailable in your distro:
        1. add NVIDIA's CUDA apt repository for your Debian/Ubuntu release
        2. install the CUDA toolkit from that repository

      Verify with:
        nvcc --version
        ls /usr/local/cuda/include/cuda_runtime.h
  when:
    - ansible_system == "Linux"
    - courseware_llama_nvcc.rc != 0 or (courseware_llama_cuda_headers.results | selectattr('stat.exists', 'equalto', true) | list | length == 0)

- name: Set llama.cpp backend flag
  set_fact:
    courseware_llama_backend_flag: "{{ '-DGGML_METAL=ON' if ansible_system == 'Darwin' else '-DGGML_CUDA=ON' }}"

- name: Configure llama.cpp
  command:
    argv:
      - cmake
      - -S
      - "{{ courseware_repos_dir }}/llama.cpp"
      - -B
      - "{{ courseware_repos_dir }}/llama.cpp/build"
      - -DCMAKE_BUILD_TYPE=Release
      - "{{ courseware_llama_backend_flag }}"
  args:
    creates: "{{ courseware_repos_dir }}/llama.cpp/build/CMakeCache.txt"

- name: Build llama.cpp tools
  command:
    argv:
      - cmake
      - --build
      - "{{ courseware_repos_dir }}/llama.cpp/build"
      - --target
      - llama-cli
      - llama-quantize
      - llama-perplexity
      - llama-server
      - -j

- name: Check system PATH slots for llama.cpp tools
  stat:
    path: "/usr/local/bin/{{ item }}"
    follow: false
  loop:
    - llama-cli
    - llama-quantize
    - llama-perplexity
    - llama-server
  register: courseware_llama_path_slots
  when: ansible_system == "Linux"

- name: Link llama.cpp tools into /usr/local/bin
  become: true
  file:
    src: "{{ courseware_llama_cpp_bin_dir }}/{{ item.item }}"
    dest: "/usr/local/bin/{{ item.item }}"
    state: link
    force: true
  loop: "{{ courseware_llama_path_slots.results | default([]) }}"
  when:
    - ansible_system == "Linux"
    - not item.stat.exists or item.stat.islnk
    - not item.stat.exists or item.stat.lnk_source == (courseware_llama_cpp_bin_dir ~ '/' ~ item.item)
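
After this play, a quick sanity check that the pinned build landed on PATH (hedged: `--version` is a standard llama.cpp CLI flag at recent commits):

```bash
which llama-cli llama-server   # should resolve to /usr/local/bin symlinks
llama-cli --version            # prints the pinned build's version string
```
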
@@ -0,0 +1,74 @@
---
# LLaMA Factory installation and setup

- name: Create LLaMA Factory directory
  ansible.builtin.file:
    path: "{{ llmlab_base }}/lab5/LLaMA-Factory"
    state: directory
    mode: '0755'

- name: Check if LLaMA Factory already cloned
  ansible.builtin.stat:
    path: "{{ llmlab_base }}/lab5/LLaMA-Factory/.git"
  register: llm_factory_git_check

- name: Clone LLaMA Factory repository
  ansible.builtin.git:
    repo: https://github.com/hiyouga/LLaMA-Factory.git
    dest: "{{ llmlab_base }}/lab5/LLaMA-Factory"
    version: main
    update: no
  become: no
  when: not llm_factory_git_check.stat.exists

- name: Create LLaMA Factory virtual environment
  ansible.builtin.command:
    cmd: python3 -m venv "{{ llmlab_base }}/lab5/LLaMA-Factory/.venv"
  args:
    creates: "{{ llmlab_base }}/lab5/LLaMA-Factory/.venv/bin/activate"
  become: no

- name: Upgrade pip in the virtual environment
  ansible.builtin.shell: |
    source "{{ llmlab_base }}/lab5/LLaMA-Factory/.venv/bin/activate"
    pip install --upgrade pip
  args:
    chdir: "{{ llmlab_base }}/lab5/LLaMA-Factory"
    executable: /bin/bash
  become: no
  register: pip_install_result
  changed_when: pip_install_result.rc == 0

- name: Install LLaMA Factory with GPU support
  ansible.builtin.shell: |
    source "{{ llmlab_base }}/lab5/LLaMA-Factory/.venv/bin/activate"
    pip install -e ".[torch,metrics]"
  args:
    chdir: "{{ llmlab_base }}/lab5/LLaMA-Factory"
    executable: /bin/bash
  become: no
  register: install_result
  failed_when: install_result.rc != 0

- name: Create LLaMA Factory start script
  ansible.builtin.copy:
    dest: "{{ llmlab_base }}/lab5/start.sh"
    content: |
      #!/bin/bash
      set -e

      # Activate virtual environment
      source "{{ llmlab_base }}/lab5/LLaMA-Factory/.venv/bin/activate"

      # Navigate to LLaMA-Factory directory
      cd "{{ llmlab_base }}/lab5/LLaMA-Factory"

      # Launch LLaMA Board web interface
      llamafactory-cli webui
    mode: '0755'

- name: Display LLaMA Factory installation
  ansible.builtin.debug:
    msg: "LLaMA Factory installed to {{ llmlab_base }}/lab5/LLaMA-Factory"
@@ -0,0 +1,24 @@
- name: Create local Node runtime directory
  file:
    path: "{{ courseware_node_runtime_dir }}"
    state: directory
    mode: "0755"

- name: Install contained Node runtime for web tooling
  command:
    argv:
      - npm
      - install
      - "node@{{ courseware_node_runtime_version }}"
  args:
    chdir: "{{ courseware_node_runtime_dir }}"
    creates: "{{ courseware_node_runtime_bin_dir }}/node"

- name: Allow contained Node runtime to bind low ports on Linux
  become: true
  command:
    argv:
      - setcap
      - cap_net_bind_service=+ep
      - "{{ courseware_node_runtime_bin_dir }}/node"
  when: ansible_system == "Linux"
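
To confirm the capability took, query it back; the path below mirrors `courseware_node_runtime_bin_dir` with default vars:

```bash
getcap state/tools/node-runtime/node_modules/node/bin/node
# expected output ends with: cap_net_bind_service=ep
```
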
@@ -0,0 +1,5 @@
---
# Default variables for Ollama role

ollama_port: 11434
ollama_host: "0.0.0.0"
@@ -0,0 +1,9 @@
---
# Handlers for Ollama role

- name: Start Ollama service
  ansible.builtin.systemd:
    name: ollama
    state: started
    enabled: yes
  become: yes
@@ -0,0 +1,67 @@
---
# Ollama installation and setup

- name: Check if Ollama is already installed
  ansible.builtin.command: ollama --version
  register: ollama_version_check
  changed_when: false
  failed_when: false

- name: Install Ollama (Linux)
  ansible.builtin.shell: |
    curl -fsSL https://ollama.com/install.sh | sh
  when:
    - ansible_os_family == "Debian"
    - ollama_version_check.rc != 0
  become: yes
  notify: Start Ollama service

- name: Install Ollama (macOS via Homebrew)
  ansible.builtin.homebrew:
    name: ollama
    state: present
  when:
    - ansible_os_family == "Darwin"
    - ollama_version_check.rc != 0

- name: Check if Ollama service exists
  ansible.builtin.command: systemctl list-unit-files ollama.service
  register: ollama_service_check
  changed_when: false
  failed_when: false
  when: ansible_os_family == "Debian"

- name: Ensure Ollama service is running (Linux)
  ansible.builtin.systemd:
    name: ollama
    state: started
    enabled: yes
  become: yes
  when:
    - ansible_os_family == "Debian"
    - ollama_version_check.rc == 0
    - ollama_service_check.stdout is defined
    - "'ollama.service' in ollama_service_check.stdout"
  ignore_errors: yes

- name: Start Ollama manually if no systemd service
  ansible.builtin.shell: |
    pkill -f "ollama serve" 2>/dev/null || true
    nohup ollama serve > "{{ llmlab_base }}/.llmlab/logs/ollama.log" 2>&1 &
  when:
    - ansible_os_family == "Debian"
    - ollama_version_check.rc == 0
    - ollama_service_check.stdout is not defined or 'ollama.service' not in ollama_service_check.stdout
  become: no
  ignore_errors: yes

- name: Wait for Ollama to be ready
  ansible.builtin.wait_for:
    port: 11434
    delay: 5
    timeout: 60
  when: ollama_version_check.rc == 0

- name: Display Ollama version
  ansible.builtin.debug:
    msg: "Ollama installed: {{ ollama_version_check.stdout }}"
@@ -0,0 +1,30 @@
- name: Start Ollama before model pulls
  command:
    argv:
      - "{{ courseware_root }}/scripts/service_manager.sh"
      - start
      - ollama
  changed_when: false

- name: Pull core Ollama models
  command:
    argv:
      - "{{ courseware_ollama_bin }}"
      - pull
      - "{{ item }}"
  environment:
    OLLAMA_HOST: "{{ courseware_bind_host }}:{{ courseware_ports.ollama }}"
    OLLAMA_MODELS: "{{ courseware_ollama_models_dir }}"
  loop: "{{ courseware_ollama_models }}"

- name: Pull optional heavy Ollama models
  command:
    argv:
      - "{{ courseware_ollama_bin }}"
      - pull
      - "{{ item }}"
  environment:
    OLLAMA_HOST: "{{ courseware_bind_host }}:{{ courseware_ports.ollama }}"
    OLLAMA_MODELS: "{{ courseware_ollama_models_dir }}"
  loop: "{{ courseware_optional_ollama_models }}"
  when: courseware_install_optional_heavy_models | bool
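
The manual equivalent of one pull, with the same environment the tasks set; the model name is the first entry of `courseware_ollama_models`:

```bash
OLLAMA_HOST=0.0.0.0:11434 \
OLLAMA_MODELS="$PWD/state/models/ollama" \
ollama pull llama3.2
```
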
@@ -0,0 +1,53 @@
---
# Open WebUI installation and setup
# Uses the Docker installation, which is more compatible and the recommended method

- name: Check if Docker is installed
  ansible.builtin.command: docker --version
  register: docker_check
  changed_when: false
  failed_when: false

- name: Display Open WebUI installation method
  ansible.builtin.debug:
    msg: "Open WebUI will be installed via Docker. Run: docker run -d -p 3080:8080 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main"
  when: docker_check.rc == 0

- name: Skip Open WebUI pip installation (use Docker instead)
  ansible.builtin.debug:
    msg: "Skipping pip installation of Open WebUI due to Python version incompatibility. Use Docker instead."
  when: docker_check.rc != 0

- name: Create Open WebUI start script (Docker)
  ansible.builtin.copy:
    dest: "{{ llmlab_base }}/lab3/start.sh"
    content: |
      #!/bin/bash
      set -e

      # Check if Docker is installed
      if ! command -v docker &> /dev/null; then
          echo "Error: Docker is not installed. Please install Docker first."
          exit 1
      fi

      # Check if container is already running
      if docker ps | grep -q open-webui; then
          echo "Open WebUI is already running."
          exit 0
      fi

      # Start Open WebUI container
      docker run -d \
          -p 3080:8080 \
          -v open-webui:/app/backend/data \
          --name open-webui \
          --restart always \
          ghcr.io/open-webui/open-webui:main

      echo "Open WebUI started on http://localhost:3080"
    mode: '0755'

- name: Display Open WebUI installation
  ansible.builtin.debug:
    msg: "Open WebUI installed via Docker. Access at http://localhost:3080"
@@ -0,0 +1,69 @@
- name: Create Open WebUI data directory
  file:
    path: "{{ courseware_state_dir }}/open-webui"
    state: directory
    mode: "0755"

- name: Create Open WebUI virtual environment
  command:
    argv:
      - "{{ courseware_python_bin }}"
      - -m
      - venv
      - "{{ courseware_venvs_dir }}/open-webui"
  args:
    creates: "{{ courseware_venvs_dir }}/open-webui/bin/python"

- name: Upgrade Open WebUI venv tooling
  command:
    argv:
      - "{{ courseware_venvs_dir }}/open-webui/bin/python"
      - -m
      - pip
      - install
      - --upgrade
      - pip
      - setuptools
      - wheel

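# Both installs below pin `numpy<2`, keeping these venvs on the NumPy 1.x ABI.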
- name: Install Open WebUI
  command:
    argv:
      - "{{ courseware_venvs_dir }}/open-webui/bin/python"
      - -m
      - pip
      - install
      - "{{ courseware_open_webui_spec }}"
      - "numpy<2"

- name: Create Embedding Atlas virtual environment
  command:
    argv:
      - "{{ courseware_python_bin }}"
      - -m
      - venv
      - "{{ courseware_venvs_dir }}/embedding-atlas"
  args:
    creates: "{{ courseware_venvs_dir }}/embedding-atlas/bin/python"

- name: Upgrade Embedding Atlas venv tooling
  command:
    argv:
      - "{{ courseware_venvs_dir }}/embedding-atlas/bin/python"
      - -m
      - pip
      - install
      - --upgrade
      - pip
      - setuptools
      - wheel

- name: Install Embedding Atlas
  command:
    argv:
      - "{{ courseware_venvs_dir }}/embedding-atlas/bin/python"
      - -m
      - pip
      - install
      - "{{ courseware_embedding_atlas_spec }}"
      - "numpy<2"
@@ -0,0 +1,155 @@
- name: Install Debian/Ubuntu prerequisites
  become: true
  apt:
    name:
      - build-essential
      - ca-certificates
      - cmake
      - curl
      - git
      - git-lfs
      - libcap2-bin
      - libcurl4-openssl-dev
      - nodejs
      - npm
      - pkg-config
      - python3
      - python3-pip
      - python3-venv
      - unzip
      - zstd
    state: present
    update_cache: true

- name: Query CUDA toolkit apt candidate
  command: apt-cache policy nvidia-cuda-toolkit
  register: courseware_cuda_toolkit_policy
  changed_when: false
  failed_when: false

- name: Check for nvcc
  command: which nvcc
  register: courseware_nvcc_check
  changed_when: false
  failed_when: false

- name: Set CUDA toolkit package availability
  set_fact:
    courseware_cuda_toolkit_package_available: >-
      {{
        courseware_cuda_toolkit_policy.rc == 0
        and 'Candidate: (none)' not in courseware_cuda_toolkit_policy.stdout
      }}

- name: Install distro CUDA toolkit when available
  block:
    - name: Install nvidia-cuda-toolkit
      become: true
      apt:
        name: nvidia-cuda-toolkit
        state: present
  rescue:
    - name: Fail with CUDA toolkit guidance after apt install error
      fail:
        msg: |
          CUDA Toolkit could not be installed from the distro package manager.

          This installer needs the Linux-side CUDA toolkit for llama.cpp, not just a working `nvidia-smi`.

          Try this first:
            sudo apt update
            sudo apt install -y nvidia-cuda-toolkit

          If that still fails, add NVIDIA's CUDA repository for your Debian/Ubuntu release and install the toolkit from there.

          Verify with:
            nvcc --version
            ls /usr/local/cuda/include/cuda_runtime.h
  when:
    - not courseware_is_wsl
    - courseware_cuda_toolkit_package_available
    - courseware_nvcc_check.rc != 0

- name: Fail with CUDA toolkit guidance when no apt candidate exists
  fail:
    msg: |
      CUDA Toolkit is not available from this distro's current apt sources.

      This installer needs the Linux-side CUDA toolkit for llama.cpp, not just a working `nvidia-smi`.

      On WSL this usually means:
        - Windows side: the NVIDIA driver is installed correctly
        - Linux side: the CUDA toolkit repository is still missing

      Add NVIDIA's CUDA repository for your Debian/Ubuntu release, install the toolkit, then rerun:
        bash deploy-courseware.sh

      Verify with:
        nvcc --version
        ls /usr/local/cuda/include/cuda_runtime.h
  when:
    - not courseware_is_wsl
    - not courseware_cuda_toolkit_package_available
    - courseware_nvcc_check.rc != 0

- name: Check for Ollama binary
  command: which ollama
  register: courseware_ollama_check
  changed_when: false
  failed_when: false

- name: Install Ollama
  become: true
  shell: curl -fsSL https://ollama.com/install.sh | sh
  args:
    creates: /usr/local/bin/ollama
  when: courseware_ollama_check.rc != 0

- name: Mark Ollama as installed by courseware
  file:
    path: "{{ courseware_ollama_install_marker }}"
    state: touch
    mode: "0644"
  when: courseware_ollama_check.rc != 0

- name: Check for courseware-managed Ollama install marker
  stat:
    path: "{{ courseware_ollama_install_marker }}"
  register: courseware_ollama_install_marker_before

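# Adopt a pre-existing user-local Ollama only in narrow cases: WSL with
# systemd, not running as the dedicated "ollama" service account, and a HOME
# under /home/ rather than the service-account homes the upstream installer uses.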
- name: Adopt existing local Ollama install into courseware management
  file:
    path: "{{ courseware_ollama_install_marker }}"
    state: touch
    mode: "0644"
  when:
    - not courseware_ollama_install_marker_before.stat.exists
    - courseware_ollama_check.rc == 0
    - ansible_system == "Linux"
    - ansible_service_mgr == "systemd"
    - courseware_is_wsl | bool
    - ansible_user_id != "ollama"
    - ansible_env.HOME is search('/home/')
    - ansible_env.HOME != '/usr/share/ollama'
    - ansible_env.HOME != '/var/lib/ollama'

- name: Refresh courseware-managed Ollama install marker
  stat:
    path: "{{ courseware_ollama_install_marker }}"
  register: courseware_ollama_install_marker_stat

- name: Check for Ollama systemd unit
  stat:
    path: /etc/systemd/system/ollama.service
  register: courseware_ollama_systemd_unit

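# The lab runs Ollama as a user-level process through service_manager.sh, so
# the system-wide unit installed by the upstream script is taken out of play.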
- name: Stop and disable courseware-managed Ollama systemd service
  become: true
  systemd:
    name: ollama
    state: stopped
    enabled: false
  when:
    - ansible_service_mgr == "systemd"
    - courseware_ollama_install_marker_stat.stat.exists
    - courseware_ollama_systemd_unit.stat.exists
@@ -0,0 +1,29 @@
- name: Check installed Homebrew formulas
  command: "brew list --versions {{ item }}"
  loop:
    - git
    - git-lfs
    - cmake
    - node
    - python@3.11
    - ollama
  register: courseware_brew_checks
  changed_when: false
  failed_when: false

- name: Install missing Homebrew formulas
  command: "brew install {{ item.item }}"
  loop: "{{ courseware_brew_checks.results }}"
  when: item.rc != 0

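# Touch the marker only when the check loop shows ollama had no installed
# version before this run, i.e. Homebrew installed it on our behalf.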
- name: Mark Ollama as installed by courseware on macOS
  file:
    path: "{{ courseware_ollama_install_marker }}"
    state: touch
    mode: "0644"
  when:
    - courseware_brew_checks.results
      | selectattr('item', 'equalto', 'ollama')
      | selectattr('rc', 'ne', 0)
      | list
      | length > 0
@@ -0,0 +1,8 @@
- name: Install macOS prerequisites
  include_tasks: macos.yml
  when: ansible_system == "Darwin"

- name: Install Linux prerequisites
  include_tasks: linux.yml
  when: ansible_system == "Linux"
@@ -0,0 +1,329 @@
- name: Detect WSL
  set_fact:
    courseware_is_wsl: "{{ 'microsoft' in ansible_kernel | lower or 'wsl' in ansible_kernel | lower }}"

- name: Fail on unsupported operating systems
  fail:
    msg: "Supported platforms are Apple Silicon macOS and Debian-family Linux/WSL."
  when: ansible_system not in ["Darwin", "Linux"]

- name: Fail on unsupported macOS architecture
  fail:
    msg: "This installer supports Apple Silicon Macs only."
  when:
    - ansible_system == "Darwin"
    - ansible_architecture not in ["arm64", "aarch64"]

- name: Fail on undersized macOS systems
  fail:
    msg: "This courseware assumes a modern Apple Silicon Mac with at least 16 GB of unified memory."
  when:
    - ansible_system == "Darwin"
    - (ansible_memtotal_mb | int) < 16000

- name: Check for Xcode command line tools
  command: xcode-select -p
  register: courseware_xcode_select
  changed_when: false
  failed_when: false
  when: ansible_system == "Darwin"

- name: Check for Homebrew
  command: which brew
  register: courseware_brew_check
  changed_when: false
  failed_when: false
  when: ansible_system == "Darwin"

- name: Fail when Xcode command line tools are missing
  fail:
    msg: "Install Xcode Command Line Tools first with 'xcode-select --install'."
  when:
    - ansible_system == "Darwin"
    - courseware_xcode_select.rc != 0

- name: Fail when Homebrew is missing
  fail:
    msg: "Install Homebrew first from https://brew.sh/."
  when:
    - ansible_system == "Darwin"
    - courseware_brew_check.rc != 0

- name: Fail on unsupported Linux family
  fail:
    msg: "This installer currently supports Debian and Ubuntu only."
  when:
    - ansible_system == "Linux"
    - ansible_os_family != "Debian"

- name: Query NVIDIA GPU memory
  command: nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits
  register: courseware_gpu_memory
  changed_when: false
  failed_when: false
  when: ansible_system == "Linux"

- name: Query NVIDIA GPU names
  command: nvidia-smi --query-gpu=name --format=csv,noheader
  register: courseware_gpu_names
  changed_when: false
  failed_when: false
  when: ansible_system == "Linux"

- name: Fail when no supported NVIDIA GPU is visible
  fail:
    msg: "Linux/WSL requires an NVIDIA GPU visible to nvidia-smi."
  when:
    - ansible_system == "Linux"
    - courseware_gpu_memory.rc != 0

- name: Fail when GPU VRAM is below baseline
  fail:
    msg: "This build assumes at least 8 GB of VRAM on Linux/WSL."
  when:
    - ansible_system == "Linux"
    - (courseware_gpu_memory.stdout_lines | map('int') | max) < 8192

- name: Check for CUDA compiler on Linux
  command: which nvcc
  register: courseware_preflight_nvcc
  changed_when: false
  failed_when: false
  when: ansible_system == "Linux"

- name: Check for CUDA runtime header on Linux
  stat:
    path: "{{ item }}"
  loop:
    - /usr/local/cuda/include/cuda_runtime.h
    - /usr/include/cuda_runtime.h
  register: courseware_preflight_cuda_headers
  when: ansible_system == "Linux"

- name: Set CUDA toolkit readiness
  set_fact:
    courseware_cuda_toolkit_ready: >-
      {{
        courseware_preflight_nvcc.rc == 0
        or (courseware_preflight_cuda_headers.results | selectattr('stat.exists', 'equalto', true) | list | length > 0)
      }}
  when: ansible_system == "Linux"

- name: Query distro CUDA toolkit apt candidate
  command: apt-cache policy nvidia-cuda-toolkit
  register: courseware_preflight_cuda_toolkit_policy
  changed_when: false
  failed_when: false
  when:
    - ansible_system == "Linux"
    - ansible_os_family == "Debian"

- name: Set distro CUDA toolkit package availability
  set_fact:
    courseware_preflight_cuda_toolkit_package_available: >-
      {{
        courseware_preflight_cuda_toolkit_policy.rc == 0
        and 'Candidate: (none)' not in courseware_preflight_cuda_toolkit_policy.stdout
      }}
  when:
    - ansible_system == "Linux"
    - ansible_os_family == "Debian"

- name: Fail when automatic WSL CUDA bootstrap is unsupported
  fail:
    msg: "Automatic CUDA bootstrap currently supports Ubuntu x86_64 on WSL only. For other WSL distros, install the CUDA toolkit manually before rerunning."
  when:
    - ansible_system == "Linux"
    - courseware_is_wsl
    - not courseware_cuda_toolkit_ready
    - ansible_distribution != "Ubuntu" or ansible_architecture not in ["x86_64", "amd64"]

- name: Install distro CUDA toolkit on Ubuntu WSL when available
  become: true
  apt:
    name: nvidia-cuda-toolkit
    state: present
    update_cache: true
  when:
    - ansible_system == "Linux"
    - courseware_is_wsl
    - not courseware_cuda_toolkit_ready
    - ansible_distribution == "Ubuntu"
    - ansible_architecture in ["x86_64", "amd64"]
    - courseware_preflight_cuda_toolkit_package_available | default(false)

- name: Recheck CUDA compiler after distro toolkit install
  command: which nvcc
  register: courseware_preflight_nvcc_after_distro_install
  changed_when: false
  failed_when: false
  when:
    - ansible_system == "Linux"
    - courseware_is_wsl
    - not courseware_cuda_toolkit_ready
    - ansible_distribution == "Ubuntu"
    - ansible_architecture in ["x86_64", "amd64"]
    - courseware_preflight_cuda_toolkit_package_available | default(false)

- name: Recheck CUDA runtime header after distro toolkit install
  stat:
    path: "{{ item }}"
  loop:
    - /usr/local/cuda/include/cuda_runtime.h
    - /usr/include/cuda_runtime.h
  register: courseware_preflight_cuda_headers_after_distro_install
  when:
    - ansible_system == "Linux"
    - courseware_is_wsl
    - not courseware_cuda_toolkit_ready
    - ansible_distribution == "Ubuntu"
    - ansible_architecture in ["x86_64", "amd64"]
    - courseware_preflight_cuda_toolkit_package_available | default(false)

- name: Refresh CUDA toolkit readiness after distro toolkit install
  set_fact:
    courseware_cuda_toolkit_ready: >-
      {{
        courseware_preflight_nvcc_after_distro_install.rc == 0
        or (courseware_preflight_cuda_headers_after_distro_install.results | selectattr('stat.exists', 'equalto', true) | list | length > 0)
      }}
  when:
    - ansible_system == "Linux"
    - courseware_is_wsl
    - not courseware_cuda_toolkit_ready
    - ansible_distribution == "Ubuntu"
    - ansible_architecture in ["x86_64", "amd64"]
    - courseware_preflight_cuda_toolkit_package_available | default(false)

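# NVIDIA rotated its CUDA repository signing keys in 2022; its published
# migration step is `apt-key del 7fa2af80` to drop the legacy key before
# trusting the new keyring package, which is what the next task does.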
- name: Remove legacy NVIDIA CUDA apt key when preparing WSL toolkit install
  become: true
  command: apt-key del 7fa2af80
  register: courseware_wsl_cuda_apt_key_delete
  changed_when: courseware_wsl_cuda_apt_key_delete.rc == 0
  failed_when: false
  when:
    - ansible_system == "Linux"
    - courseware_is_wsl
    - not courseware_cuda_toolkit_ready
    - ansible_distribution == "Ubuntu"
    - ansible_architecture in ["x86_64", "amd64"]

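# The tasks below mirror NVIDIA's WSL-Ubuntu repository bootstrap: fetch the
# apt pin, install the local repository package, trust its keyring, then
# install the toolkit package only (no Linux GPU driver inside WSL).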
- name: Download NVIDIA WSL CUDA apt pin
  become: true
  get_url:
    url: "{{ courseware_wsl_cuda_pin_url }}"
    dest: "{{ courseware_wsl_cuda_pin_dest }}"
    mode: "0644"
    force: true
  when:
    - ansible_system == "Linux"
    - courseware_is_wsl
    - not courseware_cuda_toolkit_ready
    - ansible_distribution == "Ubuntu"
    - ansible_architecture in ["x86_64", "amd64"]

- name: Download NVIDIA WSL CUDA local installer
  get_url:
    url: "{{ courseware_wsl_cuda_installer_url }}"
    dest: "{{ courseware_wsl_cuda_installer_local_path }}"
    mode: "0644"
    force: false
  when:
    - ansible_system == "Linux"
    - courseware_is_wsl
    - not courseware_cuda_toolkit_ready
    - ansible_distribution == "Ubuntu"
    - ansible_architecture in ["x86_64", "amd64"]

- name: Install NVIDIA WSL CUDA local repository package
  become: true
  apt:
    deb: "{{ courseware_wsl_cuda_installer_local_path }}"
    state: present
  when:
    - ansible_system == "Linux"
    - courseware_is_wsl
    - not courseware_cuda_toolkit_ready
    - ansible_distribution == "Ubuntu"
    - ansible_architecture in ["x86_64", "amd64"]

- name: Find NVIDIA WSL CUDA keyring
  become: true
  find:
    paths: "{{ courseware_wsl_cuda_repo_dir }}"
    patterns: "cuda-*-keyring.gpg"
    file_type: file
  register: courseware_wsl_cuda_keyring
  when:
    - ansible_system == "Linux"
    - courseware_is_wsl
    - not courseware_cuda_toolkit_ready
    - ansible_distribution == "Ubuntu"
    - ansible_architecture in ["x86_64", "amd64"]

- name: Fail when NVIDIA WSL CUDA keyring is missing
  fail:
    msg: "The NVIDIA WSL CUDA repository package was installed, but its keyring file was not found under {{ courseware_wsl_cuda_repo_dir }}."
  when:
    - ansible_system == "Linux"
    - courseware_is_wsl
    - not courseware_cuda_toolkit_ready
    - ansible_distribution == "Ubuntu"
    - ansible_architecture in ["x86_64", "amd64"]
    - (courseware_wsl_cuda_keyring.files | length) == 0

- name: Copy NVIDIA WSL CUDA keyring into trusted keyrings
  become: true
  copy:
    src: "{{ courseware_wsl_cuda_keyring.files[0].path }}"
    dest: "/usr/share/keyrings/{{ courseware_wsl_cuda_keyring.files[0].path | basename }}"
    remote_src: true
    mode: "0644"
  when:
    - ansible_system == "Linux"
    - courseware_is_wsl
    - not courseware_cuda_toolkit_ready
    - ansible_distribution == "Ubuntu"
    - ansible_architecture in ["x86_64", "amd64"]
    - (courseware_wsl_cuda_keyring.files | length) > 0

- name: Install NVIDIA WSL CUDA toolkit
  become: true
  apt:
    name: "{{ courseware_wsl_cuda_toolkit_package }}"
    state: present
    update_cache: true
  when:
    - ansible_system == "Linux"
    - courseware_is_wsl
    - not courseware_cuda_toolkit_ready
    - ansible_distribution == "Ubuntu"
    - ansible_architecture in ["x86_64", "amd64"]

- name: Recheck CUDA compiler after WSL toolkit install
  command: which nvcc
  register: courseware_preflight_nvcc_after_install
  changed_when: false
  failed_when: false
  when:
    - ansible_system == "Linux"
    - courseware_is_wsl
    - not courseware_cuda_toolkit_ready
    - ansible_distribution == "Ubuntu"
    - ansible_architecture in ["x86_64", "amd64"]

- name: Fail when CUDA toolkit is still missing after WSL install attempt
  fail:
    msg: "The NVIDIA WSL CUDA toolkit install completed, but `nvcc` is still missing. Verify the repository package and rerun the installer."
  when:
    - ansible_system == "Linux"
    - courseware_is_wsl
    - not courseware_cuda_toolkit_ready
    - ansible_distribution == "Ubuntu"
    - ansible_architecture in ["x86_64", "amd64"]
    - courseware_preflight_nvcc_after_install.rc != 0

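# Downstream roles consume these as the interpreter and Ollama entrypoints:
# Homebrew's pinned python3.11 on macOS, the distro python3 elsewhere.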
- name: Set runtime binary defaults
  set_fact:
    courseware_python_bin: >-
      {{ '/opt/homebrew/opt/python@3.11/bin/python3.11' if ansible_system == 'Darwin' else '/usr/bin/python3' }}
    courseware_ollama_bin: "ollama"
@@ -0,0 +1,27 @@
- name: Create Promptfoo working directories
  file:
    path: "{{ item }}"
    state: directory
    mode: "0755"
  loop:
    - "{{ courseware_tools_dir }}/promptfoo"
    - "{{ courseware_lab6_dir }}"

- name: Install Promptfoo locally
  command: "npm install promptfoo@{{ courseware_promptfoo_version }}"
  args:
    chdir: "{{ courseware_tools_dir }}/promptfoo"
  environment:
    PATH: "{{ courseware_node_runtime_bin_dir }}:{{ ansible_env.PATH }}"

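# Patches the vendored dist/ file in place so the promptfoo server listens on
# the courseware bind host instead of the package default; a fresh npm install
# restores the stock file, after which this replace simply reapplies.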
- name: Force Promptfoo server to bind to the configured host
  replace:
    path: "{{ courseware_tools_dir }}/promptfoo/node_modules/promptfoo/dist/src/server/server.js"
    regexp: '\.listen\(port, \(\) => \{'
    replace: ".listen(port, process.env.COURSEWARE_BIND_HOST || '0.0.0.0', () => {"

- name: Render promptfoo starter config
  template:
    src: "{{ playbook_dir }}/../templates/promptfoo.yaml.j2"
    dest: "{{ courseware_lab6_dir }}/promptfoo.yaml"
    mode: "0644"
@@ -0,0 +1,139 @@
- name: Bootstrap TransformerLab release files
  shell: |
    set -euo pipefail
    cd "{{ courseware_transformerlab_home }}"
    curl -L "https://github.com/transformerlab/transformerlab-app/archive/refs/tags/{{ courseware_transformerlab_version }}.tar.gz" -o transformerlab.tar.gz
    tar -xzf transformerlab.tar.gz
    rm -f transformerlab.tar.gz
    rm -rf src
    mv "transformerlab-app-{{ courseware_transformerlab_version_dir }}/api" src
    echo "{{ courseware_transformerlab_version }}" > src/LATEST_VERSION
    curl -L "https://github.com/transformerlab/transformerlab-app/releases/download/{{ courseware_transformerlab_version }}/transformerlab_web.tar.gz" -o transformerlab_web.tar.gz
    rm -rf webapp
    mkdir -p webapp
    tar -xzf transformerlab_web.tar.gz -C webapp
    rm -f transformerlab_web.tar.gz
  args:
    executable: /bin/bash
    creates: "{{ courseware_transformerlab_home }}/src/install.sh"

- name: Add TransformerLab Miniforge Python path for space-safe bootstrap
  replace:
    path: "{{ courseware_transformerlab_home }}/src/install.sh"
    regexp: 'CONDA_BIN=\$\{MINIFORGE_ROOT\}/bin/conda\n'
    replace: |
      CONDA_BIN=${MINIFORGE_ROOT}/bin/conda
      CONDA_PYTHON_BIN=${MINIFORGE_ROOT}/bin/python
  when: "' ' in courseware_transformerlab_home"

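# Conda's entry points hard-code absolute shebang paths, which break when the
# install prefix contains spaces. The helper below probes direct execution and
# falls back to launching conda through the Miniforge python interpreter.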
- name: Inject space-safe TransformerLab conda runner
  blockinfile:
    path: "{{ courseware_transformerlab_home }}/src/install.sh"
    insertbefore: '^check_conda\(\) \{$'
    marker: '# {mark} courseware conda runner'
    block: |
      conda_direct_exec_works() {
        "${CONDA_BIN}" --version >/dev/null 2>&1
      }

      run_conda() {
        if conda_direct_exec_works; then
          "${CONDA_BIN}" "$@"
        else
          "${CONDA_PYTHON_BIN}" "${CONDA_BIN}" "$@"
        fi
      }
  when: "' ' in courseware_transformerlab_home"

- name: Rewrite TransformerLab installer to use the space-safe conda runner
  replace:
    path: "{{ courseware_transformerlab_home }}/src/install.sh"
    regexp: 'eval "\$\(\$\{CONDA_BIN\} shell\.bash hook\)"'
    replace: 'eval "$(run_conda shell.bash hook)"'
  when: "' ' in courseware_transformerlab_home"

- name: Rewrite TransformerLab doctor output to use the space-safe conda runner
  replace:
    path: "{{ courseware_transformerlab_home }}/src/install.sh"
    regexp: '\$\(\$\{CONDA_BIN\} --version\)'
    replace: '$(run_conda --version)'
  when: "' ' in courseware_transformerlab_home"

- name: Install TransformerLab
  shell: |
    set -euo pipefail
    ./src/install.sh 2>&1 | tee "{{ courseware_logs_dir }}/transformerlab_install.log"
    touch "{{ courseware_transformerlab_home }}/.courseware-managed"
  args:
    executable: /bin/bash
    chdir: "{{ courseware_transformerlab_home }}"
    creates: "{{ courseware_transformerlab_home }}/miniforge3/bin/conda"

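# Rewrites each Miniforge entrypoint shebang from the real (space-containing)
# prefix to the canonical ~/.transformerlab path, which is assumed to resolve
# to the same tree without spaces. Note: `chmod --reference` is GNU coreutils.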
- name: Rewrite TransformerLab Miniforge entrypoints to a space-safe shebang path
  shell: |
    set -euo pipefail
    actual_prefix="{{ courseware_transformerlab_home }}/miniforge3/bin/"
    safe_prefix="{{ ansible_env.HOME }}/.transformerlab/miniforge3/bin/"

    find "{{ courseware_transformerlab_home }}/miniforge3/bin" -maxdepth 1 -type f -print0 |
      while IFS= read -r -d '' file; do
        first_line=$(head -n 1 "$file" || true)
        case "$first_line" in
          "#!${actual_prefix}"*)
            suffix=${first_line#\#!}
            suffix=${suffix#"${actual_prefix}"}
            replacement="#!${safe_prefix}${suffix}"
            tmp_file=$(mktemp)
            {
              printf '%s\n' "$replacement"
              tail -n +2 "$file"
            } >"$tmp_file"
            chmod --reference="$file" "$tmp_file"
            mv "$tmp_file" "$file"
            ;;
        esac
      done
  args:
    executable: /bin/bash
  when: "' ' in courseware_transformerlab_home"

- name: Install TransformerLab multiuser dependencies
  shell: |
    set -euo pipefail
    ./src/install.sh multiuser_setup 2>&1 | tee "{{ courseware_logs_dir }}/transformerlab_multiuser_setup.log"
    touch "{{ courseware_transformerlab_home }}/.courseware-managed"
  args:
    executable: /bin/bash
    chdir: "{{ courseware_transformerlab_home }}"
    creates: "{{ courseware_transformerlab_home }}/envs/general-uv/bin/python"

- name: Check TransformerLab general uv environment
  stat:
    path: "{{ courseware_transformerlab_home }}/envs/general-uv/bin/python"
  register: courseware_transformerlab_general_uv

- name: Retry TransformerLab multiuser setup after source refresh
  shell: |
    set -euo pipefail
    ./src/install.sh multiuser_setup 2>&1 | tee "{{ courseware_logs_dir }}/transformerlab_multiuser_setup_retry.log"
  args:
    executable: /bin/bash
    chdir: "{{ courseware_transformerlab_home }}"
  when: not courseware_transformerlab_general_uv.stat.exists

- name: Recheck TransformerLab general uv environment
  stat:
    path: "{{ courseware_transformerlab_home }}/envs/general-uv/bin/python"
  register: courseware_transformerlab_general_uv

- name: Mark TransformerLab multiuser setup complete
  file:
    path: "{{ courseware_transformerlab_home }}/.multiuser_setup_complete"
    state: touch
    mode: "0644"
  when: courseware_transformerlab_general_uv.stat.exists

- name: Fail if TransformerLab general uv environment is missing
  fail:
    msg: "TransformerLab multiuser setup completed without creating {{ courseware_transformerlab_home }}/envs/general-uv/bin/python."
  when: not courseware_transformerlab_general_uv.stat.exists
@@ -0,0 +1,43 @@
- name: Download Unsloth Studio installer
  get_url:
    url: "{{ courseware_unsloth_installer_url }}"
    dest: "{{ courseware_unsloth_installer_path }}"
    mode: "0755"

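# block/rescue keeps failures actionable: the installer runs under `timeout`
# with output captured to a log, `creates:` makes reruns idempotent, and the
# rescue path surfaces the log tail instead of a bare non-zero exit.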
- name: Install Unsloth Studio
  block:
    - name: Run Unsloth Studio installer
      shell: |
        set -euo pipefail
        timeout "{{ courseware_unsloth_install_timeout_seconds }}" \
          bash "{{ courseware_unsloth_installer_path }}" --python "{{ courseware_unsloth_python_version }}" \
          > "{{ courseware_logs_dir }}/unsloth-install.log" 2>&1
        touch "{{ courseware_unsloth_home }}/.courseware-managed"
        touch "{{ courseware_unsloth_home }}/.install_complete"
      args:
        executable: /bin/bash
        creates: "{{ courseware_unsloth_home }}/.install_complete"
  rescue:
    - name: Capture Unsloth installer log tail
      shell: |
        if [ -f "{{ courseware_logs_dir }}/unsloth-install.log" ]; then
          tail -n 80 "{{ courseware_logs_dir }}/unsloth-install.log"
        fi
      args:
        executable: /bin/bash
      register: courseware_unsloth_install_log_tail
      changed_when: false
      failed_when: false

    - name: Fail with Unsloth installer guidance
      fail:
        msg: |
          Unsloth Studio install failed or timed out.

          Review the full log at:
            {{ courseware_logs_dir }}/unsloth-install.log

          The installer is pinned to Python {{ courseware_unsloth_python_version }} to avoid slower, less predictable dependency resolution on Linux/WSL.

          Last log lines:
          {{ courseware_unsloth_install_log_tail.stdout | default('(no log output captured)') }}
@@ -0,0 +1,60 @@
diff --git a/src/app/labs/[slug]/page.tsx b/src/app/labs/[slug]/page.tsx
index f67308f..a6aac38 100644
--- a/src/app/labs/[slug]/page.tsx
+++ b/src/app/labs/[slug]/page.tsx
@@ -462,6 +462,19 @@ function markdownToHtml(markdown: string) {
   return micromark(convertGfmTables(markdown), { allowDangerousHtml: true });
 }
 
+function addNoReferrerToExternalImages(html: string) {
+  return html.replace(/<img\b([^>]*?)>/gi, (imageTag, rawAttrs: string) => {
+    const srcMatch = /\bsrc=(['"])(https?:\/\/[^"']+)\1/i.exec(rawAttrs);
+    if (!srcMatch || /\breferrerpolicy\s*=/i.test(rawAttrs)) return imageTag;
+
+    const trimmedAttrs = rawAttrs.trimEnd();
+    const isSelfClosing = trimmedAttrs.endsWith("/");
+    const attrs = isSelfClosing ? trimmedAttrs.slice(0, -1).trimEnd() : rawAttrs;
+
+    return `<img${attrs} referrerpolicy="no-referrer"${isSelfClosing ? " /" : ""}>`;
+  });
+}
+
 export async function generateStaticParams() {
   return getLabSummaries().map((lab) => ({ slug: lab.slug }));
 }
@@ -503,14 +516,15 @@ export default async function LabPage({
       stripOrdinals: breakoutStyle === "instruction-rails",
     }),
   );
-  const htmlContent =
+  const htmlContent = addNoReferrerToExternalImages(
     breakoutStyle === "none"
       ? baseHtml
       : transformOutsideDetails(baseHtml, (safeHtml) =>
          segmentStepSections(markExplicitInstructionElements(safeHtml, {
            commandPills: breakoutStyle === "command-pills",
          })),
-  );
+    ),
+  );
 
   return (
     <main className="mx-auto w-full max-w-5xl px-6 py-10">
diff --git a/src/components/labs/LabContent.tsx b/src/components/labs/LabContent.tsx
index 7a7ce52..8778a23 100644
--- a/src/components/labs/LabContent.tsx
+++ b/src/components/labs/LabContent.tsx
@@ -277,7 +277,12 @@ export function LabContent({ className, html }: LabContentProps) {
         >
           <div className="lab-image-modal__surface" onClick={(event) => event.stopPropagation()}>
             {/* eslint-disable-next-line @next/next/no-img-element */}
-            <img className="lab-image-modal__image" src={zoomedImage.src} alt={zoomedImage.alt} />
+            <img
+              className="lab-image-modal__image"
+              src={zoomedImage.src}
+              alt={zoomedImage.alt}
+              referrerPolicy="no-referrer"
+            />
           </div>
         </div>
       ) : null}
@@ -0,0 +1,51 @@
- name: Clone lab wiki
  git:
    repo: "{{ courseware_wiki_repo }}"
    dest: "{{ courseware_wiki_repo_dir }}"
    update: false

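# Idempotency check: `git apply --reverse --check` succeeds only if the patch
# is already present, so the apply task below runs just once per checkout.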
- name: Check whether wiki referrer policy patch is already applied
  command:
    argv:
      - git
      - apply
      - --reverse
      - --check
      - "{{ role_path }}/files/referrer-policy.patch"
  args:
    chdir: "{{ courseware_wiki_repo_dir }}"
  register: courseware_wiki_referrer_policy_patch
  changed_when: false
  failed_when: false

- name: Apply managed wiki referrer policy patch
  command:
    argv:
      - git
      - apply
      - "{{ role_path }}/files/referrer-policy.patch"
  args:
    chdir: "{{ courseware_wiki_repo_dir }}"
  when: courseware_wiki_referrer_policy_patch.rc != 0

- name: Install wiki dependencies with contained Node runtime
  command: npm install
  args:
    chdir: "{{ courseware_wiki_repo_dir }}"
    creates: "{{ courseware_wiki_repo_dir }}/node_modules/next/package.json"
  environment:
    PATH: "{{ courseware_node_runtime_bin_dir }}:{{ ansible_env.PATH }}"

- name: Stat wiki build output
  stat:
    path: "{{ courseware_wiki_repo_dir }}/.next/BUILD_ID"
  register: courseware_wiki_build

- name: Build wiki for managed service startup
  command: npm run build
  args:
    chdir: "{{ courseware_wiki_repo_dir }}"
  environment:
    PATH: "{{ courseware_node_runtime_bin_dir }}:{{ ansible_env.PATH }}"
  when:
    - not courseware_wiki_build.stat.exists or courseware_wiki_referrer_policy_patch.rc != 0
@@ -0,0 +1,63 @@
---
# Main playbook for LLM Labs deployment
# This playbook orchestrates all installation and configuration tasks

- name: Deploy LLM Labs Environment
  hosts: localhost
  gather_facts: yes
  vars:
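    # NB: the self-referential defaults below only resolve cleanly when the
    # values arrive as extra vars (e.g. -e platform=wsl); referencing a vars
    # entry from its own definition can otherwise trip Ansible's recursive
    # template detection.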
platform: "{{ platform | default('linux-cpu') }}"
|
||||
gpu_type: "{{ gpu_type | default('none') }}"
|
||||
user_home: "{{ ansible_env.HOME }}"
|
||||
# Use the original user's home directory, not root's when using become
|
||||
llmlab_base: "{{ ansible_env.HOME | default('/home/' + ansible_user_id) }}"
|
||||
|
||||
tasks:
|
||||
- name: Display platform information
|
||||
ansible.builtin.debug:
|
||||
msg: "Deploying on platform: {{ platform }}"
|
||||
|
||||
- name: Include common setup
|
||||
ansible.builtin.import_role:
|
||||
name: common
|
||||
|
||||
- name: Include GPU setup (Linux/WSL)
|
||||
ansible.builtin.import_role:
|
||||
name: linux-gpu
|
||||
when: platform in ['linux-gpu', 'linux-amd', 'wsl']
|
||||
|
||||
- name: Include Ollama setup
|
||||
ansible.builtin.import_role:
|
||||
name: ollama
|
||||
|
||||
- name: Include llama.cpp setup
|
||||
ansible.builtin.import_role:
|
||||
name: llama-cpp
|
||||
|
||||
- name: Include Transformer Lab setup
|
||||
ansible.builtin.import_role:
|
||||
name: transformerlab
|
||||
|
||||
- name: Include Unsloth Studio setup
|
||||
ansible.builtin.import_role:
|
||||
name: unsloth
|
||||
|
||||
- name: Include Open WebUI setup
|
||||
ansible.builtin.import_role:
|
||||
name: open-webui
|
||||
|
||||
- name: Include ChunkViz setup
|
||||
ansible.builtin.import_role:
|
||||
name: chunkviz
|
||||
|
||||
- name: Include Promptfoo setup
|
||||
ansible.builtin.import_role:
|
||||
name: promptfoo
|
||||
|
||||
- name: Include lab scripts setup
|
||||
ansible.builtin.import_role:
|
||||
name: lab-scripts
|
||||
|
||||
- name: Display completion message
|
||||
ansible.builtin.debug:
|
||||
msg: "LLM Labs deployment completed successfully!"
|
||||
@@ -0,0 +1,170 @@
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
STATE_DIR=$(cd "$SCRIPT_DIR/.." && pwd)
RUNTIME_ENV="$STATE_DIR/runtime.env"

if [ ! -f "$RUNTIME_ENV" ]; then
  echo "Missing $RUNTIME_ENV. Run ./labctl up first." >&2
  exit 1
fi

# shellcheck disable=SC1090
. "$RUNTIME_ENV"

REPO_URL="https://huggingface.co/{{ courseware_white_rabbit_repo }}"
REPO_REF="main"
REPO_DIR="$COURSEWARE_STATE_DIR/downloads/WhiteRabbitNeo_WhiteRabbitNeo-V3-7B-GGUF"
TARGET_DIR="$COURSEWARE_STATE_DIR/models/WhiteRabbitNeo"
REGISTER_WITH_OLLAMA=1

DOWNLOAD_FILES=(
{% for variant in courseware_white_rabbit_variants | unique(attribute='filename') %}
  "{{ variant.filename }}"
{% endfor %}
)

MODEL_NAMES=(
{% for variant in courseware_white_rabbit_variants %}
  "{{ variant.ollama_model }}"
{% endfor %}
)

MODEL_FILES=(
{% for variant in courseware_white_rabbit_variants %}
  "{{ variant.filename }}"
{% endfor %}
)

usage() {
  cat <<'EOF'
Usage: ./download_whiterabbitneo-gguf.sh [--download-only]

Downloads the WhiteRabbitNeo GGUF variants used in lab 2 with git + git-lfs.
By default it also registers local Ollama aliases after the files are present.

Options:
  --download-only  Skip Ollama model registration
  -h, --help       Show this help text
EOF
}

require_cmd() {
  if ! command -v "$1" >/dev/null 2>&1; then
    echo "Missing required command: $1" >&2
    exit 1
  fi
}

parse_args() {
  while [ $# -gt 0 ]; do
    case "$1" in
      --download-only)
        REGISTER_WITH_OLLAMA=0
        ;;
      -h|--help)
        usage
        exit 0
        ;;
      *)
        echo "Unknown option: $1" >&2
        usage >&2
        exit 1
        ;;
    esac
    shift
  done
}

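# Download strategy: a blob-less (--filter=blob:none) clone with sparse
# checkout limited to the selected GGUF files, and GIT_LFS_SKIP_SMUDGE so the
# multi-GB weights are only fetched by the explicit `git lfs pull` later.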
setup_repo() {
  mkdir -p "$COURSEWARE_STATE_DIR/downloads"

  if [ ! -d "$REPO_DIR/.git" ]; then
    rm -rf "$REPO_DIR"
    GIT_LFS_SKIP_SMUDGE=1 git clone --filter=blob:none --no-checkout "$REPO_URL" "$REPO_DIR"
  fi

  git -C "$REPO_DIR" remote set-url origin "$REPO_URL"
  git -C "$REPO_DIR" sparse-checkout init --no-cone
  git -C "$REPO_DIR" sparse-checkout set -- "${DOWNLOAD_FILES[@]}"
  git -C "$REPO_DIR" fetch --depth=1 origin "$REPO_REF"
  git -C "$REPO_DIR" checkout -f --detach FETCH_HEAD
  git -C "$REPO_DIR" lfs install --local >/dev/null
}

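# download_files joins DOWNLOAD_FILES into a comma-separated pattern with the
# `IFS=,` printf idiom, which is the format `git lfs pull --include` expects.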
download_files() {
  local lfs_include

  mkdir -p "$TARGET_DIR"
  lfs_include=$(IFS=,; printf '%s' "${DOWNLOAD_FILES[*]}")

  git -C "$REPO_DIR" lfs pull origin --include="$lfs_include" --exclude=""

  for file in "${DOWNLOAD_FILES[@]}"; do
    cp -f "$REPO_DIR/$file" "$TARGET_DIR/$file"
  done
}

start_ollama_if_needed() {
  if curl -fsS "http://$COURSEWARE_BIND_HOST:$COURSEWARE_OLLAMA_PORT/api/tags" >/dev/null 2>&1; then
    return 0
  fi

  if [ -x "$COURSEWARE_ROOT/scripts/service_manager.sh" ]; then
    "$COURSEWARE_ROOT/scripts/service_manager.sh" start ollama >/dev/null
  fi

  curl -fsS "http://$COURSEWARE_BIND_HOST:$COURSEWARE_OLLAMA_PORT/api/tags" >/dev/null 2>&1
}

register_models() {
  local i
  local modelfile

  if [ "$REGISTER_WITH_OLLAMA" -eq 0 ]; then
    echo "Skipping Ollama registration because --download-only was requested."
    return 0
  fi

  if ! command -v "$OLLAMA_BIN" >/dev/null 2>&1; then
    echo "Ollama is not installed or not on PATH; downloads completed but model registration was skipped." >&2
    return 0
  fi

  if ! start_ollama_if_needed; then
    echo "Ollama is not reachable on http://$COURSEWARE_BIND_HOST:$COURSEWARE_OLLAMA_PORT; downloads completed but model registration was skipped." >&2
    return 0
  fi

  export OLLAMA_HOST="$COURSEWARE_BIND_HOST:$COURSEWARE_OLLAMA_PORT"
  export OLLAMA_MODELS="$OLLAMA_MODELS_DIR"

  for i in "${!MODEL_NAMES[@]}"; do
    modelfile="$TARGET_DIR/Modelfile.${MODEL_NAMES[$i]}"
    printf 'FROM %s/%s\n' "$TARGET_DIR" "${MODEL_FILES[$i]}" >"$modelfile"
    "$OLLAMA_BIN" create "${MODEL_NAMES[$i]}" -f "$modelfile"
  done
}

main() {
  parse_args "$@"
  require_cmd git
  require_cmd git-lfs
  git lfs install --skip-repo >/dev/null

  echo "Preparing WhiteRabbitNeo GGUF checkout in $REPO_DIR"
  setup_repo

  echo "Downloading selected WhiteRabbitNeo GGUF variants into $TARGET_DIR"
  download_files

  echo "Download complete:"
  printf '  - %s\n' "${DOWNLOAD_FILES[@]}"

  register_models

  echo "WhiteRabbitNeo lab 2 setup is ready."
}

main "$@"
@@ -0,0 +1,14 @@
description: Local evaluation placeholder for lab 6
providers:
  - id: openai:chat
    label: Local Ollama
    config:
      apiBaseUrl: http://{{ courseware_bind_host }}:{{ courseware_ports.ollama }}/v1
      apiKey: local-only
      model: qwen3.5:4b
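# The quoted-string expression below renders a literal double-braced prompt
# placeholder for promptfoo: Ansible's Jinja pass evaluates the outer
# expression and emits the inner braces verbatim.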
prompts:
  - "{{ '{{prompt}}' }}"
tests:
  - vars:
      prompt: "Summarize the purpose of this lab environment in one sentence."
@@ -0,0 +1,30 @@
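# Rendered by Ansible into the lab state directory as runtime.env and sourced
# by the helper scripts; every value is resolved at deploy time.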
COURSEWARE_ROOT="{{ courseware_root }}"
COURSEWARE_STATE_DIR="{{ courseware_state_dir }}"
COURSEWARE_BIND_HOST="{{ courseware_bind_host }}"
COURSEWARE_URL_HOST="{{ courseware_url_host }}"
COURSEWARE_OLLAMA_PORT="{{ courseware_ports.ollama }}"
COURSEWARE_OPEN_WEBUI_PORT="{{ courseware_ports.open_webui }}"
COURSEWARE_TRANSFORMERLAB_PORT="{{ courseware_ports.transformerlab }}"
COURSEWARE_CHUNKVIZ_PORT="{{ courseware_ports.chunkviz }}"
COURSEWARE_EMBEDDING_ATLAS_PORT="{{ courseware_ports.embedding_atlas }}"
COURSEWARE_UNSLOTH_PORT="{{ courseware_ports.unsloth }}"
COURSEWARE_PROMPTFOO_PORT="{{ courseware_ports.promptfoo }}"
COURSEWARE_WIKI_PORT="{{ courseware_ports.wiki }}"
OLLAMA_BIN="{{ courseware_ollama_bin }}"
OLLAMA_MODELS_DIR="{{ courseware_ollama_models_dir }}"
NODE_RUNTIME_BIN_DIR="{{ courseware_node_runtime_bin_dir }}"
OPEN_WEBUI_VENV="{{ courseware_venvs_dir }}/open-webui"
OPEN_WEBUI_DATA_DIR="{{ courseware_state_dir }}/open-webui"
CHUNKVIZ_DIR="{{ courseware_repos_dir }}/ChunkViz"
EMBEDDING_ATLAS_VENV="{{ courseware_venvs_dir }}/embedding-atlas"
TTPS_DATASET_PATH="{{ courseware_datasets_dir }}/ttps_dataset.parquet"
WIKI_TEST_RAW_PATH="{{ courseware_datasets_dir }}/wiki.test.raw"
TRANSFORMERLAB_DIR="{{ courseware_transformerlab_home }}"
UNSLOTH_BIN="{{ ansible_env.HOME }}/.local/bin/unsloth"
PROMPTFOO_DIR="{{ courseware_promptfoo_dir }}"
PROMPTFOO_BIN="{{ courseware_tools_dir }}/promptfoo/node_modules/.bin/promptfoo"
WIKI_DIR="{{ courseware_wiki_repo_dir }}"
LLAMA_CPP_BIN_DIR="{{ courseware_llama_cpp_bin_dir }}"
KILN_LINUX_BIN="{{ courseware_apps_dir }}/kiln/Kiln"
KILN_MAC_APP="{{ courseware_apps_dir }}/Kiln.app"
KILN_LAUNCH_PATH="{% if ansible_system == 'Darwin' %}{{ courseware_apps_dir }}/Kiln.app{% else %}{{ courseware_apps_dir }}/kiln/Kiln{% endif %}"
@@ -0,0 +1,8 @@
#!/usr/bin/env bash
set -euo pipefail

ROOT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)

"$ROOT_DIR/labctl" up
"$ROOT_DIR/labctl" urls
@@ -0,0 +1,7 @@
#!/usr/bin/env bash
set -euo pipefail

ROOT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)

"$ROOT_DIR/labctl" down
@@ -0,0 +1,514 @@
#!/usr/bin/env bash
set -euo pipefail

ROOT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
ANSIBLE_VENV="$ROOT_DIR/.venv-ansible"
ANSIBLE_PYTHON="$ANSIBLE_VENV/bin/python"
ANSIBLE_PLAYBOOK="$ANSIBLE_VENV/bin/ansible-playbook"
export ANSIBLE_CONFIG="$ROOT_DIR/ansible/ansible.cfg"

usage() {
  cat <<'EOF'
Usage:
  ./labctl up
  ./labctl down
  ./labctl preflight
  ./labctl versions
  ./labctl start [core|all|service...]
  ./labctl stop [all|service...]
  ./labctl status [all|service...]
  ./labctl urls
  ./labctl open kiln
  ./labctl logs <service>
EOF
}

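# Scrapes the pinned release out of ansible/group_vars/all.yml with sed so the
# CLI can print it without booting Python or Ansible.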
transformerlab_version() {
  local version_file=$ROOT_DIR/ansible/group_vars/all.yml

  if [ ! -f "$version_file" ]; then
    printf '%s\n' "unknown"
    return
  fi

  sed -nE 's/^courseware_transformerlab_version:[[:space:]]*"([^"]+)".*/\1/p' "$version_file" | head -n 1
}

print_versions() {
  cat <<EOF
Pinned component versions:
  TransformerLab: $(transformerlab_version) (single-user pinned install)
  Ansible Core: 2.18.6
EOF
}

confirm_installation() {
  local response
  local tlab_version
  tlab_version=$(transformerlab_version)

  if [ ! -t 0 ]; then
    cat <<EOF >&2
WARNING: THIS SCRIPT WILL CONFIGURE YOUR ENVIRONMENT WITH THE FOLLOWING SOFTWARE:

  - Ollama
  - llama.cpp
  - TransformerLab (single-user pinned to ${tlab_version})
  - Open WebUI
  - ChunkViz
  - Embedding Atlas
  - Promptfoo
  - Unsloth Studio
  - Kiln Desktop
  - Course-specific support assets for lab 2 and lab 4

IT IS RECOMMENDED TO RUN THIS IN AN ISOLATED ENVIRONMENT (Dedicated WSL, VM, etc.)

This process may take a long time.

This command requires interactive confirmation. Re-run it from a terminal and answer the prompt.
EOF
    exit 1
  fi

  cat <<EOF
WARNING: THIS SCRIPT WILL CONFIGURE YOUR ENVIRONMENT WITH THE FOLLOWING SOFTWARE:

  - Ollama
  - llama.cpp
  - TransformerLab (single-user pinned to ${tlab_version})
  - Open WebUI
  - ChunkViz
  - Embedding Atlas
  - Promptfoo
  - Unsloth Studio
  - Kiln Desktop
  - Course-specific support assets for lab 2 and lab 4

IT IS RECOMMENDED TO RUN THIS IN AN ISOLATED ENVIRONMENT (Dedicated WSL, VM, etc.)

This process may take a long time.
EOF

  read -r -p "CONFIRM INSTALLATION (y/N): " response
  case "$response" in
    y|Y)
      ;;
    *)
      echo "Installation cancelled."
      exit 1
      ;;
  esac
}

host_is_wsl() {
  [ "$(uname -s)" = "Linux" ] && uname -r | grep -qiE 'microsoft|wsl'
}

host_is_arm_mac() {
  [ "$(uname -s)" = "Darwin" ] && [ "$(uname -m)" = "arm64" ]
}

check_wsl_gpu_readiness() {
  if host_is_arm_mac; then
    return
  fi

  if ! host_is_wsl; then
    return
  fi

  if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
    return
  fi

  cat <<'EOF' >&2
WSL GPU support is not ready yet, so the installer is stopping before Ansible runs.

This courseware expects WSL to already see your NVIDIA GPU.

Please do this on the Windows side first:
  1. Install or update the current NVIDIA Windows driver with WSL/CUDA support.
  2. Open Windows PowerShell and run: wsl --update
  3. Fully restart WSL: wsl --shutdown
  4. Reopen your Linux distro and confirm this works: nvidia-smi

If `nvidia-smi` still fails inside WSL:
  - Reboot Windows
  - Confirm WSL 2 is in use
  - Confirm the GPU is NVIDIA and the Windows driver is recent

After `nvidia-smi` works inside WSL, rerun:
  bash deploy-courseware.sh
EOF
  exit 1
}

linux_cuda_toolkit_is_ready() {
  command -v nvcc >/dev/null 2>&1 && return 0
  [ -f /usr/local/cuda/include/cuda_runtime.h ] && return 0
  [ -f /usr/include/cuda_runtime.h ] && return 0
  return 1
}

linux_cuda_toolkit_package_is_available() {
  command -v apt-cache >/dev/null 2>&1 || return 1
  apt-cache show nvidia-cuda-toolkit >/dev/null 2>&1
}

print_linux_cuda_toolkit_guidance() {
  local package_hint
  package_hint="sudo apt update && sudo apt install -y nvidia-cuda-toolkit"

  if command -v apt-cache >/dev/null 2>&1; then
    if ! apt-cache show nvidia-cuda-toolkit >/dev/null 2>&1; then
      package_hint="Your distro does not expose nvidia-cuda-toolkit in its default apt sources, so add NVIDIA's CUDA repository for your Debian/Ubuntu release and install the toolkit from there."
    fi
  fi

  if host_is_wsl; then
    cat <<EOF >&2
CUDA Toolkit is still missing inside this WSL Linux environment, so the installer is stopping before Ansible runs.

WSL splits GPU support into two layers:
  - Windows side: the NVIDIA driver makes the GPU visible to WSL
  - Linux side: llama.cpp still needs the CUDA toolkit headers/compiler inside the distro

What to do:
  1. Confirm the driver side works in WSL: nvidia-smi
  2. Install the Linux-side CUDA toolkit
     $package_hint
  3. Verify the toolkit:
     nvcc --version
     ls /usr/local/cuda/include/cuda_runtime.h

After that, rerun:
  bash deploy-courseware.sh
EOF
    return
  fi

  cat <<EOF >&2
CUDA Toolkit is not installed inside this Linux environment, so the installer is stopping before Ansible runs.

This courseware can only build CUDA-enabled llama.cpp when the Linux-side toolkit is present.

What to do:
  1. Install the CUDA toolkit
     $package_hint
  2. Verify the toolkit:
     nvcc --version
     ls /usr/local/cuda/include/cuda_runtime.h

After that, rerun:
  bash deploy-courseware.sh
EOF
}

check_linux_cuda_toolkit() {
  if host_is_arm_mac; then
    return
  fi

  if [ "$(uname -s)" != "Linux" ]; then
    return
  fi

  if host_is_wsl; then
    return
  fi

  if linux_cuda_toolkit_is_ready; then
    return
  fi

  if linux_cuda_toolkit_package_is_available; then
    return
  fi

  print_linux_cuda_toolkit_guidance
  exit 1
}

resolve_python() {
  if [ -n "${PYTHON_BIN:-}" ] && command -v "${PYTHON_BIN}" >/dev/null 2>&1; then
    command -v "${PYTHON_BIN}"
    return
  fi

  if command -v python3 >/dev/null 2>&1; then
    command -v python3
    return
  fi

  if command -v python >/dev/null 2>&1; then
    command -v python
    return
  fi

  cat <<'EOF' >&2
Python 3 was not found.

Install it first, then rerun this command:
  - Debian/Ubuntu/WSL: sudo apt update && sudo apt install -y python3 python3-venv
  - macOS: brew install python@3.11
EOF
  exit 1
}

host_is_debian_family() {
  [ -f /etc/os-release ] || return 1
  grep -qiE '^(ID|ID_LIKE)=.*debian' /etc/os-release
}

python_has_venv_support() {
  local python_bin=$1
  local probe_parent
  local probe_venv

  probe_parent=$(mktemp -d 2>/dev/null || mktemp -d -t labctl-venv-probe)
  probe_venv="$probe_parent/venv"

  if "$python_bin" -m venv "$probe_venv" >/dev/null 2>&1; then
    rm -rf "$probe_parent"
    return 0
  fi

  rm -rf "$probe_parent"
  return 1
}

install_debian_python_venv_support() {
  local python_bin=$1
  local installer=(apt-get)
  local versioned_venv_pkg
  local packages=(python3-venv python3-pip)

  if ! command -v apt-get >/dev/null 2>&1; then
    return 1
  fi

  if [ "$(id -u)" -ne 0 ]; then
    if ! command -v sudo >/dev/null 2>&1; then
      cat <<'EOF' >&2
Python can be found, but this Debian/Ubuntu system is missing the venv bootstrap packages that Ansible needs.

Install them, then rerun this command:
  apt-get update && apt-get install -y python3-venv python3-pip
EOF
      return 1
    fi

    if [ ! -t 0 ]; then
      cat <<'EOF' >&2
Python can be found, but this Debian/Ubuntu system still needs sudo access to install python3-venv and python3-pip.

Run this command from an interactive terminal so sudo can prompt for your password, or install the packages manually:
  sudo apt-get update && sudo apt-get install -y python3-venv python3-pip
EOF
      return 1
    fi

    echo "This step needs sudo to install missing Python packages. You may be prompted for your password."
    installer=(sudo apt-get)
  fi

  versioned_venv_pkg=$("$python_bin" -c "import sys; print(f'python{sys.version_info.major}.{sys.version_info.minor}-venv')")
  if command -v apt-cache >/dev/null 2>&1 && apt-cache show "$versioned_venv_pkg" >/dev/null 2>&1; then
    packages=("$versioned_venv_pkg" "${packages[@]}")
  fi

  echo "Installing missing Python venv support for this Debian/Ubuntu system..."
  "${installer[@]}" update
  "${installer[@]}" install -y "${packages[@]}"
}

ansible_venv_is_usable() {
  if [ ! -x "$ANSIBLE_PYTHON" ]; then
    return 1
  fi

  if [ ! -x "$ANSIBLE_PLAYBOOK" ]; then
    return 1
  fi

  "$ANSIBLE_PYTHON" -c "import ansible" >/dev/null 2>&1 || return 1
  "$ANSIBLE_PLAYBOOK" --version >/dev/null 2>&1
}

rebuild_ansible() {
  local python_bin

  python_bin=$(resolve_python)

  if ! python_has_venv_support "$python_bin"; then
    if host_is_debian_family; then
      install_debian_python_venv_support "$python_bin"
    fi
  fi

  if ! python_has_venv_support "$python_bin"; then
    cat <<'EOF' >&2
Python 3 is installed, but its virtual environment support is still unavailable.

Install the missing venv package for your platform, then rerun this command:
  - Debian/Ubuntu/WSL: sudo apt update && sudo apt install -y python3-venv python3-pip
  - macOS: brew reinstall python@3.11
EOF
    exit 1
  fi

  rm -rf "$ANSIBLE_VENV"
  echo "Preparing local installer runtime..."
  "$python_bin" -m venv "$ANSIBLE_VENV"
  "$ANSIBLE_PYTHON" -m pip install --upgrade pip
  "$ANSIBLE_PYTHON" -m pip install "ansible-core==2.18.6"
}

ensure_ansible() {
  if ansible_venv_is_usable; then
    return
  fi

  if [ -e "$ANSIBLE_VENV" ]; then
    echo "Refreshing installer runtime for this machine..."
  fi

  rebuild_ansible
}

sudo_keepalive_pid=""

ensure_sudo_session() {
  if [ "$(uname -s)" != "Linux" ]; then
    return
  fi

  if [ "$(id -u)" -eq 0 ]; then
    return
  fi

  if ! command -v sudo >/dev/null 2>&1; then
    cat <<'EOF' >&2
This installer needs sudo for Linux package setup, but `sudo` is not installed.

Install sudo or rerun this command as root, then try again.
EOF
    exit 1
  fi

  if sudo -n true >/dev/null 2>&1; then
    return
  fi

  if [ ! -t 0 ]; then
    cat <<'EOF' >&2
This installer needs sudo for Linux package setup.

Run this command from an interactive terminal so sudo can prompt for your password, then rerun the same `./labctl` command.
EOF
    exit 1
  fi

  echo "This step needs sudo for Linux package setup. You may be prompted for your password."
  sudo -v
}

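# Refresh the cached sudo timestamp every 60s for the duration of a long
# playbook run; the background loop exits on its own once the credential lapses.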
start_sudo_keepalive() {
  if [ "$(uname -s)" != "Linux" ]; then
    return
  fi

  if [ "$(id -u)" -eq 0 ]; then
    return
  fi

  if ! command -v sudo >/dev/null 2>&1; then
    return
  fi

  (
    while true; do
      sudo -n true >/dev/null 2>&1 || exit 0
      sleep 60
    done
  ) &
  sudo_keepalive_pid=$!
}

stop_sudo_keepalive() {
  if [ -n "${sudo_keepalive_pid:-}" ]; then
    kill "$sudo_keepalive_pid" >/dev/null 2>&1 || true
    wait "$sudo_keepalive_pid" >/dev/null 2>&1 || true
    sudo_keepalive_pid=""
  fi
}

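# run_playbook passes courseware_root as inline JSON (-e), so backslashes and
# double quotes in the checkout path are escaped first; the keepalive is torn
# down via a RETURN trap whether the playbook succeeds or fails.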
run_playbook() {
  local playbook=$1
  local escaped_root
  shift

  check_wsl_gpu_readiness
  check_linux_cuda_toolkit
  ensure_ansible
  ensure_sudo_session

  if [ ! -x "$ANSIBLE_PLAYBOOK" ]; then
    echo "Installer runtime is incomplete. Rebuilding it..."
    rebuild_ansible
  fi

  escaped_root=${ROOT_DIR//\\/\\\\}
  escaped_root=${escaped_root//\"/\\\"}

  start_sudo_keepalive
  trap stop_sudo_keepalive RETURN
  "$ANSIBLE_PLAYBOOK" \
    -e "{\"courseware_root\":\"$escaped_root\"}" \
    "$ROOT_DIR/ansible/playbooks/$playbook" \
    "$@"
}

require_arg() {
  if [ $# -lt 1 ]; then
    usage
    exit 1
  fi
}

main() {
  local cmd=${1:-}
  shift || true

  case "$cmd" in
    up)
      confirm_installation
      run_playbook up.yml
      "$ROOT_DIR/scripts/service_manager.sh" start all
      ;;
    down)
      "$ROOT_DIR/scripts/service_manager.sh" stop all || true
      run_playbook down.yml
      ;;
    preflight)
      confirm_installation
      run_playbook up.yml --tags preflight
      ;;
    versions)
      print_versions
      ;;
    start|stop|status|urls|open|logs)
      exec "$ROOT_DIR/scripts/service_manager.sh" "$cmd" "$@"
      ;;
    ""|-h|--help|help)
      usage
      ;;
    *)
      usage
      exit 1
      ;;
  esac
}

main "$@"
@@ -0,0 +1,141 @@
#!/usr/bin/env python3
from __future__ import annotations

import argparse
import asyncio
import sys
from pathlib import Path


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(
        description="Create or refresh a default local TransformerLab user."
    )
    parser.add_argument("--transformerlab-dir", required=True, help="Path to the TransformerLab home directory")
    parser.add_argument("--email", required=True, help="Email address for the default user")
    parser.add_argument("--password", required=True, help="Password for the default user")
    parser.add_argument("--first-name", default="", help="Optional first name")
    parser.add_argument("--last-name", default="", help="Optional last name")
    return parser.parse_args()


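# TransformerLab keeps runtime settings in <home>/.env. Loading it is best
# effort: if python-dotenv is not importable in this interpreter, we fall back
# to whatever is already in the process environment.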
def load_environment(transformerlab_dir: Path) -> None:
    env_file = transformerlab_dir / ".env"
    if env_file.exists():
        try:
            from dotenv import load_dotenv
        except ImportError:
            return
        load_dotenv(env_file)


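# Create-or-update goes through TransformerLab's own fastapi-users UserManager
# rather than raw SQL, so password hashing and validation follow the same code
# paths as the app itself. The imports live inside the function because they
# only resolve after src/ has been prepended to sys.path.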
async def ensure_user(
    transformerlab_dir: Path,
    email: str,
    password: str,
    first_name: str,
    last_name: str,
) -> None:
    src_dir = transformerlab_dir / "src"
    if not src_dir.exists():
        raise FileNotFoundError(f"TransformerLab source directory not found: {src_dir}")

    sys.path.insert(0, str(src_dir))
    load_environment(transformerlab_dir)

    from fastapi_users.db import SQLAlchemyUserDatabase
    from sqlalchemy import select
    from transformerlab.models.users import UserCreate, UserManager, UserUpdate
    from transformerlab.services.provider_service import initialize_team_local_provider
    from transformerlab.shared.models.models import TeamRole, User, UserTeam
    from transformerlab.shared.models.user_model import AsyncSessionLocal, create_personal_team

    async with AsyncSessionLocal() as session:
        user_db = SQLAlchemyUserDatabase(session, User)
        user_manager = UserManager(user_db)

        stmt = select(User).where(User.email == email)
        result = await session.execute(stmt)
        existing_user = result.unique().scalar_one_or_none()

        created = existing_user is None
        if created:
            await user_manager.create(
                UserCreate(
                    email=email,
                    password=password,
                    is_active=True,
                    is_superuser=False,
                    is_verified=True,
                    first_name=first_name or None,
                    last_name=last_name or None,
                ),
                safe=False,
                request=None,
            )
        else:
            await user_manager.update(
                UserUpdate(
                    password=password,
                    is_active=True,
                    is_superuser=False,
                    is_verified=True,
                    first_name=first_name or existing_user.first_name,
                    last_name=last_name or existing_user.last_name,
                ),
                existing_user,
                safe=False,
                request=None,
            )

        result = await session.execute(stmt)
        user = result.unique().scalar_one()
        user_id = str(user.id)

        team_stmt = select(UserTeam).where(UserTeam.user_id == user_id).limit(1)
        team_result = await session.execute(team_stmt)
        user_team = team_result.scalar_one_or_none()

        if user_team is None:
            personal_team = await create_personal_team(session, user)
            user_team = UserTeam(user_id=user_id, team_id=personal_team.id, role=TeamRole.OWNER.value)
            session.add(user_team)
            await session.commit()
            team_id = personal_team.id
        else:
            team_id = user_team.team_id

        try:
            await initialize_team_local_provider(session, team_id, user_id)
        except Exception as exc:
            print(f"warning: failed to initialize local provider for {email}: {exc}")

        print(
            f"{'created' if created else 'updated'} default TransformerLab user {email} "
            f"(team_id={team_id})"
        )


def main() -> int:
    args = parse_args()
    transformerlab_dir = Path(args.transformerlab_dir).expanduser().resolve()

    try:
        asyncio.run(
            ensure_user(
                transformerlab_dir=transformerlab_dir,
                email=args.email,
                password=args.password,
                first_name=args.first_name,
                last_name=args.last_name,
            )
        )
    except Exception as exc:
        print(f"error: {exc}", file=sys.stderr)
        return 1

    return 0


if __name__ == "__main__":
    raise SystemExit(main())
@@ -0,0 +1,144 @@
#!/usr/bin/env bash

set -euo pipefail

ROOT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)
STATE_DIR="$ROOT_DIR/state"
RUNTIME_ENV="$STATE_DIR/runtime.env"

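# state/runtime.env is expected to exist after `./labctl up` and records where
# each tool landed. Every variable below falls back to a default via the
# `: "${VAR:=default}"` idiom, so nothing is left unset under `set -u` even if
# the file is sparse.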
load_runtime_env() {
    if [ -f "$RUNTIME_ENV" ]; then
        # shellcheck disable=SC1090
        . "$RUNTIME_ENV"
    fi

    : "${COURSEWARE_STATE_DIR:=$STATE_DIR}"
    : "${COURSEWARE_BIND_HOST:=127.0.0.1}"
    : "${COURSEWARE_URL_HOST:=127.0.0.1}"
    : "${COURSEWARE_PROMPTFOO_PORT:=15500}"
    : "${COURSEWARE_WIKI_PORT:=80}"
    : "${NODE_RUNTIME_BIN_DIR:=$COURSEWARE_STATE_DIR/tools/node-runtime/node_modules/node/bin}"
    : "${PROMPTFOO_DIR:=$COURSEWARE_STATE_DIR/lab6}"
    : "${WIKI_DIR:=$COURSEWARE_STATE_DIR/repos/LLM-Labs}"
    : "${LLAMA_CPP_BIN_DIR:=$COURSEWARE_STATE_DIR/repos/llama.cpp/build/bin}"
}

ensure_runtime_env() {
    if [ ! -f "$RUNTIME_ENV" ]; then
        echo "Missing $RUNTIME_ENV. Run ./labctl up first." >&2
        exit 1
    fi
}

service_list() {
    printf '%s\n' \
        "ollama" \
        "open-webui" \
        "transformerlab" \
        "chunkviz" \
        "embedding-atlas" \
        "unsloth" \
        "promptfoo" \
        "wiki"
}

service_pid_file() {
    printf '%s/run/%s.pid\n' "$STATE_DIR" "${1//-/_}"
}

service_log_file() {
    printf '%s/logs/%s.log\n' "$STATE_DIR" "${1//-/_}"
}

service_port() {
    case "$1" in
        ollama) printf '%s\n' "${COURSEWARE_OLLAMA_PORT}" ;;
        open-webui) printf '%s\n' "${COURSEWARE_OPEN_WEBUI_PORT}" ;;
        transformerlab) printf '%s\n' "${COURSEWARE_TRANSFORMERLAB_PORT}" ;;
        chunkviz) printf '%s\n' "${COURSEWARE_CHUNKVIZ_PORT}" ;;
        embedding-atlas) printf '%s\n' "${COURSEWARE_EMBEDDING_ATLAS_PORT}" ;;
        unsloth) printf '%s\n' "${COURSEWARE_UNSLOTH_PORT}" ;;
        promptfoo) printf '%s\n' "${COURSEWARE_PROMPTFOO_PORT}" ;;
        wiki) printf '%s\n' "${COURSEWARE_WIKI_PORT}" ;;
        *) return 1 ;;
    esac
}

service_url() {
    case "$1" in
        ollama) printf 'http://%s:%s\n' "$COURSEWARE_URL_HOST" "$COURSEWARE_OLLAMA_PORT" ;;
        open-webui) printf 'http://%s:%s\n' "$COURSEWARE_URL_HOST" "$COURSEWARE_OPEN_WEBUI_PORT" ;;
        transformerlab) printf 'http://%s:%s\n' "$COURSEWARE_URL_HOST" "$COURSEWARE_TRANSFORMERLAB_PORT" ;;
        chunkviz) printf 'http://%s:%s\n' "$COURSEWARE_URL_HOST" "$COURSEWARE_CHUNKVIZ_PORT" ;;
        embedding-atlas) printf 'http://%s:%s\n' "$COURSEWARE_URL_HOST" "$COURSEWARE_EMBEDDING_ATLAS_PORT" ;;
        unsloth) printf 'http://%s:%s\n' "$COURSEWARE_URL_HOST" "$COURSEWARE_UNSLOTH_PORT" ;;
        promptfoo) printf 'http://%s:%s\n' "$COURSEWARE_URL_HOST" "$COURSEWARE_PROMPTFOO_PORT" ;;
        wiki) printf 'http://%s:%s\n' "$COURSEWARE_URL_HOST" "$COURSEWARE_WIKI_PORT" ;;
        *) return 1 ;;
    esac
}

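# service_command prints the launch command for one service as a single string
# that the service manager later runs via `bash -lc`. Paths are double-quoted
# inside the printf format so they survive re-parsing; the literal $PATH in the
# single-quoted formats is expanded at launch time, not here. As an
# illustration only (real values come from runtime.env), the ollama entry
# expands to something like:
#   exec env OLLAMA_HOST=127.0.0.1:11434 OLLAMA_MODELS="/path/to/models" "/path/to/ollama" serve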
service_command() {
    case "$1" in
        ollama)
            printf 'exec env OLLAMA_HOST=%s:%s OLLAMA_MODELS="%s" "%s" serve' \
                "$COURSEWARE_BIND_HOST" \
                "$COURSEWARE_OLLAMA_PORT" \
                "$OLLAMA_MODELS_DIR" \
                "$OLLAMA_BIN"
            ;;
        open-webui)
            printf 'exec env DATA_DIR="%s" OLLAMA_BASE_URL=http://%s:%s "%s/bin/open-webui" serve --host %s --port %s' \
                "$OPEN_WEBUI_DATA_DIR" \
                "$COURSEWARE_URL_HOST" \
                "$COURSEWARE_OLLAMA_PORT" \
                "$OPEN_WEBUI_VENV" \
                "$COURSEWARE_BIND_HOST" \
                "$COURSEWARE_OPEN_WEBUI_PORT"
            ;;
        transformerlab)
            printf 'cd "%s/src" && exec ./run.sh -h %s -p %s' \
                "$TRANSFORMERLAB_DIR" \
                "$COURSEWARE_BIND_HOST" \
                "$COURSEWARE_TRANSFORMERLAB_PORT"
            ;;
        chunkviz)
            printf 'cd "%s" && PATH="%s:$PATH" exec "./node_modules/.bin/serve" build -s -n -L -l tcp://%s:%s' \
                "$CHUNKVIZ_DIR" \
                "$NODE_RUNTIME_BIN_DIR" \
                "$COURSEWARE_BIND_HOST" \
                "$COURSEWARE_CHUNKVIZ_PORT"
            ;;
        embedding-atlas)
            printf 'exec "%s/bin/embedding-atlas" "%s" --text "Scenario" --host %s --port %s' \
                "$EMBEDDING_ATLAS_VENV" \
                "$TTPS_DATASET_PATH" \
                "$COURSEWARE_BIND_HOST" \
                "$COURSEWARE_EMBEDDING_ATLAS_PORT"
            ;;
        unsloth)
            printf 'exec "%s" studio -H %s -p %s' \
                "$UNSLOTH_BIN" \
                "$COURSEWARE_BIND_HOST" \
                "$COURSEWARE_UNSLOTH_PORT"
            ;;
        promptfoo)
            printf 'cd "%s" && PATH="%s:$PATH" COURSEWARE_BIND_HOST=%s "%s" view . --port %s --no' \
                "$PROMPTFOO_DIR" \
                "$NODE_RUNTIME_BIN_DIR" \
                "$COURSEWARE_BIND_HOST" \
                "$PROMPTFOO_BIN" \
                "$COURSEWARE_PROMPTFOO_PORT"
            ;;
        wiki)
            printf 'cd "%s" && PATH="%s:$PATH" exec "./node_modules/.bin/next" start --hostname %s --port %s' \
                "$WIKI_DIR" \
                "$NODE_RUNTIME_BIN_DIR" \
                "$COURSEWARE_BIND_HOST" \
                "$COURSEWARE_WIKI_PORT"
            ;;
        *)
            return 1
            ;;
    esac
}
@@ -0,0 +1,260 @@
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck disable=SC1091
. "$SCRIPT_DIR/common.sh"

load_runtime_env

mkdir -p "$STATE_DIR/run" "$STATE_DIR/logs"

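# Expand a target name into concrete service names, one per line: "core" is
# just Ollama plus Open WebUI, "all" is every managed service, and anything
# else is passed through verbatim.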
resolve_targets() {
    if [ $# -eq 0 ]; then
        echo "No target specified." >&2
        exit 1
    fi

    case "$1" in
        core)
            printf '%s\n' "ollama" "open-webui"
            ;;
        all)
            service_list
            ;;
        *)
            printf '%s\n' "$@"
            ;;
    esac
}

has_live_pid() {
    local service=$1
    local pid_file
    pid_file=$(service_pid_file "$service")

    if [ -f "$pid_file" ]; then
        local pid
        pid=$(cat "$pid_file")
        if kill -0 "$pid" >/dev/null 2>&1; then
            return 0
        fi
    fi

    return 1
}

is_running() {
    local service=$1

    has_live_pid "$service" || service_ready "$service"
}

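# Readiness is an HTTP probe rather than a PID check: Ollama must answer
# /api/tags, Promptfoo must answer /health, and the remaining web UIs just
# need to serve their root URL.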
service_ready() {
    local service=$1

    case "$service" in
        ollama)
            curl -fsS "$(service_url "$service")/api/tags" >/dev/null 2>&1
            ;;
        promptfoo)
            curl -fsS "$(service_url "$service")/health" >/dev/null 2>&1
            ;;
        open-webui|transformerlab|chunkviz|embedding-atlas|unsloth|wiki)
            curl -fsS "$(service_url "$service")" >/dev/null 2>&1
            ;;
        *)
            return 1
            ;;
    esac
}

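# Start one service detached from this shell: the launch command runs under
# nohup (and setsid where available) with output appended to its log, the
# child PID is recorded, and readiness is polled once per second for up to 60
# seconds. Open WebUI implicitly starts Ollama first, since its
# OLLAMA_BASE_URL points at the local Ollama instance.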
start_one() {
    local service=$1
    local cmd
    local log_file
    local pid_file
    local attempt

    if has_live_pid "$service"; then
        echo "$service already running"
        return 0
    fi

    if service_ready "$service"; then
        echo "$service already available"
        return 0
    fi

    case "$service" in
        open-webui)
            start_one ollama
            ;;
        *)
            ;;
    esac

    cmd=$(service_command "$service")
    log_file=$(service_log_file "$service")
    pid_file=$(service_pid_file "$service")

    if command -v setsid >/dev/null 2>&1; then
        nohup setsid bash -lc "$cmd" </dev/null >>"$log_file" 2>&1 &
    else
        nohup bash -lc "$cmd" </dev/null >>"$log_file" 2>&1 &
    fi
    echo $! >"$pid_file"

    for attempt in $(seq 1 60); do
        if service_ready "$service"; then
            echo "started $service"
            return 0
        fi

        if ! has_live_pid "$service"; then
            rm -f "$pid_file"
            echo "failed to start $service; check $log_file" >&2
            exit 1
        fi

        sleep 1
    done

    echo "$service did not become ready in time; check $log_file" >&2
    exit 1
}

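# Stop one service by its recorded PID: send SIGTERM, wait two seconds, then
# fall back to SIGKILL if it is still alive. The PID file is removed either
# way.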
stop_one() {
    local service=$1
    local pid_file

    pid_file=$(service_pid_file "$service")

    if [ ! -f "$pid_file" ]; then
        echo "$service not running"
        return 0
    fi

    local pid
    pid=$(cat "$pid_file")
    if kill -0 "$pid" >/dev/null 2>&1; then
        kill "$pid" >/dev/null 2>&1 || true
        sleep 2
        if kill -0 "$pid" >/dev/null 2>&1; then
            kill -9 "$pid" >/dev/null 2>&1 || true
        fi
    fi

    rm -f "$pid_file"
    echo "stopped $service"
}

status_one() {
    local service=$1

    if service_ready "$service"; then
        printf 'RUNNING  %-15s %s\n' "$service" "$(service_url "$service")"
    elif has_live_pid "$service"; then
        printf 'STARTING %-15s %s\n' "$service" "$(service_url "$service")"
    else
        printf 'STOPPED  %-15s %s\n' "$service" "$(service_url "$service")"
    fi
}

urls() {
    cat <<EOF
Ollama API:        $(service_url ollama)
Open WebUI:        $(service_url open-webui)
TransformerLab:    $(service_url transformerlab)
ChunkViz:          $(service_url chunkviz)
Embedding Atlas:   $(service_url embedding-atlas)
Unsloth Studio:    $(service_url unsloth)
Promptfoo CLI:     $PROMPTFOO_BIN
Promptfoo UI:      $(service_url promptfoo)
Wiki:              $(service_url wiki)
Kiln app:          ${KILN_LAUNCH_PATH:-not installed}
EOF
}

open_kiln() {
    local host_os

    host_os=$(uname -s)
    if [ "$host_os" = "Darwin" ] && [ -d "$KILN_MAC_APP" ]; then
        open "$KILN_MAC_APP"
        return 0
    fi

    if [ -x "$KILN_LINUX_BIN" ]; then
        nohup "$KILN_LINUX_BIN" >/dev/null 2>&1 &
        echo "started Kiln from $KILN_LINUX_BIN"
        return 0
    fi

    echo "Kiln is not installed." >&2
    exit 1
}

show_logs() {
    local service=$1
    local log_file

    log_file=$(service_log_file "$service")
    if [ ! -f "$log_file" ]; then
        echo "No log file for $service" >&2
        exit 1
    fi

    tail -n 80 "$log_file"
}

main() {
    local cmd=${1:-}
    shift || true

    ensure_runtime_env

    case "$cmd" in
        start)
            while IFS= read -r service; do
                start_one "$service"
            done < <(resolve_targets "$@")
            ;;
        stop)
            while IFS= read -r service; do
                stop_one "$service"
            done < <(resolve_targets "$@")
            ;;
        status)
            if [ $# -eq 0 ]; then
                set -- all
            fi
            while IFS= read -r service; do
                status_one "$service"
            done < <(resolve_targets "$@")
            ;;
        urls)
            urls
            ;;
        open)
            if [ "${1:-}" != "kiln" ]; then
                echo "Only 'open kiln' is supported." >&2
                exit 1
            fi
            open_kiln
            ;;
        logs)
            if [ $# -ne 1 ]; then
                echo "Usage: ./labctl logs <service>" >&2
                exit 1
            fi
            show_logs "$1"
            ;;
        *)
            echo "Unknown command: $cmd" >&2
            exit 1
            ;;
    esac
}

main "$@"