Update lab content instructions
@@ -121,16 +121,6 @@ Locate, pull, and run **Qwen3.5 4B** using the **Open WebUI**. By default, Ope
<figcaption>Successful inference – the model returns a coherent answer.</figcaption>
</figure>
9. **Download Gemma3n e2B**
- While we're downloading models, let us download one more. You can either repeat the process from the previous steps to find and download **Gemma3n e2B**, or just use the following model tag to download the model via the Open WebUI search bar:
```bash
ollama pull gemma3n:e2b
```
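Once the pull completes, it can be worth confirming that both models are actually available before moving on. A minimal check from a terminal, assuming the Ollama CLI is on the VM's PATH (the model names shown are the ones pulled in the steps above):

```bash
# List every model Ollama has stored locally, with tag, size, and age.
# After the pull above finishes, gemma3n:e2b should appear alongside
# the Qwen model downloaded earlier in this lab.
ollama list
```

If a model is missing from the list, re-run the corresponding `ollama pull` command; a partially downloaded model will resume rather than restart.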
Google designed the Gemma 3n models for efficient execution on resource-constrained devices such as laptops, tablets, phones, or Nvidia 2080 Super GPUs.
---
## Objective 3: Inference Settings
@@ -220,7 +210,7 @@ Feel free to continue exploring other topics or images. Note how each time
### Explore: Prompt Engineering & System Prompting
<div class="lab-callout lab-callout--warning">
<strong>Warning:</strong> As you explore chat via Open WebUI, ensure you turn <code>think (Ollama)</code> to <strong>OFF</strong>. <strong>Qwen3.5 4B</strong> is likely to enter an infinite thinking loop for these tasks otherwise, which will require a VM reboot.
<br><br>