This is useful because it shows that model output is neither magical nor certain: each generated token is chosen from a probability distribution over many possible next tokens.
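The idea can be sketched in a few lines of plain Python. The logit values below are made up for illustration (a real model emits a score for every token in its vocabulary); the point is that softmax turns raw scores into probabilities, and the "chosen" token is a weighted draw from them.

```python
import math
import random

# Hypothetical next-token logits after "The quick brown" —
# illustrative values only; a real model scores thousands of tokens.
logits = {"fox": 6.2, "dog": 2.1, "bear": 1.4, "car": -0.5}

# Softmax: subtract the max for numerical stability, exponentiate, normalize.
m = max(logits.values())
exps = {tok: math.exp(v - m) for tok, v in logits.items()}
total = sum(exps.values())
probs = {tok: e / total for tok, e in exps.items()}

# The next token is drawn from this distribution, so even a
# confident-sounding model is making a weighted random choice.
token = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
print(probs)
print("sampled:", token)
```

Even here, where `fox` dominates, the other candidates keep nonzero probability — which is exactly what a confidence view makes visible.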
### Explore: Try Different Prompt Styles
To make the confidence view more interesting, compare:
1. A common phrase such as `The quick brown fox`
2. A factual question
3. A short cybersecurity prompt
Notice where the model appears highly certain and where it becomes less stable. Small local models often produce text that sounds very confident even when the underlying prediction distribution is more fragile than it first appears.
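One way a view like this can be built (a sketch, not necessarily how the lab tool does it) is to score each position by the entropy of its prediction distribution: a peaked distribution, as after a common phrase, has low entropy, while a flat one has high entropy. The two distributions below are hypothetical top-candidate probabilities.

```python
import math

def entropy(probs):
    """Shannon entropy in bits: low = peaked/confident, high = flat/uncertain."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical top-candidate probabilities for two prompt styles:
common_phrase = [0.92, 0.05, 0.02, 0.01]  # e.g. "The quick brown" -> "fox"
open_question = [0.30, 0.25, 0.25, 0.20]  # a much less settled prediction

for name, dist in [("common phrase", common_phrase),
                   ("open question", open_question)]:
    print(f"{name}: {entropy(dist):.2f} bits")
```

A heatmap can then map low entropy to "confident" colors and high entropy to "uncertain" ones, making the fragility described above directly visible.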
<div class="lab-screenshot-placeholder">
<strong>Screenshot Placeholder</strong>
Confidence heatmap and hover tooltip view.
</div>
---
## Conclusion