Refactor lab 1 for Netron and local confidence views

2026-04-16 11:15:39 -06:00
parent a97c8a7694
commit e4621ca65b
20 changed files with 1634 additions and 280 deletions
+1 -1
@@ -179,7 +179,7 @@ We should then see:
A text listing of all of the model's tensors, and the precision of each. Because we have merely converted the model's format, and not performed quantization, the model is still in **FP16**.
- This is a text view of the previous graphical view we saw in **Lab 1, Objective 2: Visualizing a LLM**. While **TransformerLab** calls tensors **layers**, terms such as **tensors**, **layers**, and **blocks** can all be used semi-interchangeably, depending on the tool in question. We will further confuse these topics when we get to the Ollama objective below.
+ This is a text view of the previous graphical view we saw in **Lab 1, Objective 2: Visualizing a LLM**. While tools such as **Netron** may expose tensors, operators, and repeating blocks with different labels, terms such as **tensors**, **layers**, and **blocks** can still be used semi-interchangeably at this level of discussion. We will further confuse these topics when we get to the Ollama objective below.
- Pedantically, the proper definitions are:
- Tensor - A multi-dimensional array of numerical values used to store data
- Layer - A base computational unit in a neural network