From 75c5aec671e65b3466f1a9326c15b6733fa545c7 Mon Sep 17 00:00:00 2001
From: haoyuren <13851610112@163.com>
Date: Thu, 26 Jun 2025 19:29:02 -0700
Subject: add a pic

---
 2025-06-26/2025-06-26-notes.pdf      | Bin 281981 -> 325156 bytes
 2025-06-26/2025-06-26-notes.tex      | 24 ++++++++++++++++++++++++
 2025-06-26/bias-variance-diagram.png | Bin 0 -> 58622 bytes
 template.pdf                         | Bin 134201 -> 134201 bytes
 4 files changed, 24 insertions(+)
 create mode 100644 2025-06-26/bias-variance-diagram.png

diff --git a/2025-06-26/2025-06-26-notes.pdf b/2025-06-26/2025-06-26-notes.pdf
index 3d0ef8c..a933f33 100644
Binary files a/2025-06-26/2025-06-26-notes.pdf and b/2025-06-26/2025-06-26-notes.pdf differ
diff --git a/2025-06-26/2025-06-26-notes.tex b/2025-06-26/2025-06-26-notes.tex
index bbf4ed3..b672e13 100644
--- a/2025-06-26/2025-06-26-notes.tex
+++ b/2025-06-26/2025-06-26-notes.tex
@@ -219,6 +219,30 @@ The average fit is akin to the expected prediction $\mathbb{E}[\hat{f}(x)]$ over
 
 Bias is high when the hypothesis class is unable to capture $f_{\text{true}}(x)$. This happens when the model class is too simple or restrictive to represent the true underlying function.
 
+\textbf{Model Complexity vs Bias-Variance:}
+
+\textbf{Low Complexity Model:}
+\begin{itemize}
+    \item \textbf{High Bias} - Cannot capture complex patterns in $f_{\text{true}}(x)$
+    \item \textbf{Low Variance} - Predictions consistent across different training sets
+    \item Example: Linear model for non-linear data
+\end{itemize}
+
+\textbf{High Complexity Model:}
+\begin{itemize}
+    \item \textbf{Low Bias} - Can approximate $f_{\text{true}}(x)$ well
+    \item \textbf{High Variance} - Predictions vary significantly with training data
+    \item Example: High-degree polynomial or deep neural network
+\end{itemize}
+
+\textbf{Key Insight:} There's a fundamental tradeoff - reducing bias often increases variance, and vice versa.
+
+\begin{figure}[h]
+\centering
+\includegraphics[width=0.85\textwidth]{bias-variance-diagram.png}
+\caption{Bias and Variance Contributing to Total Error (Source: Wikimedia Commons)}
+\end{figure}
+
 \subsection{Approximating Generalization Error}
 Since true distribution $D$ is unknown, we approximate generalization error using:
 \textbf{Validation Set:} Hold-out data to estimate $R(h) \approx \hat{R}_{val}(h)$
diff --git a/2025-06-26/bias-variance-diagram.png b/2025-06-26/bias-variance-diagram.png
new file mode 100644
index 0000000..3f6fab1
Binary files a/2025-06-26/bias-variance-diagram.png and b/2025-06-26/bias-variance-diagram.png differ
diff --git a/template.pdf b/template.pdf
index 7885fb1..6233a00 100644
Binary files a/template.pdf and b/template.pdf differ
-- 
cgit v1.2.3
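
A hedged supplement to the \textbf{Key Insight} line added in the patch above (a sketch, not part of the committed diff): the stated tradeoff can be made precise with the standard squared-error decomposition, written here in the notes' own notation. The noise model $y = f_{\text{true}}(x) + \varepsilon$ with $\mathbb{E}[\varepsilon] = 0$ and $\mathrm{Var}(\varepsilon) = \sigma^2$ is an assumption not shown in the excerpt, and the snippet presumes amsmath is loaded by the notes' preamble.

% Expected test error at a point x, averaged over training sets and noise.
% Assumes y = f_true(x) + eps with E[eps] = 0 and Var(eps) = sigma^2 (not defined in the excerpt).
\begin{align*}
\mathbb{E}\left[(y - \hat{f}(x))^2\right]
  &= \underbrace{\left(f_{\text{true}}(x) - \mathbb{E}[\hat{f}(x)]\right)^2}_{\text{Bias}^2}
   + \underbrace{\mathbb{E}\left[\left(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\right)^2\right]}_{\text{Variance}}
   + \underbrace{\sigma^2}_{\text{irreducible noise}}
\end{align*}

Low-complexity models inflate the first term and shrink the second; high-complexity models do the reverse; $\sigma^2$ is unaffected by model choice. This matches the new figure's framing of bias and variance contributing to total error.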