diff options
| author | haoyuren <13851610112@163.com> | 2025-06-26 19:29:02 -0700 |
|---|---|---|
| committer | haoyuren <13851610112@163.com> | 2025-06-26 19:29:02 -0700 |
| commit | 75c5aec671e65b3466f1a9326c15b6733fa545c7 (patch) | |
| tree | 49bc5409c70138ef08ddba14ece2a5b13cc43cb7 | |
| parent | cd09198b9e806384f8c382316270ba0ba7c76ff1 (diff) | |
| -rw-r--r-- | 2025-06-26/2025-06-26-notes.pdf | bin | 281981 -> 325156 bytes |
| -rw-r--r-- | 2025-06-26/2025-06-26-notes.tex | 24 | |
| -rw-r--r-- | 2025-06-26/bias-variance-diagram.png | bin | 0 -> 58622 bytes |
| -rw-r--r-- | template.pdf | bin | 134201 -> 134201 bytes |
4 files changed, 24 insertions, 0 deletions
```diff
diff --git a/2025-06-26/2025-06-26-notes.pdf b/2025-06-26/2025-06-26-notes.pdf
index 3d0ef8c..a933f33 100644
Binary files a/2025-06-26/2025-06-26-notes.pdf and b/2025-06-26/2025-06-26-notes.pdf differ
diff --git a/2025-06-26/2025-06-26-notes.tex b/2025-06-26/2025-06-26-notes.tex
index bbf4ed3..b672e13 100644
--- a/2025-06-26/2025-06-26-notes.tex
+++ b/2025-06-26/2025-06-26-notes.tex
@@ -219,6 +219,30 @@ The average fit is akin to the expected prediction $\mathbb{E}[\hat{f}(x)]$ over
 
 Bias is high when the hypothesis class is unable to capture $f_{\text{true}}(x)$. This happens when the model class is too simple or restrictive to represent the true underlying function.
 
+\textbf{Model Complexity vs Bias-Variance:}
+
+\textbf{Low Complexity Model:}
+\begin{itemize}
+    \item \textbf{High Bias}: cannot capture complex patterns in $f_{\text{true}}(x)$
+    \item \textbf{Low Variance}: predictions are consistent across different training sets
+    \item Example: a linear model fit to non-linear data
+\end{itemize}
+
+\textbf{High Complexity Model:}
+\begin{itemize}
+    \item \textbf{Low Bias}: can approximate $f_{\text{true}}(x)$ well
+    \item \textbf{High Variance}: predictions vary significantly with the training data
+    \item Example: a high-degree polynomial or deep neural network
+\end{itemize}
+
+\textbf{Key Insight:} There is a fundamental tradeoff: reducing bias often increases variance, and vice versa.
+
+\begin{figure}[h]
+\centering
+\includegraphics[width=0.85\textwidth]{bias-variance-diagram.png}
+\caption{Bias and Variance Contributing to Total Error (Source: Wikimedia Commons)}
+\end{figure}
+
 \subsection{Approximating Generalization Error}
 Since true distribution $D$ is unknown, we approximate generalization error using:
 \textbf{Validation Set:} Hold-out data to estimate $R(h) \approx \hat{R}_{val}(h)$
diff --git a/2025-06-26/bias-variance-diagram.png b/2025-06-26/bias-variance-diagram.png
new file mode 100644
index 0000000..3f6fab1
Binary files /dev/null and b/2025-06-26/bias-variance-diagram.png differ
diff --git a/template.pdf b/template.pdf
index 7885fb1..6233a00 100644
Binary files a/template.pdf and b/template.pdf differ
```
