Introduction
In the Bayesian curve fitting post, I mentioned that the Gaussian noise assumption (the likelihood) together with a Gaussian prior distribution over the weight vector results in a Gaussian posterior. When the likelihood and prior combine this way, they are conjugate, and such a prior is called a conjugate prior.
About Conjugate Prior
In general, we will meet the following probabilities: \( p(\theta) \) (prior), \( p(\theta \mid X) \) (posterior), \( p(X) \), and \( p(X \mid \theta) \) (likelihood)
...
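To make the conjugacy concrete, here is a minimal Python sketch (mine, not from the original post) of the Gaussian-Gaussian case: with known noise variance, a Gaussian prior over the mean yields a Gaussian posterior whose precision is the sum of the prior precision and the data precision. All names and example values are illustrative.

```python
import numpy as np

def gaussian_posterior(x, m0, s0_sq, noise_sq):
    """Posterior over mu given prior N(m0, s0_sq) and data x ~ N(mu, noise_sq)."""
    n = len(x)
    # Precisions (inverse variances) add: posterior = prior + N * data precision.
    post_precision = 1.0 / s0_sq + n / noise_sq
    post_var = 1.0 / post_precision
    # Posterior mean: precision-weighted blend of prior mean and data sum.
    post_mean = post_var * (m0 / s0_sq + np.sum(x) / noise_sq)
    return post_mean, post_var

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=50)   # true mu = 2, sigma = 1
mean, var = gaussian_posterior(data, m0=0.0, s0_sq=10.0, noise_sq=1.0)
print(f"posterior: N({mean:.3f}, {var:.3f})")    # mean near 2, small variance
```

Because the posterior is again Gaussian, it can serve as the prior for the next batch of data, which is exactly the convenience conjugacy buys.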
Introduction
In the previous posts, we have met two linear basis functions: the polynomial basis function and the Fourier basis function. In this one, I will review and summarize them and make some comparisons. The following basis functions are implemented in Python code.
Polynomial Basis Function
$$\phi_{poly}(x) = (\phi_0(x), \dots, \phi_{M-1}(x))^T$$
where
$$\phi_i(x) = x^i, \quad i \in \{0, \dots, M-1\}$$
Gaussian Basis Function
...
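Below is a minimal Python sketch of the polynomial basis defined above, together with a Gaussian basis of the common form \( \phi_i(x) = \exp(-(x - \mu_i)^2 / 2s^2) \); since the post's Gaussian formula is cut off in this excerpt, that particular form is an assumption based on the standard definition.

```python
import numpy as np

def poly_basis(x, M):
    """phi_poly(x) = (phi_0(x), ..., phi_{M-1}(x)) with phi_i(x) = x**i."""
    x = np.asarray(x)
    return np.stack([x**i for i in range(M)], axis=-1)    # shape (..., M)

def gaussian_basis(x, mus, s):
    """Common Gaussian basis phi_i(x) = exp(-(x - mu_i)^2 / (2 s^2)).
    The exact form in the original post is truncated, so this is assumed."""
    x = np.asarray(x)[..., None]
    return np.exp(-((x - mus) ** 2) / (2.0 * s**2))       # shape (..., len(mus))

x = np.linspace(0.0, 1.0, 5)
print(poly_basis(x, M=3))                                  # columns: 1, x, x^2
print(gaussian_basis(x, mus=np.array([0.25, 0.75]), s=0.1))
```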
Introduction
Building on the previous post, we will go deeper into regression from a probabilistic perspective.
In LSR and RR, the training result is a single point estimate of the weight vector \( \text{w} \), with which we can predict new input points simply by applying the basis function and a dot product. RR already works quite well, but selecting the value of the regularization coefficient is quite intractable. To deal with this c
...
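Where the excerpt breaks off, the standard Bayesian treatment of linear regression is the usual next step; the following is a sketch under that assumption, not code from the post. With a Gaussian prior \( \mathcal{N}(0, \alpha^{-1} I) \) on \( \text{w} \) and Gaussian noise of precision \( \beta \), the posterior over \( \text{w} \) is Gaussian with closed-form mean and covariance.

```python
import numpy as np

def posterior_w(Phi, t, alpha, beta):
    """Posterior N(m_N, S_N) over weights for Bayesian linear regression.
    Phi: N x M design matrix; alpha: prior precision; beta: noise precision."""
    M = Phi.shape[1]
    S_N_inv = alpha * np.eye(M) + beta * Phi.T @ Phi   # posterior precision
    S_N = np.linalg.inv(S_N_inv)
    m_N = beta * S_N @ Phi.T @ t                       # posterior mean
    return m_N, S_N
```

The posterior mean coincides with the ridge solution for \( \lambda = \alpha / \beta \), which is one way the Bayesian view addresses the choice of regularization coefficient.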
Introduction
I have reviewed Least Squares Regression (LSR) and Ridge Regression (RR). In this post, I will review them again, but from a probabilistic perspective.
Probabilistic Interpretation: LSR
In LSR and RR, we optimized the result by minimizing an error function (the sum-of-squares error was used). We can see that the training error cannot reach 0; the fit is never exact. This is because, in general, data sets contain noise. Therefore
...
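A short sketch of the standard result this post is heading toward (assumed, since the excerpt is truncated): under the model \( t = \text{w}^T \phi(x) + \epsilon \) with \( \epsilon \sim \mathcal{N}(0, \beta^{-1}) \), maximizing the likelihood over \( \text{w} \) is equivalent to minimizing the sum-of-squares error, so the maximum likelihood weights are the normal-equations solution.

```python
import numpy as np

# Sketch: maximum likelihood under Gaussian noise coincides with least squares.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 30)
t = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, 30)   # noisy targets

Phi = np.stack([x**i for i in range(4)], axis=1)       # cubic polynomial basis

# ML weights via the Moore-Penrose pseudo-inverse (the least-squares solution).
w_ml = np.linalg.pinv(Phi) @ t

# ML estimate of the noise variance 1/beta: the mean squared residual.
var_ml = np.mean((Phi @ w_ml - t) ** 2)
print(w_ml, var_ml)
```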
Introduction
In the previous post, Least Squares Regression (LSR) was reviewed. But the last example shown overfits when the frequency becomes higher (the basis function maps input points into a higher-dimensional feature space). This post reviews a method called Ridge Regression, which counters such overfitting. Furthermore, the concept of regularization will also be introduced.
Ridge Regression
Before formally introducing the Ri
...
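For reference, here is the closed-form ridge solution as a minimal sketch (the standard result; this excerpt cuts off before the post derives it): adding the penalty \( \frac{\lambda}{2} \lVert \text{w} \rVert^2 \) to the sum-of-squares error gives \( \text{w} = (\lambda I + \Phi^T \Phi)^{-1} \Phi^T t \).

```python
import numpy as np

def ridge_fit(Phi, t, lam):
    """Minimize ||Phi @ w - t||^2 + lam * ||w||^2 in closed form."""
    M = Phi.shape[1]
    # Solve the regularized normal equations; solve() is more stable
    # than forming an explicit matrix inverse.
    return np.linalg.solve(lam * np.eye(M) + Phi.T @ Phi, Phi.T @ t)
```

As \( \lambda \to 0 \) this recovers plain least squares; larger \( \lambda \) shrinks the weights and damps the overfitting described above.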
Introduction
In machine learning, regression is used to predict continuous values. Least squares regression is one of the simplest regression techniques. We are given a training set \( \text{X} = \{x_1, \dots, x_N\} \) with target values \( T = \{t_1, \dots, t_N\} \). Our goal is to obtain a function that can “best” describe these points and predict the target values of other input points. But how can we obtain such a function?
Linear
Firstly,
...
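As a concrete taste of what follows (a minimal sketch under the usual setup, not taken from the truncated post): fit a line \( y(x, \text{w}) = w_0 + w_1 x \) to the training points by least squares, then predict a new input.

```python
import numpy as np

# Toy training set; the values are illustrative.
X = np.array([0.0, 1.0, 2.0, 3.0])
T = np.array([1.1, 2.9, 5.2, 6.8])          # roughly t = 1 + 2x plus noise

A = np.stack([np.ones_like(X), X], axis=1)  # design matrix with a bias column
w, *_ = np.linalg.lstsq(A, T, rcond=None)   # least squares solution for w

print(w)                                     # approximately [1.0, 2.0]
print(w[0] + w[1] * 4.0)                     # predicted target at x = 4
```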
Hello world!
This is my new blog. I will begin to record my life and studies here. My interests are Computer Vision and Machine Learning.
I will be greatly honoured if my words here can help you.
Tong Su
MathJax Test:
$$E=mc^2$$