The bias-variance decomposition is one of the most important concepts in ML: it helps explain the performance of a machine learning algorithm and sheds light on the problem of overfitting. Here's my take on it; hopefully it's a good refresher for you too! #machinelearning #ml #bias #variance
Thanks for this
I think that even conceptually treating models as “variables,” as you do here, can be a useful exercise in general (beyond the context of your article).
It’s sometimes easy to forget that a given dataset is almost always a sample from (or “subset of”) a larger population, and that even “optimised” model parameters (which are conditional on this sample) will be uncertain relative to the “true” parameters (if such a thing can even be said to exist).
There’s uncertainty all the way down!
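The point above can be sketched in a few lines: draw many samples from the same population, fit the same model to each, and look at the spread of the fitted parameters around the true ones. The true slope, intercept, noise, and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
true_slope, true_intercept = 2.0, -1.0

slopes = []
for _ in range(1000):
    # Each iteration draws a fresh sample from the same "population".
    x = rng.uniform(0, 1, 25)
    y = true_intercept + true_slope * x + rng.normal(0, 0.5, 25)
    slope, intercept = np.polyfit(x, y, 1)  # least-squares line fit
    slopes.append(slope)

slopes = np.array(slopes)
print(f"true slope:            {true_slope}")
print(f"mean of fitted slopes: {slopes.mean():.3f}")
print(f"std of fitted slopes:  {slopes.std():.3f}")
```

The fitted slope is unbiased on average, but any single dataset's estimate carries nontrivial spread: the "optimised" parameters are themselves random variables, conditional on the sample drawn.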
Thank you