Boosting, Post-decessor
-
This is incorrect: as long as each weak learner does better than chance, boosting will work just fine.
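A minimal sketch of the intuition behind this answer: many learners that are each only slightly better than chance can be combined into a far more accurate vote. This is a simplified simulation with independent hypothetical learners, not AdaBoost's actual reweighting scheme; the function names and the 0.6 accuracy figure are illustrative assumptions.

```python
import random

random.seed(0)

def weak_predict(true_label, accuracy=0.6):
    # Hypothetical weak learner: correct with probability `accuracy`,
    # i.e. only slightly better than chance (0.5) on binary labels.
    return true_label if random.random() < accuracy else -true_label

def majority_vote(true_label, n_learners):
    # Combine many independent weak predictions by majority vote.
    votes = sum(weak_predict(true_label) for _ in range(n_learners))
    return 1 if votes > 0 else -1

def ensemble_accuracy(n_learners, trials=2000):
    correct = sum(majority_vote(1, n_learners) == 1 for _ in range(trials))
    return correct / trials

# One weak learner is right ~60% of the time; a vote of 101 of them
# is right far more often.
print(ensemble_accuracy(1))
print(ensemble_accuracy(101))
```

Real boosting does better still, because each round focuses the next learner on the examples the ensemble currently gets wrong rather than drawing learners independently.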
-
A neural network trained with backpropagation can drive its training error to zero; with no misclassified examples, none of the example weights will increase (the distribution stays as it was), so repeating the process just refits the same model and leads to overfitting.
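This can be seen directly in AdaBoost's reweighting rule: a learner with zero training error multiplies every example weight by the same factor, so after normalisation the distribution is unchanged. A minimal sketch, assuming the standard AdaBoost update (the small `epsilon` clamp is an assumption added to avoid division by zero at exactly zero error):

```python
import math

def adaboost_reweight(weights, correct, epsilon=1e-10):
    # One round of AdaBoost's example reweighting.
    # `correct[i]` is True if the weak learner classified example i correctly.
    err = sum(w for w, c in zip(weights, correct) if not c)
    err = min(max(err, epsilon), 1 - epsilon)  # clamp away from 0 and 1
    alpha = 0.5 * math.log((1 - err) / err)
    new = [w * math.exp(-alpha if c else alpha) for w, c in zip(weights, correct)]
    z = sum(new)
    return [w / z for w in new]

w = [0.25, 0.25, 0.25, 0.25]
# Zero error: every weight is scaled by the same exp(-alpha), so the
# normalised distribution is unchanged -- further rounds gain nothing.
print(adaboost_reweight(w, [True, True, True, True]))
# One mistake: weight shifts onto the misclassified example.
print(adaboost_reweight(w, [True, True, True, False]))
```

With one mistake the misclassified example ends up carrying half the total weight, which is exactly how boosting forces the next learner to focus on hard examples.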
-
Training on a whole lot of data takes much longer, and since boosting typically performs better and better as training continues, this is not the case.
-
A problem being non-linear has nothing to do with overfitting.