neural network - Why L1 regularization works in machine learning -


Well, in machine learning, one way to prevent overfitting is to add L2 regularization, but some say that L1 regularization is better. Why is that? I know L1 is used to ensure sparsity of the solution; is there theoretical support for that result?

L1 regularization is used for sparsity. This can be beneficial when dealing with big data, because L1 can generate more compressed models than L2 regularization. The reason is that as the regularization parameter increases, there is a bigger chance that the optimum for a weight lies exactly at 0.
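A minimal sketch of this effect, using the closed-form shrinkage step each penalty induces (the weight values and penalty strength `lam` below are illustrative, not from the post): the L1 proximal step (soft-thresholding) sets small weights exactly to zero, while the L2 step only rescales them.

```python
# Sketch: why L1 produces sparse (compressed) models while L2 does not.
# Assumed setup: a single proximal/shrinkage step on a fixed weight vector.

def l1_prox(w, lam):
    # Soft-thresholding, the proximal operator of lam * |w|:
    # any weight with |w| <= lam is set exactly to zero.
    return [max(abs(x) - lam, 0.0) * (1 if x > 0 else -1) for x in w]

def l2_shrink(w, lam):
    # Closed-form shrinkage for lam * w^2: scales every weight
    # toward zero but never makes it exactly zero.
    return [x / (1.0 + 2.0 * lam) for x in w]

weights = [0.05, -0.3, 1.2, -0.02, 0.7]
lam = 0.1

sparse = l1_prox(weights, lam)
dense = l2_shrink(weights, lam)

print(sum(1 for x in sparse if x == 0.0))  # L1 zeroes the small weights
print(sum(1 for x in dense if x == 0.0))   # L2 leaves every weight nonzero
```

Raising `lam` zeroes more and more weights under L1, which is the "bigger chance the optimum is at 0" intuition above.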

L2 regularization punishes big weights more, due to the squaring. Of course, L2 is also more "elegant" in the sense of smoothness.
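To make the "punishes big weights more" point concrete, here is a small comparison (the weight vectors are illustrative): ten weights of 0.1 and one weight of 1.0 have the same L1 penalty, but under L2 the single large weight costs ten times as much.

```python
# Sketch: how each penalty grows with weight magnitude.

def l1_penalty(w):
    # Sum of absolute values: grows linearly with each weight.
    return sum(abs(x) for x in w)

def l2_penalty(w):
    # Sum of squares: grows quadratically, so large weights dominate.
    return sum(x * x for x in w)

small = [0.1] * 10   # many small weights
large = [1.0]        # one large weight, same total L1 mass as `small`

print(l1_penalty(small), l1_penalty(large))  # roughly equal under L1
print(l2_penalty(small), l2_penalty(large))  # L2 charges the big weight ~10x more
```

This is why L2 tends to spread magnitude across many small weights, while L1 is indifferent between one large weight and many small ones, leaving room for sparsity.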

You should check this webpage.

P.S.

A more mathematically comprehensive explanation may not fit this website; you can try other Stack Exchange websites, for example.

