Iestyn Pryce | d8239a9cc4 | 2017-05-21 11:14:21 +01:00 | Revert format regression introduced in ecd07b18c1
Iestyn Pryce | ecd07b18c1 | 2017-05-19 22:31:56 +01:00 | Fix log_* formats which expect size_t but receive uint32_t
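
The mismatch ecd07b18c1 fixes is a classic varargs hazard: a %zu conversion expects a size_t, and passing a uint32_t instead is undefined behavior on platforms where the two types differ in width. A minimal sketch of the two standard fixes, using printf in place of the project's log_* helpers:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t n_rows = 42;

    /* Broken: %zu expects size_t; on LP64 systems size_t is 64-bit,
     * so reading a 32-bit argument through it is undefined behavior. */
    /* printf("rows: %zu\n", n_rows); */

    /* Fix 1: use the exact-width conversion macro for uint32_t. */
    printf("rows: %" PRIu32 "\n", n_rows);

    /* Fix 2: cast the argument to the type the format expects. */
    printf("rows: %zu\n", (size_t)n_rows);
    return 0;
}
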
Al | a4431dbb27 | 2017-04-02 14:32:14 -04:00 | [classification] Remove the regularization update from gradient computation in logistic regression, as that is now handled by the optimizer
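
A hedged sketch of the separation a4431dbb27 describes, with illustrative names rather than the project's API: the gradient routine computes only the data-loss term, and the optimizer adds the L2 term during its own step.

#include <math.h>
#include <stddef.h>

static double sigmoid(double z) { return 1.0 / (1.0 + exp(-z)); }

/* Gradient of the logistic loss for one example: data term only;
 * the regularization update no longer lives here. */
static void lr_loss_gradient(const double *w, const double *x, double y,
                             double *grad, size_t n) {
    double z = 0.0;
    for (size_t j = 0; j < n; j++)
        z += w[j] * x[j];
    double err = sigmoid(z) - y;              /* h(x) - y */
    for (size_t j = 0; j < n; j++)
        grad[j] = err * x[j];
}

/* The optimizer owns the L2 term: w <- w - lr * (grad + lambda * w). */
static void sgd_step(double *w, const double *grad, size_t n,
                     double lr, double lambda) {
    for (size_t j = 0; j < n; j++)
        w[j] -= lr * (grad[j] + lambda * w[j]);
}

Keeping the penalty in exactly one place also avoids applying it twice once the optimizer decays weights on its own.
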
Al | 46cd725c13 | 2016-08-06 00:40:01 -04:00 | [math] Generic dense matrix implementation using BLAS calls for matrix-matrix multiplication if available
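
The usual shape of the "BLAS if available" dispatch in 46cd725c13, sketched under the assumption of a row-major layout and a hypothetical HAVE_CBLAS configure macro:

#include <stddef.h>
#ifdef HAVE_CBLAS
#include <cblas.h>
#endif

/* C = A * B for row-major dense matrices: A is m x k, B is k x n.
 * Uses BLAS dgemm when compiled with -DHAVE_CBLAS, otherwise a
 * plain triple loop. */
void mat_mul(const double *A, const double *B, double *C,
             size_t m, size_t k, size_t n) {
#ifdef HAVE_CBLAS
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                (int)m, (int)n, (int)k,
                1.0, A, (int)k, B, (int)n,
                0.0, C, (int)n);
#else
    for (size_t i = 0; i < m; i++)
        for (size_t j = 0; j < n; j++) {
            double s = 0.0;
            for (size_t p = 0; p < k; p++)
                s += A[i * k + p] * B[p * n + j];
            C[i * n + j] = s;
        }
#endif
}
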
Al | ababb8f2d0 | 2016-01-26 01:16:16 -05:00 | [fix] Sign comparison in regularized gradient computation for logistic regression
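
The message for ababb8f2d0 does not show the offending line, but the bug class is comparing a signed index against an unsigned size; an illustrative before/after:

#include <stddef.h>

/* Summing the L2 penalty term over n weights. */
double l2_penalty(const double *w, size_t n) {
    double s = 0.0;
    /* Before (illustrative): `for (int j = 0; j < n; j++)` compares a
     * signed int with an unsigned size_t, which triggers -Wsign-compare
     * and misbehaves once n exceeds INT_MAX. */
    for (size_t j = 0; j < n; j++)   /* after: index type matches n */
        s += w[j] * w[j];
    return s;
}
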
Al | f808f74271 | 2016-01-17 21:11:37 -05:00 | [language_classification] Automatic hyperparameter optimization using either the cross-validation set or two distinct subsets of the training set
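
In outline, f808f74271 describes a selection loop: train candidate models on one set, score them on a held-out set (a dedicated cross-validation set, or a second subset carved out of the training data), and keep the winner. A sketch with caller-supplied callbacks; every name here is illustrative, not the project's API:

#include <stddef.h>

typedef void  *(*train_fn)(const void *train_set, double lambda);
typedef double (*score_fn)(const void *model, const void *heldout_set);

/* Try each candidate regularization strength; return the one whose
 * model scores best on the held-out data. */
double select_lambda(const void *train_set, const void *heldout_set,
                     const double *candidates, size_t n_candidates,
                     train_fn train, score_fn score) {
    double best_lambda = 0.0, best_score = 0.0;
    for (size_t i = 0; i < n_candidates; i++) {
        void *model = train(train_set, candidates[i]);
        double s = score(model, heldout_set);
        if (i == 0 || s > best_score) {
            best_score  = s;
            best_lambda = candidates[i];
        }
        /* model ownership/cleanup elided in this sketch */
    }
    return best_lambda;
}
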
Al | 62017fd33d | 2016-01-09 03:37:31 -05:00 | [optimization] Use sparse updates in stochastic gradient descent: decompose each update into the gradient of the loss function (zero for features not observed in the current batch) and the gradient of the regularization term. In L2-regularized models the regularization derivative is equivalent to an exponential decay of the weights, so before computing the gradient for the current batch we bring the weights up to date only for the features observed in that batch, and update only those values
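
A minimal sketch of the lazy-update scheme 62017fd33d describes, assuming a plain SGD step with learning rate lr and L2 strength lambda: each skipped step would have multiplied a weight by (1 - lr * lambda), so per-feature timestamps let the accumulated decay be applied only when a feature actually appears in a batch.

#include <math.h>
#include <stddef.h>

typedef struct {
    double *w;          /* weights */
    long   *last_step;  /* step at which w[j] was last brought up to date */
    long    step;       /* global step counter, advanced once per batch */
    double  lr, lambda;
} lazy_sgd;

/* Bring w[j] up to date, then apply the loss gradient for feature j. */
static void lazy_update(lazy_sgd *s, size_t j, double loss_grad_j) {
    double decay  = 1.0 - s->lr * s->lambda;
    long   missed = s->step - s->last_step[j];
    if (missed > 0)
        s->w[j] *= pow(decay, (double)missed);  /* accumulated L2 decay */
    s->w[j] -= s->lr * loss_grad_j;             /* sparse loss term */
    s->last_step[j] = s->step;
}

Before reading weights out (for instance at the end of training), the same decay has to be applied one last time to every feature so that rarely seen weights match what the dense computation would have produced.
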
Al | 562cc06eaf | 2016-01-09 01:33:33 -05:00 | [classification] Sparse version of the logistic regression gradient which, given an array of the features/columns used in the input batch, updates the gradient only for those columns, even for the operations that would otherwise apply to the entire matrix (scaling by -1/m, regularization)
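
A sketch of the finalization step 562cc06eaf describes, with illustrative names: only the columns listed as active in the batch receive the -1/m scaling and the regularization term, so the dense tail of the weight vector is never touched.

#include <stddef.h>

/* grad[] holds the summed loss contributions for the batch; finalize
 * only the active columns: apply the -1/m factor from the cost
 * function and add the L2 term for those entries alone. */
static void finalize_sparse_gradient(double *grad, const double *w,
                                     const size_t *cols, size_t n_cols,
                                     size_t m, double lambda) {
    for (size_t i = 0; i < n_cols; i++) {
        size_t j = cols[i];
        grad[j] = -grad[j] / (double)m   /* scale only active columns */
                + lambda * w[j];         /* regularize only those too */
    }
}
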
Al | 4acf10c3a4 | 2016-01-08 01:03:09 -05:00 | [classification] Multinomial logistic regression, gradient and cost function
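
For reference, the textbook softmax formulation that "multinomial logistic regression, gradient and cost function" usually denotes; the exact sign and regularization conventions in 4acf10c3a4 may differ:

\[
p_k(x) = \frac{\exp(w_k^\top x)}{\sum_{c=1}^{K} \exp(w_c^\top x)}
\]
\[
J(W) = -\frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K}
       \mathbf{1}\{y^{(i)} = k\} \log p_k\!\left(x^{(i)}\right)
       + \frac{\lambda}{2}\,\lVert W \rVert_2^2
\]
\[
\nabla_{w_k} J = \frac{1}{m} \sum_{i=1}^{m}
       \left( p_k\!\left(x^{(i)}\right) - \mathbf{1}\{y^{(i)} = k\} \right) x^{(i)}
       + \lambda w_k
\]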