Pass a custom evaluation metric to LightGBM

Why monitor a custom evaluation metric?

Sometimes you’re working with a highly specific evaluation metric which can’t be used as a loss function (for example, because it isn’t differentiable). So you might want to:

  • train using some standard loss function, such as L2 loss
  • choose your hyperparameters (such as n_estimators) based on the best validation score, according to your custom evaluation metric.

How do we do this?

You’ll need to define a function which takes, as arguments:

  • your dataset’s true labels
  • your model’s predictions

and returns, as a 3-tuple:

  • your custom metric’s name
  • the value of your custom metric, evaluated on those inputs
  • whether your custom metric is something which you want to maximise (True) or minimise (False)
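Concretely, here’s a minimal sketch of that shape, using a hypothetical median-absolute-error metric (the name and implementation are just for illustration):

```python
import numpy as np

def median_abs_error(y_true, y_pred):
    """Hypothetical custom metric: median absolute error."""
    value = np.median(np.abs(y_true - y_pred))
    # Return (metric_name, metric_value, is_higher_better);
    # False because a smaller error is better.
    return "median_abs_error", value, False
```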

Let’s see an example!

Here, I train LightGBM on the breast_cancer dataset from sklearn, and choose n_estimators based on which value delivers the best negative correlation coefficient.

Training LGBM, monitoring custom eval metric
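A minimal sketch of such a training run, assuming a recent LightGBM version (where early stopping is configured via a callback); the metric name, train/validation split, and hyperparameters below are illustrative, not the original gist:

```python
import lightgbm as lgb
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

def neg_correlation(y_true, y_pred):
    """Custom metric: negative Pearson correlation between the true
    labels and the predictions, so lower values are better."""
    corr = np.corrcoef(y_true, y_pred)[0, 1]
    return "neg_correlation", -corr, False

X, y = load_breast_cancer(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

# Train with the default L2 loss; the string "None" disables the default
# l2 evaluation metric so that only the custom metric is monitored.
model = lgb.LGBMRegressor(n_estimators=500, metric="None")
model.fit(
    X_train,
    y_train,
    eval_set=[(X_valid, y_valid)],
    eval_metric=neg_correlation,
    callbacks=[lgb.early_stopping(stopping_rounds=50)],
)

# The iteration with the best validation score, i.e. the value you'd
# pick for n_estimators.
print(model.best_iteration_)
```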

Where can I find this in the docs?

A description of the eval_metric parameter can be found in the documentation for LGBMRegressor.fit: https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMRegressor.html#lightgbm.LGBMRegressor.fit

Conclusion

We learned how to pass a custom evaluation metric to LightGBM. This is useful when you have a task with an unusual evaluation metric which you can’t use as a loss function. Now go out and train a model using a customised evaluation metric!
