Pass a custom evaluation metric to LightGBM

Marco Gorelli
2 min read · Jan 22, 2022

Sometimes, the defaults don’t cover your use-case

Why monitor a custom evaluation metric?

Sometimes, you’re working with a custom evaluation metric which can’t be used as a loss function (for example, because it isn’t differentiable). So you might want to:

  • train using some standard loss function, such as L2 loss
  • choose your hyperparameters (such as n_estimators) based on the best cross-validation score, according to your custom evaluation metric.

How do we do this?

You’ll need to define a function which takes, as arguments:

  • your dataset’s true labels
  • your model’s predictions

and which returns:

  • your custom metric’s name
  • the value of your custom metric, evaluated on those inputs
  • a boolean indicating whether higher values of your metric are better (i.e. whether it should be maximised or minimised)

If this is unclear, don’t worry, we’re about to see an example (def neg_correlation, sketched just below).
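Here’s a minimal sketch of such a function, assuming we want to monitor the negative Pearson correlation between labels and predictions (so lower values are better):

```python
import numpy as np


def neg_correlation(y_true, y_pred):
    # Negative Pearson correlation between true labels and predictions.
    # LightGBM's sklearn API expects the tuple
    # (eval_name, eval_result, is_higher_better);
    # lower is better here, so is_higher_better is False.
    corr = np.corrcoef(y_true, y_pred)[0, 1]
    return "neg_correlation", -corr, False
```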

Let’s see an example!

Here, I train LightGBM on the breast_cancer dataset from sklearn, and choose n_estimators based on which value delivers the best (lowest) negative correlation coefficient on the validation set.

Training LGBM, monitoring custom eval metric
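Here’s a minimal sketch of the training step, reusing neg_correlation from above. The train/validation split, the random seed, and n_estimators=100 are illustrative choices, not prescriptions:

```python
import lightgbm as lgb
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Assumes neg_correlation (defined above) is in scope.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

# Train with the default L2 loss, but monitor the custom metric
# on the validation set at every boosting round.
model = lgb.LGBMRegressor(n_estimators=100)
model.fit(
    X_train,
    y_train,
    eval_set=[(X_val, y_val)],
    eval_metric=neg_correlation,
)

# evals_result_ records neg_correlation at each boosting round,
# so we can pick the n_estimators with the best (lowest) score.
scores = model.evals_result_["valid_0"]["neg_correlation"]
best_n_estimators = int(np.argmin(scores)) + 1
print(f"Best n_estimators: {best_n_estimators}")
```

Note that the model still trains by minimising the L2 loss; neg_correlation is only monitored on the validation set, which is exactly what we want when the metric isn’t suitable as a loss.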

Where can I find this in the docs?

A description of the eval_metric parameter can be found here: https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMRegressor.html#lightgbm.LGBMRegressor.fit

However, there’s no example there. I hope this one saves you some time if you ever need to do this yourself.

Conclusion

We learned how to pass a custom evaluation metric to LightGBM. This is useful when your task’s evaluation metric can’t be used as a loss function. Now go out and train a model using a customised evaluation metric!
