Custom loss functions let you tailor exactly what your deep learning models optimize for. While popular loss functions like Mean Squared Error (MSE) and Categorical Cross-Entropy work for most tasks, sometimes you need to reflect specific business priorities or research goals in how your model learns. This is where custom loss functions shine: you can penalize errors differently, combine multiple objectives, or handle special data scenarios.
In this guide, you’ll learn everything you need to create, save, and use a custom loss function in TensorFlow on your Ubuntu 24.04 GPU server.
Why Create a Custom Loss Function?
Before you jump in, it’s important to know why custom loss functions are valuable for machine learning projects.
A loss function tells your model how far off its predictions are from the ground truth. While standard loss functions work for many scenarios, sometimes your problem has unique requirements:
- You might want to punish certain mistakes more than others (e.g., false negatives in medical diagnoses).
- You may need to mix losses (e.g., combine MSE and MAE).
- You could have imbalanced data, unusual output formats, or business-specific priorities.
In all these cases, designing your own loss function lets you shape your model’s learning in exactly the way you need.
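For example, here is a minimal sketch of the first idea: a binary cross-entropy variant that penalizes missed positives (potential false negatives) more heavily than false alarms. The fn_weight parameter is purely illustrative, and you will set up an environment to run code like this in the steps below.

import tensorflow as tf

def false_negative_heavy_loss(y_true, y_pred, fn_weight=5.0):
    """Binary cross-entropy that weights errors on positive samples more heavily."""
    y_true = tf.cast(y_true, y_pred.dtype)
    # Clip predictions for numerical stability before taking logs
    y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
    # Element-wise binary cross-entropy
    bce = -(y_true * tf.math.log(y_pred) + (1.0 - y_true) * tf.math.log(1.0 - y_pred))
    # Positive samples (where a miss would be a false negative) get a larger weight
    weights = y_true * fn_weight + (1.0 - y_true)
    return tf.reduce_mean(weights * bce)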
Prerequisites
- An Ubuntu 24.04 server with an NVIDIA GPU.
- A non-root user with sudo privileges.
- NVIDIA drivers installed on your server (you can verify this as shown below).
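You can confirm that the driver is loaded before you start:

nvidia-smi

If the command prints a table listing your GPU, the driver is working.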
Step 1: Setting Up the Python Environment
1. First, install the required dependencies.
sudo apt install -y software-properties-common
2. Add the deadsnakes PPA, which provides Python 3.10 packages for Ubuntu 24.04.
sudo add-apt-repository ppa:deadsnakes/ppa
3. Install Python 3.10 with additional libraries.
sudo apt install -y python3.10 python3.10-venv python3-pip python3.10-dev
4. Verify the Python installation.
python3.10 --version
Output.
Python 3.10.18
Step 2: Install TensorFlow
1. First, create a virtual environment for your project.
python3.10 -m venv tf-venv
2. Activate the virtual environment.
source tf-venv/bin/activate
3. Update pip to the latest version.
pip install --upgrade pip
4. Install TensorFlow.
pip install tensorflow
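Optionally, confirm that TensorFlow can see your GPU before moving on:

python3.10 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

If the GPU is detected, the printed list contains at least one PhysicalDevice entry with device type GPU.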
Step 3: Defining Your Custom Loss Function
Now, let’s create a Python file that contains your custom loss logic. This keeps your code organized and reusable.
nano custom_loss.py
Add the following code:
import tensorflow as tf

def mae_mse_loss(y_true, y_pred):
    """
    Combine Mean Absolute Error (MAE) and Mean Squared Error (MSE).
    Returns: scalar loss value.
    """
    mae = tf.reduce_mean(tf.abs(y_true - y_pred))
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    return mae + mse

class WeightedMAEMSE(tf.keras.losses.Loss):
    """
    Custom loss: weighted sum of MAE and MSE.
    """
    def __init__(self, mae_weight=0.5, mse_weight=0.5, name="weighted_mae_mse"):
        super().__init__(name=name)
        self.mae_weight = mae_weight
        self.mse_weight = mse_weight

    def call(self, y_true, y_pred):
        mae = tf.reduce_mean(tf.abs(y_true - y_pred))
        mse = tf.reduce_mean(tf.square(y_true - y_pred))
        return self.mae_weight * mae + self.mse_weight * mse
Explanation:
- mae_mse_loss: Combines MAE and MSE for a balanced regression loss.
- WeightedMAEMSE: Lets you control the weighting between MAE and MSE by changing parameters.
Run the script to confirm it has no syntax or import errors. Because the file only defines the two losses, it produces no output of its own.
python3.10 custom_loss.py
Note: Any TensorFlow startup warnings here can be safely ignored.
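As an optional sanity check, you can evaluate both losses on a couple of hand-made tensors from a Python shell inside the virtual environment (run it in the same directory as custom_loss.py):

import tensorflow as tf
from custom_loss import mae_mse_loss, WeightedMAEMSE

y_true = tf.constant([[1.0], [2.0]])
y_pred = tf.constant([[1.5], [1.0]])

# MAE = 0.75 and MSE = 0.625, so the combined loss should be 1.375
print(mae_mse_loss(y_true, y_pred).numpy())

# 0.7 * 0.75 + 0.3 * 0.625 = 0.7125
print(WeightedMAEMSE(mae_weight=0.7, mse_weight=0.3)(y_true, y_pred).numpy())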
Step 4: Building and Training a Model with the Custom Loss
With your custom loss ready, let’s see how to plug it into a real model. This file will set up the data, build the model, and train it.
nano train_with_custom_loss.py
Add the following code:
import numpy as np
import tensorflow as tf
from custom_loss import mae_mse_loss, WeightedMAEMSE
# Generate toy regression data
x = np.random.rand(1000, 10)
y = np.sum(x, axis=1, keepdims=True) + np.random.normal(0, 0.1, (1000, 1)) # target: sum + noise
# Build a simple model
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1)
])
# Compile model with the function-based custom loss
model.compile(optimizer='adam', loss=mae_mse_loss)
print("Training model with function-based custom loss (MAE + MSE)...")
model.fit(x, y, epochs=5, batch_size=32)
# Now use the class-based loss with custom weights
custom_loss = WeightedMAEMSE(mae_weight=0.7, mse_weight=0.3)
model.compile(optimizer='adam', loss=custom_loss)
print("Training model with class-based custom loss (Weighted MAE + MSE)...")
model.fit(x, y, epochs=5, batch_size=32)
This script imports your custom loss and trains a small regression model using both function-based and class-based losses.
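If you want to compare the custom loss against familiar numbers during training, you can optionally track standard metrics alongside it. This is a small tweak to the second compile() call in the script above:

# Optional: report plain MAE and MSE alongside the weighted custom loss
model.compile(optimizer='adam', loss=custom_loss, metrics=['mae', 'mse'])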
Run the script.
python3.10 train_with_custom_loss.py
The output shows the training progress and decreasing loss values, confirming that your custom loss works:
Training model with function-based custom loss (MAE + MSE)...
Epoch 1/5
32/32 ━━━━━━━━━━━━━━━━━━━━ 1s 10ms/step - loss: 27.3566
Epoch 2/5
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 15.0752
Epoch 3/5
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 4.5361
Epoch 4/5
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.3888
Epoch 5/5
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.3136
Training model with class-based custom loss (Weighted MAE + MSE)...
Epoch 1/5
32/32 ━━━━━━━━━━━━━━━━━━━━ 1s 10ms/step - loss: 0.1630
Epoch 2/5
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.1055
Epoch 3/5
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.0864
Epoch 4/5
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.0756
Epoch 5/5
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.0695
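If you also want to save the trained model, keep in mind that Keras cannot reconstruct a custom loss on its own when it reloads the file. A simple, safe pattern is to load the model without compiling and then recompile it with your loss. Here is a minimal sketch you could append to train_with_custom_loss.py, using the illustrative filename model.keras:

# Save the trained model in the native Keras format
model.save("model.keras")

# Reload without compiling, then recompile with the custom loss
loaded = tf.keras.models.load_model("model.keras", compile=False)
loaded.compile(optimizer='adam', loss=WeightedMAEMSE(mae_weight=0.7, mse_weight=0.3))
print("Reloaded model loss:", loaded.evaluate(x, y, verbose=0))

Alternatively, you can pass custom_objects={"WeightedMAEMSE": WeightedMAEMSE} to load_model, provided the loss class can be rebuilt from its saved configuration.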
Conclusion
Custom loss functions are a powerful tool for AI engineers and researchers. With them, you go beyond basic accuracy and make your models optimize for what matters most to your task, whether that’s a business metric, safety, or a scientific goal.
In this article, you learned not just the “how” but also the “why” of custom losses. You saw how to structure your code with separate files, use both function-based and class-based approaches, and integrate your loss into a TensorFlow training loop.