Mean Squared Error (MSE) in Machine Learning: Formula, RMSE, MAE, and R-Squared

Complete MSE guide for 2026. Formula, Python example, when MSE beats MAE or RMSE, R-squared comparison, outlier sensitivity, neural network loss use cases.


Mean Squared Error (MSE) is one of the most common metrics and training losses for evaluating regression models. It is the average of the squared differences between predicted and actual values, and it is the loss function that gradient descent minimizes when you train a regression network. This guide covers the MSE formula, a working Python example, when to prefer MSE over MAE or RMSE, the relationship to R-squared, and how MSE behaves in neural networks and ensemble methods.

TL;DR: MSE, RMSE, MAE, and R-Squared at a Glance

| Metric | Formula | Units | Outlier sensitivity | When to use |
| --- | --- | --- | --- | --- |
| MSE | mean of (y - y_hat) squared | Target units squared | High | Default regression loss, gradient descent |
| RMSE | square root of MSE | Target units | High | Reporting error in original units |
| MAE | mean of absolute (y - y_hat) | Target units | Low | Outlier-robust regression error |
| R-squared | 1 minus MSE divided by target variance | Unitless (1 perfect, 0 mean predictor, negative is worse) | Indirect | Communicating fit quality across teams |
| Huber loss | Quadratic for small errors, linear for large | Mixed scale (delta-dependent) | Medium | Mix of MSE smoothness and MAE robustness |

What Is Mean Squared Error

Definition and Formula

Mean Squared Error is defined as:

MSE = (1 / n) * sum from i = 1 to n of (y_i - y_hat_i)^2

Where:

  • y_i is the actual value from the dataset
  • y_hat_i is the predicted value from the model
  • n is the number of observations

The squaring step ensures both over-predictions and under-predictions contribute positively, and it weights larger deviations more heavily than smaller ones. The result is non-negative, has the units of the target variable squared, and equals zero only when every prediction is exactly correct.

Bar chart comparing actual vs predicted values in Mean Squared Error (MSE) calculation for machine learning regression model evaluation.

How MSE Differs from MAE and RMSE

MAE (Mean Absolute Error) takes the absolute value of each error instead of squaring it. That keeps the metric in the original units and treats all errors linearly, so a single 10-unit error contributes the same as ten 1-unit errors. MAE is therefore more robust to outliers but harder to optimize with gradient methods because it is not differentiable at zero.

RMSE (Root Mean Squared Error) is the square root of MSE. It rescales MSE back into the original units of the target, which makes it easier to communicate. RMSE and MSE rank models identically (lower MSE always means lower RMSE), so they are interchangeable for model selection.
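To make the outlier sensitivity concrete, here is a small sketch (with made-up numbers) showing how a single large miss moves MSE, RMSE, and MAE differently:

```python
import numpy as np

y_true = np.array([5.0, 7.0, 9.0, 11.0, 13.0])
y_pred_clean = np.array([6.0, 6.0, 10.0, 10.0, 14.0])  # every error is 1 unit
y_pred_outlier = y_pred_clean.copy()
y_pred_outlier[-1] = 24.0                               # one 11-unit miss

def mse(y, y_hat):
    return np.mean((y - y_hat) ** 2)

def mae(y, y_hat):
    return np.mean(np.abs(y - y_hat))

for label, y_hat in [("clean", y_pred_clean), ("one outlier", y_pred_outlier)]:
    m = mse(y_true, y_hat)
    print(f"{label}: MSE={m:.2f}, RMSE={np.sqrt(m):.2f}, MAE={mae(y_true, y_hat):.2f}")
```

The single 11-unit miss multiplies MSE by 25 but MAE only by 3, which is the practical difference between the two metrics.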

Why MSE Matters in Machine Learning

Measuring Regression Model Accuracy

MSE provides a single, quantitative number that summarizes how close predictions are to the truth. Lower MSE means predictions are tighter around the actual values. Higher MSE means the model is off, sometimes systematically and sometimes only on a few high-leverage points.

Classic use cases include:

  • Stock price prediction. MSE measures how far model output is from realized prices. Squaring penalizes blowups more than chronic small drift, which matters when a single bad day can wipe out a quarter.
  • Sales forecasting. A retailer evaluates monthly forecasts with MSE to flag regions and seasons where prediction error is concentrated.
  • Weather prediction. Meteorologists track MSE of temperature and rainfall forecasts to decide when a model is good enough to release publicly.

MSE as a Loss Function in Neural Networks

In deep learning, MSE is a common regression loss exposed by every major framework. PyTorch exposes it as torch.nn.MSELoss and TensorFlow as tf.keras.losses.MeanSquaredError. The training loop computes MSE on a mini-batch, takes the gradient with respect to model parameters, and updates the parameters with an optimizer like Adam or SGD.

Three properties make MSE well behaved for gradient descent:

  • Smoothness. The squared-error surface is differentiable everywhere, which keeps the gradient stable.
  • Convexity in linear models. For linear regression, the MSE surface is globally convex, so gradient descent converges to the unique optimum.
  • Strong penalty on outliers. The squared term keeps the model focused on large errors during training. This is helpful when large errors are costly and unhelpful when they are noise.

Insights MSE Provides About Model Performance

MSE alone tells you the average squared error. Pairing MSE with residual analysis reveals where the model struggles:

  • A handful of large residuals dragging up MSE usually points to outliers, leverage points, or missing features for a sub-population.
  • A flat residual plot with high MSE points to underfitting.
  • A residual plot that fans out with increasing predictions points to heteroscedasticity. One global MSE hides range-dependent error patterns in that case, so segmented metrics or residual plots are required to diagnose where the model is failing.
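One way to surface range-dependent error is to segment MSE by prediction range. A minimal sketch on synthetic heteroscedastic data (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
y_pred = rng.uniform(0, 100, size=1000)
# Synthetic heteroscedastic residuals: noise grows with the prediction.
y_true = y_pred + rng.normal(0, 0.1 * y_pred)

residuals = y_true - y_pred
print("global MSE:", np.mean(residuals ** 2))

# Segment MSE by prediction range to expose the fan-out pattern numerically.
bins = np.array([0, 25, 50, 75, 100])
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (y_pred >= lo) & (y_pred < hi)
    print(f"predictions in [{lo}, {hi}): MSE = {np.mean(residuals[mask] ** 2):.2f}")
```

Segmented MSE climbing with the prediction range is the numeric signature of a fanning-out residual plot.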

How to Calculate MSE: Step-By-Step

To compute MSE by hand:

  1. Subtract the predicted value from the actual value for each observation.
  2. Square each difference.
  3. Sum the squared differences.
  4. Divide by the number of observations.

Example:

  • Actual values: [5, 7, 9]
  • Predicted values: [6, 6, 10]
  • Errors: [-1, 1, -1]
  • Squared errors: [1, 1, 1]
  • MSE: (1 + 1 + 1) / 3 = 1.0

Working Python Example with NumPy and Scikit-Learn

import numpy as np
from sklearn.metrics import mean_squared_error

y_actual = np.array([5, 7, 9])
y_predicted = np.array([6, 6, 10])

# Manual calculation
mse_manual = np.mean((y_actual - y_predicted) ** 2)

# Scikit-learn equivalent
mse_sklearn = mean_squared_error(y_actual, y_predicted)

rmse = np.sqrt(mse_sklearn)
print(f"MSE: {mse_manual:.4f}")
print(f"MSE (sklearn): {mse_sklearn:.4f}")
print(f"RMSE: {rmse:.4f}")

mean_squared_error is the standard reference implementation. Use it for production code rather than rolling your own to avoid edge-case bugs with empty arrays or NaN handling.

Common Pitfalls in MSE Calculation

  • Outliers. A single 100-unit error contributes 10,000 to the sum of squares, which dwarfs ninety-nine 1-unit errors that total only 99. Inspect residuals before trusting MSE.
  • Scale dependency. MSE is in the squared units of the target. Comparing MSE across models that predict different targets is meaningless without normalization.
  • Train versus test split. Always report MSE on a held-out test set, not the training set, to detect overfitting.
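The train-versus-test pitfall is easy to demonstrate: an unconstrained decision tree can drive training MSE to zero while test MSE stays at the noise level. A sketch with synthetic data:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, size=300)  # noisy sine target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# A depth-unlimited tree memorizes the training set.
tree = DecisionTreeRegressor(random_state=42).fit(X_train, y_train)
train_mse = mean_squared_error(y_train, tree.predict(X_train))
test_mse = mean_squared_error(y_test, tree.predict(X_test))
print(f"train MSE: {train_mse:.4f}  test MSE: {test_mse:.4f}")
```

The near-zero training MSE says nothing about generalization; only the held-out number does.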

How to Interpret MSE Results

High MSE

A high MSE indicates large prediction errors on average. Common causes:

  • Underfitting (model is too simple for the relationship)
  • Inadequate feature engineering or missing predictors
  • Data quality issues like mislabeled targets, missing values, or measurement noise

Low MSE

A low MSE indicates predictions are close to actual values. Always confirm the score on test data; a very low training MSE can hide overfitting. Cross-validation and a held-out test set give a more honest read.

Balancing MSE with RMSE and R-Squared

R-squared rescales error into a unitless fit score where 1 is perfect, 0 matches the mean predictor, and negative values mean the model is worse than the mean baseline. R-squared = 1 minus (MSE / variance of target). For a clear walkthrough see R-squared model accuracy. RMSE rescales MSE back into the target units. Most reports include all three, plus a residual plot for diagnostics.
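The identity is easy to verify in code. This sketch computes R-squared from MSE and the population variance of the target, then checks it against scikit-learn's r2_score:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([5.0, 7.0, 9.0, 11.0])
y_pred = np.array([6.0, 6.0, 10.0, 10.0])

mse = mean_squared_error(y_true, y_pred)
# np.var defaults to the population form (ddof=0), which the identity requires.
r2_from_mse = 1 - mse / np.var(y_true)

print(r2_from_mse, r2_score(y_true, y_pred))
```

Both paths give the same value because the 1/n factors in MSE and in the population variance cancel in the ratio.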

Practical Applications of MSE

Forecasting and Time Series

Time-series models use MSE to track forecast error against realized values. A retail chain that forecasts monthly sales by region uses MSE to flag regions where forecast error has grown, often a signal of missing seasonal or regional features. For drift over time see model vs data drift.

Pricing and Recommendation Models

E-commerce platforms use MSE to evaluate predicted optimal prices against actual customer behavior. Recommendation engines that predict ratings (Netflix-style 1-to-5 stars before they switched to thumbs) use MSE on the predicted rating vector against held-out ratings.

Computer Vision Regression Tasks

CNNs that regress bounding-box coordinates or pixel values use MSE on the coordinate vector. Object-detection losses like the Smooth L1 loss in Fast R-CNN combine MSE for small errors and MAE for large errors, which avoids exploding gradients while keeping smooth optimization.

Comparing Models Using MSE: Linear Regression vs Decision Trees vs Random Forests

A data scientist testing three regression models on housing-price prediction might see:

| Model | MSE | RMSE |
| --- | --- | --- |
| Linear Regression | 120,000 | ~346 |
| Decision Tree | 95,000 | ~308 |
| Random Forest | 80,000 | ~283 |

The Random Forest has the lowest MSE, so it is the best fit on this dataset. Always confirm the ranking on a cross-validation split before deploying. Hyperparameter tuning (tree depth, learning rate, regularization strength) typically continues to lower MSE until it plateaus or test MSE starts to climb (overfitting signal).

MSE in Optimization Algorithms: Gradient Descent

Gradient descent uses MSE as the objective function for regression. For each mini-batch, the algorithm computes the gradient of MSE with respect to model parameters and steps in the negative-gradient direction. Over many iterations the parameters settle into a minimum.

The intuition is a landscape where MSE is the elevation. Gradient descent is a ball that rolls downhill, and each step is an update to model parameters. With an appropriate learning rate, gradient descent on convex MSE surfaces (like linear regression) converges to the global minimum. Non-convex surfaces (like deep neural networks) have many local minima, and the optimizer settles into one of them.
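As a concrete illustration, this sketch runs plain gradient descent on the MSE surface of a one-feature linear regression (synthetic data with true slope 3 and intercept 2):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 0.1, size=200)

w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    error = w * X[:, 0] + b - y
    # Gradients of MSE = mean(error^2): d/dw = 2*mean(error*x), d/db = 2*mean(error)
    grad_w = 2 * np.mean(error * X[:, 0])
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"w={w:.3f}, b={b:.3f}, MSE={np.mean((w * X[:, 0] + b - y) ** 2):.5f}")
```

Because the surface is convex, the learned parameters land near the true slope and intercept regardless of the starting point, up to the noise in the data.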

MSE in Deep Learning Practice

For continuous-output deep networks, MSE is the default training loss. For image-to-image regression (super-resolution, denoising), MSE is often combined with perceptual losses because pixel-wise MSE alone produces blurry outputs. For tabular regression, MSE is usually sufficient.

Pro Tips for Using MSE Effectively

  • Normalize features so that scales do not bias gradient updates. MSE is sensitive to feature scaling because parameters with larger feature scales also have larger gradients.
  • Pair MSE with residual plots to catch heteroscedasticity, outliers, and systematic bias that the summary number hides.
  • Combine MSE with domain knowledge. A model with low MSE that misses a known operational constraint (negative prices, missing-class predictions) is not deployable, even if the average error looks good.
  • Use Huber loss when you need MSE-like smoothness for small errors and MAE-like robustness for large ones. Huber is the standard middle-ground choice.
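For reference, Huber loss is simple to sketch in NumPy: quadratic inside a delta threshold, linear outside it:

```python
import numpy as np

def huber(residuals, delta=1.0):
    """Quadratic for |r| <= delta, linear beyond; continuous and once-differentiable."""
    r = np.abs(residuals)
    quadratic = 0.5 * r ** 2
    linear = delta * (r - 0.5 * delta)
    return np.where(r <= delta, quadratic, linear)

residuals = np.array([0.5, -0.5, 10.0])
print(huber(residuals))       # small errors stay quadratic, large error grows linearly
print(0.5 * residuals ** 2)   # pure squared-error terms for comparison
```

With delta = 1, the 10-unit residual contributes 9.5 instead of the 50 a pure squared-error term would, which is the robustness this tip describes.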

Advantages and Limitations of MSE

Advantages

  • Penalizes larger errors heavily, which is the right behavior when large errors are costly.
  • Smooth and differentiable, which makes it well behaved for gradient descent.
  • Convex in linear models, which guarantees gradient descent finds the global minimum.

Limitations

  • Sensitive to outliers (a single outlier can dominate the metric).
  • Squared units complicate direct interpretation (use RMSE to recover original units).
  • Not the right metric for classification (use cross-entropy or F1, depending on the task; see F1 score).
  • Not the right metric for highly skewed targets (consider log-transforming the target).
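For the skewed-target case, scoring on a log1p-transformed scale keeps a few large observations from dominating. A sketch with invented numbers:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Hypothetical right-skewed target (e.g. prices): one large value dominates raw MSE.
y_true = np.array([10.0, 12.0, 15.0, 11.0, 5000.0])
y_pred = np.array([11.0, 11.0, 14.0, 12.0, 4000.0])

mse_raw = mean_squared_error(y_true, y_pred)
# Scoring log1p-transformed values weights relative rather than absolute error.
mse_log = mean_squared_error(np.log1p(y_true), np.log1p(y_pred))
print(mse_raw, mse_log)
```

The raw MSE is driven almost entirely by the single large observation, while the log-scale score reflects error across the whole range.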

Where MSE Fits in LLM and Agent Evaluation

MSE is a numeric-output metric, which makes it a poor fit for LLM evaluation where outputs are free-form text. LLM evals instead use LLM-as-a-judge metrics like faithfulness, instruction-following, and toxicity, plus deterministic checks for things like schema conformance. See deterministic LLM evaluation metrics for a survey.

The Future AGI ai-evaluation library (Apache 2.0 on GitHub) covers the LLM and agent side of the same problem space MSE solves for regression: scoring model outputs against ground truth at scale. The closest direct analog is wrapping a custom regression scorer through CustomLLMJudge for cases where the LLM extracts numeric fields that you want to score with MSE-like metrics.

# Requires: pip install ai-evaluation
# Env: FI_API_KEY, FI_SECRET_KEY
from fi.evals import evaluate

# Faithfulness scoring on an LLM answer that quoted numbers from a source doc.
result = evaluate(
    "faithfulness",
    output="The forecasted Q3 revenue is 2.4M USD with a 12 percent confidence band.",
    context="Q3 revenue forecast: 2.4M USD, confidence interval +/- 12 percent.",
    model="turing_flash",
)

print(result.score, result.reason)

Summary: When MSE Wins and When to Switch to MAE or Huber

MSE is the default metric for regression in classical ML and the default loss function for regression neural networks. It is smooth, well behaved for gradient descent, and the right call when large errors are disproportionately costly. Switch to MAE when outliers are noise rather than signal, Huber loss when you want a balance, and R-squared when you need a unitless score that translates across stakeholders. For classification and LLM outputs, MSE does not apply; use cross-entropy and LLM-as-a-judge metrics instead.

Frequently asked questions

What is Mean Squared Error in machine learning?
Mean Squared Error is a regression metric defined as the average of the squared differences between predicted and actual values. Lower MSE indicates a closer fit. Because the errors are squared, MSE penalizes a single large error more than several small errors of the same total magnitude, which makes it sensitive to outliers and well suited as a training loss for gradient descent.
What is the formula for MSE?
MSE equals one over n times the sum from i equals 1 to n of (y_i minus y_hat_i) squared, where y_i is the actual value, y_hat_i is the predicted value, and n is the number of observations. The result is a non-negative number in the squared units of the target variable, so a sales-prediction model in dollars produces MSE in dollars squared.
What is the difference between MSE, RMSE, and MAE?
MSE squares the errors before averaging. RMSE is the square root of MSE and is back in the original units of the target, which makes it easier to interpret. MAE is the average absolute error and treats all errors linearly, which makes it more robust to outliers than MSE or RMSE. Pick MSE or RMSE when large errors are disproportionately costly; pick MAE when outliers are noise rather than signal.
What does a low MSE indicate?
A low MSE indicates that the model's predictions are close to the actual values on the dataset you measured. Always validate the same MSE on held-out test data, because a very low training MSE can hide overfitting. Pair MSE with R-squared, residual plots, or cross-validation before deciding the model is production-ready.
Is MSE used in neural networks?
Yes. MSE is a common regression loss exposed by PyTorch (`torch.nn.MSELoss`), TensorFlow (`tf.keras.losses.MeanSquaredError`), and most other frameworks. Gradient descent uses the gradient of MSE with respect to weights to update parameters during training, and the squared-error surface is smooth and differentiable everywhere, which makes it well behaved for optimizers like Adam and SGD.
When should you not use MSE?
Avoid MSE when outliers are noise rather than signal, because the squaring step makes one large outlier dominate the metric. Switch to MAE or Huber loss in that case. Also avoid MSE as the sole metric for classification (use cross-entropy) and when the target variable is highly skewed (consider log-transforming the target or using a quantile loss).
How is MSE related to R-squared?
R-squared equals 1 minus the ratio of MSE to the variance of the target. So lowering MSE on a fixed dataset raises R-squared mechanically. R-squared is unitless: 1 is a perfect fit, 0 means the model is no better than predicting the mean, and negative values mean the model is worse than the mean baseline. This makes it easier to communicate model quality across teams without exposing raw squared units.
Does Future AGI use MSE for LLM evaluation?
Future AGI's eval catalog is designed for LLM and agent output, where MSE does not apply directly because outputs are text rather than numbers. Future AGI uses LLM-as-a-judge evaluators such as faithfulness, instruction-following, and toxicity. The closest LLM equivalent of MSE is regression-style metrics over numeric extracted fields, which can be wrapped in a CustomLLMJudge through the Apache 2.0 ai-evaluation library.