LogGPT

This December 2023 paper introduces LogGPT, a framework for detecting anomalies in log data using a Generative Pre-trained Transformer (GPT) model.

There are two key stages:

Pre-training stage

  • In this stage, a GPT-2 model is trained on a corpus of normal log sequences to learn the underlying patterns and structure of normal system behaviour.

  • The objective is to train the model to predict the next log key in a sequence given the preceding log keys.

  • Formally, given a sequence of log keys S_{1:T}, the model is trained to maximise the conditional probability p(k_{t+1} | S_{1:t}) for each position t (see the sketch after this list).

  • This allows the model to learn the patterns and dependencies between log keys that characterise normal sequences.

  • After pre-training, the GPT-2 model can generate the rest of a log sequence given an initial subsequence.
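To make the pre-training objective concrete, here is a minimal sketch, assuming log messages have already been parsed into integer log-key IDs; the model size, vocabulary size, and variable names are illustrative, not the paper's exact configuration:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import GPT2Config, GPT2LMHeadModel

NUM_LOG_KEYS = 50   # log-key vocabulary size (dataset-dependent, assumed here)
SEQ_LEN = 20        # length of each log-key window (assumed)

# A small GPT-2 configured over the log-key vocabulary.
config = GPT2Config(vocab_size=NUM_LOG_KEYS, n_positions=SEQ_LEN,
                    n_embd=128, n_layer=4, n_head=4)
model = GPT2LMHeadModel(config)

# Toy stand-in for a corpus of *normal* log-key sequences.
normal_sequences = torch.randint(0, NUM_LOG_KEYS, (1024, SEQ_LEN))
loader = DataLoader(TensorDataset(normal_sequences), batch_size=32, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
model.train()
for epoch in range(3):
    for (batch,) in loader:
        # With labels == input_ids, the model shifts the targets internally,
        # so the loss is exactly the next-log-key objective:
        # maximise p(k_{t+1} | S_{1:t}) at every position t.
        loss = model(input_ids=batch, labels=batch).loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

After this stage, `model.generate(...)` can continue a partial sequence of log keys, which is what the fine-tuning stage below exploits.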

Fine-tuning stage

  • The pre-trained GPT-2 model is then fine-tuned using a reinforcement learning approach to enhance its anomaly detection capabilities.

  • A set of prompts, each a partial log sequence, is fed into the model, which then generates the rest of each log sequence step by step.

  • A novel reward function called the Top-K metric is used to provide feedback and fine-tune the model (sketched in code after this list).

  • At each step, if the actual next log key is within the Top-K keys predicted by the model, the model receives a reward of +1; otherwise it receives -1.

  • This reward encourages the model to rank the actual log key in the Top-K predictions for normal sequences.

  • Proximal Policy Optimization (PPO) is used to update the policy (the GPT-2 model) based on this reward, maximising the expected reward.

  • Fine-tuning with this Top-K reward metric helps bridge the gap between the language modeling objective in pre-training and the anomaly detection goal.
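As a concrete illustration of the reward signal, here is a minimal sketch of the Top-K reward, reusing the model from the snippet above. For simplicity it evaluates rewards along the ground-truth sequence (teacher forcing) rather than over sampled continuations, and the PPO update itself (clipped policy loss, advantage estimation) is omitted, since in practice it would come from a reinforcement-learning library:

```python
import torch

def top_k_reward(step_logits: torch.Tensor, actual_key: int, k: int) -> float:
    """+1 if the actual next log key is among the model's K most probable
    predictions, -1 otherwise. `step_logits` has shape [vocab_size]."""
    top_k_keys = torch.topk(step_logits, k).indices
    return 1.0 if actual_key in top_k_keys else -1.0

@torch.no_grad()
def episode_rewards(model, sequence: torch.Tensor, prompt_len: int, k: int):
    """Per-step Top-K rewards for the part of a normal sequence that
    follows the prompt. `sequence` is a 1-D tensor of log-key IDs."""
    logits = model(input_ids=sequence.unsqueeze(0)).logits[0]  # [T, vocab_size]
    # logits[t] scores the key at position t + 1.
    return [top_k_reward(logits[t], sequence[t + 1].item(), k)
            for t in range(prompt_len - 1, sequence.size(0) - 1)]
```

These per-step rewards are what PPO maximises in expectation, pushing the model to keep the true next key inside its Top-K predictions for normal sequences.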

For anomaly detection, the fine-tuned LogGPT model predicts the next log key given a sequence. If the actual next log key is not in the Top-K keys predicted by the model, the sequence is flagged as anomalous.
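A sketch of this detection rule, under the same assumptions as the snippets above:

```python
import torch

@torch.no_grad()
def is_anomalous(model, sequence: torch.Tensor, k: int) -> bool:
    """Flag a sequence of log-key IDs (1-D tensor) as anomalous if, at any
    position, the actual next key is outside the model's Top-K predictions."""
    logits = model(input_ids=sequence.unsqueeze(0)).logits[0]  # [T, vocab_size]
    for t in range(sequence.size(0) - 1):
        top_k_keys = torch.topk(logits[t], k).indices
        if sequence[t + 1].item() not in top_k_keys:
            return True  # unexpected log key -> anomaly
    return False
```

Intuitively, K controls the trade-off: a larger K accepts more variation as normal, while a smaller K flags sequences more aggressively.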

The key ideas are:

  1. Using the powerful GPT-2 architecture to model log sequences and their dependencies

  2. Pre-training to learn normal log patterns via a language modeling objective

  3. Fine-tuning the model with a novel Top-K reward using reinforcement learning to tailor it for anomaly detection

Experiments and Results

The experiments evaluated LogGPT against various baseline methods, including traditional machine learning models like PCA, iForest, OCSVM, and LogCluster, as well as deep learning models such as DeepLog, LogAnomaly, OC4Seq, LogBERT, and CAT.

The evaluation was conducted on three log datasets: HDFS, BGL, and Thunderbird.

The results show that LogGPT significantly outperformed all the baselines across the three datasets, achieving the highest F1-scores.

An ablation study also demonstrated that the reinforcement learning component contributes significantly to LogGPT's performance.

The impact of the Top-K ratio and training data size on LogGPT's performance was analysed.

Overall, the experimental results were very positive, showing that:

  1. LogGPT consistently achieves state-of-the-art performance on log anomaly detection tasks compared to a range of baselines

  2. The reinforcement learning fine-tuning with the Top-K reward metric provides significant performance gains

  3. LogGPT is robust across different dataset sizes and characteristics

  4. Performance improves with more training data

In summary, the extensive experiments validate the effectiveness of the LogGPT approach for log anomaly detection, with the model outperforming existing methods by significant margins on multiple datasets.

The results strongly support the key innovations in LogGPT's architecture and training methodology.
