---
title: CVEfixes Data Splits
description: >-
A detailed dataset split from CVEfixes_v1.0.8 for vulnerability analysis,
including train, validation, and test sets.
author: Mohammad Taghavi
date: 2025-03-31T00:00:00.000Z
source: >-
CVEfixes: Automated Collection of Vulnerabilities and Their Fixes from
Open-Source Software
tags:
- security
- vulnerabilities
- dataset
- software-security
license: mit
---

# CVEfixes Data Splits README
This repository contains data splits derived from the CVEfixes_v1.0.8 dataset, an automated collection of vulnerabilities and their fixes from open-source software. The dataset has been processed and split into training, validation, and test sets to facilitate machine learning and vulnerability analysis tasks. Below, you’ll find details about the splits, problematic CVEs excluded due to memory constraints, and a comprehensive guide on how to recreate these splits yourself.
## Dataset Overview

The original CVEfixes_v1.0.8 dataset was sourced from the GitHub repository https://github.com/secureIT-project/CVEfixes. We have split it into four parts:
- Training Split (Part 1): 4000 CVEs (first portion of the 70% training data)
- Training Split (Part 2): 4307 CVEs (remaining portion of the 70% training data, totaling 8307 CVEs with Part 1)
- Validation Split: 1781 CVEs (15% of the dataset)
- Test Split: 1781 CVEs (15% of the dataset)
These splits include the full data from all tables in the `CVEfixes.db` SQLite database, preserving referential integrity across tables such as `cve`, `fixes`, `commits`, `file_change`, `method_change`, `cwe`, `cwe_classification`, and `repository`.
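The table relationships can be spot-checked directly once the database is built (Step 3 below). This is a minimal sketch, assuming the `/content/CVEfixes.db` path used throughout this guide:

```python
import sqlite3

# Count fixes rows whose cve_id has no matching row in the cve table;
# with intact referential integrity this should print 0.
conn = sqlite3.connect("/content/CVEfixes.db")
orphans = conn.execute("""
    SELECT COUNT(*) FROM fixes
    LEFT JOIN cve ON fixes.cve_id = cve.cve_id
    WHERE cve.cve_id IS NULL;
""").fetchone()[0]
print(f"fixes rows without a matching cve row: {orphans}")
conn.close()
```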
## Excluded CVEs
The following CVEs were excluded from processing due to excessive memory usage (>50 GB RAM), which caused runtime crashes on standard Colab environments:

- CVE-2021-3957
- CVE-2024-26152
- CVE-2016-5833
- CVE-2023-6848
If your system has less than 50GB of RAM, we recommend skipping these CVEs during processing to avoid crashes.
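If you prefer to drop these CVEs up front rather than during processing, a small filter over a split file works; this sketch assumes the `train_split.json` produced in Step 6 below:

```python
import json

SKIP_CVES = {"CVE-2021-3957", "CVE-2024-26152", "CVE-2016-5833", "CVE-2023-6848"}

# Remove the memory-heavy CVEs from a split file created in Step 6.
with open("/content/train_split.json") as f:
    cve_ids = json.load(f)

filtered = [cve_id for cve_id in cve_ids if cve_id not in SKIP_CVES]

with open("/content/train_split.json", "w") as f:
    json.dump(filtered, f)

print(f"Removed {len(cve_ids) - len(filtered)} CVEs; {len(filtered)} remain")
```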
## How to Create Your Own Data Split
Below is a step-by-step guide to download, extract, and split the CVEfixes_v1.0.8 dataset into training, validation, and test sets, mirroring the process used to create these splits. This includes Python code snippets ready to run in a Google Colab environment.
### Step 1: Download the Original ZIP File

Download the dataset from Hugging Face using the `huggingface_hub` library.
```python
from huggingface_hub import snapshot_download

repo_id = "starsofchance/CVEfixes_v1.0.8"
filename = "CVEfixes_v1.0.8.zip"

dataset_path = snapshot_download(
    repo_id=repo_id,
    repo_type="dataset",
    allow_patterns=filename,  # Only download the ZIP file, not the splits we created
)
print(f"Dataset downloaded to: {dataset_path}")
```
After the download finishes, you will see a message like `Dataset downloaded to: <path>`. Copy this path; you will need it in Step 3.
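Instead of copying the printed path by hand, you can also derive the ZIP's location from the return value of `snapshot_download`; this is a small convenience sketch, not part of the original workflow:

```python
import os

# snapshot_download returns the local snapshot directory, so the ZIP path
# can be built programmatically from it.
zip_file_path = os.path.join(dataset_path, filename)
assert os.path.exists(zip_file_path), f"ZIP not found at {zip_file_path}"
print(f"ZIP located at: {zip_file_path}")
```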
### Step 2: Create a Folder to Extract the Data

Set up a directory to extract the contents of the ZIP file.
```python
import os

extract_dir = "/content/extracted_data"
os.makedirs(extract_dir, exist_ok=True)
print(f"Extraction directory created at: {extract_dir}")
```
### Step 3: Decompress and Convert to SQLite Database

Extract the `.sql.gz` file from the ZIP and convert it into a SQLite database.
```python
cache_path = "address you copied in Step 1"  # the path printed by snapshot_download
zip_file_path = os.path.join(cache_path, "CVEfixes_v1.0.8.zip")

!unzip -q "{zip_file_path}" -d "{extract_dir}"

# Verify extraction
print("\nExtracted files:")
!ls -lh "{extract_dir}"

# Decompress the .gz file and pipe it into sqlite3 to build the database
!zcat {extract_dir}/CVEfixes_v1.0.8/Data/CVEfixes_v1.0.8.sql.gz | sqlite3 /content/CVEfixes.db
print("Database created at: /content/CVEfixes.db")
```
### Step 4: Explore Tables and Relationships

Connect to the database and inspect its structure.
```python
import sqlite3
import pandas as pd

# Connect to the database
conn = sqlite3.connect('/content/CVEfixes.db')
cursor = conn.cursor()

# Get all tables
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
tables = cursor.fetchall()
print("Tables in the database:", tables)

# Display column headers for each table
for table in tables:
    table_name = table[0]
    print(f"\nHeaders for table '{table_name}':")
    cursor.execute(f"PRAGMA table_info('{table_name}');")
    columns = cursor.fetchall()
    column_names = [col[1] for col in columns]
    print(f"Columns: {column_names}")

# Count rows in each table
for table in tables:
    table_name = table[0]
    cursor.execute(f"SELECT COUNT(*) FROM {table_name}")
    row_count = cursor.fetchone()[0]
    print(f"Table: {table_name}, Rows: {row_count}")

conn.close()
```
**Expected Output:**

```
Tables in the database: [('fixes',), ('commits',), ('file_change',), ('method_change',), ('cve',), ('cwe',), ('cwe_classification',), ('repository',)]

Headers for table 'fixes':
Columns: ['cve_id', 'hash', 'repo_url']

Headers for table 'commits':
Columns: ['hash', 'repo_url', 'author', 'author_date', 'author_timezone', 'committer', 'committer_date', 'committer_timezone', 'msg', 'merge', 'parents', 'num_lines_added', 'num_lines_deleted', 'dmm_unit_complexity', 'dmm_unit_interfacing', 'dmm_unit_size']

[... truncated for brevity ...]

Table: fixes, Rows: 12923
Table: commits, Rows: 12107
Table: file_change, Rows: 51342
Table: method_change, Rows: 277948
Table: cve, Rows: 11873
Table: cwe, Rows: 272
Table: cwe_classification, Rows: 12198
Table: repository, Rows: 4249
```
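Since `pandas` is already imported, any table can also be pulled into a DataFrame for quick inspection; a minimal example:

```python
import sqlite3
import pandas as pd

# Preview a few rows of the fixes table as a DataFrame.
conn = sqlite3.connect("/content/CVEfixes.db")
print(pd.read_sql_query("SELECT * FROM fixes LIMIT 5;", conn))
conn.close()
```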
### Step 5: Retrieve All Distinct CVE IDs

Extract unique CVE IDs from the `cve` table, which serves as the anchor for the dataset.
```python
import sqlite3

conn = sqlite3.connect('/content/CVEfixes.db')
cursor = conn.cursor()
cursor.execute("SELECT DISTINCT cve_id FROM cve;")
cve_ids = [row[0] for row in cursor.fetchall()]
print(f"Total CVEs found: {len(cve_ids)}")
conn.close()
```
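The total here (11873) matches the `cve` table's row count from Step 4, so `cve_id` is effectively a one-row-per-CVE key; you can confirm this directly:

```python
import sqlite3

# Equal counts mean every row in cve carries a unique cve_id.
conn = sqlite3.connect("/content/CVEfixes.db")
total, distinct = conn.execute(
    "SELECT COUNT(*), COUNT(DISTINCT cve_id) FROM cve;"
).fetchone()
conn.close()
print(f"Rows: {total}, distinct cve_ids: {distinct}")
```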
### Step 6: Split the CVE IDs

Randomly shuffle and split the CVE IDs into training (70%), validation (15%), and test (15%) sets.
```python
import random
import json

# Shuffle and split the dataset
# (note: call random.seed(...) first if you need a reproducible split)
random.shuffle(cve_ids)
n = len(cve_ids)
train_split = cve_ids[:int(0.70 * n)]             # 70% for training
val_split = cve_ids[int(0.70 * n):int(0.85 * n)]  # 15% for validation
test_split = cve_ids[int(0.85 * n):]              # 15% for test

# Save the splits to JSON files
with open('/content/train_split.json', 'w') as f:
    json.dump(train_split, f)
with open('/content/val_split.json', 'w') as f:
    json.dump(val_split, f)
with open('/content/test_split.json', 'w') as f:
    json.dump(test_split, f)

# Print split sizes
print("Train count:", len(train_split))
print("Validation count:", len(val_split))
print("Test count:", len(test_split))
```
**Expected Output:**

```
Total CVEs found: 11873
Train count: 8311
Validation count: 1781
Test count: 1781
```
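As a sanity check (assuming the three lists from the step above are still in memory), the splits should be pairwise disjoint and together cover every CVE ID:

```python
# Verify no overlap between splits and no lost CVEs.
train_set, val_set, test_set = set(train_split), set(val_split), set(test_split)
assert train_set.isdisjoint(val_set)
assert train_set.isdisjoint(test_set)
assert val_set.isdisjoint(test_set)
assert len(train_set | val_set | test_set) == len(cve_ids)
print("Splits are disjoint and cover all CVEs")
```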
### Step 7: Process CVEs into JSONL Files

Define a function to bundle data for each CVE across all tables and write it to JSONL files. Below is an example script that processes the training split, skipping the problematic CVEs. You can adapt it for the validation and test splits by changing the input and output files.
```python
import sqlite3
import json
import time
import gc
import os

def dict_factory(cursor, row):
    if cursor.description is None or row is None:
        return None
    return {col[0]: row[idx] for idx, col in enumerate(cursor.description)}

def get_cwe_data(cursor, cve_id):
    cursor.execute("""
        SELECT cwe.* FROM cwe
        JOIN cwe_classification ON cwe.cwe_id = cwe_classification.cwe_id
        WHERE cwe_classification.cve_id = ?;
    """, (cve_id,))
    return cursor.fetchall()

def get_repository_data(cursor, repo_url, repo_cache):
    if repo_url in repo_cache:
        return repo_cache[repo_url]
    cursor.execute("SELECT * FROM repository WHERE repo_url = ?;", (repo_url,))
    repo_data = cursor.fetchone()
    repo_cache[repo_url] = repo_data
    return repo_data

def get_method_changes(cursor, file_change_id):
    cursor.execute("SELECT * FROM method_change WHERE file_change_id = ?;", (file_change_id,))
    return cursor.fetchall()

def get_file_changes(cursor, commit_hash):
    cursor.execute("SELECT * FROM file_change WHERE hash = ?;", (commit_hash,))
    file_changes = []
    for fc_row in cursor.fetchall():
        file_change_data = fc_row
        if file_change_data:
            file_change_data['method_changes'] = get_method_changes(cursor, file_change_data['file_change_id'])
            file_changes.append(file_change_data)
    return file_changes

def get_commit_data(cursor, commit_hash, repo_url, repo_cache):
    cursor.execute("SELECT * FROM commits WHERE hash = ? AND repo_url = ?;", (commit_hash, repo_url))
    commit_row = cursor.fetchone()
    if not commit_row:
        return None
    commit_data = commit_row
    commit_data['repository'] = get_repository_data(cursor, repo_url, repo_cache)
    commit_data['file_changes'] = get_file_changes(cursor, commit_hash)
    return commit_data

def get_fixes_data(cursor, cve_id, repo_cache):
    cursor.execute("SELECT * FROM fixes WHERE cve_id = ?;", (cve_id,))
    fixes = []
    for fix_row in cursor.fetchall():
        fix_data = fix_row
        if fix_data:
            commit_details = get_commit_data(cursor, fix_data['hash'], fix_data['repo_url'], repo_cache)
            if commit_details:
                fix_data['commit_details'] = commit_details
            fixes.append(fix_data)
    return fixes

def process_cve(cursor, cve_id, repo_cache):
    cursor.execute("SELECT * FROM cve WHERE cve_id = ?;", (cve_id,))
    cve_row = cursor.fetchone()
    if not cve_row:
        return None
    cve_data = cve_row
    cve_data['cwe_info'] = get_cwe_data(cursor, cve_id)
    cve_data['fixes_info'] = get_fixes_data(cursor, cve_id, repo_cache)
    return cve_data

def process_split(split_name, split_file, db_path, output_file):
    print(f"--- Processing {split_name} split ---")
    conn = sqlite3.connect(db_path)
    conn.row_factory = dict_factory
    cursor = conn.cursor()
    repo_cache = {}
    with open(split_file, 'r') as f:
        cve_ids = json.load(f)
    skip_cves = ["CVE-2021-3957", "CVE-2024-26152", "CVE-2016-5833", "CVE-2023-6848"]
    with open(output_file, 'w') as outfile:
        for i, cve_id in enumerate(cve_ids):
            if cve_id in skip_cves:
                print(f"Skipping {cve_id} due to memory constraints.")
                continue
            try:
                cve_bundle = process_cve(cursor, cve_id, repo_cache)
                if cve_bundle:
                    outfile.write(json.dumps(cve_bundle) + '\n')
                if (i + 1) % 50 == 0:
                    print(f"Processed {i + 1}/{len(cve_ids)} CVEs")
                    gc.collect()
            except Exception as e:
                print(f"Error processing {cve_id}: {e}")
                continue
    conn.close()
    gc.collect()
    print(f"Finished processing {split_name} split. Output saved to {output_file}")

# Example usage for training split
process_split(
    split_name="train",
    split_file="/content/train_split.json",
    db_path="/content/CVEfixes.db",
    output_file="/content/train_data.jsonl"
)
```
**Notes:**

- Replace `train` with `val` or `test` and adjust the file paths to process the other splits (example calls below).
- The script skips the problematic CVEs listed above.
- Output is written to a `.jsonl` file, with one JSON object per line.
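For completeness, the corresponding calls for the other two splits look like this (same database, different split and output files):

```python
# Validation split
process_split(
    split_name="val",
    split_file="/content/val_split.json",
    db_path="/content/CVEfixes.db",
    output_file="/content/val_data.jsonl"
)

# Test split
process_split(
    split_name="test",
    split_file="/content/test_split.json",
    db_path="/content/CVEfixes.db",
    output_file="/content/test_data.jsonl"
)
```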
## Preprocessing

The current splits (`train_data_part1.jsonl`, `train_data_part2.jsonl`, `val_data.jsonl`, `test_data.jsonl`) contain raw data from all tables. Preprocessing (e.g., feature extraction, normalization) will be addressed in subsequent steps depending on your use case.
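As a starting point, each `.jsonl` file can be streamed record by record without loading everything into memory; the sketch below simply counts records and shows the top-level keys, assuming the `train_data.jsonl` produced in Step 7:

```python
import json

# Stream a JSONL split line by line.
count = 0
with open("/content/train_data.jsonl") as f:
    for line in f:
        record = json.loads(line)
        if count == 0:
            print("Top-level keys:", list(record.keys()))
        count += 1
print(f"Records in split: {count}")
```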
## Copyright and License
Copyright © 2021-2024 Data-Driven Software Engineering Department (dataSED), Simula Research Laboratory, Norway
This work is licensed under the Creative Commons Attribution 4.0 International License.
## Reference
The original dataset is sourced from:
CVEfixes: Automated Collection of Vulnerabilities and Their Fixes from Open-Source Software
Guru Bhandari, Amara Naseer, Leon Moonen
Simula Research Laboratory, Oslo, Norway
- Guru Bhandari: [email protected]
- Amara Naseer: [email protected]
- Leon Moonen: [email protected]
For more details, refer to the original publication at https://dl.acm.org/doi/10.1145/3475960.3475985.