SparkDrive

Project Motivation & Problem Statement

Network intrusion detection is critical for cybersecurity, requiring ML models that can classify network traffic as normal or malicious at scale. The NSL-KDD dataset provides a benchmark for evaluating intrusion detection systems, with 41 features capturing network connection characteristics plus a class label. SparkDrive builds a complete ML pipeline using Apache Spark MLlib to train and evaluate classifiers on this dataset, demonstrating scalable feature engineering, hyperparameter tuning, and model evaluation techniques for cybersecurity applications.

Technical Approach

1. Dataset: NSL-KDD Network Intrusion Data

Used the NSL-KDD benchmark dataset with 42 columns representing network connection attributes:

  • Nominal Features (3): protocol_type, service, flag - categorical network protocol attributes.
  • Binary Features (6): land, logged_in, root_shell, su_attempted, is_host_login, is_guest_login - boolean indicators.
  • Continuous Features (32): Including duration, src_bytes, dst_bytes, count, srv_count, and various rate metrics (serror_rate, rerror_rate, etc.).
  • Target: class column with "normal" vs. attack types, converted to binary outcome (0.0 = normal, 1.0 = attack).
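The column breakdown above can be sanity-checked with a line of arithmetic: the three feature groups account for 41 features, and the class label brings the total to 42 columns.

```python
# Sanity check on the NSL-KDD column breakdown described above:
# 3 nominal + 6 binary + 32 continuous features, plus the class target.
nominal, binary, continuous = 3, 6, 32

features = nominal + binary + continuous
columns = features + 1  # add the class column

print(features)  # 41 features
print(columns)   # 42 columns
```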

2. Feature Engineering Pipeline

Built a multi-stage Spark Pipeline for preprocessing heterogeneous feature types:

  • FeatureTypeCaster: Custom Transformer that casts all binary and continuous columns to DoubleType for numerical processing.
  • StringIndexer: Converted nominal columns to numeric indices (protocol_type_index, service_index, flag_index).
  • OneHotEncoder: Transformed indexed nominal columns to one-hot encoded vectors (protocol_type_encoded, etc.).
  • VectorAssembler: Combined all continuous, binary, and one-hot features into a single vectorized_features column.
  • StandardScaler: Normalized the feature vector to zero mean and unit variance, outputting final features column.
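The two numeric transformations at the heart of this pipeline can be sketched in plain Python (not Spark) to show what OneHotEncoder and StandardScaler compute per value. The category list and sample numbers below are illustrative, not taken from the actual NSL-KDD data.

```python
# Plain-Python sketch of the math inside two pipeline stages.
# (Illustrative only; the project uses Spark's OneHotEncoder/StandardScaler.)

def one_hot(value, categories):
    """Return a one-hot vector for `value` over a fixed category list."""
    return [1.0 if value == c else 0.0 for c in categories]

def standardize(column):
    """Scale a column to zero mean and unit variance.
    Divides by the sample standard deviation (n - 1), as Spark's
    StandardScaler does."""
    n = len(column)
    mean = sum(column) / n
    std = (sum((x - mean) ** 2 for x in column) / (n - 1)) ** 0.5
    return [(x - mean) / std for x in column]

protocols = ["tcp", "udp", "icmp"]      # nominal categories for protocol_type
print(one_hot("udp", protocols))        # [0.0, 1.0, 0.0]
print(standardize([2.0, 4.0, 6.0]))     # [-1.0, 0.0, 1.0]
```

In the real pipeline these operate column-wise over Spark DataFrames, and VectorAssembler concatenates the resulting vectors before scaling.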

3. Custom Transformers

  • OutcomeCreater: Custom transformer that applies a UDF to convert class labels to a binary outcome (normal=0.0, attack=1.0).
  • ColumnDropper: Removes intermediate columns after pipeline processing, keeping only features and outcome.
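The label-mapping rule inside OutcomeCreater's UDF reduces to a one-line function: anything other than "normal" counts as an attack. Shown here as plain Python (the actual transformer wraps the equivalent logic in a Spark UDF):

```python
# The binary-outcome rule applied by OutcomeCreater, as a plain function.
# "neptune" and "smurf" are attack labels from the KDD label set.

def to_binary_outcome(class_label: str) -> float:
    """Map an NSL-KDD class label to a binary outcome."""
    return 0.0 if class_label == "normal" else 1.0

labels = ["normal", "neptune", "smurf", "normal"]
print([to_binary_outcome(l) for l in labels])  # [0.0, 1.0, 1.0, 0.0]
```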

4. Model Training with Cross-Validation

  • Logistic Regression: Used pyspark.ml.classification.LogisticRegression as the classifier.
  • Hyperparameter Grid: Searched over regParam=[0.01, 0.1, 0.5] and maxIter=[10, 50, 100] (9 parameter combinations).
  • CrossValidator: 3-fold cross-validation with BinaryClassificationEvaluator using areaUnderROC metric.
  • Best Model Selection: CrossValidator automatically selects the best hyperparameter combination based on validation AUC.
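The size of the search CrossValidator performs follows directly from the grid and the fold count; a short plain-Python sketch of the enumeration:

```python
# Enumerate the hyperparameter grid that CrossValidator searches.
# Each (regParam, maxIter) combination is fit on k-1 folds and scored
# on the held-out fold, so the search trains folds * combinations models.
from itertools import product

reg_params = [0.01, 0.1, 0.5]
max_iters = [10, 50, 100]
num_folds = 3

grid = list(product(reg_params, max_iters))
print(len(grid))              # 9 parameter combinations
print(len(grid) * num_folds)  # 27 model fits during the search
```

Spark then refits the best combination on the full training set, which is the model returned by CrossValidator.fit().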

5. Model Evaluation

  • Metrics: Evaluated using AUC-ROC on the held-out test set (KDDTest+.txt).
  • Output Inspection: Displayed features, outcome, prediction, and probability columns for result analysis.
  • Schema Validation: Used printSchema() to verify pipeline output structure.
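The areaUnderROC metric reported by BinaryClassificationEvaluator is the probability that a randomly chosen attack receives a higher score than a randomly chosen normal connection. A minimal plain-Python computation of that quantity, on made-up scores:

```python
# Rank-based (Mann-Whitney) computation of AUC-ROC, the same quantity
# Spark's BinaryClassificationEvaluator reports. Example data is made up.

def auc_roc(labels, scores):
    """AUC = P(positive score > negative score), ties counted as 1/2."""
    pos = [s for y, s in zip(labels, scores) if y == 1.0]
    neg = [s for y, s in zip(labels, scores) if y == 0.0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1.0, 0.0, 1.0, 0.0, 1.0]
scores = [0.9, 0.3, 0.8, 0.6, 0.4]
print(auc_roc(labels, scores))  # 5 of 6 positive/negative pairs ranked correctly
```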

Implementation Details

  • Data Files: KDDTrain+.txt (125,973 records) and KDDTest+.txt (22,544 records).
  • Spark Configuration: Local mode with master("local[*]") for multi-core parallelism.
  • Pipeline Stages: 8 stages total including 5 built-in transformers and 3 custom transformers.
  • Null Handling: dropna(subset=crucial_cols) removes records with missing values in critical columns.
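The dropna(subset=crucial_cols) step keeps a record only if every crucial column is non-null. A plain-Python analog over dict rows (column names here are illustrative, not necessarily the project's crucial_cols):

```python
# Plain-Python analog of DataFrame.dropna(subset=crucial_cols):
# keep only rows with non-null values in every crucial column.
crucial_cols = ["protocol_type", "service", "flag"]

rows = [
    {"protocol_type": "tcp",  "service": "http",  "flag": "SF"},
    {"protocol_type": "udp",  "service": None,    "flag": "SF"},  # dropped
    {"protocol_type": "icmp", "service": "ecr_i", "flag": "SF"},
]

clean = [r for r in rows if all(r.get(c) is not None for c in crucial_cols)]
print(len(clean))  # 2 rows survive
```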

Results

  • Successfully built end-to-end ML pipeline from raw CSV data to trained classifier.
  • Cross-validation identified optimal hyperparameters from 9-combination grid search.
  • Pipeline architecture enables reproducible training with serializable model components.
  • Custom transformers demonstrate extensibility of Spark ML Pipeline API.

Limitations

  • Logistic Regression may underperform compared to ensemble methods (Random Forest, GBT) for this dataset.
  • Spark runs in single-machine local mode; realizing true scalability benefits requires deployment on a distributed cluster.
  • Binary classification collapses attack types; multi-class formulation would provide finer-grained detection.

Skills and Technologies Demonstrated

  • Apache Spark MLlib Pipeline API
  • Custom Transformer development in PySpark
  • Feature engineering for heterogeneous data types
  • Hyperparameter tuning with CrossValidator
  • Binary classification with Logistic Regression
  • AUC-ROC evaluation metrics
  • Network intrusion detection domain knowledge

Resources