EECS 445 Course Projects
Undergraduate, Winter 2026, Upper-Level Elective, Grade: A
EECS 445: Introduction to Machine Learning explores the mathematical foundations and practical implementation of supervised and unsupervised machine learning algorithms, focusing on their application to complex, real-world datasets in fields like robot perception and computer vision.
Topic 1: Statistical Learning & Predictive Modeling
This section focuses on the transition from explicit programming to data-driven inference. By implementing foundational algorithms from scratch, ranging from regularized linear models to kernel methods, I developed a rigorous pipeline for clinical risk assessment and medical data analysis.
Project #1: Clinical Risk Prediction & Kernel Methods
- Objective: Develop a predictive classification pipeline to identify high-risk ICU patients by analyzing high-dimensional clinical time-series and static health records from the PhysioNet dataset.
- Build: Engineered a robust preprocessing workflow including max-value feature extraction, mean imputation, and Min-Max normalization. Implemented 5-fold stratified cross-validation to optimize hyperparameters ($C$ and $\gamma$) for both Logistic Regression and Kernel Ridge Regression (see the cross-validation sketch after this list).
- Functionality: Achieved high-precision mortality predictions by addressing class imbalance with asymmetric cost functions (class weighting), and quantified the uncertainty of the reported metrics with 1,000-sample bootstrapping (see the bootstrap sketch below).
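The hyperparameter search for Project #1 roughly followed the pattern below. This is a minimal sketch assuming scikit-learn and AUROC as the selection metric; the synthetic stand-in data, the `cv_performance` helper, and the exact grid of $C$ values are illustrative assumptions, not the course starter code (much of the actual coursework was implemented from scratch).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

# Stand-in for the preprocessed PhysioNet feature matrix (illustrative only).
X, y = make_classification(n_samples=1000, n_features=40,
                           weights=[0.9, 0.1], random_state=0)

def cv_performance(C, X, y, n_splits=5):
    """Mean validation AUROC of a class-weighted logistic regression over stratified folds."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=445)
    scores = []
    for train_idx, val_idx in skf.split(X, y):
        clf = LogisticRegression(C=C, class_weight="balanced", max_iter=1000)
        clf.fit(X[train_idx], y[train_idx])
        scores.append(roc_auc_score(y[val_idx], clf.decision_function(X[val_idx])))
    return float(np.mean(scores))

# Grid search over a logarithmic range of regularization strengths.
C_grid = np.logspace(-3, 3, 7)
best_C = max(C_grid, key=lambda C: cv_performance(C, X, y))
```

The same loop can be reused for Kernel Ridge Regression by sweeping the RBF kernel width $\gamma$ alongside the regularization strength.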
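For the bootstrapped evaluation, a percentile-interval routine along these lines captures the idea; the resample count, choice of metric, and the `bootstrap_metric` helper name are assumptions for illustration, not the project's exact code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_metric(y_true, y_score, metric=roc_auc_score,
                     n_boot=1000, alpha=0.05, seed=445):
    """Point estimate and percentile confidence interval via resampling the test set."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), size=len(y_true))  # resample with replacement
        if len(np.unique(y_true[idx])) < 2:                   # AUROC needs both classes present
            continue
        stats.append(metric(y_true[idx], y_score[idx]))
    lower, upper = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return metric(y_true, y_score), (lower, upper)
```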
Topic 2: Deep Learning & Computer Vision
This section explores supervised deep learning architectures for image classification and representation learning. By implementing Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) using PyTorch, I developed robust pipelines capable of distinguishing complex visual features while mitigating overfitting through transfer learning and data augmentation.
Project #2: Image Classification & Transfer Learning
- Objective: Formulate a deep learning vision system to accurately classify specific dog breeds (Collies vs. Golden Retrievers) from a noisy, limited dataset by overcoming severe out-of-distribution shifts.
- Build: Engineered a custom, lightweight 3-block Convolutional Neural Network and implemented the multi-head attention and forward-pass mechanisms of a Vision Transformer (ViT). Built a robust data preprocessing and augmentation pipeline using random cropping, color jitter, and rotations to make the most of the limited 64x64 input images (see the CNN and augmentation sketch after this list).
- Functionality: Maximized out-of-sample generalization with a two-phase transfer learning approach: pre-trained the model on an auxiliary 8-class dataset to learn general visual features, then froze the convolutional backbone and fine-tuned a specialized classification head, mitigating catastrophic forgetting and overfitting on the small target dataset (sketched below).
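To make the Project #2 architecture concrete, here is a minimal PyTorch sketch of a 3-block CNN and an augmentation pipeline of the kind described above; the channel widths, transform parameters, and class names are illustrative assumptions rather than the actual project configuration.

```python
import torch.nn as nn
from torchvision import transforms

# Illustrative augmentation pipeline for 64x64 inputs (parameters are assumptions).
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(64, scale=(0.8, 1.0)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.RandomRotation(degrees=15),
    transforms.ToTensor(),
])

def conv_block(in_ch, out_ch):
    """Conv -> ReLU -> MaxPool block; three of these form the lightweight backbone."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class SmallCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(
            conv_block(3, 16),    # 64x64 -> 32x32
            conv_block(16, 32),   # 32x32 -> 16x16
            conv_block(32, 64),   # 16x16 -> 8x8
        )
        self.head = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        feats = self.backbone(x)
        return self.head(feats.flatten(1))
```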
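The two-phase transfer learning step, continuing from the `SmallCNN` sketch above, might look like the following; the optimizer choice and learning rate are assumptions, and the auxiliary training loop is elided.

```python
import torch
import torch.nn as nn

# Phase 1 (assumed): pre-train the SmallCNN from the previous sketch on the
# auxiliary 8-class dataset to learn general-purpose visual features.
model = SmallCNN(num_classes=8)
# ... standard supervised training loop on the auxiliary data goes here ...

# Phase 2: freeze the convolutional backbone and fine-tune a fresh binary head.
for param in model.backbone.parameters():
    param.requires_grad = False               # no gradient updates to pre-trained filters

model.head = nn.Linear(64 * 8 * 8, 2)         # new Collie-vs-Golden-Retriever head
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```

Because only the head receives gradient updates, the pre-trained features are preserved and the number of trainable parameters stays small relative to the limited target dataset.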