A system that uses reinforcement learning and spatial analytics to optimize routes and resource allocation during disasters. It integrates with map data (hospitals, roads, traffic) to suggest efficient emergency response plans and is designed for real-world deployment by first responders.
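As a minimal sketch of the reinforcement-learning routing idea, the toy below runs tabular Q-learning over a tiny road graph so the agent learns the lowest-cost route to a hospital. The graph, node names, costs, and hyperparameters are all illustrative assumptions, not the project's real map data.

```python
import random

# Toy road graph (illustrative): edge weights are travel costs, e.g. minutes.
graph = {
    "depot":    {"A": 4, "B": 2},
    "A":        {"hospital": 3},
    "B":        {"A": 1, "hospital": 7},
    "hospital": {},
}
GOAL = "hospital"

# Tabular Q-values over (node, next_node) pairs.
Q = {s: {a: 0.0 for a in nbrs} for s, nbrs in graph.items()}
alpha, gamma, eps = 0.5, 1.0, 0.2   # undiscounted episodic task
random.seed(0)

for _ in range(2000):
    state = "depot"
    while state != GOAL:
        actions = list(graph[state])
        # Epsilon-greedy exploration.
        if random.random() < eps:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[state][a])
        # Negative travel cost per step, large bonus on reaching the goal.
        reward = -graph[state][action] + (100 if action == GOAL else 0)
        future = max(Q[action].values(), default=0.0)
        Q[state][action] += alpha * (reward + gamma * future - Q[state][action])
        state = action

# Greedy rollout of the learned policy.
route, state = ["depot"], "depot"
while state != GOAL:
    state = max(Q[state], key=Q[state].get)
    route.append(state)
print(route)
```

The learned policy takes the cheaper depot → B → A → hospital path (total cost 6) over the shorter-looking direct route through A (cost 7); the real system would layer live traffic and resource constraints onto this kind of value update.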
Projects
Dec 2025 – Present. Building an open, manufacturer-agnostic EV route planner for drivers whose vehicles lack proprietary navigation or charging-network integrations. Solving real-world issues such as charger availability, range anxiety, wait times, and inconsistent infrastructure across regions. Integrating public charging datasets and mapping APIs to surface station locations, charging speeds, and real-time availability constraints. Implementing energy-aware routing algorithms to minimize travel time, charging stops, and overall trip cost. Designing a dashboard that visualizes battery range predictions, charging schedules, and optimal stop recommendations. Focused on accessibility and open data to support older vehicles and non-Tesla/non-proprietary systems. This project was completed as a job simulation via Forage.
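The energy-aware routing piece can be sketched as Dijkstra's algorithm over (node, battery) states, where charging is an extra action that trades time for energy. The network, capacities, and linear charging-time model below are simplified assumptions for illustration, not the planner's real data or charging curves.

```python
import heapq

# Toy network (illustrative): each edge costs (minutes, kWh); some nodes have chargers.
edges = {
    "start": {"X": (30, 12), "Y": (20, 9)},
    "X":     {"end": (40, 14)},
    "Y":     {"X": (15, 6), "end": (50, 18)},
    "end":   {},
}
chargers = {"Y"}           # nodes where the battery can be refilled
CAPACITY = 20              # kWh
CHARGE_MIN_PER_KWH = 2     # crude linear charging-time model

def best_time(start, end, battery):
    """Dijkstra over (node, battery) states; a full recharge is one extra action."""
    pq = [(0, start, battery)]
    seen = {}
    while pq:
        t, node, b = heapq.heappop(pq)
        if node == end:
            return t
        if seen.get((node, b), float("inf")) <= t:
            continue
        seen[(node, b)] = t
        if node in chargers and b < CAPACITY:
            heapq.heappush(pq, (t + (CAPACITY - b) * CHARGE_MIN_PER_KWH, node, CAPACITY))
        for nxt, (mins, kwh) in edges[node].items():
            if b >= kwh:   # never plan a leg the battery cannot cover
                heapq.heappush(pq, (t + mins, nxt, b - kwh))
    return None            # destination unreachable with this battery

print(best_time("start", "end", battery=15))
```

With 15 kWh the direct legs are infeasible, so the search detours through the charger at Y before heading to the destination; this is the "range anxiety" constraint made explicit in the state space.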
Designed and implemented a relational database system to manage grocery store operations using a sample dataset of purchases and sales. Transformed raw, unstructured data into a normalized schema with an Extended Entity-Relationship (EER) diagram, SQL schema, and test queries. Built an EER diagram to model entities (Items, Vendors, Customers, Purchases, Sales, Inventory) and their relationships. Created a normalized SQL schema with primary keys, foreign keys, and appropriate datatypes. Wrote SQL queries to validate relationships and answer business questions (e.g., vendor supply analysis, sales reporting). Addressed data anomalies and demonstrated how the new design ensures scalability, integrity, and business growth.
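A compressed sketch of the normalized design and a vendor-supply query, run through Python's built-in sqlite3 so it is self-contained. The table subset, column names, and sample rows are illustrative, not the project's actual schema or dataset.

```python
import sqlite3

# In-memory database with a slice of the normalized schema (names illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE Vendors (vendor_id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE Items   (item_id   INTEGER PRIMARY KEY, name TEXT NOT NULL,
                      vendor_id INTEGER NOT NULL REFERENCES Vendors(vendor_id));
CREATE TABLE Sales   (sale_id   INTEGER PRIMARY KEY,
                      item_id   INTEGER NOT NULL REFERENCES Items(item_id),
                      qty INTEGER NOT NULL, unit_price REAL NOT NULL);
""")
conn.executemany("INSERT INTO Vendors VALUES (?, ?)", [(1, "Acme Farms"), (2, "FreshCo")])
conn.executemany("INSERT INTO Items VALUES (?, ?, ?)", [(1, "Apples", 1), (2, "Bread", 2)])
conn.executemany("INSERT INTO Sales VALUES (?, ?, ?, ?)",
                 [(1, 1, 3, 0.5), (2, 2, 2, 2.0), (3, 1, 4, 0.5)])

# Vendor supply analysis: revenue per vendor through the foreign-key chain.
rows = conn.execute("""
    SELECT v.name, ROUND(SUM(s.qty * s.unit_price), 2) AS revenue
    FROM Sales s
    JOIN Items   i ON s.item_id = i.item_id
    JOIN Vendors v ON i.vendor_id = v.vendor_id
    GROUP BY v.name ORDER BY revenue DESC
""").fetchall()
print(rows)  # → [('FreshCo', 4.0), ('Acme Farms', 3.5)]
```

The foreign-key chain Sales → Items → Vendors is what the normalization buys: the vendor question is a join rather than a scan over denormalized rows.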
Oct 2025 – Dec 2025. Cleaned and processed 8.8K+ Netflix titles with inconsistent metadata. Engineered features from text fields (genres, duration, release dates). Conducted genre co-occurrence and trend analysis. Built visualizations to explain rating patterns across content types. Identified factors associated with higher IMDb scores. Delivered a reproducible Jupyter pipeline for analysis.
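Two of the feature-engineering steps can be sketched without the full pipeline: splitting the mixed-unit duration field (movies store minutes, TV shows store seasons, as in the public netflix_titles dataset) and exploding the comma-separated genre field for co-occurrence counts. Field formats are assumed from that public dataset.

```python
import re

def parse_duration(raw):
    """Split '90 min' / '2 Seasons' into (minutes, seasons); exactly one is set."""
    m = re.match(r"(\d+)\s*(min|Season)", raw or "")
    if not m:
        return (None, None)              # missing or malformed metadata
    n = int(m.group(1))
    return (n, None) if m.group(2) == "min" else (None, n)

def explode_genres(listed_in):
    """'Dramas, International Movies' -> ['Dramas', 'International Movies']."""
    return [g.strip() for g in listed_in.split(",") if g.strip()]

print(parse_duration("90 min"))      # (90, None)
print(parse_duration("2 Seasons"))   # (None, 2)
print(explode_genres("Dramas, International Movies"))
```

Once durations are numeric and genres are lists, the co-occurrence analysis reduces to counting genre pairs per title.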
Developed a custom version of the classic Flappy Bird using Python and Pygame. Implemented physics-based movement, collision detection, and randomized obstacle generation, while extending gameplay with bosses, power-ups, and scoring logic. Showcased skills in game development, object-oriented programming, and creative problem-solving. Associated with University of Illinois Chicago.
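The physics and collision core of such a game fits in a few lines. This is a Pygame-free sketch of the two mechanics named above (gravity/flap impulse and axis-aligned collision); the constants are illustrative, and the real game uses Pygame rects inside a render loop.

```python
# Illustrative constants: pixels/s^2 gravity, upward flap impulse, 60 FPS timestep.
GRAVITY, FLAP_IMPULSE, DT = 1200.0, -350.0, 1 / 60

def step(y, vy, flap=False):
    """One frame of vertical motion: a flap resets velocity, otherwise gravity pulls."""
    vy = FLAP_IMPULSE if flap else vy + GRAVITY * DT
    return y + vy * DT, vy

def collides(bird, pipe):
    """Axis-aligned bounding-box overlap test; boxes are (x, y, w, h)."""
    bx, by, bw, bh = bird
    px, py, pw, ph = pipe
    return bx < px + pw and px < bx + bw and by < py + ph and py < by + bh

y, vy = 300.0, 0.0
y, vy = step(y, vy, flap=True)   # bird jumps: velocity snaps upward (negative)
print(round(y, 2), vy)
```

Randomized obstacle generation then amounts to spawning pipe rects with a random gap position and feeding them through `collides` each frame.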
Feb 2025. The project won 1st place on the John Deere track at Sparkhacks 2025. User inputs a city name; the system uses a geolocation API to get latitude and longitude. Environmental factors (temperature, bird migration patterns, recent outbreaks) are used to assess avian flu risk. A decision tree classifier determines whether the risk in the given city is High or Low, and the system displays the risk level with relevant insights.
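The classification step can be illustrated as a hand-coded decision tree. The splits, thresholds, and feature names below are invented stand-ins for the trained model, which was fit on real environmental data.

```python
# Hand-rolled stand-in for the trained decision tree classifier.
# Thresholds and features are illustrative assumptions, not the fitted model.
def avian_flu_risk(temp_c, migration_density, recent_outbreaks):
    if recent_outbreaks > 0:             # any nearby outbreak dominates
        return "High"
    if migration_density > 0.6:          # heavy migratory bird traffic
        return "High" if temp_c < 15 else "Low"
    return "Low"

print(avian_flu_risk(temp_c=8, migration_density=0.8, recent_outbreaks=0))   # High
print(avian_flu_risk(temp_c=22, migration_density=0.3, recent_outbreaks=0))  # Low
```

In the real system these inputs come from the geolocation lookup (temperature, migration patterns, outbreak reports for the resolved coordinates) before the tree assigns the High/Low label.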
Loan Default Prediction
Data Science Coding Challenge: Loan Default Prediction using Machine Learning (UIC). Developed and implemented a predictive model to estimate the probability of loan defaults using real-world financial data. Replaced baseline dummy classifiers with Logistic Regression, incorporating feature preprocessing and one-hot encoding for categorical variables. Aligned training and testing datasets and generated probability-based outputs for over 100,000 loan records, producing submission-ready predictions (LoanID, predicted_probability). Gained hands-on experience with data preprocessing, model training, evaluation (AUC), and submission pipelines in Python.
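A condensed sketch of the modeling step: one-hot encode a categorical column and fit logistic regression, here by plain batch gradient descent in NumPy so the example is self-contained. The column names and synthetic data are made up; the project used the challenge's real loan records and scikit-learn-style tooling.

```python
import numpy as np

# Synthetic stand-in data (illustrative): one categorical and one numeric feature.
rng = np.random.default_rng(0)
employment = rng.choice(["salaried", "self_employed", "unemployed"], size=400)
income = rng.normal(50, 15, size=400)
# Simulated target: unemployment and low income raise default odds.
logit = -2.0 + 2.5 * (employment == "unemployed") - 0.03 * income
y = (rng.random(400) < 1 / (1 + np.exp(-logit))).astype(float)

# Manual one-hot encoding of the categorical column, plus scaled income.
cats = sorted(set(employment))
X = np.column_stack([(employment == c).astype(float) for c in cats] + [income / 100])

# Logistic regression fit by batch gradient descent on the log-loss.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(3000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * X.T @ g / len(y)
    b -= 0.1 * g.mean()

# Probability-based outputs, as required for the (LoanID, predicted_probability) file.
proba = 1 / (1 + np.exp(-(X @ w + b)))
print(proba[:3].round(3))
```

The probabilities (rather than hard 0/1 labels) are what the AUC evaluation and the submission format both require.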
Predictive Demand Forecasting & Resource Allocation
Independent Research, Feb 2026. Developed predictive models to forecast service demand and resource needs using historical time-series and operational datasets. Cleaned and processed 150k+ records using Python (Pandas, NumPy) and SQL, handling missing values, outliers, and inconsistent formats. Applied regression and time-series forecasting (moving averages, ARIMA) to predict peak usage periods and workload spikes. Designed optimization models to allocate limited resources efficiently under budget and capacity constraints. Built automated reporting dashboards to track KPIs, trends, and forecast accuracy. Reduced manual reporting time with reusable ETL pipelines and scheduled data updates.
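The simplest forecaster in that toolbox, a trailing moving average rolled forward over the horizon, can be sketched in a few lines. The window size and demand series are illustrative, not the project's operational data.

```python
def moving_average_forecast(series, window=3, horizon=2):
    """Forecast `horizon` steps ahead by rolling a trailing moving average forward."""
    history = list(series)
    out = []
    for _ in range(horizon):
        nxt = sum(history[-window:]) / window
        out.append(nxt)
        history.append(nxt)   # feed the forecast back in for the next step
    return out

demand = [120, 130, 125, 140, 150]
print(moving_average_forecast(demand, window=3, horizon=2))
```

ARIMA replaces this naive average with fitted autoregressive and moving-average terms, but the rolled-forward structure of multi-step forecasting is the same.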
Quant Regime Strategy (Independent Study)
Personal Project, Dec 2025 – Feb 2026. Built a full quantitative research pipeline on 6 years of daily equity returns. Documented heavy tails (empirical tail frequency 5x higher than Gaussian models predict), volatility clustering (ACF of |returns| significant out to 50+ lags), and HMM regime detection (two latent states with annualised vols of 17% vs 43%), alongside GARCH(1,1) volatility forecasting (RMSE: 0.00293). Combining regime signals and GARCH filters into a momentum strategy lifted the Sharpe ratio from 0.52 to 0.90 and cut maximum drawdown from -39.7% to -13.8%. Key takeaway: you don't need complex ML to beat a naive benchmark; just take the empirical properties of returns seriously.
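The two headline metrics, annualised Sharpe ratio and maximum drawdown, are standard and easy to state exactly. The return series below is made up for illustration; the study's numbers come from its own backtest.

```python
import math

def sharpe(returns, periods=252):
    """Annualised Sharpe ratio: mean over sample stdev, scaled by sqrt(periods)."""
    mu = sum(returns) / len(returns)
    var = sum((r - mu) ** 2 for r in returns) / (len(returns) - 1)
    return mu / math.sqrt(var) * math.sqrt(periods)

def max_drawdown(returns):
    """Worst peak-to-trough loss of the compounded equity curve (negative number)."""
    equity, peak, worst = 1.0, 1.0, 0.0
    for r in returns:
        equity *= 1 + r
        peak = max(peak, equity)
        worst = min(worst, equity / peak - 1)
    return worst

rets = [0.01, -0.02, 0.015, -0.005, 0.02]
print(round(sharpe(rets), 2), round(max_drawdown(rets), 4))
```

Drawdown is the metric the regime/GARCH filters move most: sitting out the high-volatility state shortens the troughs even when average returns barely change.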