Om Agrawal

Hey, I'm Om

I'm a CS student at UT Austin, passionate about building solutions to real problems.

Here are some notes to myself.

Projects

Micrograd

Developed an autograd engine implementing backpropagation (reverse-mode autodiff) over dynamically built DAGs for scalar operations, and built a PyTorch-style neural network library on top of it

Python Jupyter
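
How it works, roughly: each value records its inputs and a local backward rule, and calling backward() topologically sorts the DAG and applies the chain rule in reverse. A minimal sketch in the spirit of the project (illustrative, not the actual Micrograd code):

    class Value:
        """A scalar node in a dynamically built computation DAG."""
        def __init__(self, data, children=()):
            self.data = data
            self.grad = 0.0
            self._backward = lambda: None
            self._prev = set(children)

        def __add__(self, other):
            other = other if isinstance(other, Value) else Value(other)
            out = Value(self.data + other.data, (self, other))
            def _backward():
                self.grad += out.grad    # d(out)/d(self) = 1
                other.grad += out.grad
            out._backward = _backward
            return out

        def __mul__(self, other):
            other = other if isinstance(other, Value) else Value(other)
            out = Value(self.data * other.data, (self, other))
            def _backward():
                self.grad += other.data * out.grad
                other.grad += self.data * out.grad
            out._backward = _backward
            return out

        def backward(self):
            # Topologically order the DAG, then run local rules in reverse.
            topo, visited = [], set()
            def build(v):
                if v not in visited:
                    visited.add(v)
                    for child in v._prev:
                        build(child)
                    topo.append(v)
            build(self)
            self.grad = 1.0
            for v in reversed(topo):
                v._backward()

    # usage: a = Value(2.0); b = Value(-3.0); c = a * b + a; c.backward()
    # now a.grad == b.data + 1 == -2.0 and b.grad == a.data == 2.0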

Cornucopia

1st place + Best Use of GenAI at IEEE UT's 2025 Techathon

AI-powered system that tracks fridge inventory using computer vision, reduces food waste, and suggests recipes

Python Gemini HTML CSS JavaScript Node.js PostgreSQL ESP32
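
The recipe step is essentially a structured prompt over whatever the vision side detects. A hypothetical sketch with the google-generativeai client (the model name, prompt, and suggest_recipes helper are placeholders, not the project's code):

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")              # placeholder
    model = genai.GenerativeModel("gemini-1.5-flash")    # assumed model choice

    def suggest_recipes(inventory):
        """Ask Gemini for recipes that use up soon-to-expire items first."""
        prompt = (
            "Given this fridge inventory (item, quantity, days-to-expiry):\n"
            + "\n".join(f"- {name}, {qty}, {days}" for name, qty, days in inventory)
            + "\nSuggest 3 recipes that prioritize items expiring soonest."
        )
        return model.generate_content(prompt).text

    print(suggest_recipes([("spinach", 1, 2), ("eggs", 6, 10), ("feta", 1, 4)]))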

Multithreaded Lock-Free Web Server

Built a multithreaded, lock-free web server in Rust

Rust

Barnes-Hut

Implemented an efficient N-body simulation in C++ using the Barnes-Hut algorithm, parallelized with MPI

C++ MPI
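
The core trick: a far-away cluster of bodies can be replaced by a single point mass at its center of mass whenever the cell's width-to-distance ratio is below an opening angle theta. A minimal Python sketch of that criterion (the project itself is C++/MPI; the Cell layout here is illustrative):

    from dataclasses import dataclass, field

    @dataclass
    class Cell:
        mass: float      # total mass inside this quadtree cell
        com_x: float     # center of mass of the cell
        com_y: float
        width: float     # side length of the cell
        children: list = field(default_factory=list)   # empty => leaf

    def accel(cell, x, y, theta=0.5, G=1.0, eps=1e-9):
        """Acceleration on a body at (x, y) from everything inside `cell`."""
        dx, dy = cell.com_x - x, cell.com_y - y
        dist = (dx * dx + dy * dy) ** 0.5 + eps   # eps softens self-interaction
        # Opening criterion: if the cell looks small from here
        # (width/dist < theta), treat its contents as one point mass.
        # This is what turns the O(n^2) pairwise sum into O(n log n).
        if not cell.children or cell.width / dist < theta:
            f = G * cell.mass / dist ** 3
            return f * dx, f * dy
        ax = ay = 0.0
        for child in cell.children:
            cax, cay = accel(child, x, y, theta, G, eps)
            ax, ay = ax + cax, ay + cay
        return ax, ay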

2-Phase Commit in Rust

Implemented a simulation of the two-phase commit (2PC) protocol in Rust

Rust
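
The protocol in miniature: the coordinator commits only if every participant votes yes in the prepare phase, and durably logs its decision before broadcasting it. A toy Python sketch (the real project is in Rust; Participant here is a stand-in):

    class Participant:
        """Toy participant with a local vote; real ones write a durable log."""
        def __init__(self, can_commit=True):
            self.can_commit = can_commit
            self.state = "INIT"
        def prepare(self):
            self.state = "PREPARED" if self.can_commit else "ABORTED"
            return self.can_commit
        def commit(self): self.state = "COMMITTED"
        def abort(self):  self.state = "ABORTED"

    def two_phase_commit(participants, log):
        # Phase 1 (voting): every participant must vote yes to proceed.
        votes = []
        for p in participants:
            try:
                votes.append(p.prepare())
            except Exception:
                votes.append(False)   # a crash or timeout counts as a no vote
        decision = all(votes)
        # The coordinator logs the decision *before* phase 2, so a
        # recovering coordinator can finish the protocol after a crash.
        log.append("COMMIT" if decision else "ABORT")
        # Phase 2 (completion): broadcast the decision to everyone.
        for p in participants:
            (p.commit if decision else p.abort)()
        return decision

    # two_phase_commit([Participant(), Participant(can_commit=False)], []) -> False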

Kmeans on GPU

GPU-accelerated K-means clustering algorithm

C++ CUDA
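
K-means suits GPUs because the assignment step is embarrassingly parallel: every point independently finds its nearest centroid. A NumPy sketch of one iteration to show the shape of the computation (the actual kernels are CUDA, not NumPy):

    import numpy as np

    def kmeans_step(points, centroids):
        """One K-means iteration. points: (N, D), centroids: (K, D)."""
        # Assignment: squared distance from every point to every centroid,
        # shape (N, K). On the GPU this is one thread per point; here
        # NumPy broadcasting plays that role.
        d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Update: new centroid = mean of its assigned points (a reduction,
        # typically done with atomics or a tree reduction in CUDA).
        new_centroids = np.array([
            points[labels == k].mean(axis=0) if (labels == k).any()
            else centroids[k]
            for k in range(len(centroids))
        ])
        return labels, new_centroids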

BST Lightning

Engineered a high-performance, multithreaded algorithm for identifying equivalent Binary Search Trees

Go
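
Two BSTs are equivalent when they hold the same values, i.e. their in-order traversals match, so traversals can run concurrently and equality checks reduce to comparing value sequences. A rough Python analogue of the idea (the project uses Go and goroutines):

    from collections import defaultdict
    from concurrent.futures import ThreadPoolExecutor

    class Node:
        def __init__(self, val, left=None, right=None):
            self.val, self.left, self.right = val, left, right

    def in_order(node, out):
        """Collect the BST's values in sorted order."""
        if node:
            in_order(node.left, out)
            out.append(node.val)
            in_order(node.right, out)
        return out

    def group_equivalent(trees, workers=8):
        """Group trees whose in-order traversals match; traverse in parallel."""
        with ThreadPoolExecutor(max_workers=workers) as pool:
            keys = list(pool.map(lambda t: tuple(in_order(t, [])), trees))
        groups = defaultdict(list)
        for i, key in enumerate(keys):
            groups[key].append(i)   # same value sequence => equivalent BSTs
        return list(groups.values())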

DDP Training

Implemented Distributed Data Parallel training of VGG11 model on CIFAR-10 dataset using AWS SageMaker, analyzing scaling performance across multiple GPUs to balance parallelism vs. communication overhead in gradient synchronization

PyTorch AWS SageMaker DistributedDataParallel NCCL
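
The PyTorch side of that setup is compact: one process per GPU, NCCL for the gradient all-reduce, and a DistributedSampler to shard CIFAR-10. A condensed sketch (hyperparameters and launch details are illustrative, not the SageMaker job config):

    import torch
    import torch.distributed as dist
    import torchvision
    from torch.nn.parallel import DistributedDataParallel as DDP
    from torch.utils.data import DataLoader, DistributedSampler

    def train(rank, world_size):
        # One process per GPU (launched e.g. via torchrun or mp.spawn;
        # assumes MASTER_ADDR/MASTER_PORT are set in the environment).
        dist.init_process_group("nccl", rank=rank, world_size=world_size)
        torch.cuda.set_device(rank)
        model = DDP(torchvision.models.vgg11(num_classes=10).cuda(rank),
                    device_ids=[rank])
        data = torchvision.datasets.CIFAR10(
            "./data", download=True,
            transform=torchvision.transforms.ToTensor())
        # DistributedSampler gives each rank a disjoint shard of the dataset.
        loader = DataLoader(data, batch_size=128,
                            sampler=DistributedSampler(data))
        opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
        for x, y in loader:
            opt.zero_grad()
            loss = torch.nn.functional.cross_entropy(
                model(x.cuda(rank)), y.cuda(rank))
            # The NCCL all-reduce that synchronizes gradients across GPUs
            # overlaps with backward(); this is the communication overhead
            # the scaling analysis measures.
            loss.backward()
            opt.step()
        dist.destroy_process_group()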

LoRA Parameter-Efficient Fine-Tuning

Applied LoRA for parameter-efficient fine-tuning of the TinyLlama model on the Guanaco dataset by injecting LoRA adapter modules into the MLP linear layers, and measured how training latency varied with LoRA rank and module size

PyTorch PEFT Transformers BitsAndBytes TRL
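
With the PEFT library this is mostly a config: pick the target modules and a rank r. A sketch along those lines (the checkpoint name and target module names follow Llama-style conventions and are assumptions, not the project's exact settings):

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # Assumed checkpoint; any Llama-family causal LM works the same way.
    model = AutoModelForCausalLM.from_pretrained(
        "TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    config = LoraConfig(
        r=16,              # LoRA rank -- the main knob in the latency study
        lora_alpha=32,
        target_modules=["gate_proj", "up_proj", "down_proj"],  # MLP linears
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()   # only adapter params are trainable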

Activation-Aware Weight Quantization

Quantized the facebook/opt-1.3b model to 3 bits while preserving performance by protecting the top 1% of salient weights with a hardware-friendly scaling approach rather than mixed precision, with an ablation study identifying optimal scaling factors

PyTorch Transformers Accelerate Datasets
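
The intuition in one toy example: scaling a salient weight channel up before quantizing, and folding the inverse scale into the preceding operator (here, simply dividing the activation), shrinks that channel's contribution to quantization error without storing any weights in higher precision. A NumPy illustration (the numbers and scale factor are made up; choosing s is exactly what the ablation explores):

    import numpy as np

    def quantize(w, bits=3):
        """Uniform symmetric quantization with one shared scale per group."""
        qmax = 2 ** (bits - 1) - 1
        step = np.abs(w).max() / qmax
        return np.clip(np.round(w / step), -qmax - 1, qmax) * step

    w = np.array([0.50, 0.01, -0.30, 0.20])  # channel 1 is small, but...
    x = np.array([0.2, 9.0, 0.1, -0.3])      # ...its activation is huge: salient

    s = np.array([1.0, 16.0, 1.0, 1.0])      # scale up only the salient channel
    naive = quantize(w) @ x                  # plain 3-bit quantization
    awq   = quantize(w * s) @ (x / s)        # weights up, activation down

    exact = w @ x
    print(f"|err| naive={abs(naive - exact):.4f}  "
          f"awq-style={abs(awq - exact):.4f}")
    # The salient weight no longer rounds to zero, so the scaled variant's
    # error is roughly 8x smaller in this toy example.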

Bhagavad-GPT

Implemented a custom RAG pipeline centered on the Bhagavad Gita, grounding answers in an authoritative knowledge base

Python LangChain OpenAI API Flask Pinecone
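
Stripped down, the RAG loop is: embed the question, pull the nearest passages from Pinecone, and answer strictly from that context. A schematic sketch using the raw OpenAI and Pinecone clients rather than LangChain's wrappers (index name, model choices, and metadata layout are placeholders):

    from openai import OpenAI
    from pinecone import Pinecone

    client = OpenAI()                    # reads OPENAI_API_KEY from env
    index = Pinecone().Index("gita")     # hypothetical index of Gita passages

    def ask(question, k=5):
        # 1. Embed the question.
        emb = client.embeddings.create(
            model="text-embedding-3-small", input=question).data[0].embedding
        # 2. Retrieve the k nearest passages from the authoritative corpus.
        hits = index.query(vector=emb, top_k=k, include_metadata=True)
        context = "\n\n".join(m.metadata["text"] for m in hits.matches)
        # 3. Answer grounded only in the retrieved passages.
        out = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Answer using only the provided Bhagavad Gita "
                            "passages."},
                {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
            ])
        return out.choices[0].message.content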

Experience

Undergraduate Researcher

UT Networked Systems Research Group

Designing a resource allocation mechanism that reduces variability in encoding time across input modalities in multimodal AI models, shrinking pipeline bubbles during inference

Software Engineering Intern

Incoming at ForeFlight

Intern, Office of Senior Technical Experts

Federal Aviation Administration
  • Developed an algorithm using position and time calculations over 50 million data points of authoritative FAA ADS-B data to detect aircraft GPS location "jumps" (a simplified sketch follows this list)
  • Engineered an application displaying geographic flight track data with detected jumps using our "Jump" algorithm, Kepler.gl, and Streamlit
  • Worked directly under the FAA's Chief Scientific Technical Advisor for Satellite Navigation Systems and collaborated with Stanford/Virginia Tech researchers
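
The jump check is basically kinematics: if the great-circle distance between consecutive ADS-B reports implies a ground speed no aircraft can fly, the position is suspect. A hypothetical distillation of that idea (the speed threshold and structure are illustrative, not the FAA's actual algorithm):

    import math

    def haversine_nm(lat1, lon1, lat2, lon2):
        """Great-circle distance in nautical miles."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlat, dlon = p2 - p1, math.radians(lon2 - lon1)
        a = (math.sin(dlat / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
        return 2 * 3440.065 * math.asin(math.sqrt(a))  # Earth radius ~3440 nm

    def detect_jumps(track, max_speed_kts=700):
        """Flag consecutive reports whose implied ground speed is impossible.
        track: list of (t_seconds, lat, lon) tuples, sorted by time."""
        jumps = []
        for (t0, la0, lo0), (t1, la1, lo1) in zip(track, track[1:]):
            dt_hr = (t1 - t0) / 3600
            if dt_hr <= 0:
                continue
            speed = haversine_nm(la0, lo0, la1, lo1) / dt_hr
            if speed > max_speed_kts:
                jumps.append((t1, la1, lo1, speed))
        return jumps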

Undergraduate Course Assistant, Discrete Mathematics

UT Austin Computer Science
  • Helped students with topics including Propositional Logic, Proof Techniques, Graph Theory, and Asymptotic Notation
  • Conducted discussion sections, created practice questions, and graded assignments/exams