Aditya Raj

I am a final-year B.Tech student at NIT Patna, graduating in 2026 with a major in Electronics and Communication Engineering.

My interest in memory mechanisms developed during my Summer 2025 internship at IIIT-H, where I was fortunate to work on retrieval systems under the guidance of Dr. Kuldeep Kurte. That experience oriented me toward memory systems for AI. My vision for the future of AI is "white-box": a personal, collaborative assistant that empowers humans rather than replacing them.

Neural Network Internals · LLM Memory Mechanisms · GPUs · Low-level Architecture

Research

Model Architecture

Knowledge Graph-Informed Query Decomposition (KG-IQD): Hybrid KG-RAG Reasoning in Noisy Context

Aditya Raj¹, Kuldeep Kurte²

¹National Institute of Technology, Patna · ²IIIT Hyderabad

ISWC 2025 (In Review) · Poster

Achieved state-of-the-art results on a custom disaster QA benchmark, outperforming RQ-RAG by 14% and a KG-only baseline by 18%. Developed a neuro-symbolic framework for query decomposition that leverages knowledge graphs to interrelate sub-queries for hybrid RAG-based reasoning over structured and unstructured data.

Achievements

NK Securities Research·Ranked 67th/2095 in IV Prediction; MSE=1.3e-5
2025
M2L Summer School·Selected from 1600+ global applicants (BTech to Industry); Split, Croatia
2025
Amazon ML Challenge·Ranked 184th/75,000+; F1 score=0.4667 using fine-tuned Moondream
2024
AI of GOD - IIT ISM·Annual ML Challenge | Winner; WER=0.116 using TrOCR+T5
2024
Regional Mathematical Olympiad·Top 0.4% nationwide (classes 8–12) in India
2019

Experience

Quant Developer Intern @ QFI Research Capital

Hyderabad, India

May 2025 – July 2025

Worked on the trade execution component of the trading engine, fixed critical bugs that caused several metrics to be calculated incorrectly, and built the complete internal tooling stack for workflow automation and business operations.
C++ · Python · Linux

Research Intern @ IIIT Hyderabad · ISWC 2025 (In Review)

Hyderabad, India | Advisor: Dr. Kuldeep Kurte, Spatial Informatics Lab

Apr 2025 – July 2025

Achieved SOTA results on a custom disaster QA benchmark, outperforming RQ-RAG by 14% and a KG-only baseline by 18%, by developing a neuro-symbolic framework that guides query decomposition using knowledge graphs and interrelates sub-queries with RAG for robust reasoning over structured and unstructured data.
PyTorch · Knowledge Graph · RAG

Research Intern @ IIT Guwahati

Guwahati, India | Advisor: Dr. Arijit Sur, Mentor: Mr. Suklav Ghosh

June 2024 – July 2024

Developed a robust GAN-based shape occlusion detection system using a UNet generator on a synthetic dataset of geometric shapes, effectively reconstructing the occluded regions of overlapping shapes.
PyTorch · OpenCV · NumPy

Projects

IV Prediction - Indian Options Market

July 2025

Top 2% Rank (67/4100) in NK Securities Research

Engineered a LightGBM pipeline that achieved an MSE of 1.3e-5 on sparse options data, utilizing nearest-neighbor TTE estimation and call/put-specific data representation.

Python · LightGBM · Pandas · NumPy

Serverless Web Platform

Nov 2024 – Feb 2025

Replaced a $150/month SaaS solution by building a serverless platform that migrated 10,000+ users. Achieved <1s LCP on slow 4G networks using a Next.js/Vercel/Supabase stack with extensive frontend optimizations.

Next.js · Vercel · PostgreSQL · Tailwind · React

Multimodal-Reranker

Mar 2025 – Apr 2025

Engineered a multimodal reranking module for an AI search engine: an Ollama-served LLM performs semantic filtering and extracts JSON filters from natural-language queries, while a fast FAISS (HNSW+IVF) retriever over CLIP/mxbai embeddings, combined with vision-language models, fetches candidates. Top-k results are then reranked by cosine similarity for refined search based on user intent and visual semantics.

Python · CLIP · mxbai-embed-large-v1 · FAISS · Ollama

OCR Model – LLM Re-correction

Oct 2024

1st on Kaggle | IIT-ISM Annual ML Competition Winner

Engineered an OCR pipeline for medieval-era handwritten Spanish manuscripts by combining TrOCR for initial transcription with a T5-based language model for post-correction, fine-tuned on domain-specific linguistic features. Designed and integrated a custom text augmentation and similarity-preserving algorithm to enhance LM robustness; achieved a Word Error Rate (WER) of 0.116, outperforming all competing models.

Python · OpenCV · LLMs · TrOCR · T5