I grew up in Bhusawal, a small town in India, pulling apart beige Pentium 4 desktop towers, convinced that the right piece of silicon could unlock new worlds. When I finally saved up for a GPU in 2009 (an NVIDIA 9400 GT), games became experiments: sliders turned into differential equations, frame rates hinted at dynamics, and I drifted toward pure math just to keep up with my own questions about motion and intelligence.
That restlessness pulled me into interdisciplinary labs, where I translated biological physics into software. I began by building chemical computing agents and CUDA-accelerated reaction-kinetics simulators, then explored in-silico DNA computers that implemented AI-like algorithms. During my PhD, I focused on scaling numerical methods and on active-matter and living-fluid solvers that matched high-resolution experimental observations and hinted at why some biological cells adopt spherical shapes. Later, at Harvard CSE-Lab, I shipped GPU-heavy reinforcement learning tooling and continued publishing on living-fluid physics, because building systems kept teaching me more than describing them.
Today that philosophy is becoming Functoris: an agentic framework for biological engineering that translates natural-language intent into structured, constraint-aware DNA construct designs and integrated Design-Build-Test-Learn workflows. Ultimately, this is infrastructure for programmable medicine and automated science—helping teams move from ideas to rigorous experiments without the fragmented handoffs that slow discovery today.