Undergrad ML Researcher &
10x Hackathon Winner
Hi! I'm a junior at Carnegie Mellon University with research interests in Natural Language Processing, Reinforcement Learning, & Brain-Computer Interfaces. My past work includes conversational dialogue systems, cognitively realistic syntax parsing, and multimodal brainwave classification. My forthcoming work primarily focuses on linguistic RL and semantic composition.
Beyond research, much of my focus currently goes towards building a ramen-profitable micro-SaaS to secure my intellectual freedom. I also enjoy reading Sanskrit philosophical literature, especially on language and epistemology.
Feel free to reach out via Twitter, LinkedIn, or email!
9/2021 - Present
- Investigating compositional representations in language models
- Leveraging semantic graphs for natural language inference
- Working with Uri Alon and Prof. Graham Neubig in NeuLab
12/2020 - 9/2021
RL Researcher @ Language, Logic, & Cognition Lab
- Applying reinforcement learning for cognitively realistic syntax parsing
- Extending work to semantic parsing and text-to-SQL
- Optimizing linguistic RL library under Prof. Adrian Brasoveanu
1/2021 - 8/2021
NLP Researcher @ Natural Language & Dialogue Systems Lab
8/2020 - 11/2020
ML Engineering Consultant @ Bunch
- Implemented TensorFlow.js computer vision models in the browser
- Built a React web app that estimates human force exertion from video
- Product to be deployed in gyms to replace >$20k in equipment
6/2020 - 8/2021
BCI Researcher @ Intheon
- Developing deep learning models for NLP + BCI
- Due to research confidentiality, further details upon request
6/2020 - 9/2020
- Fine-tuned PyTorch NLP models for extractive question answering (F1 >0.9)
- Implemented neural and classical information retrieval approaches
- Productionized with Flask REST API for core company product
3/2020 - 6/2021
- Led 5 teams (25 members) in building a Brain-Computer Interface
- Project utilizes subvocal recognition for synthetic telepathy
- Organized a neurotech curriculum for students
- Held weekly readings of cutting-edge neurotech research papers
1/2020 - 5/2020
NLP Researcher @ Applied ML Lab
- Architected high-dimensional document attention model
- Collected and cleaned >7 million textual data points from the Internet
- Benchmarked our model on a mental health sentiment analysis task
- Examined relevant academic literature and wrote preprint under Prof. Narges Norouzi
7/2018 - 5/2019
Fullstack Web Dev Consultant @ Nevaka
- Built React app that dynamically loads thousands of database objects
- Coordinated migration from Firebase Realtime Database to Cloud Firestore
- Integrated Google Firebase authentication
Won 1st @ Facebook SF Dev Hackathon 2019
Tomorrow's AR social network
Won 2nd & FinTech @ LA Hacks 2019
Big data forecasting for sustainable businesses
Deployed with active users
Morphology visualizer for Sanskrit literature research & education
Won 1st @ SRC Code 2018
Cleaning neighborhoods with CV
Won 1st in US @ NeuroTechX 2020
Non-invasive synthetic telepathy
We & You
Won Google Cloud @ BASEHacks 2018
Peer-to-peer mental health services for teens
Won 3rd @ HackMIT 2020
Domain-specific neural audio compression for virtual bands
Won Amazon & Blockchain @ CruzHacks 2019
Facilitating blockchain donations with an Alexa skill
I'm interested in questions like...
How do humans perform semantic composition, and how can we build systems that analyze language compositionally? Transformers have outpaced virtually all other architectures in NLP; is this just due to higher generalizability, or is something about the self-attention mechanism inherently effective at expressing semantic composition?
How do humans ground language in their environment, and how can we build systems that understand language in relation to the real world? The current approach of learning word representations from a large text corpus has gone a long way, but it falls into a trap that can only be avoided by grounding language. Could linguistic RL agents be a solution?
What is the underlying relationship between symbolic and statistical approaches? Why do some parts of nature seem so perfectly described by symbolic relations while others don't? Is reality fundamentally symbolic, or are symbols a formalism that humans apply to our environment?
And a few miscellaneous ones: What makes human brains in particular so good at manipulating symbols, genetically, structurally, and culturally? How does the brain represent non-linguistic thoughts, and is all perception symbolic at some level? How can classical theories from linguistics and philosophy of language aid modern research in NLP? Is internality an inherent property of matter?
Reinforcement Learning to Jointly Encode Prompts and Database Schemas for Text-to-SQL Semantic Parsing
Under Review at NAACL 2022
A Family of Cognitively Realistic Parsing Environments for Deep Reinforcement Learning
Athena 2.0: Contextualized Dialogue Management for an Alexa Prize SocialBot
Transfer Learning for Mental Health Evaluation from Natural Language
- I'm a philosophy enthusiast and tend to align with the Mīmāṃsā school of Indian thought.
- I value financial freedom and aim to produce $2k MRR from a micro-SaaS by the end of 2022.
- I still have much to learn in higher math, but I'm currently exploring homotopy type theory and potential extensions for natural language semantics.
- I've been teaching myself Mandarin Chinese and Classical Sanskrit for about 3 years. I'm fluent in English, Hindi, and Spanish.
- I enjoy building constructed languages, including a 2-channel parallelized experimental language and a modern descendant of Sumerian.
- I'm the former VP of UCSC's AI club, SCAI, where I taught deep learning, NLP, and computational genomics.
- My current reading list includes Arthasaṁgraha (Bhāskara), Gödel, Escher, Bach (Hofstadter), and The Feeling of Life Itself (Koch).
- The source code for this site is based on Andrej Karpathy's personal website.
- My inbox is open for discussions!