Awarded by
Google, IBM, Amazon, DocuSign, Facebook, UCSC, UCSD, Berkeley, UCLA, UCSB, NSF, NeuroTechX, MIT, OpenBCI, Capital One, AngelHack, Hedera Hashgraph, MLH

9/2021 - Present

NLP Researcher @ CMU Language Technologies Institute

  • Investigating compositional representations in language models
  • Leveraging semantic graphs for natural language inference
  • Working with Uri Alon and Prof. Graham Neubig in NeuLab
12/2020 - 9/2021

RL Researcher @ Language, Logic, & Cognition Lab

  • Applying reinforcement learning for cognitively realistic syntax parsing
  • Extending work to semantic parsing and text-to-SQL
  • Optimizing linguistic RL library under Prof. Adrian Brasoveanu
1/2021 - 8/2021

NLP Researcher @ Natural Language & Dialogue Systems Lab

8/2020 - 11/2020

ML Engineering Consultant @ Bunch

  • Implemented TensorFlow.js computer vision models in the browser
  • Built React web app to calculate force exertion of humans on video
  • Product slated for deployment in gyms, replacing >$20k of equipment
6/2020 - 8/2021

BCI Researcher @ Intheon

  • Developing deep learning models for NLP + BCI
  • Due to research confidentiality, further details upon request
6/2020 - 9/2020

NLP Engineer @ SapientX

  • Fine-tuned PyTorch NLP models for extractive question answering (F1 >0.9)
  • Implemented neural and classical information retrieval approaches
  • Productionized with Flask REST API for core company product
3/2020 - 6/2021

President @ NeuroTechSC

  • Led 5 teams (25 members) in building a Brain-Computer Interface
  • Project utilizes subvocal recognition for synthetic telepathy
  • Organized neurotech curriculum for students
  • Held weekly paper readings of cutting-edge neurotech research
1/2020 - 5/2020

NLP Researcher @ Applied ML Lab

  • Architected high-dimensional document attention model
  • Collected and cleaned >7 million textual data points from the Internet
  • Benchmarked our model on a mental health sentiment analysis task
  • Examined relevant academic literature and wrote preprint under Prof. Narges Norouzi
7/2018 - 5/2019

Fullstack Web Dev Consultant @ Nevaka

  • Built React app that dynamically loads thousands of database objects
  • Coordinated migration from Firebase Realtime Database to Cloud Firestore
  • Integrated Google Firebase authentication


Won 1st @ Facebook SF Dev Hackathon 2019

Tomorrow's AR social network


Won 2nd & FinTech @ LA Hacks 2019

Big data forecasting for sustainable businesses

Deployed with active users

Morphology visualizer for Sanskrit literature research & education


Won 1st @ SRC Code 2018

Cleaning neighborhoods with CV


Won 1st in US @ NeuroTechX 2020

Non-invasive synthetic telepathy

Won Google Cloud @ BASEHacks 2018

We & You: Peer-to-peer mental health services for teens

Won 3rd @ HackMIT 2020

Latent Space: Domain-specific neural audio compression for virtual bands


Won Amazon & Blockchain @ CruzHacks 2019

Facilitating blockchain donations with an Alexa skill


I'm interested in questions like...

How do humans perform semantic composition and how can we build systems that analyze language compositionally? Transformers have outpaced virtually all other architectures in NLP; is this just due to higher generalizability or is something about the self-attention mechanism inherently effective at expressing semantic composition?
How do humans ground language in their environment and how can we build systems that understand language in relation to the real world? The current approach of learning word representations from large text corpora has gone a long way, but it falls into the trap of defining words only in terms of other words, a circle that can only be broken by grounding language. Could linguistic RL agents be a solution?
What is the underlying relationship between symbolic and statistical approaches? Why do some parts of nature seem so perfectly described by symbolic relations while others don't? Is reality fundamentally symbolic or are symbols a formalism that humans apply to our environment?
And a few miscellaneous ones: What makes human brains in particular so good at manipulating symbols, genetically, structurally, and culturally? How does the brain represent non-linguistic thoughts, and is all perception symbolic at some level? How can classical theories from linguistics and philosophy of language aid modern research in NLP? Is internality an inherent property of matter?
Reinforcement Learning to Jointly Encode Prompts and Database Schemas for Text-to-SQL Semantic Parsing

Under Review at NAACL 2022
Rohan Pandey
A Family of Cognitively Realistic Parsing Environments for Deep Reinforcement Learning

Adrian Brasoveanu, Rohan Pandey*, Maximilian Alfano-Smith*
Athena 2.0: Contextualized Dialogue Management for an Alexa Prize SocialBot

Juraj Juraska, Kevin K. Bowden, Lena Reed, Vrindavan Harrison, Wen Cui, Omkar Patil, Rishi Rajasekaran, Angela Ramirez, Cecilia Li, Eduardo Zamora, Phillip Lee, Jeshwanth Bheemanpally, Rohan Pandey, Adwait Ratnaparkhi, Marilyn Walker
Transfer Learning for Mental Health Evaluation from Natural Language

Kamil Kisielewicz*, Rohan Pandey*, Shivansh Rustagi, Narges Norouzi
Fun Facts