I'm a Deep Learning and Natural Language Processing researcher at Carnegie Mellon with 5 years of software experience, 10 hackathon wins, and ML papers at high-impact conferences.
I've advised and worked with several startups to deploy AI in domains ranging from B2B sales to consumer voice assistants. In my free time, I bootstrap products while exploring algebraic topology & Sanskrit literature.
The AI Platform group at Microsoft builds infrastructure for enterprise-scale machine learning lifecycles on Azure.
I'm developing compute-less active learning models to aid in natural language data labeling at the edge.
Are large language models just learning co-occurrence statistics, or can they capture compositional relations as encoded by semantic formalisms?
We applied graph algorithms to Abstract Meaning Representation to create a task that probes compositional ability. I presented our work at the 2021 SCS Research Fair.
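As a rough illustration of the idea (a toy sketch, not the actual probing pipeline), an AMR graph can be stored as labeled edges and mined for relation triples that a probe could then ask a language model about:

```python
# Toy AMR for "The boy wants to go":
# (w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))
# Each node maps its variable to a concept plus outgoing role edges.
amr = {
    "w": {"instance": "want-01", "ARG0": "b", "ARG1": "g"},
    "b": {"instance": "boy"},
    "g": {"instance": "go-02", "ARG0": "b"},
}

def triples(graph):
    """Extract (head concept, role, tail concept) triples from the graph."""
    out = []
    for node, edges in graph.items():
        for role, target in edges.items():
            if role != "instance":
                out.append((graph[node]["instance"], role, graph[target]["instance"]))
    return sorted(out)

print(triples(amr))
# → [('go-02', 'ARG0', 'boy'), ('want-01', 'ARG0', 'boy'), ('want-01', 'ARG1', 'go-02')]
```

Note how the reentrant `b` node yields two `ARG0` triples — exactly the kind of compositional structure that pure co-occurrence statistics would miss.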
Vizerto is a digital sales assistant that makes domain-specific knowledge easily available to B2B sellers.
I advised their ML team on novel approaches to information retrieval, graphical knowledge representations, and more.
Our conversational socialbot interacted with thousands of Amazon Alexa users every day, maintaining the top average user rating for 2 months straight against teams from Stanford, USC, and more.
My work on user modeling and entity graphs was included in our paper accepted at EMNLP 2021.
SapientX builds white label intelligent voice assistants for cars, phones, fridges, and stores.
I fine-tuned state-of-the-art models for extractive question answering to give Tele the ability to answer domain-specific user queries from large, unorganized document corpora.
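Systems like this typically follow a retrieve-then-read pattern: first narrow a large corpus to relevant passages, then run an extractive reader over them. Below is a minimal sketch of the retrieval half using a toy word-overlap scorer (a stand-in — the documents and scoring here are illustrative, and the actual reader step used fine-tuned transformer QA models):

```python
def retrieve(question, docs, k=1):
    """Rank documents by shared-word count with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

docs = [
    "The fridge defrost cycle runs every eight hours.",
    "Tele supports voice commands in cars and stores.",
]
top = retrieve("How often does the defrost cycle run?", docs)
print(top[0])
# → The fridge defrost cycle runs every eight hours.
```

In the full pipeline, a fine-tuned SQuAD-style model would then extract the answer span ("every eight hours") from the retrieved passage.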
Can deep reinforcement learning model how humans learn to parse syntax trees from experience?
We built a family of cognitively realistic parsing environments to explore how novel neural architectures & RL algorithms could inform psycholinguistic theory. Our work was accepted at NeurIPS 2021 Deep RL workshop.
Wordcab summarizes business meetings using the latest in abstractive neural summarization tech.
I worked with Aleks (CEO) to build topic-based summarization, an in-demand but technically challenging feature.
Intheon builds neural data processing infrastructure used by labs across the world to simplify their brainwave analysis pipelines.
I undertook NSF-funded research to investigate how language models could aid brain-computer interfaces in assisting users.
The AMLL lab applies novel ML research to social good issues primarily in psychology and neuroscience.
Our work used hierarchical document representations to identify mental illness in social media discussions and quantify COVID-19's diachronic effects.
Deployed with active users
Morphology visualizer for Sanskrit literature research & education
Won Amazon & Blockchain @ CruzHacks 2019
Facilitating blockchain donations with an Alexa skill