Hi. I'm Trapit.

I'm a PhD student advised by Prof. Andrew McCallum at UMass Amherst. I am broadly interested in deep learning, reinforcement learning, and their applications to text data, knowledge bases, and multi-agent systems.

Learn more about what I do

Highlights

  • Our work on A2N, an attention-based method for knowledge graph completion, was accepted at ACL 2019.
  • Our ICLR 2018 paper on meta-learning won the best paper award!
  • Our work on competitive self-play was featured in Wired, Quartz, MIT Tech Review, Discover Magazine, and Business Insider.
  • In our recent work with OpenAI, we found that self-play allows simulated AI agents to discover remarkable physical skills without rewards explicitly designed for such skills. Check out the blog post.

Work Experience

Publications

Click the titles for more information about the work and to download paper/supplementary/code/data.

  • [New] Learning to Few-Shot Learn Across Diverse Natural Language Classification Tasks. Trapit Bansal*, Rishikesh Jha*, Andrew McCallum. Preprint, 2019. (* Equal contribution)
  • [New] Simultaneously Linking Entities and Extracting Relations from Biomedical Text without Mention-level Supervision. Trapit Bansal, Pat Verga, Neha Choudhary, Andrew McCallum. In Association for the Advancement of Artificial Intelligence (AAAI), 2020. (Oral)
  • A2N: Attending to Neighbors for Knowledge Graph Inference. Trapit Bansal, Da-Cheng Juan, Sujith Ravi, Andrew McCallum. In Association for Computational Linguistics (ACL short), 2019. (Oral)
  • Emergent Complexity via Multi-Agent Competition. Trapit Bansal, Jakub Pachocki, Szymon Sidor, Ilya Sutskever, Igor Mordatch. In International Conference on Learning Representations (ICLR), 2018.
  • Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments. Maruan Al-Shedivat, Trapit Bansal, Yuri Burda, Ilya Sutskever, Igor Mordatch, Pieter Abbeel. In International Conference on Learning Representations (ICLR), 2018. (Best Paper)
  • Marginal Likelihood Training of BiLSTM-CRF for Biomedical Named Entity Recognition from Disjoint Label Sets. Nathan Greenberg, Trapit Bansal, Patrick Verga, Andrew McCallum. In Empirical Methods in Natural Language Processing (EMNLP short), 2018. (Oral)
  • RelNet: End-to-end Modeling of Entities and Relations. Trapit Bansal, Arvind Neelakantan, Andrew McCallum. In NIPS Workshop on Automated Knowledge Base Construction (AKBC), 2017.
  • Low-Rank Hidden State Embeddings for Viterbi Sequence Labeling. Dung Thai, Shikhar Murty, Trapit Bansal, Luke Vilnis, David Belanger, Andrew McCallum. In ICML Workshop on Deep Structured Prediction, 2017.
  • Ask the GRU: Multi-task Learning for Deep Text Recommendations. Trapit Bansal, David Belanger, Andrew McCallum. In ACM International Conference on Recommender Systems (RecSys), 2016.
  • Content Driven User Profiling for Comment-Worthy Recommendations of News and Blog Articles. Trapit Bansal, Mrinal Das, Chiranjib Bhattacharyya. In ACM International Conference on Recommender Systems (RecSys), 2015.
  • Ordered Stick-Breaking Prior for Sequential MCMC Inference of Bayesian Nonparametric Models. Mrinal Das, Trapit Bansal, Chiranjib Bhattacharyya. In International Conference on Machine Learning (ICML), 2015.
  • Relating Romanized Comments to News Articles by Inferring Multi-glyphic Topical Correspondence. Goutham Tholpadi, Mrinal Das, Trapit Bansal, Chiranjib Bhattacharyya. In AAAI Conference on Artificial Intelligence (AAAI), 2015.
  • A Provable SVD-based Algorithm for Learning Topics in Dominant Admixture Corpus. Trapit Bansal, Chiranjib Bhattacharyya, Ravindran Kannan. In Neural Information Processing Systems (NIPS), 2014.
  • Going Beyond Corr-LDA for Detecting Specific Comments on News & Blogs. Mrinal Das, Trapit Bansal, Chiranjib Bhattacharyya. In ACM International Conference on Web Search and Data Mining (WSDM), 2014.

Other Fun Stuff

  • Character LSTM for Sentiment Analysis on Twitter: Kate Silverstein, Jun Wang, and I worked on a character-level LSTM model for sentiment analysis on Twitter. Read our short report on some of the cool results from the model. Code is available on GitHub. Kate also has an interesting post on it here.
  • Learning to play Atari: For my Deep Learning class mid-term project, I tried to implement double Q-learning on Atari Games. Check out the videos of the learned agent playing Boxing and Pong (spoiler: they kick ass!).
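For the curious, the core idea behind double Q-learning is to keep two value estimates and let one pick the greedy action while the other scores it, which reduces the overestimation bias of plain Q-learning. Here is a minimal tabular sketch of one update step (the states, actions, and hyperparameters are illustrative, not the project's actual Atari code, which would use deep networks):

```python
import random
from collections import defaultdict

def double_q_update(qa, qb, s, a, r, s_next,
                    alpha=0.1, gamma=0.99, actions=(0, 1)):
    """One tabular double Q-learning step (van Hasselt, 2010):
    flip a coin, select the greedy action with one table,
    and evaluate it with the other."""
    if random.random() < 0.5:
        # select with A, evaluate with B, then update A
        best = max(actions, key=lambda x: qa[(s_next, x)])
        target = r + gamma * qb[(s_next, best)]
        qa[(s, a)] += alpha * (target - qa[(s, a)])
    else:
        # select with B, evaluate with A, then update B
        best = max(actions, key=lambda x: qb[(s_next, x)])
        target = r + gamma * qa[(s_next, best)]
        qb[(s, a)] += alpha * (target - qb[(s, a)])
```

Decoupling action selection from action evaluation is the whole trick; everything else is ordinary Q-learning.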

Contact Me ...

The best way is to email me at "trapitbansal at gmail dot com", and I'll get back to you.