Hi. I'm Trapit.

A second-year PhD student working with Prof. Andrew McCallum at UMass. I am broadly interested in machine learning applied to text data and recommendation systems.


Research Interests

I am a second-year PhD student in IESL, working with Prof. Andrew McCallum. My current research is on semi-supervised deep learning methods for language understanding tasks. Broadly, I am interested in developing novel machine learning methods for NLP and recommendation systems. Check out some of my publications and try out the related code.

Prior Work

I had a great time interning with the Applied Machine Learning group at Facebook in Summer 2016, where I worked on deep learning models for NLP tasks.
After graduation and before starting my PhD, I worked with Prof. Ravi Kannan and Prof. Chiranjib Bhattacharyya at the Indian Institute of Science on topic models, Bayesian nonparametrics, and approximate inference in probabilistic models.

Data Science Tool Kit

Algorithms and tools I use often:

Supervised: SVM, Naive Bayes, kNN, Bayesian Models
Unsupervised: Autoencoders, Word Embeddings, PCA, Probabilistic Graphical Models
Libraries: Theano, Lasagne, Numpy, Scipy, Scikit-Learn, Pandas, Gensim, ggPlot2, GSL

Publications

Click the titles for more information about the work and to download paper/supplementary/code/data.

  • [New] Ask the GRU: Multi-task Learning for Deep Text Recommendations. Trapit Bansal, David Belanger, Andrew McCallum. In ACM International Conference on Recommender Systems (RecSys), 2016.
  • Content Driven User Profiling for Comment-Worthy Recommendations of News and Blog Articles. Trapit Bansal, Mrinal Das, Chiranjib Bhattacharyya. In ACM International Conference on Recommender Systems (RecSys), 2015.
  • Ordered Stick-Breaking Prior for Sequential MCMC Inference of Bayesian Nonparametric Models. Mrinal Das, Trapit Bansal, Chiranjib Bhattacharyya. In International Conference on Machine Learning (ICML), 2015.
  • Relating Romanized Comments to News Articles by Inferring Multi-glyphic Topical Correspondence. Goutham Tholpadi, Mrinal Das, Trapit Bansal, Chiranjib Bhattacharyya. In AAAI Conference on Artificial Intelligence (AAAI), 2015.
  • A Provable SVD-based Algorithm for Learning Topics in Dominant Admixture Corpus. Trapit Bansal, Chiranjib Bhattacharyya, Ravindran Kannan. In Neural Information Processing Systems (NIPS), 2014.
  • Going Beyond Corr-LDA for Detecting Specific Comments on News & Blogs. Mrinal Das, Trapit Bansal, Chiranjib Bhattacharyya. In ACM International Conference on Web Search and Data Mining (WSDM), 2014.

Other Fun Stuff

  • Character LSTM for Sentiment Analysis on Twitter: Kate Silverstein, Jun Wang, and I worked on a character-level LSTM model for sentiment analysis on Twitter. Read our short report on some of the cool results from the model. Code is available on GitHub. Kate also has an interesting post on it here. A bare-bones sketch of what a character-level LSTM step looks like appears after this list.
  • Learning to play Atari: For my Deep Learning class midterm project, I tried to implement double Q-learning on Atari games. Check out the videos of the learned agent playing Boxing and Pong (spoiler: they kick ass!).
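
For the Atari project above, the core idea of double Q-learning (van Hasselt, 2010) is to keep two value estimates: one picks the greedy action and the other evaluates it, which cuts down the over-estimation bias of vanilla Q-learning. Below is a minimal tabular sketch in Python; the state/action sizes and learning rates are toy placeholders, and the actual Atari agent replaces the tables with deep networks.

    import numpy as np

    # Minimal tabular double Q-learning update (van Hasselt, 2010).
    # Toy sizes; the Atari version replaces these tables with deep networks.
    n_states, n_actions = 10, 4
    alpha, gamma = 0.1, 0.99

    Q_a = np.zeros((n_states, n_actions))
    Q_b = np.zeros((n_states, n_actions))

    def double_q_update(s, a, r, s_next, done):
        """One double Q-learning step for the transition (s, a, r, s_next)."""
        # Randomly pick which table to update; the other table evaluates
        # the greedy action, which decouples selection from evaluation.
        if np.random.rand() < 0.5:
            update, evaluate = Q_a, Q_b
        else:
            update, evaluate = Q_b, Q_a
        best_next = np.argmax(update[s_next])  # greedy action under the updated table
        target = r if done else r + gamma * evaluate[s_next, best_next]
        update[s, a] += alpha * (target - update[s, a])

Action selection during training would then be epsilon-greedy on Q_a + Q_b.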
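And here is a bare-bones numpy sketch of the character-level LSTM idea from the sentiment project: one LSTM step per character, with a logistic output on the final hidden state. It is only meant to show the shape of the model, not the actual implementation (see the GitHub code linked above); all dimensions and the random parameters are placeholders.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Toy sizes: 128 possible characters, 64-dimensional hidden state.
    n_chars, n_hidden = 128, 64
    rng = np.random.RandomState(0)
    # One weight matrix and bias per gate (input, forget, output, candidate),
    # acting on the concatenation [one-hot character; previous hidden state].
    W = {g: 0.01 * rng.randn(n_hidden, n_chars + n_hidden) for g in "ifog"}
    b = {g: np.zeros(n_hidden) for g in "ifog"}
    w_out, b_out = 0.01 * rng.randn(n_hidden), 0.0

    def sentiment_score(char_ids):
        """Run the LSTM over a character sequence and return P(positive)."""
        h, c = np.zeros(n_hidden), np.zeros(n_hidden)
        for ch in char_ids:
            x = np.zeros(n_chars)
            x[ch] = 1.0                              # one-hot "embedding" of the character
            z = np.concatenate([x, h])
            i = sigmoid(W["i"].dot(z) + b["i"])      # input gate
            f = sigmoid(W["f"].dot(z) + b["f"])      # forget gate
            o = sigmoid(W["o"].dot(z) + b["o"])      # output gate
            g = np.tanh(W["g"].dot(z) + b["g"])      # candidate cell update
            c = f * c + i * g
            h = o * np.tanh(c)
        return sigmoid(w_out.dot(h) + b_out)         # sentiment from the final hidden state

    print(sentiment_score([ord(ch) % n_chars for ch in "this movie was great"]))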

Code

  • Specific Correspondence Topic Models (SCTM): C code for the WSDM'14 paper. Implements collapsed Gibbs sampling for three models: LDA, CorrLDA, and SCTM. This is one of the few implementations I know of for CorrLDA on text data. It also supports "sparse" topic distributions for LDA and CorrLDA. A small sketch of the per-token Gibbs update for plain LDA appears after this list.
  • Thresholded SVD (TSVD): Matlab code for the NIPS'14 paper. Implements the Thresholded-SVD-based K-means algorithm for topic recovery. This is much faster than Gibbs sampling and easily handles up to 200,000 documents (more if you have the RAM for it). Give it a spin.
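
If you just want the flavor of TSVD before diving into the Matlab, the rough recipe is: normalize the word-document matrix, threshold away small entries, take a rank-k SVD, and run K-means on the projected documents. A loose Python/scikit-learn sketch is below; the threshold and the fake corpus are made-up placeholders, and the paper's actual thresholding and post-processing steps are more careful than this.

    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from sklearn.cluster import KMeans

    def tsvd_topics(counts, n_topics, threshold=0.01):
        """Rough TSVD-style topic recovery on a words x documents count matrix."""
        # Column-normalize so each document is a distribution over words.
        A = counts / np.maximum(counts.sum(axis=0, keepdims=True), 1)
        # Zero out small entries before the SVD (the key "thresholding" step;
        # the paper chooses per-word thresholds more carefully than this).
        B = np.where(A >= threshold, A, 0.0)
        # Rank-k projection of the documents, then K-means to get topic clusters.
        proj = TruncatedSVD(n_components=n_topics).fit_transform(B.T)  # docs x k
        return KMeans(n_clusters=n_topics, n_init=10).fit_predict(proj)

    # Tiny fake corpus: 500 words x 2000 documents, 20 topics.
    rng = np.random.RandomState(0)
    labels = tsvd_topics(rng.poisson(0.05, size=(500, 2000)).astype(float), n_topics=20)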
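And for the SCTM entry above, the heart of a collapsed Gibbs sampler for plain LDA (the simplest of the three models) is the per-token update sketched below in Python. It only illustrates the update the C code performs millions of times, using the usual LDA count tables and symmetric priors alpha and beta; it is not a substitute for the actual implementation.

    import numpy as np

    def gibbs_resample_token(d, w, z_old, n_dk, n_kw, n_k, alpha, beta, rng):
        """One collapsed Gibbs update for the topic of word w in document d (plain LDA)."""
        V = n_kw.shape[1]
        # Remove the token's current topic assignment from the count tables.
        n_dk[d, z_old] -= 1
        n_kw[z_old, w] -= 1
        n_k[z_old] -= 1
        # p(z = k | rest) is proportional to (n_dk + alpha) * (n_kw + beta) / (n_k + V*beta).
        p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
        z_new = rng.choice(len(p), p=p / p.sum())
        # Add the token back under its newly sampled topic.
        n_dk[d, z_new] += 1
        n_kw[z_new, w] += 1
        n_k[z_new] += 1
        return z_new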

Contact Me ...

The best way to reach me is to email "trapitbansal at gmail dot com" and I'll get back to you.