" I think; therefore I am. " — René Descartes.
I research AI. My research interests lie broadly in reinforcement learning, representation learning, and continual learning. Over the years, I have developed expertise in architectures for sequential decision making, specifically transformers and RNNs.
I completed my MSc in Computing Science at the University of Alberta, co-supervised by Adam White and Marlos Machado, and was affiliated with the RLAI Lab and the Alberta Machine Intelligence Institute (Amii). In my MSc thesis, I proposed a recurrent alternative to the transformer’s self-attention mechanism that offers context-independent inference cost while retaining parallelization over the input sequence. The proposed approach, the Recurrent Linear Transformer, was shown to outperform state-of-the-art transformers and recurrent neural networks on partially observable reinforcement learning problems, both in computational efficiency and in performance. (Thesis URL, arXiv preprint)
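For intuition, here is a minimal sketch of a generic linear-attention recurrence, where attention is kept as a fixed-size running state so each inference step costs the same regardless of context length. The feature map and normalization below are illustrative assumptions, not the exact Recurrent Linear Transformer formulation from the thesis.

```python
# Illustrative sketch of a linear-attention recurrence (assumed feature map
# and shapes); NOT the exact update rule from the thesis.
import numpy as np

def phi(x):
    # Simple positive feature map (assumption); linear-attention variants
    # differ in their choice of feature map.
    return np.maximum(x, 0.0) + 1.0

def recurrent_linear_attention(queries, keys, values):
    """Process a sequence step by step with a fixed-size state.

    queries, keys: (T, d_k); values: (T, d_v). Returns (T, d_v).
    """
    d_k, d_v = keys.shape[1], values.shape[1]
    S = np.zeros((d_k, d_v))   # running sum of phi(k_t) v_t^T
    z = np.zeros(d_k)          # running sum of phi(k_t), for normalization
    outputs = []
    for q_t, k_t, v_t in zip(queries, keys, values):
        fk = phi(k_t)
        S += np.outer(fk, v_t)
        z += fk
        fq = phi(q_t)
        outputs.append(fq @ S / (fq @ z + 1e-8))  # normalized readout
    return np.stack(outputs)

# Per-step cost is independent of how many tokens came before.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(6, 4)), rng.normal(size=(6, 4)), rng.normal(size=(6, 3))
print(recurrent_linear_attention(Q, K, V).shape)  # (6, 3)
```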
I also have several years of industry experience in AI. I’m currently a senior machine learning engineer at an early-stage computer vision startup, where I apply transformers to image segmentation in architectural diagrams. During my MSc, I interned at Huawei Research Edmonton, applying reinforcement learning to neural network operator fusion. Previously, I worked at IBM Cloud as an ML Engineer for around two years and collaborated with IBM Research on research projects in natural language processing and multimodal learning. I also helped deploy several machine learning systems at scale at IBM and Kone.
Contact: spramanik [at] ualberta [dot] ca, email [at] subho [dot] in
MSc in Computing Science (thesis-based, fully funded), 2021 - 2023
University of Alberta
B.Tech in Computer Science and Engineering, 2015 - 2019
Vellore Institute of Technology
In this seminar presented by the Alberta Machine Intelligence Institute (Amii), Subhojeet Pramanik, University of Alberta, explains how the Recurrent Linear Transformer addresses limitations of the transformer architecture by proposing a recurrent alternative to the self-attention mechanism.