mri-sim-py.epg: A GPU-accelerated Extended Phase Graph Algorithm for differentiable optimization and learning

Paper, Talk, Slides
The Extended Phase Graph (EPG) algorithm is a powerful tool for MRI sequence simulation and quantitative fitting, but most existing simulators run only on the CPU and, with few exceptions, are poorly parallelized. A parallelized simulator that integrates with learning-based frameworks would be a useful tool for optimizing scan parameters. We therefore created an open-source, GPU-accelerated EPG simulator in PyTorch. Because the simulator is fully differentiable via automatic differentiation, it can compute derivatives with respect to sequence parameters, e.g. flip angles, as well as tissue parameters, e.g. T1 and T2.
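To illustrate the idea, here is a minimal sketch of a differentiable EPG simulation in PyTorch, not the repository's actual code. It assumes an idealized CPMG echo train (instantaneous refocusing pulses in the CPMG phase convention, real-valued states, ideal 90° excitation); the function name `epg_cpmg` and its signature are made up for this example:

```python
import torch

def epg_cpmg(flip, T1, T2, esp, n_echo):
    """Simulate a CPMG echo train with the Extended Phase Graph algorithm.

    States are a (3, n_states) tensor of [F+, F-, Z] coefficients. Inputs
    may be tensors with requires_grad=True, so the echo amplitudes are
    differentiable w.r.t. the flip angle and the tissue parameters T1, T2.
    """
    n_states = n_echo + 1
    zero = torch.zeros(n_states - 1, dtype=torch.double)

    # relaxation over half an echo spacing, plus T1 recovery of Z(k=0)
    E1 = torch.exp(-esp / (2 * T1))
    E2 = torch.exp(-esp / (2 * T2))
    rec = torch.cat([(1 - E1).reshape(1), zero])

    # refocusing-pulse rotation in the real-valued CPMG convention
    ca2 = torch.cos(flip / 2) ** 2
    sa2 = torch.sin(flip / 2) ** 2
    sa = torch.sin(flip)
    T = torch.stack([
        torch.stack([ca2, sa2, sa]),
        torch.stack([sa2, ca2, -sa]),
        torch.stack([-sa / 2, sa / 2, torch.cos(flip)]),
    ])

    def relax(S):
        return torch.stack([S[0] * E2, S[1] * E2, S[2] * E1 + rec])

    def shift(S):
        # gradient dephasing: F+ orders move up, F- orders move down
        Fp = torch.cat([S[1, 1:2], S[0, :-1]])
        Fm = torch.cat([S[1, 1:], zero[:1]])
        return torch.stack([Fp, Fm, S[2]])

    # ideal 90-degree excitation: all magnetization in the k=0 F states
    one = torch.ones(1, dtype=torch.double)
    F0 = torch.cat([one, zero])
    S = torch.stack([F0, F0.clone(), torch.zeros(n_states, dtype=torch.double)])

    echoes = []
    for _ in range(n_echo):
        S = shift(relax(S))   # relax half an esp, then dephase
        S = T @ S             # refocusing pulse
        S = relax(shift(S))   # dephase, then relax half an esp
        echoes.append(S[0, 0])
    return torch.stack(echoes)
```

Calling `.backward()` on an echo amplitude then yields gradients with respect to any leaf tensor, which is what enables gradient-based optimization of flip angles or fitting of T1/T2. For perfect 180° pulses this reduces to the expected exp(-n·esp/T2) decay.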

Relevance Prediction from Eye-movements Using Semi-interpretable Convolutional Neural Networks

The primary purpose of Information Retrieval (IR) systems is to fetch content that is useful and relevant to people. IR systems must cater to a wide variety of users, who may have wildly different mental models of what they consider useful and relevant.

Neuro-physiological methods, such as eye-tracking, provide an interesting avenue for observing users while they interact with information systems. Eye-tracking has frequently been used to assess whether on-screen content is relevant to the user. Despite its many advantages, such as being non-invasive and requiring very little effort from the user, interpreting eye-tracking data is not straightforward.

Answerability Classification Using Hand-Crafted Features

In this project, I competed against my classmates on a challenge to predict whether a visual question is answerable, given an image and an associated question in text form. The task required building a multi-modal (computer vision + natural language processing) classification system.

First, the Microsoft Azure Vision API was used to obtain tags for each image. These tags were then joined together, separated by spaces, to create artificial sentences. TF-IDF representations of these artificially generated sentences were then computed for later use as features, along with TF-IDF representations of the questions.
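The tag-joining and TF-IDF step can be sketched as follows. This is a minimal standard-library illustration of the idea, not the project's actual pipeline (which presumably used a library such as scikit-learn); the helper names `tags_to_sentence` and `tfidf_features` are made up, and the variant shown uses raw term frequency and idf = log(N/df) + 1 with no smoothing or normalization:

```python
import math
from collections import Counter

def tags_to_sentence(tags):
    """Join image tags into one artificial 'sentence'."""
    return " ".join(tags)

def tfidf_features(docs):
    """Turn a list of space-separated documents into TF-IDF vectors.

    Returns the sorted vocabulary and one feature vector per document,
    where each entry is (term frequency / doc length) * idf.
    """
    tokenized = [doc.split() for doc in docs]
    vocab = sorted({tok for doc in tokenized for tok in doc})
    n_docs = len(docs)
    # document frequency: in how many documents each token appears
    df = Counter(tok for doc in tokenized for tok in set(doc))
    idf = {tok: math.log(n_docs / df[tok]) + 1 for tok in vocab}
    features = []
    for doc in tokenized:
        tf = Counter(doc)
        features.append([tf[tok] / len(doc) * idf[tok] for tok in vocab])
    return vocab, features
```

For the image side, `docs` would be the artificial sentences built from the Azure Vision tags; the same featurization can be applied to the question texts, and the two feature vectors concatenated for the classifier.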

geograpy3

geograpy3 extracts place names from a URL or text and adds context to those names – for example, distinguishing between a country, region, or city. It is a fork of Geograpy2, itself a fork of geograpy, and inherits most of their functionality while solving several problems (such as support for UTF-8, place names with multiple words, and confusion over homonyms). geograpy3 is also compatible with Python 3.6+, unlike Geograpy2. The project is under active development and has been downloaded over 35k times as of Nov 2020.