Tajwar Abrar Aleef

I am a PhD Candidate at the University of British Columbia working under the supervision of Dr. Tim Salcudean and Dr. Sara Mahdavi in the Robotics and Control Laboratory.

My project is a joint collaboration between BC Cancer (Vancouver, Canada) and the University of British Columbia to improve prostate cancer detection and treatment. Currently, I am developing a multi-parametric ultrasound imaging system that images the mechanical properties of tissue to aid in the non-invasive detection of prostate cancer.

In my free time, I like to play guitar, watch movies, and cook! I also enjoy all sorts of sports (nowadays doing powerlifting, ice-skating, biking, and bouldering). I wish to see and explore more parts of this wonderful earth; so far I have covered 23 countries and countless cities/towns (check my vlog from Thailand).

Email  |  CV  |  Google Scholar  |  Research Gate  |  Github  |  Twitter  |  YouTube

Research

My research interests include medical image analysis, computer vision, image processing, deep learning, and machine learning. Much of my research centers on cancer detection/localization, solving constrained optimization problems for radiotherapy using adversarial learning, and ultrasound elastography.

An end-to-end framework capable of generating treatment plans for low-dose-rate prostate brachytherapy within seconds, as opposed to the current standard of manually creating plans, which takes over 20 minutes on average. This is done using GANs and custom application-specific loss functions.
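
To give a flavor of how an application-specific term can be combined with adversarial training, here is a minimal sketch; the loss terms, weighting, and tensor shapes are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def generator_loss(disc_logits_fake, fake_plan, reference_plan, lam=10.0):
    """Hypothetical composite generator loss: a standard adversarial
    term plus a task-specific fidelity term on the generated plan."""
    # Non-saturating adversarial loss: try to fool the discriminator
    adv = F.binary_cross_entropy_with_logits(
        disc_logits_fake, torch.ones_like(disc_logits_fake))
    # Application-specific penalty, e.g. deviation from a clinically
    # acceptable seed-placement map (illustrative placeholder)
    task = F.l1_loss(fake_plan, reference_plan)
    return adv + lam * task
```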

A novel approach to automating centre-specific treatment plans for low-dose-rate prostate brachytherapy. Here, conditional GANs were used to solve the optimization problem of placing needles in the prostate gland. These needles allow seeds to be placed such that the gland alone is irradiated and the surrounding healthy tissue is spared.

We propose a novel joint generation and segmentation strategy to learn a segmentation model that generalizes better to domains with no labeled data (labeled surgical robot-assisted prostatectomy data is of limited availability).
Malignancy estimation of pulmonary nodules using multi-view multi-time point convolutional neural networks
Tajwar Abrar Aleef, Colin Jacobs, Bram van Ginneken
MSc Thesis research, DIAG, Radboudumc, 2018   (Thesis ranked 2nd in class)

In this thesis, we developed a novel multi-view multi-time-point convolutional neural network (MVMT-CNN) to estimate malignancy in pulmonary nodules. We utilize scans of patients taken over time, providing the network with information on disease progression. This produced state-of-the-art results compared to standard single-time-point disease estimation.
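
The sketch below illustrates the general multi-view, multi-time-point idea in PyTorch; it is not the thesis code, and the layer sizes, patch shapes, and view/time-point counts are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class MVMTSketch(nn.Module):
    """Toy multi-view multi-time-point CNN: a shared 2D encoder
    processes each view at each time point, and the concatenated
    features let the head reason about disease progression."""
    def __init__(self, n_views=3, n_timepoints=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(
            nn.Linear(32 * n_views * n_timepoints, 64), nn.ReLU(),
            nn.Linear(64, 1))  # single malignancy score

    def forward(self, x):
        # x: (batch, timepoints, views, H, W) nodule patches
        b, t, v, h, w = x.shape
        feats = self.encoder(x.reshape(b * t * v, 1, h, w))
        return self.head(feats.reshape(b, -1))

scores = MVMTSketch()(torch.randn(4, 2, 3, 64, 64))  # toy batch
```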

A biomedical antenna was designed for wireless communication from inside a living body. The operating band was chosen in the Industrial, Scientific, and Medical (ISM) band (2.4–2.4835 GHz). The antenna's small dimensions, including the 9.45 μm thickness of the patch, make it highly flexible and allow it to perform well even in extreme conditions where it conforms to the curvature of the body.
CNN regressor for automating treatment planning of cervical brachytherapy
Tajwar Abrar Aleef, Marius Staring, Jan van Gemert
LKEB, LUMC, 2017

Developed a deep learning-based framework that predicts the geometric transformation of an object relative to a reference. The idea is to see whether such a regression network can find the transformation parameters of a brachytherapy applicator in the cervix from MRI scans, thereby pinpointing the exact location of the device within the body.
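
A minimal sketch of the regression idea follows; the architecture, sizes, and 6-parameter rigid-transform output are assumptions for illustration, not the LKEB implementation.

```python
import torch
import torch.nn as nn

# A compact CNN mapping an image to 6 rigid-transform parameters
# (3 translations + 3 rotations) of the applicator relative to a
# reference pose; trained with a plain regression loss.
regressor = nn.Sequential(
    nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 6))

mri = torch.randn(4, 1, 128, 128)   # toy batch of MRI slices
target = torch.randn(4, 6)          # ground-truth transform parameters
loss = nn.functional.mse_loss(regressor(mri), target)
loss.backward()                     # standard supervised regression
```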

Positron emission tomography (PET) scans are extensively used in radiotherapy planning, clinical diagnosis, and the assessment of tumor growth and treatment. All of these rely on the fidelity and speed of the detection and delineation algorithm. This paper presents a fast PET tumor segmentation method using superpixels, principal component analysis, and k-means clustering.
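
The sketch below shows the same superpixels-PCA-k-means pipeline in the spirit of the paper; the features, parameters, and cluster count are illustrative assumptions.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def segment_pet(img, n_segments=400, n_clusters=2):
    # SLIC superpixels on the grayscale PET slice
    labels = slic(img, n_segments=n_segments, compactness=0.1,
                  channel_axis=None)
    # Simple intensity statistics per superpixel (illustrative features)
    ids = np.unique(labels)
    feats = np.array([[img[labels == i].mean(),
                       img[labels == i].std(),
                       img[labels == i].max()] for i in ids])
    # Decorrelate features, then cluster superpixels into
    # tumor / background groups
    feats = PCA(n_components=2).fit_transform(feats)
    cluster = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    # Map each superpixel's cluster label back to pixel space
    return cluster[np.searchsorted(ids, labels)]
```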

In this work, we designed a complete 3D scanning system from scratch. The system is primarily intended to scan people and uses Kinect sensors to capture point clouds. An inexpensive turntable was also designed to work with the system, allowing 360-degree mapping of the person being scanned.
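
A toy version of the turntable idea, assuming each captured cloud is an (N, 3) array centered on the turntable's vertical axis; the real system's registration is more involved.

```python
import numpy as np

def merge_turntable_scans(clouds, angles_deg):
    """Rotate each Kinect point cloud back by the turntable angle it
    was captured at, then stack them into one 360-degree cloud."""
    merged = []
    for pts, a in zip(clouds, angles_deg):
        t = np.deg2rad(-a)                       # undo the rotation
        rot = np.array([[np.cos(t), -np.sin(t), 0],
                        [np.sin(t),  np.cos(t), 0],
                        [0,          0,         1]])
        merged.append(pts @ rot.T)
    return np.vstack(merged)
```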

Automatic spatio-temporal analysis of cardiac flow was performed using the Lucas-Kanade and Farneback optical flow techniques. The method tracks the ventricle wall and displays sparse and dense vector fields of the blood flow. Local and global maximum flow velocities are also calculated and displayed for each frame. Any detected vorticity is displayed using colormaps.
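
For reference, both flow estimators are available in OpenCV; this minimal sketch uses typical parameter values and placeholder frame filenames, not the study's actual settings.

```python
import cv2

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Sparse flow: track corner features with Lucas-Kanade
pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                              qualityLevel=0.01, minDistance=5)
new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)

# Dense flow: Farneback gives a vector per pixel
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("max velocity (px/frame):", mag.max())  # global maximum per frame
```
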
Past Engineering Projects

I have always loved hands-on engineering; long before I received any formal engineering education, I used to take apart any electronic device I could find to build new things out of it. I will slowly add some of the documented builds (most of these are from high school and my freshman year). Scroll to the bottom to see some janky yet wonderful rovers I made out of literally cardboard and broken toys.

5-Axis Robotic Arm
[Video]

Made this entirely from scratch with no access to proper tools. The parts were scored and carved out of plastic sheets using just a box cutter. Rubber from a balloon was stapled onto the gripper for extra grip (my mom's idea). The six servos at the articulation points are controlled by an Arduino Mega board. The program I wrote uses inverse kinematics to control the position of the end effector. In the video, the arm is programmed to move to position A, grab the object, carry it to position B, and drop it.
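
To illustrate the inverse kinematics idea, here is a simplified 2-link planar solver (the real arm has more joints, and the link lengths here are placeholders, not the Arduino code).

```python
import math

def two_link_ik(x, y, l1=10.0, l2=10.0):
    """Given a target (x, y) for the end effector, solve the two
    joint angles of a planar 2-link arm via the law of cosines."""
    d2 = x * x + y * y
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(cos_t2) > 1:
        raise ValueError("target out of reach")
    t2 = math.acos(cos_t2)                                  # elbow
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))  # shoulder
    return math.degrees(t1), math.degrees(t2)               # servo angles

print(two_link_ik(12.0, 5.0))
```
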
Firefighting Bot
[Video]

A concept for autonomous fire detection and extinguishing. The bot has IR sensors mounted in front that look for fire in the surrounding environment. If a fire is detected, the bot drives toward it and turns on the extinguisher (a fan is used here for simplicity). Once the fire is out, the robot backs up and resumes looking for more fires.
Surveillance Robot
[Video]

Here, I made a custom Android controller that drives this robot over Bluetooth. Another Android phone mounted on the robot acts as an IP camera and sends a live video feed to the controller. The robot can also automatically avoid obstacles using its onboard sonar sensor.
Sonar Glasses for the visually impaired
[Video]

This is a simple concept for using sensors to assist visually impaired people. Sonar sensors mounted on glasses detect whether any obstacle is present in the person's periphery. Based on the position of the object, two vibration motors, one on each side of the glasses, vibrate with different amplitudes, giving haptic feedback that alerts the user to nearby objects.
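
The mapping from distance to vibration strength might look like the toy function below; the range, PWM scale, and linear falloff are assumptions, not the build's actual firmware.

```python
def vibration_strength(distance_cm, max_range_cm=200):
    """Map a sonar reading to a motor amplitude (0..255 PWM):
    closer obstacles produce stronger vibration."""
    if distance_cm >= max_range_cm:
        return 0                               # nothing in range
    return int(255 * (1 - distance_cm / max_range_cm))

# Feed the left/right sonar readings to the left/right motors
left_pwm = vibration_strength(50)    # obstacle at 50 cm -> strong buzz
right_pwm = vibration_strength(180)  # far obstacle -> faint buzz
```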

In my freshman year, as I started learning more about logic gates, I decided to make a line follower using just logic gates. Happy with the results, I wanted to see how much could be done with logic gates alone, without a microcontroller. The track has distinct line patterns that the robot reads to know when it has reached either end of the line. The robot has one bit of memory made from a JK flip-flop (itself built out of logic gates), which stores the robot's state. If the robot has a payload, it starts its journey to the other end of the line; only after the payload is removed does the rover return. Check out the funny (and slightly inappropriate) story I made with this robot in the video.
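
In Python terms, the one-bit memory reduces to the JK truth table (Q_next = JQ' + K'Q); the set/reset wiring to the payload switch below is my guess at the behavior described, not a schematic of the actual build.

```python
def jk_flipflop(j, k, q):
    """JK flip-flop next-state logic: Q_next = J*Q' + K'*Q."""
    return int((j and not q) or (not k and q))

# Robot state: q == 1 -> heading out with payload, q == 0 -> returning.
q = 0
q = jk_flipflop(j=1, k=0, q=q)   # payload placed: set -> drive forward
q = jk_flipflop(j=0, k=1, q=q)   # payload removed: reset -> drive back
```
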
SpruceBot
[Video]

Check out the rovers below before reading this. Still obsessed with rovers, I made another attempt at improvements. This version got another gripper redesign: I figured out how to melt and shape empty pen casings into a stronger gripper, as opposed to the popsicle-stick technique. The arm was pushed back into the rover, allowing it to lift even heavier loads. Other features I added include an IP camera (using a phone) that streams live video and can rotate to cover 360 degrees of vision. Around that time, I also got a fancy camera, which I used to film the rover. Check out the video where I made a short story with the rover as the hero. Interesting fact: I had just started learning about circuits in my A-level physics course, and I implemented a circuit in this rover that automatically controls the headlights based on ambient light, which I thought was super cool.
RRL2
[Video / Short Video]

This is the updated version of RRL1. After finding success with the electronics, in this build I focused on improving the mechanical design. Again, with no proper tools or materials available, I mostly used things I found around the house. I improved the rover's arm by trying to replicate designs I found online. The new gripper was so much better that I could pick up many different objects with it. In the 11-minute video, I show what this version can do (I even tried to pick up a pen and draw with it). Interesting fact: I didn't have any servos back then, so I made a string-pulley system to increase the load capacity of the DC motor that raises the arm.
RRL1
[Video]

After seeing a documentary on NASA's Mars rovers, Spirit and Opportunity, I became obsessed with rovers. I spent many hours in my room trying to learn things on my own in the hope of building my own rover. This journey started in 9th grade, when I had no engineering training. After countless failed iterations, this was the first version that actually worked. If you look closely, it was made mostly out of cardboard, popsicle sticks, and foam board; the electronic parts were taken from RC cars. Fun fact: I was so happy with how it turned out that I named it after my high school sweetheart's initials.
Teaching Experience

Based on Jon Barron's site.
Last updated October 2021.