Awies Mohammad Mulla-R
Education
University of California, San Diego
MS in ECE (Major: Intelligent Systems, Robotics and Controls) Sept. 2022 – June 2024
Indian Institute of Technology (BHU), Varanasi
Bachelor of Technology in Mechanical Engineering Aug. 2018 – June 2022
Experience
Existential Robotics Lab, UC San Diego [website]
Graduate Student Researcher Mar. 2023 – Present
• Developing SLAM and motion planning algorithms for autonomous vehicles.
• Developed a hierarchical motion planning approach using Large Language Models (LLMs) for a UGV.
• In the latest work, compressed the complete motion planning workflow (path planning, obstacle avoidance, and
path optimization) into a single neural network using reinforcement learning and deployed it on an F1-tenth car.
ARCLab, UC San Diego [website]
Graduate Student Researcher Oct. 2022 – Feb. 2023
• Developed a modeling and simulation environment for surgical tools performing simple tasks.
• Experimented with various approaches to model contact dynamics between soft (tissue) and rigid (tool) bodies.
• Tested integrating the Incremental Potential Contact algorithm with the lab’s existing Position-Based Dynamics
environment to simulate simple surgical operations such as suturing and incision.
Publications
[1] Awies Mohammad Mulla, “Motion Planning via Reinforcement Learning”, Master’s Thesis, University of
California, San Diego, 2024. [Forthcoming]
Projects
Motion Planning via Reinforcement Learning | [Results]
• Compressed the complete motion planning workflow into a single NN using RL and deployed the trained NN on
an F1-tenth car with an NVIDIA Jetson TX2 module and Intel RealSense D435, using the TensorRT SDK (in C).
• The agent receives relative goal coordinates and raw sensor data without prior environmental information. The NN
input consists of a history of odometry and depth images; the output is the agent’s steering angle and throttle.
• Trained the agent in PyBullet and MuJoCo simulations to achieve a stable policy for the F1-tenth car. Analyzed
MLP, CNN, and Transformer architectures for processing raw sensor data, as well as Gaussian and Beta policy
distributions for actions.
Infinite-horizon Stochastic Optimal Control | [link]
• Implemented a receding-horizon certainty equivalent control (CEC) and generalized policy iteration (GPI)
algorithm to solve the tracking problem for a differential-drive ground robot, with a simple proportional
controller as the baseline.
• CEC solves a non-linear optimization problem by reformulating the stochastic problem as a deterministic optimal
control problem.
• GPI solved Bellman’s optimality equation by discretizing the state and control spaces with added Gaussian noise.
Here, CEC performed better due to the simple, well-defined dynamics of the problem.
Visual Inertial SLAM | [link]
• Estimated the poses of the vehicle and landmarks from IMU and stereo-camera measurements in a provided ROS bag.
• Estimated the vehicle and landmark poses using an Extended Kalman Filter (EKF). The observation model,
expressed in terms of the camera parameters, was used to update the poses of the vehicle and outdoor landmarks
(modeled as Gaussian distributions).
• The filter was robust to a wide range of initializations of the vehicle and landmark pose covariances, as well as
to a level of noise introduced into the filter. Accuracy: feature map – 90%; vehicle trajectory – 85%.
Technical Skills
Languages: Python, C/C++, SQL | Frameworks & Tools: Robot Operating System (ROS), TensorRT, PyTorch, MATLAB
Courses: Linear Algebra, Probability and Statistics, Calculus, Numerical Analysis, SLAM, Sensor Fusion, NLP,
Underactuated Robotics, DSA, Optimization Algorithms, Motion Planning, Reinforcement Learning, Supervised
Learning, Unsupervised Learning, Stochastic Processes in Dynamic Systems, Optimal Control, Computer Vision