Felipe Jeon
I build robot 🤖 autonomy, combining design, perception, reasoning, planning, and control.
1. Motion (major) 🤸
A. Tasks
I have focused on generating optimal motions for the tasks below:
- Visible motion to chase moving targets (Ph.D. research topic, git, paper)
- Motion for enhancing color detectability (paper)
- Safe travel from A to B
- Distributed motion and task allocation
- Exploration in unknown environments
- Inverse kinematics of manipulators (paper)
B. Hardware Targets
The motions were targeted at and tested on the following robot types:
C. Backgrounds
- Search / sampling-based planning
- Spline motion primitives such as B-splines, Bezier curves, and piecewise polynomials (git)
- Non-holonomic curves (Dubins, Reeds-Shepp, continuous curvature)
- Optimization-based planning (iLQR, iLQG, DDP, CHOMP)
- Reinforcement Learning (DDPG, PPO)
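As a small illustration of the spline motion primitives listed above, here is a minimal sketch of Bezier-curve evaluation via De Casteljau's algorithm; the control points are illustrative, not values from any actual planner:

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1]."""
    pts = [p[:] for p in control_points]  # copy so the input is untouched
    n = len(pts)
    # Repeatedly interpolate between neighboring points until one remains.
    for r in range(1, n):
        for i in range(n - r):
            pts[i] = [(1 - t) * a + t * b for a, b in zip(pts[i], pts[i + 1])]
    return pts[0]

# 2D cubic Bezier: starts at (0, 0), ends at (3, 0)
ctrl = [[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0]]
print(de_casteljau(ctrl, 0.0))  # curve start -> [0.0, 0.0]
print(de_casteljau(ctrl, 1.0))  # curve end   -> [3.0, 0.0]
```

Bezier curves are convenient planning primitives because the curve stays inside the convex hull of its control points, which makes safety constraints easy to impose.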
2. Perception 👀
To implement motion autonomy in the real world, I have had extensive hands-on experience with perception algorithms for traversability and localization.
A. Volumetric Mapping
I am comfortable with optimizing and tuning the algorithms below to build occupancy representations from 3D sensing.
- Octomap and distance fields for 3D environments (git)
- Voxblox (TSDF, ESDF)
- 3D mesh generation and OpenGL rendering from point clouds (git)
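The core of OctoMap-style occupancy mapping is a clamped log-odds update per voxel; a minimal sketch follows, with illustrative hit/miss probabilities and clamping bounds rather than tuned values from my projects:

```python
import math

P_HIT, P_MISS = 0.7, 0.4           # inverse sensor model (illustrative)
L_HIT = math.log(P_HIT / (1 - P_HIT))
L_MISS = math.log(P_MISS / (1 - P_MISS))
L_MIN, L_MAX = -2.0, 3.5           # clamping keeps the map updatable

def update(log_odds, hit):
    """Fuse one measurement into a voxel's log-odds value."""
    log_odds += L_HIT if hit else L_MISS
    return min(max(log_odds, L_MIN), L_MAX)

def probability(log_odds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 / (1.0 + math.exp(-log_odds))

l = 0.0                            # unknown voxel: p = 0.5
for _ in range(3):
    l = update(l, hit=True)        # three consistent hits
print(round(probability(l), 3))    # voxel is now confidently occupied
```

Working in log-odds turns Bayesian fusion into a simple addition, and clamping bounds how certain a voxel can become so the map can still react to changes in the environment.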
B. SLAM
- VIO (VINS-Mono, ZEDfu) (git)
- Graph-based SLAM (RTAB-Map) and lidar SLAM (LOAM)
- Intrinsic and extrinsic calibration (Kalibr)
3. Reasoning 🧠
In addition to perception (occupancy, ego-localization), reasoning about targets of interest is essential for an aerial chasing system. For real-world experiments, most of the reasoning methods were tested on a Jetson onboard computer.
A. Detection
I have hands-on experience integrating code to detect targets from the onboard vision of flying drones, using the algorithms below:
- 2D / 3D bounding boxes from image streams
- Human skeleton detection
B. Segmentation
Given RGB and depth streams, I have used pixel segmentation on the RGB image and extracted the corresponding 3D points from the depth information.
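Lifting a segmented pixel to a 3D point comes down to inverting the pinhole projection with the camera intrinsics; a minimal sketch, with illustrative intrinsics rather than values from any actual camera:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Map pixel (u, v) with metric depth to a 3D point in the camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the principal point maps straight onto the optical axis.
print(backproject(320.0, 240.0, 2.0, fx=600.0, fy=600.0, cx=320.0, cy=240.0))
# -> (0.0, 0.0, 2.0)
```

Applying this per segmented pixel (with valid depth) yields the 3D point cluster of the target in the camera frame, ready to be transformed into the world frame via the ego-pose.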
C. 3D position tracking and prediction
Beyond detection in a single frame, the 3D positions of targets are tracked and predicted to plan the drone's chasing motion.
- 3D Tracking: combining classical approaches (color, Kalman filtering), I fine-tuned the tracker to stably identify the same object even under occlusion and deformation.
- Prediction: building on the tracking algorithm, I performed 3D position prediction that accounts for obstacles.
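The classical filtering component mentioned above can be sketched as a constant-velocity Kalman filter over 3D position; the noise levels here are illustrative, not tuned values from the actual chasing system:

```python
import numpy as np

dt = 0.1
F = np.eye(6)                      # state: [x, y, z, vx, vy, vz]
F[:3, 3:] = dt * np.eye(3)         # position integrates velocity
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe position only
Q = 1e-3 * np.eye(6)               # process noise (illustrative)
R = 1e-2 * np.eye(3)               # measurement noise (illustrative)

def step(x, P, z):
    """One predict + update cycle given a position measurement z (3,)."""
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ (z - H @ x)                        # correct with innovation
    P = (np.eye(6) - K @ H) @ P
    return x, P

x, P = np.zeros(6), np.eye(6)
for k in range(50):                # target moving at 1 m/s along x
    x, P = step(x, P, np.array([0.1 * (k + 1), 0.0, 0.0]))
print(np.round(x[3], 2))           # estimated vx approaches 1.0
```

The predict step of the same filter, rolled forward without measurements, gives a short-horizon forecast of the target position that the motion planner can chase.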
4. Design & Integration
A. Mechanical Design
- Part selection: multiple experiences building drones myself, deliberating over the choice of battery, actuators, control & computing boards, etc.
- Part design and building: Solidworks design and simulation, plus getting my hands dirty with soldering and other hardware work.
B. Software Integration
- Project: design patterns and CMake structuring for large projects (example template)
- Packaging and deployment: Qt for experiment GUIs, web frontends (link)
- Simulation: testing ROS / CMake projects in Unreal Engine using Blueprint (my tutorial video)
☝️ You can click the links on the images for relevant code or media.
1. Fundamentals
- Mathematics: linear algebra, Lie algebra, numerical methods and optimization (SQP, MIQP)
- Robotics: representation (SE(3), exponential coordinates), kinematics (velocity, adjoint matrix, Jacobian), dynamics (wrench)
- Machine learning: reinforcement learning (DQN, PPO, DDPG), vision learning (CNN, ViT)
- Algorithms: graph & tree search, dynamic programming
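As a small example of the exponential-coordinate representation listed above, here is a sketch of the map from so(3) to SO(3) via Rodrigues' formula:

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix such that hat(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Rotation matrix for exponential coordinates w (unit axis * angle)."""
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3)           # near-zero rotation: identity
    a = w / theta                  # unit rotation axis
    A = hat(a)
    # Rodrigues' formula: exp(theta * hat(a))
    return np.eye(3) + np.sin(theta) * A + (1.0 - np.cos(theta)) * (A @ A)

# A quarter turn about z sends the x-axis to the y-axis.
R = exp_so3(np.array([0.0, 0.0, np.pi / 2]))
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 3))
```

The same idea extends to SE(3): stacking a translation component onto w gives twist coordinates, whose exponential yields a full rigid-body transform.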
2. Software
- Project Management: git, docker, jira, notion, cmake
- Robotics: C++(14-20), eigen, ros 1/2, qpOASES, unreal engine
- Machine Vision: opencv, open3d, PIL, opengl, meshlab
- Machine Learning: pytorch, stable-baselines, einops
- Web: typescript, react, nextjs, django, vercel, SQL
- Etc: adobe software, solidworks.
3. Hardware
❤️ The images below show some of the drones I built myself.
- Sensor: ZED 1/2, Bluefox, RealSense D435/D455/T265, Velodyne, Ouster, IMU, u-blox GPS
- Actuator & ESC: T-motor, DJI, xing motor
- Control: Pixhawk
1. Education & Company
- 2013-2015: Started in architecture and architectural engineering @ Seoul National University
- 2015-2017: BS in mechanical & aerospace engineering @ Seoul National University
- 2017-2022: PhD in robotics at the Lab for Autonomous Robotics Research, advisor: H. Jin Kim (graduated in 5 years)
- 2022-Current: Staff engineer @ Samsung Research, Robot Intelligence Team
2. Projects
Graduate School
- Autonomous driving in unstructured environments @ Korea Electronics Technology Institute (KETI)
- Multi-fleet exploration for rescue robots @ Korea Institute of Robotics and Technology Convergence (KIRO)
Personal
- Diverse group generation using a genetic algorithm (link)
- Polynomial trajectory generation with constraints (link)
- Inferring a person's attention toward ambient objects from body skeleton and head pose (youtube)
- I love people and enjoy mingling ❤️ (in general)
- My MBTI is ESTJ. I love organizing and planning to solve meaningful problems.
- I respect nerds and geeks obsessed with coding, but I am not that kind, and I do not aspire to be.
- I focus more on the why and the what. I am sick and tired of purposeless work, study, and research (e.g., writing a paper just to have a paper, or grinding coding practice just for a higher LeetCode score).
- I believe a Ph.D. can be worth less than a cleaner or a chef unless their technology reaches and helps others in the world.