Hi, I'm Chuan-Meng Chiu (邱耑萌) ✨

Here is my portfolio — enjoy!

📧 damoncho510@gmail.com

✨ Diffusion Rendering for Autonomous Driving

Delta Research Center · Oct 2024 - Jul 2025

In recent years, rapid advances in artificial intelligence have driven the autonomous driving industry forward. Autonomous driving systems can undergo extensive training in computer simulation, not only reproducing real-world scenarios but also generating novel situations that have never occurred. This lets a system train in diverse environments, improving its adaptability and responsiveness. Diffusion models, for example, can generate highly realistic images, bringing training data closer to real-world conditions.

However, current diffusion models still face challenges: they may fabricate vehicles out of thin air, generate unrealistic objects, or produce illogical vehicle contours, all of which limit their usefulness as training data. Some studies have used ControlNet to regulate image structure, but overly strong structural control reduces realism, yielding images that are structurally accurate yet feel inauthentic. LoRA, on the other hand, can adjust image style but offers little structural control, so generated images drift away from real-world environments and degrade training outcomes.

To overcome these limitations, we combined the strengths of ControlNet and LoRA, and used the Carla simulator to generate diverse simulated environments. Images from these environments serve as structural references for the diffusion model. This lets us generate images that stay structurally consistent with the simulation while exhibiting varied styles, improving the effectiveness of training data for autonomous driving systems.

  • 🧪 LoRA: Trains the diffusion model’s style using real-world images
  • 🎯 ControlNet: Uses simulation conditions to control structural consistency (see the sketch below)
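
For illustration, here is a minimal sketch of how such a pipeline could be wired with Hugging Face diffusers; the checkpoint names, LoRA path, prompt, and conditioning scale are placeholders, not the exact configuration used in the thesis:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Structure branch: a ControlNet conditioned on segmentation-style inputs
# (placeholder checkpoint; the thesis setup may differ).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Style branch: LoRA weights trained on real-world driving footage
# (hypothetical path).
pipe.load_lora_weights("path/to/real-world-style-lora")

condition = load_image("carla_frame_semantic.png")  # condition exported from Carla
image = pipe(
    "a photorealistic daytime urban driving scene",
    image=condition,
    num_inference_steps=30,
    controlnet_conditioning_scale=0.8,  # structure fidelity vs. realism trade-off
).images[0]
image.save("generated_scene.png")
```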

🧠 This work is part of my master’s thesis. For more details, please feel free to contact me.

🔑 Keywords: Python · Stable Diffusion · LoRA · ControlNet

🛣️ Advanced Driver Assistance Systems (ADAS) Testing

Delta Research Center · Aug 2023 - Jan 2025

This is an ADAS (Advanced Driver Assistance Systems) research project at Delta Research Center, aiming to develop an automated scenario generation system. Based on user-defined driving events (e.g., vehicle cut-in, pedestrian crossing, left turn), the system automatically generates simulation environments, exports them to the Carla simulator, and conducts large-scale autonomous driving tests and validation.

  • 🚧 Automatically generate OpenDRIVE-compliant maps based on event requirements (see the sketch after this list)
  • 🛠️ Integrate with RoadRunner to construct road geometry and markings
  • 🎯 Evaluate flexibility and realism through Carla Leaderboard tests
  • 🧪 Design a ToF (Terminate-on-Failure) mechanism to ensure continuous event execution
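
To give a flavor of what "OpenDRIVE-compliant" means in practice, here is a minimal, hypothetical sketch (not the project's actual generator) that emits a bare-bones .xodr file with a single straight road:

```python
import xml.etree.ElementTree as ET

def straight_road_xodr(length_m: float = 100.0) -> str:
    """Emit a minimal OpenDRIVE document with one straight road segment."""
    root = ET.Element("OpenDRIVE")
    ET.SubElement(root, "header", revMajor="1", revMinor="4", name="generated")
    road = ET.SubElement(root, "road", name="r0", length=str(length_m),
                         id="1", junction="-1")
    plan = ET.SubElement(road, "planView")
    # One straight geometry segment starting at the origin, heading east.
    geom = ET.SubElement(plan, "geometry", s="0.0", x="0.0", y="0.0",
                         hdg="0.0", length=str(length_m))
    ET.SubElement(geom, "line")
    lanes = ET.SubElement(road, "lanes")
    ET.SubElement(lanes, "laneSection", s="0.0")
    # Center/left/right lane definitions omitted for brevity.
    return ET.tostring(root, encoding="unicode")

with open("straight_road.xodr", "w") as f:
    f.write(straight_road_xodr())
```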

🧠 In this project, I was responsible for scenario generation development. I designed a general map specification for various driving situations and extended the event editor's functionality to support diverse simulation testing.

🔑 Keywords: Procedural Generation · Carla · Unreal · RoadRunner · OpenDRIVE · ADAS testing · Python · C++ · Bash · Docker

🏓 IT3: Immersive Table Tennis Training Based on 3D Reconstruction of Broadcast Video (SIGGRAPH Asia 2024)

NTHU courses - Virtual Reality · Feb 2024 - Jun 2024

Professional players often study match footage to observe their opponent’s serve. However, broadcast videos are mostly captured from a side view, lacking immersion. This project aims to reconstruct 3D match scenes from any broadcast table tennis video, combining AI-predicted player motions and ball trajectories. The system also includes editing tools that allow users to adjust the table's position and correct the location of players and the ball. These corrected data are then used to simulate the ball's trajectory. Using this simulation system, players can train their reaction and receiving skills in VR from a first-person perspective, and observe their own and their opponent’s movements from any angle, enhancing training effectiveness.

  • ⛹️‍♂️ 3D Human Motion Reconstruction: Use SLAHMR to estimate human trajectories in world coordinates and embed SMPL models into the virtual scene, providing realistic 3D avatars for observing or returning serves
  • 🖥️ Physics Simulation: Compute ball bounces and angular velocity at impact instead of relying on Unity's default physics materials (toy model sketched below)
  • 📷 Motion Capture: Use Meta Movement SDK to capture full-body posture of the player, recording 3D skeletal joint positions frame-by-frame during training for playback
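
As a rough illustration of the bounce computation above, here is a toy impulse model (restitution on the normal component, Coulomb friction on the contact-point slip). The coefficients are invented, and the project's Unity implementation is more involved:

```python
import numpy as np

R, E, MU = 0.02, 0.87, 0.25   # ball radius (m), restitution, friction (assumed)

def bounce(v: np.ndarray, w: np.ndarray):
    """Toy bounce on the table plane (z up): restitution on the normal,
    Coulomb friction impulse on the contact-point slip.
    v: velocity (m/s), w: angular velocity (rad/s)."""
    v_out, w_out = v.astype(float).copy(), w.astype(float).copy()
    v_out[2] = -E * v[2]                            # normal rebound
    # Slip velocity of the contact point (translation + spin contribution).
    slip = np.array([v[0] - R * w[1], v[1] + R * w[0]])
    s = np.linalg.norm(slip)
    if s > 1e-9:
        j = MU * (1 + E) * abs(v[2])                # friction impulse / mass
        t = slip / s                                # slip direction
        v_out[:2] -= j * t                          # tangential braking/kick
        w_out[0] += (5 * j / (2 * R)) * (-t[1])     # torque = r x F, I = 2/5 mR^2
        w_out[1] += (5 * j / (2 * R)) * t[0]
    return v_out, w_out

# Example: a ball arriving at 6 m/s forward with heavy topspin gets a
# forward kick off the table instead of simply slowing down.
v2, w2 = bounce(np.array([6.0, 0.0, -5.0]), np.array([0.0, 400.0, 0.0]))
```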

🧠 In this project, I was responsible for UI integration, designing the motion replay flow using motion capture data, and implementing first-person viewing. Through hands-on testing on the Quest 3, I continuously refined the interface to ensure a smooth user experience. During development, I also explored which questions matter most for improving table tennis skills, gathered feedback through playtesting with friends, and iteratively refined features and usability.

🔑 Keywords: VR Table Tennis · 3D Reconstruction · Motion Capture · First-Person Perspective · Unity

🚴 Procedural Generation of Bike Path (Best Popularity Award)

College Student Research Scholarship, NSC · Jan 2022 - Jan 2023

This project is part of an immersive cycling simulation system that displays the surrounding environment. Using a stationary bike (flywheel device) and augmented reality (AR) glasses, users can experience realistic outdoor cycling without leaving home. During the ride, the environment shown in the AR glasses is rendered with procedural generation techniques to reduce computational load. Because the scene needs terrain data, we compress that information into a very compact form, avoiding loading the massive terrain dataset all at once. Once a user selects a riding route, the system simplifies distant terrain detail while preserving key mountain features, and reconstructs the terrain procedurally. Only terrain within the user's line of sight is rendered, keeping performance efficient without compromising visual quality.

  • 🚧 Procedural Generation: Preprocess road and building data; dynamically generate nearby elements
  • ⛰️ Terrain Feature Point Extraction: Because the DEM has a 30-meter resolution, we store only selected points as key features. Based on the chosen route, we pre-split the terrain with a QuadTree according to distance from the user; within large cells, non-key points are retained by thresholding first- and second-order derivatives
  • 🗺️ Terrain Reconstruction: Use a KDTree to accelerate nearby feature point searches, then apply IDW (Inverse Distance Weighting) and bilinear interpolation for reconstruction (see the sketch after this list)
  • 🏠 Shape Grammar: Define generation rules based on real-world building structures, dynamically loading and generating them in the runtime scene
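
A minimal sketch of the reconstruction step described above, using SciPy's KD-tree plus IDW; the point counts, k, and power are illustrative, and the stand-in elevations replace real DEM data:

```python
import numpy as np
from scipy.spatial import cKDTree

def idw_reconstruct(feature_xy, feature_z, query_xy, k=8, power=2.0):
    """Look up the k nearest stored feature points with a KD-tree, then
    blend their heights with inverse distance weighting."""
    tree = cKDTree(feature_xy)
    dist, idx = tree.query(query_xy, k=k)
    dist = np.maximum(dist, 1e-9)            # avoid division by zero on exact hits
    weights = 1.0 / dist ** power
    return (weights * feature_z[idx]).sum(axis=1) / weights.sum(axis=1)

# Example: reconstruct heights on a 50x50 grid from scattered key points.
pts = np.random.rand(500, 2) * 1000          # feature point coordinates (m)
z = np.sin(pts[:, 0] / 100) * 30             # stand-in elevations
grid = np.stack(np.meshgrid(np.linspace(0, 1000, 50),
                            np.linspace(0, 1000, 50)), -1).reshape(-1, 2)
heights = idw_reconstruct(pts, z, grid)
```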

🧠 In this project, the most interesting task was selecting realistic cycling routes with buildings and varied terrain. We referenced popular bike routes such as Yangjin Highway and Fengguizui. Trees were placed by hand, matching latitude, pitch, and yaw against Google Street View, until the scene looked realistic.

🔑 Keywords: Procedural Generation · OpenStreetMap · Digital Elevation Models · Unity · Bluetooth Flywheel

🎯 Physics Engine Implementation

International Games System · Mar 2022 - Oct 2022

While working on Procedural Generation of Bike Path, I also participated in a physics simulation project with IGS (鈊象). My tasks included extending Cannon.js by implementing Continuous Collision Detection (CCD), which it originally lacked, and adding special physics behavior for gate-like animated objects—preventing their transformations from being altered by collisions. The simulation was rendered using Three.js and deployed as a 3D pinball machine in the browser. On the backend, I set up a Node.js server on EC2 using Nginx, which manages multiple physics verification threads. A socket-based system connects the client’s scene with the server, ensuring consistent and reproducible physical interactions at any point in time.

  • ⏳ CCD: Used bisection to estimate the next collision time, solving the tunneling problem caused by fast-moving objects (bisection sketched below)
  • 🖥️ Deployed a backend physics verification system using EC2 + Nginx
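
To illustrate the bisection idea (here in Python rather than the project's Cannon.js extension), a sketch with a sphere falling onto a plane; the tolerance and motion function are illustrative:

```python
def find_collision_time(pos_at, radius, plane_z, t0, t1, tol=1e-6):
    """Given a time t0 with no contact and t1 with penetration, bisect
    until the contact time is bracketed within tol. pos_at(t) is any
    motion function returning an (x, y, z) position."""
    def penetrating(t):
        return pos_at(t)[2] - radius < plane_z
    assert not penetrating(t0) and penetrating(t1)
    while t1 - t0 > tol:
        mid = 0.5 * (t0 + t1)
        if penetrating(mid):
            t1 = mid            # contact happens at or before mid
        else:
            t0 = mid            # still free at mid; search later half
    return t0

# Example: a fast sphere falling under gravity from z = 10 m.
impact = find_collision_time(lambda t: (0.0, 0.0, 10.0 - 0.5 * 9.81 * t * t),
                             radius=0.1, plane_z=0.0, t0=0.0, t1=2.0)
```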

🧠 I learned three core techniques in this project:

  • CCD implementation: Based on academic papers, I started with supersampling to detect collisions at smaller time steps, then applied bisection to accelerate the search for the precise collision time. I referenced diagrams from related papers to plot the collision-time graphs.
  • Physics verification: To minimize transmitted data, the client sends only essential state variables to the server, which reconstructs and verifies the simulation. This required a deep understanding of Cannon.js's internals to determine which variables are needed for consistent state recovery.
  • Server deployment: I learned to serve the client webpage with Nginx and connected everything via WebSockets; the moment it successfully synced brought genuine excitement.

🔑 Keywords: CCD · EC2 · Cannon.js · Three.js · JavaScript

🧘 Postpartum Recovery App

NTHU Sports Tech Center · Mar 2023 - Sep 2023

This was an industry-academic collaboration project with the National Tsing Hua University Sports Technology Center, aimed at developing an application specifically designed for postpartum care homes and confinement centers. The app invited professional yoga instructors to curate 11 postpartum-friendly movements, and provided interactive guidance to help users perform them correctly. It featured built-in human skeleton tracking, allowing real-time evaluation of pose accuracy and visual feedback on targeted muscle areas. By combining technology and exercise, the app aimed to help postpartum women recover physical function more effectively.

  • 📷 Human Skeleton Tracking: Used MediaPipe to estimate 3D skeleton joint positions from 2D images (see the sketch after this list)
  • ⛹️‍♂️ Unity: Used for UI placement, coach model integration, and importing motions into Humanoid Avatars
  • 📱 Android: Integrated the front camera and exported as an APK for Android deployment
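
For reference, MediaPipe's Pose Python API produces the kind of 3D joint output the app consumed; the project itself embedded MediaPipe's Android/Unity build rather than this Python wrapper, and the input file here is a placeholder:

```python
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(static_image_mode=True)
image = cv2.imread("frame.png")                       # placeholder input frame
result = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
if result.pose_world_landmarks:
    for i, lm in enumerate(result.pose_world_landmarks.landmark):
        # Metric 3D coordinates (meters, hip-centered) per joint,
        # plus a visibility score.
        print(i, lm.x, lm.y, lm.z, lm.visibility)
```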

🧠 I was responsible for the entire app development. The most challenging part was integrating MediaPipe on mobile. First, I ensured platform compatibility and followed detailed tutorials to set up the build environment: installing and configuring MSYS2, Visual C++ Build Tools 2019, Bazelisk, Docker, and so on. Android NDK version 21 was also required; any missing piece resulted in errors or black screens. Mobile-side testing was even more cumbersome, since every build had to be installed on the phone for verification. After many adjustments and attempts, I completed the integration successfully.

🔑 Keywords: MediaPipe · Pose Estimation · Unity · Android

🏃 Graphics Rush (First Place in Popular Vote among 22 Teams)

NTUST courses - Computer Graphics · Dec 2020 - Jan 2021

A game built using computer graphics techniques, including Edge Detection, Uniform Quantization, Cardinal Spline, and Environment Mapping. The game also featured Level of Detail (LOD) based on camera distance, rain and explosion particle effects, and Billboards that always face the camera. Every object was textured for visual appeal, and shaders were used to render the scene. All graphics techniques were wrapped in the narrative of a student being chased by graphics assignments, forming the theme of the game.

  • 🔍 Level of Detail (LOD): Adjusted model complexity based on camera-object distance for performance (selection rule sketched below)
  • 💧 Used shaders to render raindrops and explosion effects across many objects
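
The LOD rule amounts to a distance-to-level mapping; a minimal sketch with invented thresholds:

```python
def select_lod(distance: float) -> int:
    """Return a mesh LOD index: 0 = full detail, higher = coarser."""
    thresholds = [10.0, 30.0, 80.0]       # meters; illustrative cutoffs
    for level, cutoff in enumerate(thresholds):
        if distance < cutoff:
            return level
    return len(thresholds)                # beyond all cutoffs: coarsest mesh
```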

🧠 I was in charge of the technical implementation. After discussions with my teammates, we decided to showcase all the computer graphics techniques we had learned through a game. The game story featured a student escaping assignments via a six-level roller coaster, symbolizing five assignments and one hidden stage. Every element had meaning—for instance, the shop was named 140.118.127.125, which is the actual submission site for assignments. "Ver2" indicated a second chance to re-upload homework. Even the miniboss would try to disrupt the player using “assignments.”

🔑 Keywords: Shader · OpenGL · C++

🐧 Animal Unite

Best Popularity Award · Best Innovation Award · Best Game Design Award

NTUST courses - Game Development & Design Final Project · Sep 2021 - Jan 2022

This was a course project for the class Game Planning and Design Principles. The course covered how to write a game proposal, ideation, first and second evaluations, and final production. Beyond just gameplay mechanics, visual aesthetics and special effects were essential to the final product. Maintaining a consistent visual style and communicating closely with teammates were critical.

🧠 I worked with three teammates to develop the game, which was inspired by Ultimate Chicken Horse. We explored the idea of a single-player game that allows control over multiple characters, each with unique abilities that must be combined cooperatively to complete a level. Our division of labor was: one focused on character functionality, another on art, a third on integration, while I was initially responsible for level design. However, as we realized that art production was taking too much time, I offered to switch roles and took over the art tasks. Starting from Unity’s default blue screen, I gradually created Tilemaps, character sprites, backgrounds, signs, titles, etc., ensuring a unified visual style. In the end, I’m thankful to my team for the collaboration and discussions that made this game possible—it gave me valuable teamwork experience.

🔑 Keywords: Unity · Shader

📷 High Dynamic Range Imaging

NTU courses - Digital Visual Effects · Feb 2022 - Mar 2022

Because RGB storage has a limited dynamic range, some information in a photograph is lost. To address this, we generated HDR images by analyzing multiple photos taken at different exposures. Sampling the same pixel across exposures first requires alignment, so we used the Median Threshold Bitmap alignment algorithm to align adjacent photos, analyzed the camera response curves by solving a least-squares problem to recover the radiance (energy) distribution, and finally applied different types of tone mapping (e.g., photographic or bilateral filtering) to create vivid HDR images.

  • 📷 Median Threshold Bitmap Alignment: XOR the threshold bitmaps of two photos and use the exclusion bitmap (within error tolerance) as a mask to hierarchically minimize error.
  • 🏙️ HDR Recovery: Sample the same point under different exposures, then use SVD to solve the least-squares problem and derive the response curves (setup sketched after this list).
  • 👻 Ghost Removal: Add penalties to the average weights to reduce ghosting in the combined radiance map.
  • ☀️ Photographic: Analyze the luminance distribution across the image and stretch it into the RGB display range.
  • 📷 Bilateral Filtering: Use intensity to filter the radiance map, preserving high frequencies and compressing low frequencies.
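
A compact sketch of the Debevec-Malik-style least-squares setup behind the HDR recovery bullet (np.linalg.lstsq solves it via SVD); the smoothness weight and sampling here are illustrative:

```python
import numpy as np

def solve_response_curve(Z, log_dt, lam=10.0):
    """Z: (n_px, n_exp) uint8 samples of the same scene points across
    exposures; log_dt: log exposure times. Returns g over 0..255, where
    g(Z_ij) = ln(E_i) + ln(dt_j)."""
    n_px, n_exp = Z.shape
    n = 256
    A = np.zeros((n_px * n_exp + n - 1, n + n_px))
    b = np.zeros(A.shape[0])
    w = lambda z: min(z, 255 - z) + 1           # hat weights favor mid-tones
    k = 0
    for i in range(n_px):
        for j in range(n_exp):
            wij = w(int(Z[i, j]))
            A[k, Z[i, j]] = wij                 # g(Z_ij)
            A[k, n + i] = -wij                  # -ln(E_i)
            b[k] = wij * log_dt[j]
            k += 1
    A[k, 128] = 1.0                             # pin the curve: g(128) = 0
    k += 1
    for z in range(1, 255):                     # smoothness penalty on g''
        wz = lam * w(z)
        A[k, z - 1], A[k, z], A[k, z + 1] = wz, -2 * wz, wz
        k += 1
    g, *_ = np.linalg.lstsq(A, b, rcond=None)   # SVD-based solve
    return g[:n]
```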

🧠 In this project, I was responsible for tone mapping. During lectures, I came to appreciate the practicality of linear algebra and learned to use SVD to solve the least-squares problem for recovering an image's energy distribution. Comparing the two tone mapping approaches (photographic and bilateral filtering), I found that their luminance and intensity formulas differ, and that no single algorithm is always optimal: some produced unexpected results, while others led to poor exposures. Regardless of the method used, each photo became part of an artistic composition.

🔑 Keywords: Tone Mapping · Python

⛰️ Image Stitching

NTU courses - Digital Visual Effects · Mar 2022 - Apr 2022

By combining multiple photos, a large panorama can be created. First, the photos are projected cylindrically, followed by feature detection using algorithms such as SIFT, Harris, or MSOP. Pairs of images are then matched by finding the closest feature points. We applied RANSAC to filter the matched points and solved a least-squares problem to derive the optimal affine matrix, allowing one image to be seamlessly stitched onto another.

  • 🌏 Cylindrical Projection: Project images onto a cylindrical surface to facilitate feature matching.
  • 🔺 Multi-Scale Oriented Patches (MSOP): Detect image features at multiple scales by applying Gaussian blur, finding Harris corners, and using ANMS to distribute points evenly.
  • 📽️ Adaptive Non-Maximal Suppression (ANMS): Iteratively mask surrounding features, reduce masking range, and extract evenly distributed points.
  • 📷 Feature Matching: Feature points from each scale are represented as 8x8 patches with position, orientation, and normalization. Nearest Neighbors algorithm is used for matching.
  • 🚩 RANSAC: Iteratively sample feature point pairs and solve the least-squares problem to derive the affine matrix (sketched after this list).
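
A sketch of the RANSAC plus least-squares affine estimation from that last bullet; the iteration count and inlier threshold are illustrative:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine from matched points: solves [x y 1] M = x'."""
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M                                    # (3, 2) affine matrix

def ransac_affine(src, dst, iters=500, thresh=3.0,
                  rng=np.random.default_rng(0)):
    """Sample 3 pairs, fit, count inliers, then refit on the best set."""
    best_inliers = np.zeros(len(src), dtype=bool)
    ones = np.ones((len(src), 1))
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        proj = np.hstack([src, ones]) @ M
        inliers = np.linalg.norm(proj - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers
```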

🧠 I was responsible for implementing Cylindrical Projection, MSOP, ANMS, and Feature Matching. Using nearest neighbors to match points, I initially detected only a few features, but tuning parameters improved the matching quality. During blending, we noticed that moving objects (like trees or water) caused blur when they fell inside the stitching area, yet could be duplicated cleanly when outside it, which made the photo collection process quite interesting.

🔑 Keywords: Cylindrical Projection · MSOP · ANMS · Python

🏰 Game Programming Part I - Chronicle of the Genocidal Precedence - Seed of Destruction

NTUST courses - Game Programming · Feb 2021 - Mar 2021

We learned the Unity environment and benefited from its existing interactive packages and real-time scene capabilities, which simplified and accelerated development. To allow enemies to move and attack players, we used NavMesh to let AI find the shortest path to the player. When in range, enemies would fire projectiles. The tank's turret and body were under the same hierarchy: the turret rotated toward the target while the body changed direction according to movement.

  • 🛣️ NavMesh: Defines walkable terrain and guides enemy AI navigation.
  • 🦾 Hierarchy Transformation: Turret rotates based on mouse direction; the body rotates and moves with keyboard input (matrix composition sketched below).
  • ☀️ Particle System: Simulates explosion smoke and building collapse effects.
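
The hierarchy described above boils down to matrix composition: the turret's world transform is the body's transform times the turret's local one, so rotating the body carries the turret along. A small sketch with invented offsets and angles:

```python
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0],
                     [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)

def translate(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

body_world = translate(5.0, 0.0, 0.0) @ rot_z(np.deg2rad(30))      # body pose
turret_local = translate(0.0, 0.0, 1.2) @ rot_z(np.deg2rad(-45))   # aim offset
turret_world = body_world @ turret_local                            # inherited
muzzle = turret_world @ np.array([2.0, 0.0, 0.0, 1.0])              # barrel tip
```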

🔑 Keywords: Unity · NavMesh · Hierarchy Transformation · Particle System

🏰 Game Programming Part II - Terrestrial Interference

NTUST courses - Game Programming · Mar 2021 - Apr 2021

2D side-scrolling games are a classic genre—examples include Mario and Mega Man. Unity offers many 2D features like sprites and physics collisions. Through this project, we experienced the full development process of a simple game. Special effects were easy to implement in 2D: we used distortion to simulate heat waves, layered moving noise for sandstorms, and pixelation to simulate disappearing enemies. With a storyline, we turned it into a complete single-player game.

  • 🦾 Hinge Joint 2D, Semi-Solid Platform: Adds 2D physical behavior.
  • ⏳ Distortion: Uses time-based parameters to distort visuals and simulate heatwaves.
  • 🏜️ Sandstorm: Layers transparent noise over the scene and animates it with offset movement.
  • 🎥 Pixelation: Multiplies a preprocessed grayscale mask over time to simulate vanishing (CPU sketch after this list).
  • 💬 AssetBundle: Loads dialogue batches based on story progress.
  • 🤖 Animator: Designs sprite animations for characters and monsters, switching states via variables.
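
As a CPU-side illustration of the pixelation idea (the game does this in a shader with a grayscale mask), a sketch where t in [0, 1] drives the vanishing effect:

```python
import numpy as np

def pixelate(img: np.ndarray, block: int, t: float) -> np.ndarray:
    """img: (H, W, C) array. Average over block x block cells, then blend
    toward the pixelated image as t goes from 0 to 1."""
    h, w = img.shape[:2]
    hb, wb = h - h % block, w - w % block            # crop to block multiples
    tiles = img[:hb, :wb].reshape(hb // block, block, wb // block, block, -1)
    coarse = tiles.mean(axis=(1, 3), keepdims=True)  # per-block average color
    pix = np.broadcast_to(coarse, tiles.shape).reshape(hb, wb, -1)
    out = img.astype(float).copy()
    out[:hb, :wb] = (1 - t) * img[:hb, :wb] + t * pix
    return out.astype(img.dtype)
```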

🧠 This story continues from Part I. The game was designed as an action-adventure where players defeat monsters and collect hidden treasure chests across different areas. These treasures combine to form a world map. There are both good and bad endings—the good one reveals the password to Part III. Technically, we added visual feedback such as flashing when hit, freezing effects for ice weapons, and pixelation when enemies die. Using Animator, we implemented monster state machines. Most time was spent designing graphics to unify the pixel-art style and creating original character animations with classmates.

🔑 Keywords: Unity · Shader · AssetBundle · Animator

🏰 Game Programming Part III - Warlord vs. Government

NTUST courses - Game Programming · Apr 2021 - May 2021

By using a remote server, computers and mobile phones can be connected to play games online. One of Unity's advantages is its ability to deploy across various platforms—PC, mobile, or even web—streamlining the cross-platform development process. When paired with the Photon platform, phones and computers can battle each other. However, it's important to define synchronization rules for shared game objects, such as which side controls an object, to prevent data tampering that could favor one player.

  • 📶 Photon Unity Networking: Provides a platform for connecting mobile and desktop devices, and defines rules for object synchronization. Object ownership cannot be arbitrarily overridden by the other side.

🧠 In the second installment of our story, the enemy discovers our base and initiates an attack. The game is designed as a two-player co-op tower defense game. Because it uses Photon for networking, players on phones and computers can compete. During development, we found it necessary to define control rules for game objects to prevent unexpected behaviors. Due to Photon’s limited bandwidth, only command-like instructions can be transmitted to control the objects on both sides.
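
A tiny, hypothetical sketch (not Photon's API) of the ownership rule described above, expressed as the command filter an authoritative side could run:

```python
OWNERS = {"tower_1": "player_a", "tower_2": "player_b"}  # object -> owner

def apply_command(sender: str, obj_id: str, command: dict, state: dict) -> bool:
    """Reject commands for objects the sender does not own; otherwise
    apply the (small, command-like) update to the shared state."""
    if OWNERS.get(obj_id) != sender:
        return False                                  # tampering attempt: ignore
    state.setdefault(obj_id, {}).update(command)      # e.g. {"target": ...}
    return True
```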

🔑 Keywords: Unity · Photon · Networking · Client–server Model

🏰 Game Programming Part IV - Culmination of War

NTUST courses - Game Programming · May 2021 - Jun 2021

Unlike Unity’s programming style, Unreal uses node-based scripting. Unreal is well-suited for first-person shooter (FPS) games, emphasizing fast bullet movement and damage processing. Ray tracing is used to detect bullet collisions, ensuring they don’t go through walls and hit players unfairly. For lighting, Global Illumination can be adjusted to enhance realism.

  • 🪁 Windzone: Adds regional wind effects to make flags flutter and trees sway, with a defined center.
  • 🚩 Cloth Simulation: Flags are cut into multiple mesh pieces and given special physics-based collision.
  • 💣 Particle System: Used for smoke, muzzle flashes, and fire effects.
  • 🎯 Ray Tracing: Ensures bullets behave realistically and cannot pass through walls (hit test sketched below).
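
A minimal sketch of the hitscan idea: intersect the bullet ray with wall planes and keep the nearest hit, so a target standing behind a wall can never be damaged. Geometry here is simplified to infinite planes, unlike the engine's actual traces:

```python
import numpy as np

def first_hit(origin, direction, walls):
    """walls: list of (point_on_plane, normal) pairs. Returns the distance
    to the nearest wall hit along the ray, and that wall (or None)."""
    direction = direction / np.linalg.norm(direction)
    best_t, best_wall = np.inf, None
    for p, n in walls:
        denom = direction @ n
        if abs(denom) < 1e-9:
            continue                          # ray parallel to this wall
        t = ((p - origin) @ n) / denom
        if 1e-6 < t < best_t:
            best_t, best_wall = t, (p, n)
    # Damage is applied only if the target is closer than best_t.
    return best_t, best_wall
```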

🧠 As the final chapter of the four-part series, the story connects all previous parts and is inspired by Washington crossing the Delaware. The goal, decided with classmates, is to defeat a central figure. Unreal's lighting and realism demand more computing power than Unity. Technically, we discovered bullets occasionally passed through walls, so we used ray tracing to detect collisions and determine whether damage should be applied.

🔑 Keywords: Unreal · FPS · Particle System · Ray Tracing

⛰️ Image Editing

NTUST courses - Computer Graphics · Sep 2020

We learned how images are encoded, scaled, and processed, building up image-processing theory while implementing basic techniques: grayscale conversion; Populosity Quantization, which statistically reassigns colors to a reduced palette; several dithering algorithms; an NxN Gaussian filter with adjustable strength; and edge detection.

  • 🎥 Populosity Quantization: Collects frequently used colors and maps old colors to nearby ones.
  • 🎥 Dithering - Brightness: Uses average grayscale values as a threshold.
  • 🎥 Dithering - Cluster: Applies a 4x4 threshold matrix for local filtering.
  • 🎥 Dithering - Floyd's: Zig-zag passes error information to adjacent pixels (sketched after this list).
  • 🎥 Gaussian Filter: Supports NxN kernel sizes and handles edge sampling issues.
  • 🎥 Edge Detection: Uses high-frequency filters to extract edges.
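
A sketch of the Floyd-Steinberg variant from that dithering bullet, with the zig-zag (serpentine) scan mirroring the error-diffusion kernel on odd rows:

```python
import numpy as np

def floyd_steinberg(gray: np.ndarray) -> np.ndarray:
    """Dither a grayscale image in [0, 1] to binary, alternating scan
    direction per row and diffusing quantization error to neighbors."""
    img = gray.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        step = 1 if y % 2 == 0 else -1              # zig-zag direction
        xs = range(w) if step == 1 else range(w - 1, -1, -1)
        for x in xs:
            old = img[y, x]
            new = round(old)                        # threshold to 0 or 1
            img[y, x] = new
            err = old - new
            if 0 <= x + step < w:
                img[y, x + step] += err * 7 / 16    # next pixel in scan order
            if y + 1 < h:
                if 0 <= x - step < w:
                    img[y + 1, x - step] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if 0 <= x + step < w:
                    img[y + 1, x + step] += err * 1 / 16
    return img
```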

🧠 This assignment progressed from basic grayscale conversion to quantization that summarizes image statistics, and to functions built from unique algorithms. Challenges included handling color overflow and preventing out-of-bounds memory access, which provided a solid foundation for future graphics processing.

🔑 Keywords: C++ · libtarga

🏰 Maze Visibility and Rendering Graphics

NTUST courses - Computer Graphics · Oct 2020

We began learning how 3D objects are rendered onto a screen. In computer graphics, it’s essential to understand world transformations, camera FOV, and near/far planes. Starting with the Model View Matrix for world placement, and the Perspective Matrix for projection, rendering also involves Clipping to exclude walls outside the field of view. When no wall is detected, the algorithm recursively traces to the next visible wall. All 3D rendering is accomplished using only OpenGL’s 2D functionality.

  • 🎥 Model View Matrix, Perspective Matrix: Model transformations are written into the model-view matrix; the camera’s FOV and other parameters into the perspective matrix to display the scene.
  • 🎥 Clipping (View Frustum Culling): Renders only objects within the visible frustum (segment clipping sketched below).
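
The wall-clipping step can be sketched as clipping a 2D segment against the half-planes bounding the view frustum (normals pointing inward); a minimal Liang-Barsky-style version:

```python
import numpy as np

def clip_segment(p0, p1, frustum_normals, apex):
    """Clip segment p0->p1 against half-planes (p - apex) . n >= 0.
    Returns the clipped endpoints, or None if fully outside."""
    t0, t1 = 0.0, 1.0
    d = p1 - p0
    for n in frustum_normals:
        a = (p0 - apex) @ n
        b = d @ n
        if abs(b) < 1e-12:
            if a < 0:
                return None               # parallel and fully outside
            continue
        t = -a / b
        if b > 0:
            t0 = max(t0, t)               # entering the half-plane
        else:
            t1 = min(t1, t)               # leaving the half-plane
        if t0 > t1:
            return None                   # clipped away entirely
    return p0 + t0 * d, p0 + t1 * d
```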

🧠 The fun part of this assignment was the constraint to use only 2D code to draw 3D objects. Camera frustums and projection matrices had to be computed manually. We started with a simple room of 4 walls and then expanded to multiple walls. Recursive parameters had to be managed efficiently, and we encountered floating-point errors. After completing the assignment, we gained confidence in studying computer graphics.

🔑 Keywords: OpenGL · MVP Matrix · Clipping · C++

🎢 Trains and Roller Coasters

NTUST courses - Computer Graphics · Oct 2020

Through designing a roller coaster, we learned how to mathematically create curved tracks and used OpenGL to set up 3D scenes and render lighting. Using matrix operations from linear algebra, object transformations (position, rotation, scale) were simplified. The Cardinal Spline and Cubic Spline paths were generated using matrix multiplications.

  • 🛤️ Cardinal Spline, Cubic Spline: Position and direction control points are multiplied with curve parameter matrices (evaluation sketched below).
  • 🛤️ ArcLength: Uses progressive refinement to place new positions at equal arc-length intervals.
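
A sketch of the basis-matrix evaluation from the first bullet; with tension 0.5, the cardinal spline reduces to Catmull-Rom:

```python
import numpy as np

def cardinal_point(p0, p1, p2, p3, t, tension=0.5):
    """Evaluate a cardinal spline segment between p1 and p2 at t in [0, 1],
    using the standard basis matrix; p0..p3 are consecutive control points."""
    s = tension
    M = np.array([[-s, 2 - s, s - 2, s],
                  [2 * s, s - 3, 3 - 2 * s, -s],
                  [-s, 0, s, 0],
                  [0, 1, 0, 0]], dtype=float)
    T = np.array([t ** 3, t ** 2, t, 1.0])
    return T @ M @ np.array([p0, p1, p2, p3], dtype=float)
```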

🧠 Tracks were formed by connecting control points. Using matrices, we computed positions on various spline curves and connected them one by one. To ensure a consistent speed, we used arc-length parameterization so the roller coaster wouldn't speed up or slow down too much between points. For added fun, we tried loading .obj models and added headlights to the roller coaster car.

🔑 Keywords: OpenGL · Spline · C++

🌊 Water Surface and Rendering (GPU Shader)

NTUST courses - Computer Graphics · Nov 2020

A mesh is composed of vertices, edges, and faces. Due to the heavy computations required, GPU acceleration is essential. Objects are rendered using shaders, and two types of water surfaces are simulated: one with sine waves and another with heightmap-based undulations. The project focuses on learning how to simulate water reflection and refraction, with a key feature being the use of dynamic environment mapping to reflect surrounding objects on the water surface.

  • 🔆 Material: Combines directional light, spot light, and point light to calculate ambient, diffuse, and specular components, summing the results of multiple light sources.
  • ⛰️ Dynamic Environment Mapping: The scene is first rendered into framebuffers in six directions and mapped onto a skybox. When rendering the water surface, its shader references the skybox for reflection and refraction, solving the issue of only reflecting the sky without surrounding objects.
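
For the sine-wave surface mentioned above, the height function is just a sum of directional sines; a CPU sketch (the project evaluates this per-vertex in a vertex shader, and the wave parameters here are invented):

```python
import numpy as np

def sine_wave_height(x, z, t, waves=((1.0, 0.6, (1.0, 0.0)),
                                     (0.5, 1.3, (0.7, 0.7)))):
    """Sum directional sine waves over the (x, z) plane at time t.
    Each wave is (amplitude, frequency, direction)."""
    h = np.zeros_like(x, dtype=float)
    for amp, freq, (dx, dz) in waves:
        h += amp * np.sin(freq * (dx * x + dz * z) + t * freq)
    return h
```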

🧠 Water surface simulation always felt like a magical technology even before I learned it, and it was one of my first steps into the world of physical simulation. During implementation, I discovered that computing the vertex positions of the water mesh is very intensive. Without shaders, the computer would lag heavily and be difficult to use. Since this was my first experience with shaders, I encountered many issues when writing and debugging them — from incorrect shader placement to wrong data types — all of which led to unexpected results and took a lot of time to troubleshoot. However, these experiences became valuable for future shader development. To make the water surface capable of reflecting objects, I specifically researched dynamic environment mapping, learned that it requires framebuffers, and finally rendered reflected and refracted views using skybox images affected by lighting.

🔑 Keywords: OpenGL · Shader · Lighting · Dynamic Environment Mapping · C++

🤖 Hierarchy Robot Animation

NTUST courses - Fundamental Computer Graphics · Mar 2021

Using hierarchical transformations, the robot can perform complex multi-joint motions such as Gangnam Style, jumping jacks, and the moonwalk. It also supports actions like drawing a sword, which require switching transformation hierarchies. Shaders were used for visual effects such as lighting, particle systems, and motion blur to enhance the visual richness.

  • 🦾 Hierarchical Transformation: Calculates the combined transformation matrix from scaling, rotation, and translation matrices.
  • 🔆 Shadow Mapping: Captures a depth map from the light's perspective; requires fine-tuning the depth map's resolution (lookup sketched below).
  • 🎥 Particle System: Assigns different start times and angles to particles for a scattered visual effect.
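
The shadow-mapping bullet reduces to a depth comparison in the light's clip space; a CPU sketch of the lookup (in the project this runs in a fragment shader, and the bias is a tuning knob):

```python
import numpy as np

def in_shadow(world_pos, light_view_proj, depth_map, bias=1e-3):
    """Project a point into the light's clip space, sample the stored
    depth (in [0, 1]), and compare against the point's own depth."""
    p = light_view_proj @ np.append(world_pos, 1.0)
    ndc = p[:3] / p[3]                            # perspective divide
    u, v = ndc[0] * 0.5 + 0.5, ndc[1] * 0.5 + 0.5
    if not (0 <= u < 1 and 0 <= v < 1):
        return False                              # outside the light frustum
    h, w = depth_map.shape
    stored = depth_map[int(v * h), int(u * w)]
    return ndc[2] * 0.5 + 0.5 > stored + bias     # farther than first hit
```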

🧠 To implement a hierarchy, the model must first be grouped by movable parts. I used Maya for this process and discovered that material assignments were lost when converting model formats. To fix this, I manually reassigned the original materials to each object, gaining a better understanding that a single model can have multiple materials. After grouping, I used OpenGL to change transformation matrices over time according to the specified motions. Every movement — whether moving the left arm or flapping the right wing — had to consider the interdependence of parts due to the hierarchy. I also encountered challenges like transitioning from the left leg’s hierarchy to the right hand’s hierarchy during the sword-drawing animation. Finally, I enhanced the overall performance with shader effects, such as shadow mapping, to make the animation more vivid and expressive.

🔑 Keywords: OpenGL · Shader · Hierarchy Transformation · Lighting · Shadow Mapping · C++

🔍 OpenMesh Parameterized Texture Mapping

NTUST courses - Fundamental Computer Graphics · May 2021

This assignment introduced the OpenMesh data structure, where a mesh is built from vertices, half-edges, edges, and faces, with one-ring adjacency queries such as face-face, face-vertex, and edge-face. Depending on the attributes, color, position, texture coordinates, and normals can also be processed. Using these features, the goal was to compute a texture parameterization over a connected mesh of triangles, apply any number of textures to a given model, and perform transformations on them, including reading texture files.

  • 🎯 Parameterization - Mean Value Coordinates: This method solves for the UV coordinates of every vertex in the mesh. For each non-boundary edge, it computes the angles on either side of the edge at its two endpoints and derives vertex weights from those angles. Boundary vertices are distributed along the four sides of the UV square to fix their UV coordinates. Then, using the one-ring neighborhood, it assembles a weight matrix for the non-boundary vertices and solves the linear system to obtain their UV coordinates (system structure sketched below).
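
A structural sketch of that solve, using uniform weights in place of mean-value weights for brevity (the linear system has the same shape: boundary vertices are pinned to the UV square, interior vertices are solved for):

```python
import numpy as np

def solve_uv(n_verts, edges, boundary_uv):
    """edges: list of (i, j) vertex index pairs; boundary_uv: dict mapping
    boundary vertex index -> (u, v) pinned on the UV square. With uniform
    weights each interior vertex becomes the average of its one-ring;
    mean-value weights would replace the -1 off-diagonal entries."""
    A = np.zeros((n_verts, n_verts))
    b = np.zeros((n_verts, 2))
    neighbors = {i: [] for i in range(n_verts)}
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
    for i in range(n_verts):
        if i in boundary_uv:
            A[i, i] = 1.0                 # pinned boundary vertex
            b[i] = boundary_uv[i]
        else:
            A[i, i] = len(neighbors[i])   # interior: degree on the diagonal
            for j in neighbors[i]:
                A[i, j] = -1.0
    return np.linalg.solve(A, b)          # rows are (u, v) per vertex
```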

🧠 Using OpenMesh required considerable time to understand its data structure, which helped me grasp efficient mesh processing methods. To work with input models, I had to understand the structure of an .obj file, which contains massive amounts of data. Rendering and manipulation must be offloaded to the GPU via shaders. Any incorrect parameter settings in the shader can prevent proper rendering, so every detail matters. After overcoming the challenge of accessing models via OpenMesh, the next step was solving for UV texture coordinates. This required a deep understanding of the underlying formulas and solving linear equations to derive the UVs. It helped me realize how much mathematical knowledge lies behind something as seemingly simple as texturing.

🔑 Keywords: OpenGL · OpenMesh · Parameterization · C++