General Questions


0. Tell me about yourself.

Keep it under one minute and adjust the details to fit your experience and the specific role. 

  1. What I work on
    • Started my career over 10 years ago working on CG rendering and pipeline engineering in Anim/VFX/Games before transitioning to:
      • Tech Art for CG Rendering and Synthetic Data Pipelines 
      • Software Prototyping for Minimal Viable Products for AI, AR, and Interactive Installations 
  2. Current Goal and Connection to Position:
    • Working with a team on challenging problems at the intersection of Creativity and Engineering 

My name is Victor Leung. I started my career over 10 years ago working on traditional CG rendering and pipeline engineering in Anim/VFX/Games before transitioning to the tech industry to focus on:

      • Technical Art for Synthetic Data Pipelines
      • Prototyping for Minimal Viable Products for AI, AR, and Interactive Installations

I work best at the intersection of creativity and engineering and want to be at a place that has challenging problems to solve. I'm interested in this job because.... 


1. Tell me why you will be a good fit for the position. 

Why do you want to work for X?

What are you looking for next role?

I’m looking for a role where I can work on challenging pipelines and collaborate with a multidisciplinary team to launch an impactful product. 

I’m excited about the company’s work in [specific area], and I’m eager to bring my technical expertise and passion to the team. I’m impressed by X company’s innovative work in [specific area]. I admire your commitment to [a value or mission, e.g., open-source contributions or sustainability].

As someone who [your relevant strength], I’m excited about the opportunity.


2. What happened to your last company?

Situation:
I was part of a company-wide re-org that I suspect cut my contract short. The Design Program Management group hired me because of my experience working as a

Tasks:

Action:

Result:
In about three months, I led the AR Demo Summit in June, which showcased prototypes across the wearables division.



3. What project are you currently working on?

I'm working on upskilling my knowledge in AR and human-centric AI.

From 2017-2019, I was the lead AR engineer for the official SIGGRAPH conference ScavengeAR app, designed to let attendees photograph 3D creatures spawned from 2D artwork hidden throughout the conference. We had over 1,000 daily active users throughout the conference. The pandemic killed the app, but with the return of in-person conferences, I've refactored it to utilize the latest tech stack.

My other specialty is real-time digital humans. I have a background in theater and filmmaking, so I'm fascinated by solving the uncanny valley. In the first part of my career I focused a lot on visuals, but with the emergence of LLMs and synthetic voice, I realize it's about solving all the components together.


4. How do you stay up to date with the latest tech?

I stay up to date with the latest tech by turning doom-scrolling into micro-learning: I turn bad habits into good ones by making information come to me.


5. Where Do You See Yourself in Five Years?

In five years, I see myself growing technically, creatively, and in scope, in a way that allows me to make a meaningful impact on a successful product.

In the future, I think the AI industry will have two paths. 

6. What are you passionate about outside of work?
  1. Passions:

    • I'm a "serious" gamer, meaning I like to gamify good habits

      • One year streak on Duolingo 

      • Level 200 on Ring Fit

      • Currently playing Rocksmith to learn Guitar

      • Make my own serious games
    • Immersive Photographer

      • 360 degree photos for google streetview

      • photogrammetry

      • Worked at Lytro, which did light field 3D video before anyone else


 

 

Communication Questions

What is one unpopular opinion you have that you are willing to wholeheartedly defend, and why?

I believe that striving for perfection in every project can actually slow progress. Early in my career as a CG Artist working on linear content, perfection meant meticulously scrutinizing every offline rendered pixel. However, when I transitioned to interactive experiences, I learned that a “good enough” approach—combined with rapid iteration and real-world feedback—often leads to more innovative and resilient solutions.

For example, in 2017 I joined a volunteer group as lead AR Engineer to develop a mobile AR scavenger app for the SIGGRAPH conference. The app aimed to connect Artists, Scientists, and Educators by letting them “catch” 3D creatures spawned from real-life 2D artwork placed around the venue, with exclusive swag as an incentive.

When the initial release was met with mixed reviews, I immediately sought user feedback. We discovered three key issues:

Originally, our roadmap focused on adding more mini-games and high-fidelity graphics. However, the feedback pushed us to refine the core experience, leading to significantly higher user satisfaction and thousands of downloads at the conference. Although the pandemic ended the app’s run in 2020, I’ve since refactored it using modern frameworks and design patterns to prepare for the return of in-person events.

This experience reinforced my belief that perfectionism can slow innovation. Embracing iterative development and gathering feedback early and often can lead to solutions that are both effective and adaptable.

6. What frustrates you?

Situation:
At Samsung Research, I was the first tech artist to bridge the gap between the design team and engineering team for a video-centric real-time digital human concierge product.

Task: 
It took two weeks for artists to complete production and post-production before they could deliver to the machine learning team. That was not scalable in the long run; they wanted to speed up our delivery.

Action: 
I set up individual meetings with artists and engineers to understand their workflows in order to narrow down our requirements. Since I know both CG art and engineering, I walked through the current process across departments to identify inefficiencies.

Result:
Sped up data delivery from 2 weeks to 1 day. 


7. How do you handle multiple stakeholders in a cross-functional team?

Work with someone with larger scope (Director/Manager) to understand the politics and Mission Objective (bigger picture)

Situation:

At Meta I was a Product Design Prototyper on the Design Program Management team in the AI/AR Wearables division, where I improved and presented prototypes from other departments to VPs and external partners to aid the product roadmap.

Task:

Action:  

Result :
The AR Demo Summit was a success.


8. Describe a situation where you had to explain a complex idea to a non-technical person.

Imagine you're at a restaurant, and you're hungry for a good meal. The way your meal is prepared can resemble how words and sentences are generated.

  • Autocomplete is like a vending machine—it predicts and suggests words one at a time based on basic patterns.
  • An LLM is like a chef—it understands context, selects words carefully, adjusts for meaning, and creates full, structured responses instead of just predicting the next word in isolation.

Basic Autocomplete = The Vending Machine

  • You press a button, and the vending machine gives you a snack based on pre-programmed choices.
  • The machine doesn't "understand" what you're craving or consider a full meal—it just dispenses one predictable item based on your input.
LLM = The Chef Preparing a Thoughtful Meal
  1. Understands Your Order (Context Awareness)

    • The chef listens to your request: "I want something warm, savory, and comforting."
    • Similarly, an LLM doesn’t just look at one or two words, but understands the whole sentence, conversation, or even previous exchanges.
  2. Selects Ingredients Thoughtfully (Word Prediction and Structure)

    • The chef picks fresh ingredients (Tokens) that go well together (words, phrases, and sentence structure).
    • Instead of just adding the most common ingredient, they think about what makes sense for the dish (coherent sentence).
    • LLMs do this by analyzing patterns (probability) from large amounts of text to predict which words and structures fit best, one token at a time.
    • Pasta > Eggs > Pancetta > Parmesan cheese and black pepper
  3. Adjusts for Taste and Style (Personalization & Tone)

    • If you ask for spicy food, the chef adjusts the seasoning to match your preference.
    • Similarly, an LLM adjusts its response based on your tone, style, or intent—formal, casual, humorous, etc.
  4. Prepares the Full Dish (Generates Full Responses, Not Just Words)

    • A vending machine only gives a single item, but a chef assembles an entire dish that is balanced and satisfying.
    • LLMs don’t just predict the next word—they construct entire paragraphs, explanations, or even creative works.
  5. Improves Over Time (Learning from Feedback)

    • A good chef learns from experience, improving recipes based on feedback.
    • LLMs don’t actually "learn" on the fly, but they are trained on massive datasets and fine-tuned over time to get better at responding in a human-like way.
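The "one token at a time" idea above can be sketched with a toy next-word predictor. This is a minimal illustration, not a real LLM: it counts which word follows which (a bigram table) and samples the next word from those counts, which is the same loop a real model runs with far richer learned probabilities. The corpus and names here are made up for the example.

```python
import random
from collections import defaultdict

# Toy "training": count which word follows which (a bigram table).
corpus = "the chef picks fresh ingredients the chef tastes the dish".split()

following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, n_tokens=5, seed=0):
    """Generate text one token at a time, like the chef picking ingredients."""
    random.seed(seed)
    out = [start]
    for _ in range(n_tokens):
        options = following.get(out[-1])
        if not options:                       # no known continuation: stop
            break
        out.append(random.choice(options))    # sample proportionally to counts
    return " ".join(out)

print(generate("the"))
```

Duplicated entries in the table make common continuations more likely to be sampled, which is the "probability" the analogy refers to.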


Hallucinations = When the chef (LLM) makes up an ingredient (fact) that doesn’t exist or doesn’t belong.

They happen because LLMs predict based on patterns, not true knowledge.

The best way to avoid them is by verifying information, asking precise questions, and cross-checking sources.


How to Reduce Hallucinations?

  1. Verify Information (Taste Before Serving!)

    • A good chef tastes their dish before serving—similarly, LLMs should be double-checked against reliable sources.
    • If an LLM generates questionable facts, it’s best to cross-check with trusted sources.
  2. Provide More Context (Give the Chef a Clear Recipe!)

    • If you ask for a vague or broad answer, the LLM may improvise.
    • Instead of asking: "Tell me about the history of carbonara,"
      Ask: "What do food historians say about the origins of carbonara?"—this guides the LLM toward facts.
  3. Use External Fact-Checking (A Second Opinion)

    • Just like a chef consulting a recipe book, an LLM can be combined with search engines, databases, or APIs to fact-check. 

9. Tell me about a time you had a disagreement with your manager.

Tell me about a time when you had a conflict with a co-worker. 

Have you ever had to advocate for using a framework? 

Situation:
My manager and I had differing views on what needed to be prioritized as a major project, a video-centric real-time digital human concierge, went from R&D to product. Like most research groups, we captured ourselves and stored the data on the filesystem. That was fine when we needed flexibility during research, but it wouldn't work in production.

Task

Convince my manager to adjust the roadmap to prioritize Digital Asset Management (DAM) and security more.

Action:

Result
Became a hiring manager and hired for the DAM role.

Was the lead of all things data.

Launched our DAM solution for our product pipeline.

Samsung was actually hacked, but our data was safe.


10. Give an example of a time you received critical feedback. How did you respond?

Going from IC to lead, I had to learn how to run meetings properly; they were running too long.

Result: Made meetings more efficient.

Situation:

I was finally given increased scope, from individual contributor to hiring manager and lead, for our 3D digital human project at Samsung Research. I was a first-time meeting facilitator, and my early meetings tended to run long.

I was told I needed to figure out how to make meetings more efficient.

Tasks:

The resources that helped were The Making of a Manager by Julie Zhuo and Pip Decks.

I also drew on my improv skills from talking to talent on stage.

Action

Establish control at the beginning and end of the meeting, and be aware of time.

Result:

Deployed apps and such


12. How Do You Use AI to Increase Productivity in Your Work?

  • I use GitHub Copilot and ChatGPT to learn new topics and write boilerplate code, while keeping data security in mind.
  • I use role prompting to get design feedback from AI, e.g., showing it an animated GIF, or using image-to-image GenAI for UI.
  • I recognize hallucinations, and that AI can take you down the wrong path if you're not familiar with architectural thinking and good practices. AI does not know the latest APIs, depending on when its data was scraped, so I need to refactor accordingly.

Situation:

Task 

Action: 

Result 

I use ChatGPT to streamline coding by suggesting boilerplate code or offering solutions for repetitive tasks. This allows me to focus more on solving complex problems and refining the architecture of my applications. I also leverage input like animated GIFs and images for feedback and generating new UI based on design terminology. 


Software Engineering Questions

14. Tell me about a time you solved a difficult technical problem

15. What was the most difficult bug that you fixed?

When I first started ScavengeAR, I was a technical artist who inherited our master AR scene. It was built off a low-code Vuforia sample scene, which consisted of translating 3D creatures in front of the AR camera and utilizing parent-child relationships in the Unity scene hierarchy. This worked fine with only a few 3D creatures, but adding 20+ creatures led to a lot of problems:

We launched the app successfully despite these issues because:

In 2025, after years more experience as a software engineer, I now know instantiation and prefabs were the solution. Though I could implement this in Vuforia, AR Foundation has since emerged as a native solution.

Result:

Able to support more creatures now.

Future:

Implement Object pooling: 
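The object-pooling idea noted above can be sketched as follows. This is a generic illustration in Python; in the actual Unity app it would be a C# pool of deactivated prefab instances. `Creature` is a hypothetical stand-in class, not code from the app.

```python
class Creature:
    """Hypothetical stand-in for a spawned AR creature."""
    def __init__(self):
        self.active = False

class CreaturePool:
    """Pre-allocate objects once and reuse them instead of create/destroy.

    This avoids per-spawn allocation cost and garbage-collection spikes,
    which is the main win of pooling in a real-time engine.
    """
    def __init__(self, size):
        self._free = [Creature() for _ in range(size)]

    def acquire(self):
        # Reuse a pooled instance if available; grow only as a fallback.
        creature = self._free.pop() if self._free else Creature()
        creature.active = True
        return creature

    def release(self, creature):
        # Deactivate and return to the pool rather than destroying.
        creature.active = False
        self._free.append(creature)

pool = CreaturePool(size=20)
c = pool.acquire()   # "spawn" without allocating
pool.release(c)      # return to the pool instead of destroying
```

The same pattern maps directly onto Unity's instantiate/deactivate lifecycle: instantiate the prefab pool once at load, then toggle instances active rather than calling Instantiate/Destroy per spawn.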

17. Tell me about a project where you faced unexpected challenges. How did you handle them?

18. Tell me about a time you met a tight deadline.

I was tasked with finding a solution for 3D motion capture on very short notice after our research team discovered that our primary face and skeletal tracker, Google Mediapipe, was producing poor 3D results—and our alternative tracker was being acquired and would soon lose support. This issue effectively stalled our engineering and research efforts.

Coming from a Visual Effects background, I understood that traditional 3D mocap is notoriously complex and expensive; building a motion capture stage alone can cost nearly a million dollars and take months to construct. Our specific need was for 3D facial and upper-body tracking of talent standing in front of a green screen, but there are no publicly available capture stages in the Bay Area. Fortunately, leveraging my prior experience in LA, I reached out to a trusted vendor who drove his equipment from LA to the Bay Area to capture the necessary 3D data—which we then used as ground truth for our machine learning models.

In parallel, I experimented with various third-party and open-source solutions. In a typical VFX pipeline, compositors key out green backgrounds and remove mocap markers. However, after reviewing several research papers, I discovered an open-source AI method from Bytedance that nearly automated the keying process perfectly. We also explored inpainting techniques to remove tracking markers from one frame—letting machine learning handle the rest—and evaluated Move.AI, an emerging solution that uses footage from multiple mobile phones and external cameras to extract 3D tracking data comparable to expensive mocap systems.

Ultimately, we decided on a more traditional approach: a synchronized broadcast multicam system paired with a Direct Linear Transform (DLT) algorithm to extract the skeletal data. Although it may seem counterintuitive given the promise of AI solutions, our experiments showed that this method best met our technical requirements without breaking the bank.
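The DLT step can be sketched as follows: given 2D observations of the same point from two or more calibrated cameras, each view contributes two rows of a homogeneous linear system, and the 3D point is the SVD null vector. The camera matrices below are synthetic, made up for illustration; this is not the production pipeline's code.

```python
import numpy as np

def triangulate_dlt(projections, points_2d):
    """Triangulate one 3D point from N calibrated views via DLT.

    projections: list of 3x4 camera projection matrices P_i
    points_2d:   list of (x, y) image observations of the same point
    Each view adds two rows to the homogeneous system A X = 0; the
    least-squares solution is the last right singular vector of A.
    """
    rows = []
    for P, (x, y) in zip(projections, points_2d):
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.asarray(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # de-homogenize

# Illustration: two synthetic cameras observing the point (1, 2, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])             # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1], [0], [0]])])   # shifted 1 unit in x
point = np.array([1.0, 2.0, 5.0, 1.0])
obs = []
for P in (P1, P2):
    xyw = P @ point              # project to the image plane
    obs.append(xyw[:2] / xyw[2])
print(triangulate_dlt([P1, P2], obs))   # ≈ [1. 2. 5.]
```

With real multicam footage the projection matrices come from camera calibration, and the 2D points come from the per-view tracker; the triangulation step itself is unchanged.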

This experience reinforced that while emerging technologies can be exciting, they aren’t always the right fit—especially under tight constraints. I learned the importance of adaptability, thorough experimentation, and focusing on core requirements to deliver effective and cost-efficient solutions.


How do you prioritize your tasks?

Situation:
In 2024, I refactored a 5-year-old AR project originally built with Vuforia and Unity for augmented reality experiences. The project was outdated and relied on legacy libraries, which no longer aligned with modern AR frameworks like AR Foundation. Additionally, the codebase lacked modularity, and maintaining or expanding features had become cumbersome.

Task:
The legacy code used Vuforia 9, which had limitations in compatibility with newer Unity versions and modern AR SDKs. Furthermore, features like image tracking and ground planes were tightly coupled, making it difficult to switch to AR Foundation. Performance was also a concern due to inefficiencies in the original code, such as redundant object hierarchies and overuse of runtime-generated assets.

Action:
I began by analyzing the legacy project to identify reusable components, such as 3D models and animations, and separated them from code that required updating. Next, I mapped out the feature set provided by Vuforia and determined equivalents in AR Foundation. I set up a new Unity project with AR Foundation 5.1, progressively integrating updated features like tracked image management and ground plane detection. To ensure scalability and maintainability, I restructured the codebase to use modular design patterns, such as decoupling AR tracking logic from scene-specific behaviors. This also allowed me to implement sprite animations and improve performance with optimized lighting settings for AR environments.

Result:
The refactored project became significantly more maintainable and scalable. By transitioning to AR Foundation, I ensured compatibility with both iOS and Android devices using a single framework. The modular design allowed for easier integration of new features, such as XR simulation, and reduced build times by optimizing texture handling. The updated app achieved better performance and provided a smoother user experience, while also aligning with current AR standards.

6000 were full conference attendees 

had 50 percent download rate which is considered between mid and high success

3000 downloads, and 2000 Daily active users 


20. Describe a project where you improved the performance of a system.

scavengeAR?


21. Describe a project where you improved the scalability of a system.

Turning a local render farm into a hybrid on-prem/cloud setup.


22. Can you give an example of a time you made a mistake in your code? How did you fix it?

ScavengeAR: I made everything in HLSL.

Creating an entire Unity UI in HLSL (High-Level Shader Language) instead of using Unity's Canvas system can be problematic due to several technical and practical reasons. While HLSL is powerful for creating custom visual effects, using it exclusively for a UI introduces significant challenges that make it less suitable compared to Unity's Canvas-based system. Here's why:

1. Complexity of UI Layout and Interaction
Canvas:

Unity's Canvas system provides built-in tools for layout management, such as anchors, pivots, and RectTransforms.
Easily handles dynamic resizing, positioning, and responsiveness across various screen sizes and resolutions.
Includes event systems for detecting clicks, drags, and other user interactions (e.g., buttons, sliders).
HLSL:

HLSL is primarily designed for rendering and lacks the concept of layout or user interaction.
To recreate layout management in HLSL, you would need to manually calculate positions, handle transformations, and account for screen resolution changes, which is extremely time-consuming.
Implementing interactive elements like buttons or sliders would require additional logic in scripts, effectively recreating Unity’s existing UI framework from scratch.
2. Lack of Accessibility Features
Canvas:

Unity's UI system supports accessibility features such as screen readers and keyboard navigation.
You can easily add animations, transitions, and tooltips to UI elements.
HLSL:

You would need to manually program accessibility features, which is not only challenging but also prone to errors.
Building animations and transitions would require custom shader logic, making maintenance and iteration harder.
3. Performance Considerations
Canvas:

Unity's Canvas system is optimized for UI rendering. The engine batches and manages draw calls efficiently for most common UI use cases.
Unity provides tools like Canvas Scalers to adjust the UI for different screen sizes without extra performance overhead.
HLSL:

Writing the entire UI in HLSL would require a full-screen quad (or multiple quads) to render elements, which means every pixel might be processed unnecessarily.
Without careful optimization, this approach can result in excessive GPU usage, especially if shaders include complex calculations for every frame.
4. Lack of Unity Editor Integration
Canvas:

The Canvas system integrates seamlessly with the Unity Editor, allowing you to design UI visually with tools like the RectTransform Editor and Prefabs.
Designers and artists can contribute without needing to write code or shaders.
HLSL:

Designing a UI in HLSL would require writing code for every single visual element and interaction.
This lack of a visual editor makes the workflow slower and limits collaboration with non-programmers.
5. Debugging and Maintenance
Canvas:

The Canvas-based UI leverages Unity's debugging tools, including the Scene view and UI event system.
Issues like misaligned elements or broken interactions are easy to identify and fix.
HLSL:

Debugging shader-based UI involves interpreting pixel-level behavior, which is far less intuitive.
Small changes to the design could require significant rework of shader code.
6. Scalability
Canvas:

Unity's UI system scales well for typical 2D and 3D applications, supporting features like nested canvases, localization, and animations.
It’s easy to add or remove UI elements without disrupting the entire layout.
HLSL:

Adding new UI elements in HLSL requires modifying shader code, which can make the system fragile and error-prone.
Scaling the UI to different screen sizes or adding responsive layouts becomes a major challenge.
When to Use HLSL for UI
HLSL can still be a good choice for specific visual effects in the UI, such as:

Creating custom shaders for buttons, text, or backgrounds (e.g., animated gradients, outlines, or glows).
Implementing unique effects like holographic or glitch effects for menus.
Enhancing Canvas-based UI with shaders rather than replacing it entirely.
In these cases, HLSL complements the Unity Canvas rather than replacing it, allowing you to benefit from the strengths of both.

Conclusion
Using HLSL to create the entire Unity UI is not recommended because:

It lacks the layout, interaction, and accessibility features of Unity's Canvas system.
It introduces unnecessary complexity and performance overhead.
Maintenance and iteration become significantly harder.
Instead, leverage Unity's Canvas system for the core UI structure and use HLSL sparingly to add custom visual effects. This approach balances usability, performance, and flexibility, ensuring a more robust and maintainable solution.


Technical Art Questions

23. Tell me about a time you optimized a 3D asset pipeline.


24. Have you ever worked on a project where the artistic vision conflicted with technical constraints?

Performance is critical. Art is about capturing the essence, not matching the concept art exactly.

Focus: Negotiation, technical expertise, and artistic understanding.


25. Tell me about a time you implemented a tool or workflow that improved efficiency for your team.

Start with experience building multiple tools (like PyQt tools, photogrammetry, etc.), but talk about the QC preprocessing pipeline.

Focus: Tool development and process improvements.

Build the preprocessing pipeline for Samsung.


26. Give an example of a time you had to troubleshoot a rendering or asset issue in production.

Focus: Debugging and technical understanding. Learning from multiple occurrences.

3D? Deadline logs, understand trends in graphs 

2D? QCtools and preview diagnostics


Program Management

What’s your experience with planning and executing technology-driven experiences for live events?

Situation: 
How to plan a conference from the ground up?

Task:

Action:

Before the Conference 

Conference Setup

During the Conference

After Conference

Result 

SIGGRAPH: hundreds of submissions, ~30 selected exhibits, 18,700 total attendees, among the highest in the 10 years before the pandemic.
AR Demo Summit: hundreds of submissions, ~30 selected exhibits, and 1,000 attendees, the largest at the time across 4 sites.

 


What constitutes a successful event?


How would you design an interactive exhibit that demonstrates the power of Meta’s AR tools to a business audience?

Situation:

If single player, 

If multiplayer, At Magic Leap, we followed the C3.

Task:

Action

Result. 


How would you improve audience engagement in a mixed-reality event experience?

Situation: 
MR is a great experience for the user, but boring to those watching from the outside.

Tasks:

Action:

Result


Imagine core feature is unstable. How would you handle this?

Can you describe a time when you had to quickly prototype a technical solution for an event?

Situation: 
Originally our Magic Leap experience was going to utilize two devices: one for the attendee going through the experience with our digital human, and one for the brand ambassador, with the experience streamed to a monitor for onlookers. We had a hard deadline for LeapCon, but the hardware features were not stable, leading to low performance, high latency, and frequent crashes.

Tasks:

Action:

Result

 


What do you do when things fail? 

Describe a time when you had to troubleshoot a technical issue at an event. 

What challenges have you faced when working with AR/VR/MR in a live event setting? 

Situation

The demo is not working. What do you do?

Tasks

Actions 

Result

good things


What makes a good elevator pitch?

Use improv experience to stay positive and always "yes, and."


Situation

Come up with an elevator pitch that makes the demo look good.

Tasks

Action

Have you ever set up your MR device and been asked to type your Wi-Fi password? The options are to find a keyboard, which we don't have lying around, or go through the frustrating experience of air typing on a virtual keyboard. I'm here to present Surface Typing, a better way to type in MR. Surface Typing utilizes hand tracking and a virtual keyboard projected onto a flat surface, such as a table, so you can type on top of it.

In this demo, we have a typing game where you type a paragraph using surface typing, and it determines your words per minute. The average typist is around 40 words per minute on a keyboard, but air typing is around 15 words per minute. Advanced typists reach 100 words per minute.

Let's see how fast you can type!
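The words-per-minute figure the demo reports is conventionally computed as characters typed divided by five, per minute elapsed; the five-characters-per-word convention is the standard typing-test definition, not something specific to this demo. A minimal sketch:

```python
def words_per_minute(chars_typed, seconds_elapsed):
    """Standard typing-test WPM: one 'word' is defined as 5 characters."""
    words = chars_typed / 5
    minutes = seconds_elapsed / 60
    return words / minutes

# e.g., 200 characters in 60 seconds -> 40 WPM, the average keyboard typist
print(words_per_minute(200, 60))  # 40.0
```

In a real typing game you would also subtract or penalize mistyped characters (net WPM), but the core measurement is this ratio.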

Result

Eval Sheet

codinginterviewevalcriteria.jpg

Information Overload

In "The Organized Mind," Daniel J. Levitin explores how the modern world’s overwhelming amount of information impacts our ability to think clearly and make decisions. Drawing on insights from psychology, neuroscience, and cognitive science, Levitin provides practical strategies for organizing our thoughts, lives, and environments to improve productivity and mental clarity. Here are ten key lessons and insights from the book:
1. The Information Age Challenge: Levitin discusses the challenges posed by the Information Age, where we are bombarded with an excess of information. This overload can lead to cognitive overload, making it difficult to focus and make effective decisions.
2. The Role of Attention: The author emphasizes the importance of attention in organizing our thoughts and actions. He explains that our brains have limited attentional resources, and learning to manage and direct our attention is crucial for productivity and clarity.
3. Cognitive Offloading: Levitin introduces the concept of cognitive offloading, which refers to the practice of using external tools (like lists, calendars, and apps) to manage information and tasks. By offloading cognitive tasks, we can free up mental resources for more complex thinking.
4. The Importance of Structure: The book highlights the significance of creating structure in our lives. Levitin suggests organizing our environments, schedules, and tasks in ways that reduce chaos and enhance our ability to focus on what matters.
5. Categorization and Chunking: Levitin explains how our brains process information more effectively when it is categorized or "chunked." By grouping similar items or tasks together, we can enhance memory retention and streamline our decision-making processes.
6. Mindfulness and Presence: The author discusses the benefits of mindfulness and being present in the moment. Practicing mindfulness can help reduce distractions, improve focus, and enhance our ability to engage with the task at hand.
7. Creating Routines: Levitin advocates for the development of routines as a way to minimize decision fatigue. Establishing regular habits and rituals can reduce the number of decisions we need to make, allowing us to conserve mental energy for more important tasks.
8. The Power of Sleep: The book underscores the critical role of sleep in cognitive functioning. Levitin explains how adequate rest is essential for memory consolidation, emotional regulation, and overall mental clarity, and he encourages prioritizing sleep in our lives.
9. Emotional Regulation: Levitin emphasizes the connection between organization and emotional regulation. A well-organized life can lead to reduced stress and anxiety, while chaos and disorganization can exacerbate emotional challenges.
10. The Social Brain: Finally, the author highlights the significance of social connections. Maintaining relationships and social networks is essential for mental well-being and plays an important role in how we organize our lives and manage stress.
In "The Organized Mind," Daniel J. Levitin provides a comprehensive framework for understanding how to navigate the complexities of modern life. By applying these ten key lessons and insights, readers can develop practical strategies for organizing their thoughts, tasks, and environments, ultimately leading to enhanced productivity and improved mental clarity. The book serves as a valuable resource for anyone seeking to thrive in an increasingly information-rich world.

Cheat Sheet

If possible, ask what the biggest problem is that the role is trying to solve.

And always end with: "Did that answer your question? Happy to go into details."

0. Tell me about yourself.

  1. The Start...
    • I have a degree
      • Computer Animation/VFX specialized in CG Lighting Rendering
      • Computer Science   
  2. I Like...
    • Collaborating with teams at the intersection of Creativity and Engineering  
    • Solving hard problems 
  3. For the past 10 years...
    • Technical Artist for CG Lighting/Rendering and 2D/3D Pipelines 
    • Prototyping for Minimal Viable Products for AI, AR, and Interactive Installations 

      I’ve worked on two types of digital humans. One was a real-time 3D character that uses machine learning for motion matching and gaze control to stay engaged with the user—focused on expressive, embodied behavior rather than language. The other was a 2D digital human created from real video footage and driven by a large language model, designed for conversational interaction.

       

      I specialize in capture pipelines working with photogrammetry, motion capture, volumetric video, and real-time broadcast tools to collect high-quality data for machine learning models.

SIGGRAPH ScavengeAR is the official augmented reality app I led, built with a team of volunteers for the SIGGRAPH computer graphics conference from 2017–2019. It's like Pokémon Go, but designed for a one-week event: attendees search for real-world ink blot markers to spawn 3D creatures in AR. The goal is to

We had thousands of daily active users until the pandemic killed the app, since in-person conferences were banned. But now that the pandemic is over, I'm currently refactoring the app to use

In 2025, my coding skills leveled up.

Since then, AR Foundation has emerged, which is native to Unity and free.

Uses pure code and utilizes prefabs and instantiation.

In Addition:

Future: Object Pooling and memory management. 

3. How do you handle multiple stakeholders in cross-functional team?

Preprocessing Quality Control Pipeline for Machine Learning 

6. Tell me about a time you had a disagreement with your manager.

Tell me about a time when you had a conflict with a co-worker. 

Have you ever had to advocate for using a framework? 

Convince the need of a DAM

7. Tell me about a project where you faced unexpected challenges. How did you handle them? Tell me about a time you met a tight deadline.

MOCAP IN A HURRY