What are the major motion capture methods? There are four methods used to capture motion data: optical, video, electromagnetic and mechanical.
Why is the optical method selected? Any of the following requirements drive selection of optical motion capture:
Real-time data access
Capture of high speed action
Tracking multiple subjects or objects
A large motion capture area
What is optical motion capture? Simply stated, optical motion capture is the tracking of markers on a subject or object over time. In a typical optical motion capture scenario, cameras are placed on the perimeter of a capture area to track markers placed on subjects or objects. In a PhaseSpace optical motion capture system, active LED marker positions are detected by camera sensors hundreds of times per second and transmitted to a central processor. The PhaseSpace processor calculates, reports and stores the unique X,Y,Z positions for LED markers 480 times per second. Marker positional data is streamed to entertainment, engineering and motion analysis applications. Visit http://en.wikipedia.org/wiki/Motion_capture for more information.
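The description above can be sketched as a data structure: each sample pairs a uniquely identified LED marker with an X,Y,Z position at one of the 480 frames captured per second. A minimal Python sketch follows; the field names are illustrative assumptions, not the actual PhaseSpace data format.

```python
from dataclasses import dataclass

# Illustrative sketch only: these field names are assumptions,
# not the actual PhaseSpace data format.
@dataclass
class MarkerSample:
    marker_id: int   # each active LED marker has a unique ID
    frame: int       # frame counter, advancing 480 times per second
    x: float         # X, Y, Z position reported by the processor
    y: float
    z: float

FRAME_RATE_HZ = 480  # capture rate stated above

def frame_timestamp(frame: int) -> float:
    """Seconds elapsed at a given frame number."""
    return frame / FRAME_RATE_HZ

sample = MarkerSample(marker_id=7, frame=960, x=120.0, y=85.5, z=1500.0)
print(frame_timestamp(sample.frame))  # frame 960 at 480 Hz -> 2.0 seconds
```

Because every sample carries both a marker ID and a frame counter, downstream applications can reconstruct each marker's trajectory over time without guessing which detection belongs to which marker.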
How is motion capture data used? The ability to capture motion data enables solutions in many areas. Here is a small sample:
Engineering & Research
Video Game Creation
Television Program Production
Physical Training & Gait Analysis
Emergency Response Training
How do your customers use PhaseSpace optical motion capture systems? PhaseSpace customers use our systems in a wide variety of applications, including virtual reality environments, series television production, robotic operation, architectural design, animation production, video game creation and motion analysis.
Why is motion capture used? Motion capture is typically deployed to:
Reduce production costs
Shorten a production pipeline
Improve modeling and design
Enable special effects
Provide new visualization, analysis and feedback methods
Expand and simplify robotic operations
Measure human performance
What do you mean by real-time visualization during a motion capture session? PhaseSpace processes motion capture data in real time, allowing you to stream to your preferred viewing software (the PhaseSpace Viewer, third-party software such as Kaydara MOCAP™, or your own process built on the PhaseSpace API). Directors use their preferred software to map the motion capture data to the character(s) or objects, producing a real-time visualization of the motion capture session. Real-time motion capture visualizations increase director efficiency, provide immediate actor feedback and reduce costly recapture sessions.
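The mapping step described above can be sketched in a few lines of Python: each streamed marker position is copied onto a named joint of a character rig, once per frame. The marker-to-joint table below is an illustrative assumption, not a PhaseSpace convention.

```python
# Minimal sketch of mapping streamed marker positions onto a character
# rig. The marker IDs and joint names are illustrative assumptions.
MARKER_TO_JOINT = {
    0: "head",
    1: "left_wrist",
    2: "right_wrist",
}

def apply_frame(markers, rig):
    """Copy each marker's position onto its mapped joint for one frame."""
    for marker_id, position in markers.items():
        joint = MARKER_TO_JOINT.get(marker_id)
        if joint is not None:   # ignore markers not bound to this rig
            rig[joint] = position

rig = {}
apply_frame({0: (0.0, 1.7, 0.0), 1: (-0.4, 1.1, 0.1)}, rig)
print(rig["head"])  # -> (0.0, 1.7, 0.0)
```

In a real pipeline this update would run once per incoming frame, so the on-screen character tracks the performer with no intermediate processing pass.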
What should one consider when evaluating motion capture systems? Selection of a motion capture system should include an evaluation of your requirements in the following areas:
Capture speed (frames per second)
Cost (initial and operational)
Ease of data access
What are the components in a PhaseSpace optical motion capture system? The components of a PhaseSpace optical motion capture system are:
A HUB connecting cameras and LED drivers to the server
A Linux server which interacts with the HUB
Operation and viewing software
Open API: Dynamic link libraries which enable a user to construct client programs to access motion capture data
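As a rough sketch of what such a client program does, the example below decodes a buffer of fixed-size marker records into tuples. The record layout is an assumption for illustration only; it is not the actual PhaseSpace wire format, and real client programs would link against the PhaseSpace libraries instead.

```python
import struct

# Hypothetical record layout, assumed for illustration:
# marker_id (int32), frame (int32), x, y, z (float32 each).
RECORD = struct.Struct("<iifff")

def unpack_samples(payload: bytes):
    """Split a buffer of consecutive fixed-size records into tuples."""
    count = len(payload) // RECORD.size
    return [RECORD.unpack_from(payload, i * RECORD.size) for i in range(count)]

# Round-trip one record: pack it as a server might, then decode it.
payload = RECORD.pack(3, 100, 120.0, 85.5, 1500.0)
print(unpack_samples(payload))  # -> [(3, 100, 120.0, 85.5, 1500.0)]
```

The point of the sketch is the shape of the workflow: a client receives a stream of per-frame records, decodes them, and hands the positions to its own application logic.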
What is the capture speed of the PhaseSpace optical motion capture system? PhaseSpace cameras capture LED positions at 480 frames per second.
What is the optical resolution of the PhaseSpace optical motion capture system? PhaseSpace optical motion capture systems are high resolution, capturing at 3600 x 3600 pixels with a sub-pixel resolution of 30,000 x 30,000.
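To put those figures in perspective, here is a back-of-the-envelope calculation. The 3 m field of view is an assumed example for illustration, not a vendor specification.

```python
# Back-of-the-envelope precision estimate. The 3 m field of view is an
# assumed example, not a vendor specification.
PIXELS = 3600
SUBPIXELS = 30000
FIELD_OF_VIEW_MM = 3000.0  # assume a camera view spanning 3 m

per_pixel_mm = FIELD_OF_VIEW_MM / PIXELS        # ~0.83 mm per whole pixel
per_subpixel_mm = FIELD_OF_VIEW_MM / SUBPIXELS  # 0.1 mm per sub-pixel step
print(per_pixel_mm, per_subpixel_mm)
```

Under that assumption, sub-pixel detection shrinks the smallest resolvable step from under a millimetre to a tenth of a millimetre across the same view.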
Why does PhaseSpace use LEDs for markers? PhaseSpace evaluated active and passive marker tracking during system design. PhaseSpace implemented an active LED marker system because active markers provide higher quality motion capture data, enable real-time visualizations and maximize configuration options. Active LED markers enable the processor to quickly resolve occlusions, resulting in extremely clean motion capture data. Clean data saves you money by reducing post-processing time and shortening your production pipeline. The ability to see a subject’s motion mapped to a character in real time provides immediate visual feedback and allows the director to specify additional takes on the spot, eliminating costly re-capture sessions. Finally, since each LED marker has a unique ID, you can easily configure marker capture scenarios for any combination of single or multiple subjects, normal or high speed motion and object interaction.
Why is it important to have real-time data processing for motion capture? Real-time processing is critical to several motion capture solutions. In virtual reality settings, real-time processing reduces the display latency in Head Mounted Displays (HMDs) that induces motion sickness in many individuals. Real-time data availability is also critical for robotic operations, medical solutions and many engineering applications. Finally, real-time data access enables immediate visualizations (mapping motion data to characters/objects), eliminating costly returns to the studio to re-capture an unsatisfactory scene.
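The latency benefit can be quantified with a simple calculation: at a 480 Hz capture rate a fresh position is available roughly every 2 ms, versus about 33 ms at a 30 Hz rate, and tracking delay is one component of the end-to-end display latency an HMD wearer experiences.

```python
# Time between consecutive position samples at a given capture rate.
def sample_interval_ms(rate_hz: float) -> float:
    return 1000.0 / rate_hz

print(sample_interval_ms(480))  # ~2.08 ms between samples at 480 Hz
print(sample_interval_ms(30))   # ~33.3 ms between samples at 30 Hz
```

A tracker that only updates every 33 ms would by itself consume a large share of any HMD latency budget before rendering and display delays are even counted.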
Where can I use the PhaseSpace optical motion capture system? PhaseSpace motion capture systems operate easily in natural environments. Our cameras and active LED markers enable system operation indoors under artificial light or outdoors in natural light. There is no need to restrict your operation to a darkened studio. The systems are portable and easy to set up and calibrate, allowing capture in the most appropriate environment.
How do I access the motion capture data? PhaseSpace provides real-time data access through our client software, third party software (via plug-ins) and an API.
How long will it take to set up the PhaseSpace system for a full body motion scene, i.e., a 360° view? Setting up the tripods and cables is usually the longest task. With practice, a twelve-camera system can be set up and calibrated in under an hour.
We are looking for a way to accurately control a robot in a hazardous environment. Could we use the PhaseSpace system? Yes. This is an ideal application for real-time motion capture and we have developed software tools and sample interfaces for these types of applications.
I am a game developer. Can I use the PhaseSpace system instead of the keyboard, mouse and joystick? Yes, the PhaseSpace system can greatly improve a player’s game interface. We have built several demonstration game interfaces using gloves, HMDs and a variety of tracked objects.
Can I use the PhaseSpace system to improve the user interface with Head Mounted Display augmented reality? Yes. We are working with several clients to improve HMD interfaces. PhaseSpace high resolution head tracking and real-time data streaming improve HMD visualization by reducing display latency and improving field of view accuracy.
Video and motion picture content is delivered at 30 frames per second. Why is it important to capture motion data at a rate higher than 30 frames per second? Capturing data at a rate higher than the 30 frames per second viewing rate is essential to accurately capture fast-moving subjects or objects and to eliminate jitter in head mounted displays.
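A quick calculation shows why: the example below assumes a bat-tip speed of 31 m/s (roughly 70 mph, an illustrative figure rather than a measurement) and computes how far the tip travels between consecutive frames at each rate.

```python
# Distance a moving marker travels between consecutive frames.
# The 31 m/s bat-tip speed is an assumed, illustrative figure.
def travel_per_frame_m(speed_m_per_s: float, frame_rate_hz: float) -> float:
    return speed_m_per_s / frame_rate_hz

BAT_TIP_SPEED = 31.0  # m/s, assumed for illustration

print(travel_per_frame_m(BAT_TIP_SPEED, 30))   # ~1.03 m between frames
print(travel_per_frame_m(BAT_TIP_SPEED, 480))  # ~0.065 m (6.5 cm)
```

At 30 frames per second the marker jumps about a metre between samples, far too coarse to reconstruct the swing; at 480 frames per second the gap shrinks to a few centimetres.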
We would like to use the PhaseSpace system as part of a baseball batting trainer. Can the system capture something traveling as fast as a bat? Absolutely! We have tested and demonstrated the ability to track the motion of a bat, including swings by professional players.