ESPi Patent Portfolio:
The Master Key to True 3D User Input

The ESPi patent portfolio covers many methods of using on-board optics, including inside-out cameras, for calculating the absolute 3D position and orientation of an object.  The position and orientation data thus obtained is expressed with respect to fixed points in the user’s environment (absolute pose).  Changes in absolute pose are also tracked, enabling the user to generate a rich stream of input data.  Such data is invaluable in determining what the user wants to communicate through natural gestures and actions.  Since the tracked object can be a wearable item, such as goggles, or something the user manipulates in 3D space, such as a stylus or a remote control, the interface options are truly limitless!
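As a concrete illustration of absolute pose recovery (a generic sketch only, not ESPi’s patented method), the Python fragment below uses OpenCV’s standard perspective-n-point solver to compute an on-board camera’s position and orientation relative to known reference features in the room; the feature coordinates, camera intrinsics, and pixel measurements are hypothetical placeholders.

```python
# Generic sketch of absolute pose recovery from an on-board camera that
# sees known reference features in the environment (hypothetical numbers).
import numpy as np
import cv2

# Known 3D positions of four reference features, in room coordinates (meters).
object_points = np.array([
    [0.0, 0.0, 0.0],
    [0.5, 0.0, 0.0],
    [0.5, 0.3, 0.0],
    [0.0, 0.3, 0.0],
], dtype=np.float64)

# Where the on-board camera sees those features in its image (pixels).
image_points = np.array([
    [320.0, 240.0],
    [420.0, 238.0],
    [424.0, 180.0],
    [322.0, 178.0],
], dtype=np.float64)

# Simple pinhole intrinsics (focal length and principal point), no distortion.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Solve the perspective-n-point problem: rotation plus translation of the
# camera relative to the fixed room features, i.e. all six degrees of freedom.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)        # 3x3 rotation matrix (absolute orientation)
camera_position = -R.T @ tvec     # camera position in room coordinates
print("orientation:\n", R, "\nposition:\n", camera_position.ravel())
```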
As mobile chips become faster, smaller, and cheaper, user input devices have been gaining in precision, resolution, and responsiveness.  The coming generation of input devices will provide a superior user experience that “feels like the real world”.  Such high levels of performance will be needed to make the user feel present in Augmented Reality (AR) and Virtual Reality (VR).  Game and TV controllers, AR/VR goggles, medical devices, digital pens, and consumer goods not yet conceived will track natural human movement to enable this new paradigm.  We believe that ESPi’s patent portfolio will be a cornerstone supporting the development of next-generation optics-based input devices and enabling this exciting transition.
Of course, 3D equations of motion have been known for over 100 years.
Our researchers’ contribution was to adapt them for generating user input.  Having started over a decade ago, ESPi is the first player to patent six-degrees-of-freedom calculations for absolute 3D tracking in conjunction with optical methods that employ on-board (inside-out) optical sensors to generate user input for applications.  Mathematically speaking, many of ESPi’s calculations are the same as those used by standard navigation systems.  Persons skilled in the art will recognize that these equations are the only way to fully and accurately describe an object’s movement in 3D space; the fundamental rules of 3D geometry preclude other approaches.
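For reference, a textbook way to express the six degrees of freedom (a standard formulation, not a statement of the patent claims) is an absolute translation combined with an absolute rotation:

\[
\mathbf{x}_{\mathrm{sensor}} \;=\; \mathbf{R}(\phi,\theta,\psi)\,\bigl(\mathbf{x}_{\mathrm{world}} - \mathbf{T}\bigr)
\]

where \(\mathbf{T} = (T_x, T_y, T_z)\) is the object’s absolute position (three degrees of freedom) and the rotation matrix \(\mathbf{R}\), parameterized here by roll \(\phi\), pitch \(\theta\), and yaw \(\psi\), is its absolute orientation (the remaining three).  The same relation underlies the standard navigation systems mentioned above.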
Question:  The Wii is pretty good, and it doesn’t calculate in true 3D (it doesn’t track six degrees of freedom), so why is such accuracy important?

Answer:  The Wii and other state-of-the-art user input devices that employ inertial sensors and limited optics (IR LEDs) have not yet implemented true 3D algorithms.  These devices make compromises to avoid performing the full calculations required to recover the user’s precise 3D motions.  The result is a loss of information, gradual drift, and accumulating position and orientation errors.  To the user, the device appears to gradually go “off track”.  In fact, a savvy user can “cheat” in the motions they perform, since those motions don’t need to accurately mimic the real world.  Gamers sense this lack of “real world feel”.  In games and applications requiring correspondence to the real world, such shortcomings are deal breakers.
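To see why purely relative (inertial) measurements drift, consider the toy calculation below (a generic illustration with made-up numbers, not any vendor’s firmware): a small uncorrected accelerometer bias, integrated twice to obtain position, grows quadratically with time, whereas an absolute optical pose fix bounds the error at every frame.

```python
# Why inertial-only tracking drifts: a small constant accelerometer bias,
# integrated twice, becomes a quadratically growing position error.
# (Generic illustration with made-up numbers, not any vendor's firmware.)
import numpy as np

dt = 0.01                              # 100 Hz sample rate
t = np.arange(0.0, 10.0, dt)           # 10 seconds of samples
true_accel = np.zeros_like(t)          # the device is actually held still
bias = 0.02                            # 0.02 m/s^2 of uncorrected sensor bias
measured_accel = true_accel + bias

velocity = np.cumsum(measured_accel) * dt    # first integration
position = np.cumsum(velocity) * dt          # second integration

print(f"apparent drift after 10 s: {position[-1]:.2f} m (true motion: 0.00 m)")
# An absolute optical pose fix, by contrast, bounds the error at every frame
# instead of letting it accumulate.
```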
Question:  My application works with a 2D display. Why do I need input that reports all six degrees of freedom?

Answer:  Errors in capturing and translating 3D pose are inherited by 2D applications.  The Wii and LG’s “magic wand” TV control drift even when interacting with 2D displays.  When unconstrained movement in 3D is translated to 2D, information is lost: it’s like watching a 3D object move by looking at its shadow.  Applications must work around this loss of 3D information by adding re-calibrations, simplifications, or buttons.  Special steps must be taken to avoid “aliasing” problems (e.g., when different 3D poses produce the same 2D projection) and to ensure motion continuity.  Recovering all six degrees of freedom (true 3D) obviates such problems and eliminates the need for cumbersome workarounds.
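The “shadow” problem can be seen in a few lines of Python (a hypothetical pinhole-camera sketch, not drawn from the patents): two different 3D positions project to exactly the same 2D pixel, so the 2D reading alone cannot tell them apart.

```python
# The "shadow" problem: distinct 3D positions can produce identical 2D
# projections, so a 2D reading alone cannot distinguish them.
# (Hypothetical pinhole-camera numbers, not drawn from the patents.)
import numpy as np

f = 800.0                                       # focal length in pixels

def project(point_3d):
    """Project a 3D point (x, y, z) through a simple pinhole camera."""
    x, y, z = point_3d
    return np.array([f * x / z, f * y / z])

near_point = np.array([0.10, 0.05, 1.0])        # 1 m from the camera
far_point  = np.array([0.20, 0.10, 2.0])        # 2 m away, scaled with depth

print(project(near_point))                      # [80. 40.]
print(project(far_point))                       # [80. 40.]  same pixel!
```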

Common workarounds constrain the range of motion of the input device to avoid having to calculate its natural three-dimensional movement. These workarounds are found in devices such as joysticks or trackballs. However, the marketplace is rewarding devices that don’t limit movement. User interface developers want to avoid artificial limitations in future devices.
Question:  What are the benefits of using ESPi’s hardware and algorithms?

Answer:

1) Follow the natural direction of progress. For today’s user input devices to improve, they must be calibrated to maintain an accurate relationship with the real world. Calibrating relative motion sensors against on-board optical units during absolute pose recovery is a natural use of ESPi’s calculations (see the sketch following this list).

2) Strategic advantage. As computing power increases, high input accuracy is becoming cost effective, so input devices can add resolution, speed, and precision.  ESPi’s on-board optics hardware and the corresponding calculations are an excellent way of accomplishing this.

3) Competitive advantage. To users, full 3D precision feels like a responsive, natural input device that accurately captures their motions. Lifelike interactions between players in multi-player games will be a quantum leap for the gaming world, and medical applications will benefit as well.

4) Broad applications. ESPi patents are not limited to any specific purpose. They apply to any user input device with on-board optics that can be held, manipulated, or attached to the user’s body.
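As a sketch of point 1 above (an assumption-laden illustration, not ESPi’s patented method), the fragment below blends a fast but drifting relative estimate with periodic absolute fixes from on-board optical pose recovery using a simple complementary filter; the rates, gains, and drift values are hypothetical.

```python
# Re-calibrating a drifting relative estimate with absolute optical pose
# fixes (hypothetical numbers; not ESPi's patented method).
import numpy as np

def fuse(relative_deltas, optical_fixes, alpha=0.8):
    """Blend dead-reckoned relative motion with absolute optical fixes.

    relative_deltas: per-step position changes from a relative sensor.
    optical_fixes:   absolute positions from optical pose recovery,
                     or None on steps where no fix is available.
    alpha:           weight given to the dead-reckoned estimate at a fix.
    """
    estimate = 0.0
    for delta, fix in zip(relative_deltas, optical_fixes):
        estimate += delta                          # dead-reckon between fixes
        if fix is not None:                        # absolute fix available
            estimate = alpha * estimate + (1.0 - alpha) * fix
    return estimate

# Example: the device is actually stationary at position 0.0, but the
# relative sensor reports 1 mm of spurious motion per step; an optical
# fix of the true position arrives every 10th step.
steps = 2000
deltas = np.full(steps, 0.001)
fixes = [0.0 if i % 10 == 0 else None for i in range(steps)]

print(f"drift without optical fixes: {deltas.sum():.2f} m")
print(f"error with optical re-calibration: {fuse(deltas, fixes):.2f} m")
```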
Question:  Why did ESPi choose optics to power NaviScribe solutions?

Answer:  ESPi uses on-board optics to recover the absolute position and orientation of the device being tracked. It is the approach commonly used in computer vision and robotics, and it has many advantages!

It's Simple.  Many of ESPi’s algorithms require no additional equipment in the environment.  They can work passively with existing visible features.

It's Cost Effective.  Optical devices are cost effective and rapidly becoming ubiquitous.  On-board cameras of sufficient quality are present in many consumer electronics products, including smart phones.

It's Highly Accurate.  Compared to competing solutions, optics use short-wavelength radiation, which gives them very high spatial resolution.  In addition, optical radiation has low latency (it travels at the speed of light) and is relatively immune to noise.
Question:  What’s the advantage of on-board (inside-out) sensors versus off-board (outside-in) sensors?

Answer: With its sensing resources on board the tracked object, ESPi’s solution is more efficient, lower cost, and higher resolution than approaches that rely on off-board sensors. Resolving an object’s movement from on board the object is easier than resolving the same movement from a remote location.  Off-board sensors, such as those used by Microsoft’s Kinect or Sony’s Move, even when supported with reflected IR radiation, require more power, more resources, and more processing.  In addition, systems with off-board sensors cannot match the level of spatial resolution achievable with an on-board sensor.
Question:  What type of Intellectual Property Rights (IPRs) does ESPi own?

Answer: ESPi owns patents broadly classified as essential patents and implementation patents.  The essential U.S. patents cover the core apparatus and methods for six-degrees-of-freedom 3D interfaces with on-board optical sensors, and they have international counterparts in Europe and Asia.  These IPRs are practically impossible for third parties to design around, thanks to their early priority dates and broad claims built on the foundations of optical 3D pose recovery with on-board sensors.  The implementation patents address preferred embodiments of the technology in terms of advantageous hardware, firmware, and software choices.  Specific hardware platforms include wands, remote controls, smart phones, and tablets.
PROPOSITION: 

The ESPi portfolio of patents offers the powerful raw ingredients for absolute 3D motion tracking of wearables and manipulated devices to generate user input.  It is an ideal fit for Augmented and Virtual Reality applications.  ESPi researchers also possess technical know-how gained from developing a breadboard prototype and several alpha units.  We can apply this experience directly to ensure effective beta design and successful productization for our customers.  We even have algorithms to help you conserve on-board resources by reducing the computational burden while still obtaining full optical pose recovery under certain conditions.

Let's work together - give us a call!