User actions are replicated according to anatomical criteria, using artificial-intelligence models based on deep learning.
Our algorithm detects keypoints at all major joints of the human body. The resulting keypoints are then used to evaluate a given movement.
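As a minimal sketch of how detected keypoints can evaluate a movement, the angle at a joint can be computed from three keypoints (for example shoulder, elbow, wrist). The coordinates and joint names here are illustrative assumptions, not the product's actual representation:

```python
import math

def joint_angle(a, b, c):
    """Angle at point b, in degrees, formed by segments b->a and b->c.

    Each point is a hypothetical keypoint given as (x, y) coordinates
    normalized to the image frame.
    """
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    cos = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for float safety
    return math.degrees(math.acos(cos))

# A fully extended arm: shoulder, elbow and wrist are collinear.
print(joint_angle((0.0, 0.0), (0.5, 0.0), (1.0, 0.0)))  # 180.0
```

Thresholds on such joint angles are one common way to score whether a movement was performed correctly.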
Our algorithm infers the user's position from images, but the pictures themselves are never stored; we work only with keypoints distributed across the human body.
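A sketch of this privacy-preserving flow, assuming a hypothetical `detect_keypoints` model wrapper: the raw frame is held only long enough to run inference, and downstream code sees keypoints alone.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Keypoint:
    name: str        # e.g. "left_elbow" (illustrative joint naming)
    x: float         # normalized image coordinates
    y: float
    confidence: float

def detect_keypoints(frame) -> List[Keypoint]:
    # Placeholder for the actual deep-learning model inference.
    return [Keypoint("left_elbow", 0.4, 0.5, 0.98)]

def process_frame(frame) -> List[Keypoint]:
    """Extract keypoints and discard the image immediately."""
    keypoints = detect_keypoints(frame)
    del frame  # the picture never leaves this function; only keypoints do
    return keypoints
```

Only the `Keypoint` records are retained or transmitted, which is what allows movement analysis without storing imagery.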
The movements performed by the user are classified against a set of predefined motion routines. Our algorithm identifies the key states that make up a particular action. We currently maintain an extensive library of motion routines and continually add new movements to broaden the range of supported actions.
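One way to identify key states of an action can be sketched as a small state machine over a stream of joint angles derived from the keypoints. The knee-angle thresholds and the squat routine below are illustrative assumptions, not the actual routine definitions:

```python
# Hypothetical thresholds for a squat routine, in degrees of knee flexion.
DOWN_THRESHOLD = 100   # knee angle below this -> "down" key state
UP_THRESHOLD = 160     # knee angle above this -> "up" key state

def count_repetitions(knee_angles):
    """Count squat repetitions by detecting up -> down -> up transitions."""
    reps = 0
    state = "up"
    for angle in knee_angles:
        if state == "up" and angle < DOWN_THRESHOLD:
            state = "down"
        elif state == "down" and angle > UP_THRESHOLD:
            state = "up"
            reps += 1
    return reps

# A stream of per-frame knee angles covering two full squats.
angles = [170, 150, 95, 90, 120, 165, 170, 98, 90, 168]
print(count_repetitions(angles))  # 2
```

Each predefined routine would contribute its own key states and thresholds; matching the observed state sequence against a routine is what categorizes the movement.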