Human Action Understanding

 

Introduction

The Human Action Understanding (HAU) engine provides information on a subject's posture derived from a single camera image. It detects predefined key points on the subject, such as the head, pelvis, wrists, and ankles, in a 2D image and derives their 3D coordinates in the camera coordinate system (CCS).
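As a concrete illustration of the camera coordinate system, the step from a detected 2D key point to a 3D coordinate can be sketched with a standard pinhole camera model. The intrinsic values (fx, fy, cx, cy) and the depth value below are illustrative assumptions, not values used by the HAU engine:

```python
def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project a 2D pixel key point (u, v) with an estimated
    depth (in meters) into 3D camera coordinates (pinhole model)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Illustrative intrinsics for a 1280x720 image (assumed values)
fx = fy = 1000.0
cx, cy = 640.0, 360.0

# A key point at the image center lies on the optical axis
head = pixel_to_camera(640, 360, 2.0, fx, fy, cx, cy)
print(head)  # (0.0, 0.0, 2.0)
```

A key point 500 pixels right of the center at the same depth would map to x = 500 × 2.0 / 1000 = 1.0 m in the CCS.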

 

The HAU engine provided by ThinQ.AI supports the following features:

Features of the HAU engine

  • Tracking the positional data of a moving subject

Tracks, in real time, the positional data of a moving subject's contour and joints in the video.

  • Analyzing the posture data of a subject

Uses the positional data of 2D key points predefined on the subject's body to derive the 2D/3D coordinates of the joints and body contour, providing information on the subject's posture.

Note: 3D data analysis requires high-fps video, so a GPU environment is recommended.
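Real-time tracking of joint positions typically smooths per-frame detections to suppress jitter. The exponential smoothing below is a minimal sketch of that idea, not the HAU engine's actual tracker:

```python
def smooth_keypoint(prev, new, alpha=0.5):
    """Exponentially smooth a 2D key point across frames.
    alpha close to 1.0 trusts the newest detection more."""
    return tuple(alpha * n + (1 - alpha) * p for p, n in zip(prev, new))

# Wrist detections over three frames (hypothetical pixel coordinates)
track = (100.0, 200.0)
for detection in [(104.0, 200.0), (108.0, 204.0)]:
    track = smooth_keypoint(track, detection)
print(track)  # (105.0, 202.0)
```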

 

Engine Structure

The HAU engine receives video input in real time, either recorded by a camera or read from storage media, analyzes the video, and detects the relevant information before sending the result to an application.

It receives video images and engine settings as input data and uses them for image analysis.
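The flow of input data through the engine can be sketched as follows. The HauEngine class, its settings keys, and the frame format are hypothetical placeholders for illustration; consult the ThinQ.AI API reference for the actual interface:

```python
class HauEngine:
    """Hypothetical stand-in for the HAU engine interface."""

    def __init__(self, settings):
        # Engine settings supplied alongside the video input
        self.settings = settings

    def analyze(self, frame):
        # A real engine would run key-point detection here; this stub
        # only reports the frame size and the configured key points.
        height, width = len(frame), len(frame[0])
        return {"frame_size": (width, height),
                "keypoints": {name: None for name in self.settings["keypoints"]}}

# Hypothetical settings: which key points to detect
engine = HauEngine({"keypoints": ["head", "pelvis", "wrist", "ankle"]})

# A frame from a camera or storage media, here a dummy 4x3 grayscale image
frame = [[0] * 4 for _ in range(3)]
result = engine.analyze(frame)
print(result["frame_size"])  # (4, 3)
```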

HAU engine structure

Usage

The HAU engine is used in fields that require image-based services.

 

  • VR and AR

Recognizes the subject's motion and presents the motion in a virtual environment.

The example image for VR/AR.

  • Healthcare

Recognizes the subject's motion and uses the data to correct the subject's posture.

The example image for healthcare.

  • Autonomous driving

Recognizes, determines, and predicts the subject's motion.

The example image for autonomous driving.