7/24/2023

Zed camera for tk1

Introduced new Body Tracking Gen 2 module. BODY_38: a new body model with feet and simplified hands. This module goes beyond the existing pose models, which are trained on only 17 key points, and enables localization of a new topology of 38 and 70 human body key points, making it ideal for advanced body tracking use cases. It employs ML to infer up to 70 landmarks of a body from a single frame.

Improved NEURAL depth mode, which is now more robust in challenging situations such as low light, heavy compression, noise, textureless areas such as plain interior walls, overexposed areas, and exterior sky. Our new ZEDnet Gen 2 AI model now powers the depth sensing module for stereo depth perception, providing enhanced performance. Neural depth in 4.0 offers a glimpse of what's to come, as we plan to roll out an even more robust model later in the year.

Introduced new Geo-tracking API for accurate global location tracking. During a geo-tracking session, the API constantly updates the device's position in the real world by combining data from an external GPS with ZED camera odometry as the device moves, delivering latitude and longitude with centimeter-level accuracy. By fusing visual odometry with GPS data, we can compensate for GPS dropouts in challenging outdoor environments and provide more accurate and reliable positioning information in real time.

In 4.0 EA (Early Access), multi-camera fusion supports multi-camera capture, calibration, and body tracking fusion; all SDK capabilities will be added to the Fusion API in the final 4.0 release. The new API can be found in the header sl/Fusion.hpp as the sl::Fusion API. Additionally, the Fusion module offers redundancy in case of camera failure or occlusions, making it a reliable solution for critical applications.
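To make the GPS-dropout compensation concrete, here is a minimal sketch of the underlying idea: dead-reckon on odometry increments between GPS fixes, then blend each incoming fix with the drifting estimate. This is a toy complementary filter in plain Python, not the ZED Geo-tracking API; the class and method names are illustrative assumptions.

```python
# Toy complementary filter illustrating GPS + visual-odometry fusion.
# NOT the ZED Geo-tracking API; all names here are illustrative.

class GeoFusion:
    def __init__(self, alpha=0.8):
        # alpha: weight kept on the dead-reckoned estimate when a GPS fix arrives
        self.alpha = alpha
        self.pos = None  # (east, north) in meters, local tangent frame

    def update_odometry(self, delta):
        # Integrate a camera odometry increment; this is what keeps the
        # position estimate alive through a GPS dropout.
        if self.pos is not None:
            self.pos = (self.pos[0] + delta[0], self.pos[1] + delta[1])

    def update_gps(self, fix):
        # Blend the absolute GPS fix with the (drifting) odometry estimate.
        if self.pos is None:
            self.pos = fix
        else:
            a = self.alpha
            self.pos = (a * self.pos[0] + (1 - a) * fix[0],
                        a * self.pos[1] + (1 - a) * fix[1])

f = GeoFusion(alpha=0.8)
f.update_gps((0.0, 0.0))       # first fix initializes the estimate
f.update_odometry((1.0, 0.0))  # camera moved 1 m east
f.update_odometry((1.0, 0.0))  # GPS dropout: odometry alone carries the pose
f.update_gps((2.1, 0.0))       # fix returns; estimate is pulled toward it
print(f.pos)
```

A production system would use a full Kalman or factor-graph formulation with covariances from both sensors, but the blend-and-integrate structure is the same.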
The new module allows seamless synchronization and integration of data from multiple cameras in real time, providing more accurate and reliable information than a single-camera setup.
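One way to picture how fused multi-camera data tolerates occlusion or camera failure is confidence-weighted averaging of each body keypoint across cameras: a camera that cannot see the point contributes zero weight, and the remaining cameras still produce an estimate. The sketch below is an illustrative assumption, not the ZED Fusion API.

```python
# Confidence-weighted fusion of one body keypoint seen by several cameras.
# Illustrative sketch only; this is NOT the ZED Fusion API.

def fuse_keypoint(observations):
    """observations: list of (xyz, confidence) pairs, one per camera.

    A confidence of 0 means the camera is occluded or has failed; the
    fused estimate survives as long as any camera still sees the point.
    Returns the weighted-average 3D position, or None if no camera saw it.
    """
    total = sum(c for _, c in observations)
    if total == 0:
        return None
    return tuple(
        sum(p[i] * c for p, c in observations) / total
        for i in range(3)
    )

# Two cameras see the keypoint; a third is occluded (confidence 0).
fused = fuse_keypoint([
    ((1.0, 2.0, 0.5), 0.9),
    ((1.2, 2.0, 0.5), 0.3),
    ((0.0, 0.0, 0.0), 0.0),
])
print(fused)
```

Real fusion also requires the per-camera calibration and time synchronization mentioned above, so that observations are expressed in a common frame before being combined.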