AI Yoga Gesture Detection

The purpose of this project is to develop an AI-powered yoga posture detection system based on joint point analysis. The main goal is to create a virtual trainer that can accurately recognize different yoga poses and provide users with immediate feedback. The system employs computer vision algorithms to detect the user's stance by analyzing key joint locations in the body and then advises them on how to correct their posture. The AI yoga gesture detection model achieved an overall accuracy of 95% during training, demonstrating its ability to learn from the dataset and make accurate predictions. On the testing dataset, the model maintained a high accuracy of 90%, indicating strong performance in classifying yoga poses on previously unseen data. However, a validation accuracy of 60% points to a gap between the model's performance on the testing set and its broader generalization ability. Despite this, the model's high accuracy during the training and testing stages demonstrates its ability to identify yoga poses, assisting users in achieving proper alignment and form during yoga practice.


Introduction
The combination of artificial intelligence (AI) and the ancient practice of yoga represents a promising frontier in health and wellness technology. This project investigates this fusion through AI Yoga Gesture Estimation, using technologies such as TensorFlow, MediaPipe, and PoseNet. The goal is to enhance yoga practice by combining joint point analysis with camera input to provide real-time posture identification and assessment.

Yoga, which originated in ancient India, is well known for its comprehensive approach to health, encompassing physical postures (asanas), breathing techniques (pranayama), and meditation. Its popularity has grown in recent years due to its numerous health benefits, including stress reduction, increased flexibility, and improved mental well-being. However, mastering yoga poses with precision can be challenging, especially for beginners.

By incorporating AI into yoga practice, this project aims to provide practitioners with prompt and accurate feedback on posture alignment, encouraging continuous improvement and deepening their yoga journey. The system's algorithms and machine learning models are designed to recognize and assess yoga poses in real time, allowing practitioners to fine-tune their practice and achieve optimal alignment. Using TensorFlow, MediaPipe, and PoseNet, the project delivers a solution that not only identifies yoga postures but also provides insights into alignment and form, bridging ancient wisdom and modern innovation. In summary, AI Yoga Gesture Estimation has the potential to transform yoga practice by giving practitioners timely feedback for continuous growth, ushering in a new era of personalized, data-driven wellness that empowers individuals to achieve their full potential both on and off the mat.

Problem Statement
Despite the growing popularity of yoga and its associated health benefits, many practitioners struggle to achieve proper posture alignment and technique. Traditional methods of learning and refining yoga poses often lack immediate feedback, leading to inefficiency and potential injury. In addition, personalized guidance tailored to individual practitioners' needs is often limited in conventional yoga instruction. To address these challenges, there is a need for an AI-powered yoga gesture detection system capable of providing real-time feedback on posture alignment, offering personalized guidance, and facilitating progress tracking. Such a system should integrate state-of-the-art technologies such as MediaPipe, TensorFlow, and PoseNet to accurately identify yoga poses and deliver tailored feedback, enhancing the overall yoga experience and promoting safe, effective practice for practitioners of all levels.

Objectives
The primary objectives of AI Yoga Gesture Detection include:
• Real-Time Detection: Develop models and algorithms that accurately identify and track human postures in real time, enabling timely analysis and feedback.
• Precision and Accuracy: Maintain a high level of precision and accuracy when locating key body joints and landmarks, ensuring accurate and comprehensive posture data.
• Pose Recognition: Enable the system to recognize specific poses or movements, yielding a thorough understanding of the body's configuration.
• Feedback Mechanism: Provide users with timely and informative feedback on posture alignment, helping them improve their overall practice or activity.
• Personalization: Tailor the pose estimation algorithm to individual anatomy and movement patterns, resulting in more efficient and customized guidance.
• Accessibility: Make physical activities and wellness practices more accessible by providing a tool that helps users adopt proper postures, regardless of their location or skill level.
• Injury Prevention: Reduce the risk of strain or injury during physical activity by detecting and alerting users to inappropriate postures or movements.
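Posture feedback of the kind described above typically rests on the angles formed at key joints. As a minimal illustrative sketch (not the system's exact implementation), the angle at a joint can be computed from three 2D landmark coordinates:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at point b, formed by segments b->a and b->c."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0])
        - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang

# Example: elbow angle from shoulder, elbow, and wrist coordinates
shoulder, elbow, wrist = (0.0, 0.0), (1.0, 0.0), (1.0, 1.0)
print(joint_angle(shoulder, elbow, wrist))  # 90.0
```

The same calculation applies to any joint triple (hip-knee-ankle, shoulder-hip-knee, and so on), which is what makes angle-based feedback pose-agnostic.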

Expected Outcomes
• Accurate Pose Recognition: The system is designed to accurately identify and classify a variety of yoga poses in real time, providing precise feedback on posture alignment.
• Real-Time Feedback: Practitioners receive immediate feedback during yoga practice, allowing them to make the changes needed to improve their form and technique.
• Injury Prevention: By providing alignment guidance and feedback, the system aims to reduce the risk of injury caused by incorrect posture alignment, promoting safe and effective practice.
• Increased Accessibility: The system makes yoga practice accessible to a wider range of people, including those with limited access to traditional yoga instruction, promoting inclusivity and diversity in yoga.
• Improved Consistency and Motivation: With the ability to track progress and receive personalized feedback, practitioners can maintain consistency in their practice and stay motivated to achieve their wellness goals, fostering longer-term engagement with yoga.

IJFMR240319227, Volume 6, Issue 3, May-June 2024

Methodology
The methodology for developing the AI-powered yoga gesture detection system consists of several key steps:
• Data Collection: Collect a diverse dataset of yoga practitioners demonstrating various poses from multiple angles and perspectives. To improve the model's robustness and generalizability, ensure the dataset includes people of various body types, ages, and skill levels.
• User Interface: The interface should display detected poses in real time and provide clear, understandable feedback to the practitioner.
• Testing and Evaluation: Run extensive tests on both synthetic and real-world data to assess the system's performance and accuracy. Assess the system's ability to detect and recognize yoga poses under a variety of conditions and scenarios.
• Iterative Refinement: Continuously iterate on the system's design and implementation in response to user feedback and testing results. Fine-tune model parameters, improve pose recognition and feedback generation algorithms, and optimize system performance for real-world use.
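For the testing-and-evaluation step, classification accuracy on a held-out set is the basic metric (the 90% test accuracy reported in the abstract is such a figure). A minimal sketch, using hypothetical pose labels:

```python
def accuracy(predicted, actual):
    """Fraction of predictions that match the ground-truth pose labels."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Hypothetical pose labels for ten test images
actual    = ["tree", "warrior", "tree", "cobra", "warrior",
             "tree", "cobra", "warrior", "tree", "cobra"]
predicted = ["tree", "warrior", "tree", "cobra", "tree",
             "tree", "cobra", "warrior", "tree", "cobra"]
print(accuracy(predicted, actual))  # 0.9
```

In practice a per-pose breakdown (confusion matrix) is also useful, since a single aggregate number can hide poses the model systematically confuses.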

Proposed System
The proposed AI yoga gesture detection system combines several key components to improve the yoga experience and encourage engagement. AI Yoga Gesture Detection: The system uses pose estimation models such as PoseNet to detect and analyze yoga poses in real time. Using TensorFlow and MediaPipe, it provides immediate feedback on posture alignment, allowing practitioners to fine-tune their technique and reduce the risk of injury.
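One simple way to map detected landmarks to a recognized pose is to threshold the joint angles that characterize it. The rule set below is a hypothetical illustration only (the actual system relies on trained models rather than hand-written rules), with made-up angle thresholds and pose names:

```python
def classify_pose(angles):
    """Toy rule-based pose classifier over measured joint angles (degrees).

    The thresholds and pose names here are illustrative assumptions,
    not the trained model's decision logic.
    """
    if angles["left_knee"] < 120 and angles["right_knee"] > 160:
        return "warrior"
    if angles["left_knee"] > 160 and angles["right_knee"] > 160:
        return "mountain"
    return "unknown"

print(classify_pose({"left_knee": 95, "right_knee": 175}))   # warrior
print(classify_pose({"left_knee": 178, "right_knee": 172}))  # mountain
```

A learned classifier generalizes far better across body types and camera angles, which is why the paper trains a model instead of encoding rules like these.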
Chatbot Integration with the Gemini API: The system includes a chatbot feature that uses the Gemini API to integrate with cryptocurrency exchange functionality. Users can interact with the chatbot to check cryptocurrency prices, make transactions, and receive personalized recommendations based on their preferences and trading history.

AI Calculator for Body Mass Index (BMI):
• Creates an AI-powered calculator that computes Body Mass Index (BMI) based on user input of height and weight.
• Provides personalized health insights and tips based on BMI calculations, including diet recommendations, exercise plans, and lifestyle changes.
• Uses machine learning algorithms to analyze user data and make personalized recommendations for improving overall health and well-being.

Enhanced User Experience: By combining these features, the system provides an improved user experience for both yoga practitioners and cryptocurrency enthusiasts.
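The underlying BMI computation is the standard formula, weight in kilograms divided by height in metres squared; the bands below follow the common WHO categories (the personalized recommendation layer described above is not shown):

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    """Common WHO bands for adult BMI."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

v = bmi(70, 1.75)  # 70 kg, 1.75 m
print(round(v, 1), bmi_category(v))  # 22.9 normal
```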

System Design
The system design comprises the architecture and components required to integrate pose estimation algorithms with real-time feedback mechanisms, allowing accurate recognition and guidance for yoga practitioners. It entails the seamless integration of technologies such as TensorFlow, MediaPipe, and PoseNet into an intuitive user interface, providing an accessible and effective solution for AI-powered yoga gesture detection.

Flow Diagram
A data flow diagram (DFD) is a visual representation of how data flows through a system. It is made up of processes, data stores, data flows, and external entities. In this project, the data flow diagram represents the flow of data from input sources (such as video streams or webcam feeds capturing practitioners' movements) through the processing stages (pose estimation, posture recognition, feedback generation) to the output: the real-time display of detected poses and the feedback provided to practitioners. External entities include the user, who interacts with the system via the user interface, as well as any external systems or devices integrated with it. The diagram helps visualize how data moves through the system, making its functionality easier to understand and highlighting potential areas for optimization or improvement. End: the process concludes when the user selects a mode, captures or uploads an image, and the system displays the pose detection results.

Use Case Diagram
A use case diagram depicts the interactions between actors (users or external systems) and the system under consideration. It shows the various ways users can interact with the system to accomplish specific goals or tasks.

Figure 2: Use case diagram
A use case diagram for the AI yoga gesture detection project depicts the system's functionalities and how different actors interact with them. The system has two primary actors: a user and an administrator.
User:
• Pose: The user holds a yoga pose in front of the camera.
• Capture Image: The system uses the camera to take an image of the user's pose.
• Upload Image: The user can choose to upload the captured image to the system.
Admin:
• View Detected Poses: The administrator can view the yoga poses detected by the system, whether via a live video feed or user-uploaded images.
• Provide Feedback: Based on the viewed poses, the administrator can give the user feedback on their form or posture.

System Architecture
The system architecture of the AI yoga gesture detection project comprises the overall structure and components that enable the detection of yoga poses using artificial intelligence algorithms. An overview is shown in Figure 1.

Conclusion
In conclusion, the implementation of the yoga pose detection system demonstrates the integration of various technologies and libraries to provide users with diverse methods for detecting yoga poses. The system uses OpenCV, MediaPipe, TensorFlow, Tkinter, and Streamlit to detect poses in real time from webcam feeds and to analyze yoga poses from uploaded images. This multifaceted approach improves accessibility and usability while catering to users' diverse preferences and needs. By leveraging machine learning models and computer vision techniques, the system contributes to the advancement of health and wellness applications, allowing people to practice yoga with greater awareness and guidance. Overall, the implemented system exemplifies the fusion of technology and fitness, demonstrating the potential of AI-driven solutions to promote personal well-being and physical activity.

Acknowledgement
We would like to thank Dr. Suresha D, Head of the Department of Information Science and Engineering at AJIET, for his invaluable guidance and support throughout the duration of this research.

• Data Preprocessing: Preprocess the collected data to improve its quality and consistency. This may include resizing images, standardizing lighting conditions, and removing background noise to improve pose estimation accuracy.
• Model Selection and Training: Select an appropriate pose estimation model, such as PoseNet, that works with the TensorFlow framework. Train the selected model on the preprocessed dataset to identify the key body landmarks associated with each yoga pose.
• Integration with MediaPipe: Use MediaPipe, a library for building perception pipelines, to perform real-time pose estimation from video streams or webcam inputs. Its robust algorithms and pre-trained models improve the accuracy and efficiency of pose detection.
• Posture Recognition and Feedback Generation: Create algorithms that recognize yoga poses from the detected body landmarks. Implement logic that provides real-time feedback on posture alignment, highlighting areas for improvement and recommending corrective actions for the practitioner.
• User Interface Design: Design an intuitive user interface for the AI yoga gesture detection system.
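One way the feedback-generation step can work is to compare each measured joint angle against a target range for the current pose and report the joints that fall outside it. The target angles and tolerance below are hypothetical values for illustration:

```python
# Hypothetical target joint angles (degrees) for one example pose
TARGETS = {"left_knee": 90, "right_knee": 175, "left_elbow": 180}
TOLERANCE = 15  # degrees of allowed deviation

def feedback(measured, targets=TARGETS, tol=TOLERANCE):
    """Return corrective messages for joints outside the target range."""
    msgs = []
    for joint, target in targets.items():
        diff = measured[joint] - target
        if abs(diff) > tol:
            # Angle larger than target -> joint is too open, bend it;
            # smaller -> too closed, straighten it.
            direction = "straighten" if diff < 0 else "bend"
            msgs.append(f"{joint}: {direction} (off by {abs(diff):.0f} deg)")
    return msgs

print(feedback({"left_knee": 130, "right_knee": 172, "left_elbow": 178}))
```

A real system would attach targets to each recognized pose and phrase the corrections in user-friendly language, but the core comparison is this simple.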

Figure 1: Flow diagram of the system architecture
• Data Collection: Collect a wide range of yoga poses and gestures from various sources.
• Preprocessing: Prepare the data by standardizing image sizes, removing noise, and augmenting for diversity.
• Model Training: Use a Convolutional Neural Network (CNN) to learn features from the data and predict yoga poses.
• Validation: Test the model's performance on a validation set to avoid overfitting during training.
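The CNN mentioned in the training step could be structured along these lines; the layer sizes, 128x128 input resolution, and five-pose output are illustrative assumptions, not the paper's exact architecture:

```python
import tensorflow as tf

NUM_POSES = 5  # hypothetical number of yoga pose classes

# A small illustrative CNN for classifying pose images
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),        # RGB pose images
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_POSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)  # (None, 5)
```

Training would then call `model.fit` on the preprocessed images with a validation split, matching the validation step shown in the flow diagram.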

Figure 5: About Us page created using Next.js and Tailwind CSS
Figure 6: