A computer vision project that detects and classifies hand signs in real-time using Google's MediaPipe and OpenCV.
- Real-time hand landmark detection
- Classification of 5 distinct hand signs:
  - Claws
  - Frogs
  - Gigem
  - Guns Up
  - Horns
- Custom descriptor based on normalized landmark distances
- "Truth data" clustering for robust classification
- Clean, modular class-based architecture
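The "custom descriptor based on normalized landmark distances" can be sketched roughly as follows. This is an illustrative reconstruction, not the project's actual `classifier.py` code; the function name `make_descriptor` and the choice of the wrist as reference point are assumptions.

```python
import numpy as np

def make_descriptor(landmarks: np.ndarray) -> np.ndarray:
    """Build a scale-invariant descriptor from 21 (x, y) hand landmarks.

    Distances from the wrist (landmark 0) to every other landmark are
    divided by the largest such distance, making the descriptor
    invariant to hand size and distance from the camera.
    """
    wrist = landmarks[0]
    dists = np.linalg.norm(landmarks[1:] - wrist, axis=1)
    return dists / dists.max()

# Example: a synthetic, roughly hand-shaped set of 21 landmarks.
pts = np.array([[0.5, 0.9]] + [[0.5 + 0.02 * i, 0.9 - 0.04 * i]
                               for i in range(1, 21)])
desc = make_descriptor(pts)
```

Normalizing by the maximum distance means two hands making the same sign at different distances from the webcam produce near-identical descriptors.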
Clone the repository:
```bash
git clone https://github.com/jacobgeorge3/HandSignDetector.git
cd HandSignDetector
```
Install dependencies:
```bash
pip install -r requirements.txt
```
Run the main script to start the webcam feed and detection:
```bash
python handsign.py
```

Press `q` to quit the application.
- `handsign.py` - Main application with the `HandSignDetector` class
- `classifier.py` - Hand landmark descriptor creation and distance metrics
- `static_handsign.py` - Truth data generation from static images
- `cluster.py` - K-means clustering for gesture centroids
- `config.py` - Configuration constants
- `gesture.names` - List of gesture class names
- `images/` - Training images for each gesture
- `test_refactor.py` - Unit tests
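To illustrate what `cluster.py` produces, here is a minimal sketch of deriving one centroid per gesture from its truth-data descriptors. The project uses k-means; with a single cluster per gesture this reduces to the mean, which is what this hypothetical `gesture_centroids` helper computes.

```python
import numpy as np

def gesture_centroids(samples: dict) -> dict:
    """Compute one centroid per gesture as the mean of its descriptors.

    `samples` maps a gesture name to a list of descriptor vectors
    collected from static training images.
    """
    return {name: np.mean(np.stack(vecs), axis=0)
            for name, vecs in samples.items()}

# Toy example with 2-D descriptors for a single gesture.
samples = {"claws": [np.array([1.0, 0.0]), np.array([3.0, 0.0])]}
cent = gesture_centroids(samples)
```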
The system uses MediaPipe to extract 21 hand landmarks. These landmarks are normalized and converted into a custom descriptor vector. This vector is compared against pre-computed cluster centers of "truth data" using a pseudo-Euclidean distance metric. A ratio test is applied to ensure high-confidence classifications.
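The nearest-centroid match with a ratio test could look like the sketch below. The function name, the plain Euclidean distance, and the 0.8 threshold are illustrative assumptions, not the project's actual API; the ratio test itself follows the familiar pattern of accepting a match only when the best distance is clearly smaller than the second best.

```python
import numpy as np

def classify(descriptor: np.ndarray, centroids: dict, ratio: float = 0.8):
    """Return the nearest gesture name, or None if the match is ambiguous.

    A label is accepted only when the best centroid distance is less
    than `ratio` times the second-best distance.
    """
    ranked = sorted((np.linalg.norm(descriptor - c), name)
                    for name, c in centroids.items())
    best, second = ranked[0], ranked[1]
    if best[0] < ratio * second[0]:
        return best[1]
    return None  # too close to call: reject low-confidence frames

# Toy 2-D centroids for two gestures.
centroids = {"gigem": np.array([0.0, 0.0]), "horns": np.array([1.0, 1.0])}
label = classify(np.array([0.1, 0.0]), centroids)
```

Rejecting ambiguous frames rather than forcing a label keeps the real-time output stable when a hand is mid-transition between signs.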
Run the test suite:
```bash
python test_refactor.py
```

This is a refactored version of the CVFinalProj repository, with improved code organization, type hints, and error handling.