Emeritus-21 committed on
Commit 56c47b6 · verified · 1 Parent(s): 9231b52

Update README.md

Files changed (1):
  README.md +46 -9
README.md CHANGED
@@ -1,14 +1,51 @@
  ---
- title: Itsekiri Sign Language Interpreter
- emoji: 👀
- colorFrom: indigo
- colorTo: purple
  sdk: gradio
- sdk_version: 5.39.0
- app_file: app.py
  pinned: false
- license: apache-2.0
- short_description: 'sign language for words to Itsekiri '
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
  ---
+ title: Sign Language Interpreter
+ emoji: 👋
+ colorFrom: yellow
+ colorTo: gray
  sdk: gradio
+ sdk_version: 4.36.0
+ python_version: 3.10.4
+ app_file: main.py
+ license: gpl-3.0
  pinned: false
  ---

+ # Sign Language Interpreter
+
+ Many people around the world use sign language to communicate, but communication only happens when a message is both sent and received. Sign-language users can converse efficiently only when the observer also understands sign language, which is rarely the case in practice. This tool interprets sign language and pronounces it aloud for the listener, allowing sign-language users to communicate with more people and take part in society more easily.
+
+ ## Table of Contents
+
+ - [About](#about)
+ - [Built With](#built-with)
+ - [Usage](#usage)
+ - [License](#license)
+
+ ## About
+
+ Sign-Language-Interpreter aims to let a fluent sign-language user sign into a camera and have the user's message spoken aloud.
+ *Currently this project runs as a demo website on [HuggingFace](https://huggingface.co/spaces/HuggingFace-SK/Sign-Language-Interpreter). To reach a wider audience and remove the dependency on internet availability, this [AndroidJS build](https://github.com/Shantanu-Khedkar/silangint) is being developed.*
+
+ ## Built With
+
+ Sign-Language-Interpreter was built with these technologies and libraries (a sketch of how they fit together follows the list):
+
+ - [JavaScript](https://www.w3schools.com/js/DEFAULT.asp)
+ - [MediaPipe](https://ai.google.dev/edge/mediapipe/solutions/guide)
+ - [TensorFlow](https://www.tensorflow.org/)
+ - [TFLiteJS](https://js.tensorflow.org/api_tflite/0.0.1-alpha.4/)
+ - [Web Speech API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API)
+ - [Flask](https://flask.palletsprojects.com/)
+ - [HuggingFace Spaces](https://huggingface.co/docs/hub/en/index#spaces)
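+
+ A minimal sketch of how these pieces could connect in the browser, assuming the MediaPipe Hands, TensorFlow.js, and TFLiteJS bundles are loaded via script tags; the model path and label array are illustrative placeholders, not the project's actual assets:
+
+ ```javascript
+ // 1. MediaPipe Hands extracts 21 hand landmarks per video frame.
+ const hands = new Hands({
+   locateFile: (f) => `https://cdn.jsdelivr.net/npm/@mediapipe/hands/${f}`,
+ });
+ hands.setOptions({ maxNumHands: 1, minDetectionConfidence: 0.7 });
+
+ // 2. A TFLite classifier maps the landmarks to a fingerspelled letter.
+ const LETTERS = ['A', 'B', 'C' /* … one label per model output class */];
+ let model;
+ tflite.loadTFLiteModel('letters.tflite').then((m) => (model = m)); // hypothetical path
+
+ hands.onResults((results) => {
+   if (!model || !results.multiHandLandmarks?.length) return;
+   // Flatten the 21 (x, y, z) landmarks into one 63-value input row.
+   const input = tf.tensor2d([
+     results.multiHandLandmarks[0].flatMap((p) => [p.x, p.y, p.z]),
+   ]);
+   const scores = model.predict(input);
+   const letter = LETTERS[scores.argMax(-1).dataSync()[0]];
+   input.dispose();
+
+   // 3. The Web Speech API pronounces the detection.
+   speechSynthesis.speak(new SpeechSynthesisUtterance(letter));
+ });
+
+ // Frames are fed in elsewhere, e.g. hands.send({ image: videoElement }).
+ ```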
+
+ ## Usage
+
+ The user signs into a camera and the program detects the signed letters.
+ Fingerspelling is supported, since including many whole-word signs in the model would require more resources. Fingerspelling means using a standard set of stationary signs as letters and building words up letter by letter; a sketch of how detections become spoken words follows.
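+
+ One plausible way to turn per-frame letter detections into spoken words is to debounce repeated frames and flush the buffer after a short pause; the timing constant below is illustrative, not the project's actual value:
+
+ ```javascript
+ let word = '';
+ let lastLetter = '';
+ let flushTimer = null;
+
+ function onLetterDetected(letter) {
+   if (letter === lastLetter) return; // skip repeated frames of the same sign
+   lastLetter = letter;
+   word += letter;
+
+   // No new letter for ~1.5 s means the word is finished: speak and reset.
+   clearTimeout(flushTimer);
+   flushTimer = setTimeout(() => {
+     speechSynthesis.speak(new SpeechSynthesisUtterance(word));
+     word = '';
+     lastLetter = '';
+   }, 1500);
+ }
+ ```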
+
+ A complete word-based implementation is planned. Sign language omits many of English's auxiliary words and consists mostly of nouns and verbs, so the detected words may not make complete sense if pronounced directly. An LLM will therefore fill in the missing words or restructure the sentence by inferring its intended meaning, and the restructured sentence will be spoken; a sketch follows.
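+
+ Since this feature is still planned, the following is only a sketch of how the client might hand the detected gloss words to a server-side LLM; the `/restructure` Flask route and its JSON shape are hypothetical:
+
+ ```javascript
+ async function speakRestructured(glossWords) {
+   // e.g. ['STORE', 'I', 'GO'] might come back as "I am going to the store."
+   const res = await fetch('/restructure', {
+     method: 'POST',
+     headers: { 'Content-Type': 'application/json' },
+     body: JSON.stringify({ words: glossWords }),
+   });
+   const { sentence } = await res.json(); // hypothetical response shape
+   speechSynthesis.speak(new SpeechSynthesisUtterance(sentence));
+ }
+ ```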
+
+ ## License
+
+ This project is licensed under the [GPL v3](https://www.gnu.org/licenses/gpl-3.0.en.html) or any later version.