artificialguybr committed on
Commit a2feb75 · 1 Parent(s): e460201

Complete Surya OCR Studio implementation with all features

Features:
- OCR: Text recognition in 90+ languages with RecognitionPredictor
- Text Detection: Line-level detection with DetectionPredictor
- Layout Analysis: Identify document structure with LayoutPredictor
- Table Recognition: Extract tables to Markdown with TableRecPredictor
- LaTeX OCR: Convert equations to LaTeX with TexifyPredictor
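
The Table Recognition feature converts predicted cells into a Markdown grid. As a rough standalone sketch of that conversion (the `Cell` namedtuple below is a stand-in for Surya's cell prediction objects, which carry `row_id`, `col_id`, and `text`; this mirrors, but is not, the exact logic in app.py):

```python
from collections import namedtuple

# Stand-in for Surya's table-cell prediction objects.
Cell = namedtuple("Cell", ["row_id", "col_id", "text"])

def cells_to_markdown(cells):
    """Arrange predicted cells on a grid and emit a Markdown table."""
    if not cells:
        return ""
    n_rows = max(c.row_id for c in cells) + 1
    n_cols = max(c.col_id for c in cells) + 1
    grid = [["" for _ in range(n_cols)] for _ in range(n_rows)]
    for c in cells:
        grid[c.row_id][c.col_id] = c.text
    lines = []
    for i, row in enumerate(grid):
        lines.append("| " + " | ".join(row) + " |")
        if i == 0:  # separator row under the header
            lines.append("| " + " | ".join(["---"] * n_cols) + " |")
    return "\n".join(lines)

cells = [Cell(0, 0, "Name"), Cell(0, 1, "Qty"), Cell(1, 0, "Apples"), Cell(1, 1, "3")]
print(cells_to_markdown(cells))
# | Name | Qty |
# | --- | --- |
# | Apples | 3 |
```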

Technical:
- Uses new Surya API (FoundationPredictor pattern)
- ZeroGPU support with @spaces.GPU decorators
- Lazy model loading for faster startup
- Modern dark UI with gradient theme
- Comprehensive error handling
- Performance optimizations for batch sizes
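
The lazy model loading above is the usual memoized-factory pattern: each predictor is built on first request and cached in a module-level slot, so startup stays fast and repeat calls are free. A minimal generic sketch (`FakePredictor` is a dummy standing in for Surya's predictor classes, which are expensive to construct):

```python
from typing import Any, Callable, Dict

# Module-level cache, one slot per predictor name.
_cache: Dict[str, Any] = {}

def lazy(name: str, factory: Callable[[], Any]) -> Any:
    """Construct the object on first use, then reuse the cached instance."""
    if name not in _cache:
        _cache[name] = factory()
    return _cache[name]

class FakePredictor:
    """Dummy stand-in for an expensive-to-load model predictor."""
    instances = 0
    def __init__(self):
        FakePredictor.instances += 1

p1 = lazy("detection", FakePredictor)
p2 = lazy("detection", FakePredictor)
print(p1 is p2, FakePredictor.instances)
# True 1
```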

Files changed (4)
  1. README.md +52 -6
  2. app.py +592 -219
  3. languages.json +0 -95
  4. requirements.txt +4 -2
README.md CHANGED
@@ -1,12 +1,58 @@
  ---
- title: Surya OCR
- emoji: 👀
- colorFrom: purple
- colorTo: green
  sdk: gradio
- sdk_version: 4.41.0
  app_file: app.py
  pinned: false
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

  ---
+ title: Surya OCR Studio
+ emoji: 📄
+ colorFrom: green
+ colorTo: blue
  sdk: gradio
+ sdk_version: 5.0.0
  app_file: app.py
  pinned: false
  ---

+ # Surya OCR Studio
+
+ Complete document OCR toolkit supporting 90+ languages with ZeroGPU.
+
+ ## Features
+
+ | Feature | Description |
+ |---------|-------------|
+ | **OCR** | Text recognition in 90+ languages |
+ | **Text Detection** | Line-level text detection |
+ | **Layout Analysis** | Identify tables, figures, headers, captions, etc. |
+ | **Table Recognition** | Extract table structure to Markdown |
+ | **LaTeX OCR** | Convert equation images to LaTeX |
+
+ ## Usage
+
+ ### OCR
+ 1. Upload an image (JPG, PNG, etc.)
+ 2. Specify language codes (e.g., "en" or "en, pt, es")
+ 3. Click "Run OCR"
+
+ ### Table Recognition
+ 1. Upload an image of a table
+ 2. Click "Recognize Table"
+ 3. Get Markdown output
+
+ ### LaTeX OCR
+ 1. Upload a cropped equation image
+ 2. Click "Extract LaTeX"
+ 3. Copy the LaTeX code
+
+ ## Supported Languages
+
+ English (en), Portuguese (pt), Spanish (es), French (fr), German (de), Italian (it), Dutch (nl), Russian (ru), Chinese (zh), Japanese (ja), Korean (ko), Arabic (ar), Hindi (hi), and 80+ more.
+
+ ## Model
+
+ - [Surya OCR](https://github.com/datalab-to/surya) by datalab-to
+
+ ## License
+
+ - Code: GPL-3.0
+ - Model weights: Modified AI Pubs Open Rail-M license (free for research, personal use, and startups under $2M funding/revenue)
+
+ ## Space by
+
+ [@artificialguybr](https://twitter.com/artificialguybr)
app.py CHANGED
@@ -1,249 +1,622 @@
  import gradio as gr
  import logging
  import os
  import json
- from PIL import Image, ImageDraw
  import torch
- from surya.ocr import run_ocr
- from surya.detection import batch_text_detection
- from surya.layout import batch_layout_detection
- from surya.ordering import batch_ordering
- from surya.model.detection.model import load_model as load_det_model, load_processor as load_det_processor
- from surya.model.recognition.model import load_model as load_rec_model
- from surya.model.recognition.processor import load_processor as load_rec_processor
  from surya.settings import settings
- from surya.model.ordering.processor import load_processor as load_order_processor
- from surya.model.ordering.model import load_model as load_order_model

- # TorchDynamo configuration
- torch._dynamo.config.capture_scalar_outputs = True

- # Logging configuration
- logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')
- logger = logging.getLogger(__name__)

- # Environment variable configuration
- logger.info("Configuring environment variables for performance optimization")
- os.environ["RECOGNITION_BATCH_SIZE"] = "512"
- os.environ["DETECTOR_BATCH_SIZE"] = "36"
- os.environ["ORDER_BATCH_SIZE"] = "32"
- os.environ["RECOGNITION_STATIC_CACHE"] = "true"
-
- # Model loading
- logger.info("Starting model loading...")
-
- try:
-     logger.debug("Loading detection model and processor...")
-     det_processor, det_model = load_det_processor(), load_det_model()
-     logger.debug("Detection model and processor loaded successfully")
- except Exception as e:
-     logger.error(f"Error loading detection model: {e}")
-     raise
-
- try:
-     logger.debug("Loading recognition model and processor...")
-     rec_model, rec_processor = load_rec_model(), load_rec_processor()
-     logger.debug("Recognition model and processor loaded successfully")
- except Exception as e:
-     logger.error(f"Error loading recognition model: {e}")
-     raise
-
- try:
-     logger.debug("Loading layout model and processor...")
-     layout_model = load_det_model(checkpoint=settings.LAYOUT_MODEL_CHECKPOINT)
-     layout_processor = load_det_processor(checkpoint=settings.LAYOUT_MODEL_CHECKPOINT)
-     logger.debug("Layout model and processor loaded successfully")
- except Exception as e:
-     logger.error(f"Error loading layout model: {e}")
-     raise
-
- try:
-     logger.debug("Loading ordering model and processor...")
-     order_model = load_order_model()
-     order_processor = load_order_processor()
-     logger.debug("Ordering model and processor loaded successfully")
- except Exception as e:
-     logger.error(f"Error loading ordering model: {e}")
-     raise
-
- logger.info("All models loaded successfully")
-
- # Recognition model compilation
- logger.info("Starting recognition model compilation...")
- try:
-     rec_model.decoder.model = torch.compile(rec_model.decoder.model)
-     logger.info("Recognition model compilation completed successfully")
- except Exception as e:
-     logger.error(f"Error during recognition model compilation: {e}")
-     logger.warning("Continuing without model compilation")
-
- class CustomJSONEncoder(json.JSONEncoder):
-     def default(self, obj):
-         if isinstance(obj, Image.Image):
-             return "Image object (not serializable)"
-         if hasattr(obj, '__dict__'):
-             return {k: self.default(v) for k, v in obj.__dict__.items()}
-         return str(obj)
-
- def serialize_result(result):
-     return json.dumps(result, cls=CustomJSONEncoder, indent=2)
-
- def draw_boxes(image, predictions, color=(255, 0, 0)):
      draw = ImageDraw.Draw(image)
-     if isinstance(predictions, list):
-         for pred in predictions:
-             if hasattr(pred, 'bboxes'):
-                 for bbox in pred.bboxes:
-                     draw.rectangle(bbox, outline=color, width=2)
-             elif hasattr(pred, 'bbox'):
-                 draw.rectangle(pred.bbox, outline=color, width=2)
-             elif hasattr(pred, 'polygon'):
-                 draw.polygon(pred.polygon, outline=color, width=2)
-     elif hasattr(predictions, 'bboxes'):
-         for bbox in predictions.bboxes:
-             draw.rectangle(bbox, outline=color, width=2)
      return image

- def ocr_workflow(image, langs):
-     logger.info(f"Starting OCR workflow with languages: {langs}")
      try:
-         image = Image.open(image.name)
-         logger.debug(f"Image loaded: {image.size}")
-         predictions = run_ocr([image], [langs.split(',')], det_model, det_processor, rec_model, rec_processor)

-         # Draw bounding boxes on the image
-         image_with_boxes = draw_boxes(image.copy(), predictions[0].text_lines)

-         # Format the OCR results
-         formatted_text = "\n".join([line.text for line in predictions[0].text_lines])

-         logger.info("OCR workflow completed successfully")
-         return serialize_result(predictions), image_with_boxes, formatted_text
      except Exception as e:
-         logger.error(f"Error during OCR workflow: {e}")
-         return serialize_result({"error": str(e)}), None, ""

- def text_detection_workflow(image):
-     logger.info("Starting text detection workflow")
      try:
-         image = Image.open(image.name)
-         logger.debug(f"Image loaded: {image.size}")
-         predictions = batch_text_detection([image], det_model, det_processor)
-
-         # Draw bounding boxes on the image
-         image_with_boxes = draw_boxes(image.copy(), predictions)
-
-         # Convert predictions to a serializable format
-         serializable_predictions = []
-         for pred in predictions:
-             serializable_pred = {
-                 'bboxes': [bbox.tolist() if hasattr(bbox, 'tolist') else bbox for bbox in pred.bboxes],
-                 'polygons': [poly.tolist() if hasattr(poly, 'tolist') else poly for poly in pred.polygons],
-                 'confidences': pred.confidences,
-                 'vertical_lines': [line.tolist() if hasattr(line, 'tolist') else line for line in pred.vertical_lines],
-                 'image_bbox': pred.image_bbox.tolist() if hasattr(pred.image_bbox, 'tolist') else pred.image_bbox
-             }
-             serializable_predictions.append(serializable_pred)
-
-         logger.info("Text detection workflow completed successfully")
-         return serialize_result(serializable_predictions), image_with_boxes
      except Exception as e:
-         logger.error(f"Error during text detection workflow: {e}")
-         return serialize_result({"error": str(e)}), None

- def layout_analysis_workflow(image):
-     logger.info("Starting layout analysis workflow")
      try:
-         image = Image.open(image.name)
-         logger.debug(f"Image loaded: {image.size}")
-         line_predictions = batch_text_detection([image], det_model, det_processor)
-         logger.debug(f"Line detection complete. Number of lines detected: {len(line_predictions[0].bboxes)}")
-         layout_predictions = batch_layout_detection([image], layout_model, layout_processor, line_predictions)
-
-         # Draw bounding boxes on the image
-         image_with_boxes = draw_boxes(image.copy(), layout_predictions[0], color=(0, 255, 0))
-
-         # Convert predictions to a serializable format
-         serializable_predictions = []
-         for pred in layout_predictions:
-             serializable_pred = {
-                 'bboxes': [
-                     {
-                         'bbox': bbox.bbox.tolist() if hasattr(bbox.bbox, 'tolist') else bbox.bbox,
-                         'polygon': bbox.polygon.tolist() if hasattr(bbox.polygon, 'tolist') else bbox.polygon,
-                         'confidence': bbox.confidence,
-                         'label': bbox.label
-                     } for bbox in pred.bboxes
-                 ],
-                 'image_bbox': pred.image_bbox.tolist() if hasattr(pred.image_bbox, 'tolist') else pred.image_bbox
-             }
-             serializable_predictions.append(serializable_pred)
-
-         logger.info("Layout analysis workflow completed successfully")
-         return serialize_result(serializable_predictions), image_with_boxes
      except Exception as e:
-         logger.error(f"Error during layout analysis workflow: {e}")
-         return serialize_result({"error": str(e)}), None

- def reading_order_workflow(image):
-     logger.info("Starting reading order workflow")
      try:
-         image = Image.open(image.name)
-         logger.debug(f"Image loaded: {image.size}")
-         line_predictions = batch_text_detection([image], det_model, det_processor)
-         logger.debug(f"Line detection complete. Number of lines detected: {len(line_predictions[0].bboxes)}")
-         layout_predictions = batch_layout_detection([image], layout_model, layout_processor, line_predictions)
-         logger.debug(f"Layout analysis complete. Number of layout elements: {len(layout_predictions[0].bboxes)}")
-         bboxes = [pred.bbox for pred in layout_predictions[0].bboxes]
-         order_predictions = batch_ordering([image], [bboxes], order_model, order_processor)
-
-         # Draw bounding boxes on the image
-         image_with_boxes = image.copy()
-         draw = ImageDraw.Draw(image_with_boxes)
-         for i, bbox in enumerate(order_predictions[0].bboxes):
-             draw.rectangle(bbox.bbox, outline=(0, 0, 255), width=2)
-             draw.text((bbox.bbox[0], bbox.bbox[1]), str(bbox.position), fill=(255, 0, 0))
-
-         logger.info("Reading order workflow completed successfully")
-         return serialize_result(order_predictions), image_with_boxes
      except Exception as e:
-         logger.error(f"Error during reading order workflow: {e}")
-         return serialize_result({"error": str(e)}), None

- with gr.Blocks(theme=gr.themes.Soft()) as demo:
-     gr.Markdown("# Document Analysis with Surya")

-     with gr.Tab("OCR"):
-         gr.Markdown("## Optical Character Recognition")
-         with gr.Row():
-             ocr_input = gr.File(label="Upload Image or PDF")
-             ocr_langs = gr.Textbox(label="Languages (comma-separated)", value="en")
-         ocr_button = gr.Button("Run OCR")
-         ocr_output = gr.JSON(label="OCR Results")
-         ocr_image = gr.Image(label="Image with Bounding Boxes")
-         ocr_text = gr.Textbox(label="Extracted Text", lines=10)
-         ocr_button.click(ocr_workflow, inputs=[ocr_input, ocr_langs], outputs=[ocr_output, ocr_image, ocr_text])
-
-     with gr.Tab("Text Detection"):
-         gr.Markdown("## Text Line Detection")
-         det_input = gr.File(label="Upload Image or PDF")
-         det_button = gr.Button("Run Text Detection")
-         det_output = gr.JSON(label="Text Detection Results")
-         det_image = gr.Image(label="Image with Bounding Boxes")
-         det_button.click(text_detection_workflow, inputs=det_input, outputs=[det_output, det_image])
-
-     with gr.Tab("Layout Analysis"):
-         gr.Markdown("## Layout Analysis and Reading Order")
-         layout_input = gr.File(label="Upload Image or PDF")
-         layout_button = gr.Button("Run Layout Analysis")
-         order_button = gr.Button("Determine Reading Order")
-         layout_output = gr.JSON(label="Layout Analysis Results")
-         layout_image = gr.Image(label="Image with Layout")
-         order_output = gr.JSON(label="Reading Order Results")
-         order_image = gr.Image(label="Image with Reading Order")
-         layout_button.click(layout_analysis_workflow, inputs=layout_input, outputs=[layout_output, layout_image])
-         order_button.click(reading_order_workflow, inputs=layout_input, outputs=[order_output, order_image])

  if __name__ == "__main__":
-     logger.info("Starting Gradio app...")
-     demo.launch()
1
+ """
2
+ Surya OCR Studio - Complete Implementation
3
+ Features: OCR, Text Detection, Layout Analysis, Table Recognition, LaTeX OCR
4
+ """
5
+
6
  import gradio as gr
7
+ import spaces
8
  import logging
9
  import os
10
  import json
11
+ from PIL import Image, ImageDraw, ImageFont
12
+ from typing import List, Optional
13
  import torch
14
+
15
+ # Configure logging
16
+ logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
17
+ logger = logging.getLogger(__name__)
18
+
19
+ # Performance optimizations for ZeroGPU
20
+ os.environ["RECOGNITION_BATCH_SIZE"] = "64"
21
+ os.environ["DETECTOR_BATCH_SIZE"] = "8"
22
+ os.environ["LAYOUT_BATCH_SIZE"] = "8"
23
+ os.environ["TABLE_REC_BATCH_SIZE"] = "16"
24
+
25
+ # Surya imports
26
+ from surya.foundation import FoundationPredictor
27
+ from surya.recognition import RecognitionPredictor
28
+ from surya.detection import DetectionPredictor
29
+ from surya.layout import LayoutPredictor
30
+ from surya.table_rec import TableRecPredictor
31
+ from surya.texify import TexifyPredictor
32
  from surya.settings import settings
 
 
33
 
34
+ logger.info("Loading Surya models...")
 
35
 
36
+ # Initialize predictors (lazy loading for faster startup)
37
+ _foundation_predictor = None
38
+ _detection_predictor = None
39
+ _recognition_predictor = None
40
+ _layout_predictor = None
41
+ _table_rec_predictor = None
42
+ _texify_predictor = None
43
+
44
+ def get_foundation_predictor():
45
+ global _foundation_predictor
46
+ if _foundation_predictor is None:
47
+ _foundation_predictor = FoundationPredictor()
48
+ return _foundation_predictor
49
+
50
+ def get_detection_predictor():
51
+ global _detection_predictor
52
+ if _detection_predictor is None:
53
+ _detection_predictor = DetectionPredictor()
54
+ return _detection_predictor
55
+
56
+ def get_recognition_predictor():
57
+ global _recognition_predictor
58
+ if _recognition_predictor is None:
59
+ _recognition_predictor = RecognitionPredictor(get_foundation_predictor())
60
+ return _recognition_predictor
61
+
62
+ def get_layout_predictor():
63
+ global _layout_predictor
64
+ if _layout_predictor is None:
65
+ _layout_predictor = LayoutPredictor(
66
+ FoundationPredictor(checkpoint=settings.LAYOUT_MODEL_CHECKPOINT)
67
+ )
68
+ return _layout_predictor
69
+
70
+ def get_table_rec_predictor():
71
+ global _table_rec_predictor
72
+ if _table_rec_predictor is None:
73
+ _table_rec_predictor = TableRecPredictor()
74
+ return _table_rec_predictor
75
+
76
+ def get_texify_predictor():
77
+ global _texify_predictor
78
+ if _texify_predictor is None:
79
+ _texify_predictor = TexifyPredictor()
80
+ return _texify_predictor
81
+
82
+ logger.info("Models will be loaded on first use.")
83
+
84
+ # Layout labels and colors
85
+ LAYOUT_LABELS = {
86
+ 'Text': '#10B981', # Green
87
+ 'Title': '#EF4444', # Red
88
+ 'Section-header': '#F59E0B', # Amber
89
+ 'Table': '#3B82F6', # Blue
90
+ 'Figure': '#8B5CF6', # Purple
91
+ 'Picture': '#8B5CF6', # Purple
92
+ 'Caption': '#EC4899', # Pink
93
+ 'Page-header': '#6366F1', # Indigo
94
+ 'Page-footer': '#6366F1', # Indigo
95
+ 'Footnote': '#84CC16', # Lime
96
+ 'Formula': '#F97316', # Orange
97
+ 'List-item': '#14B8A6', # Teal
98
+ 'Form': '#A855F7', # Fuchsia
99
+ 'Handwriting': '#64748B', # Slate
100
+ 'Table-of-contents': '#0EA5E9', # Sky
101
+ }
102
 
103
+ # Supported languages
104
+ LANGUAGES = {
105
+ "en": "English", "pt": "Portuguese", "es": "Spanish", "fr": "French",
106
+ "de": "German", "it": "Italian", "nl": "Dutch", "ru": "Russian",
107
+ "zh": "Chinese", "ja": "Japanese", "ko": "Korean", "ar": "Arabic",
108
+ "hi": "Hindi", "bn": "Bengali", "tr": "Turkish", "vi": "Vietnamese",
109
+ "th": "Thai", "id": "Indonesian", "pl": "Polish", "uk": "Ukrainian",
110
+ "cs": "Czech", "sv": "Swedish", "da": "Danish", "no": "Norwegian",
111
+ "fi": "Finnish", "el": "Greek", "he": "Hebrew", "hu": "Hungarian",
112
+ "ro": "Romanian", "sk": "Slovak", "bg": "Bulgarian", "hr": "Croatian",
113
+ "sl": "Slovenian", "et": "Estonian", "lv": "Latvian", "lt": "Lithuanian",
114
+ "fa": "Persian", "ur": "Urdu", "ta": "Tamil", "te": "Telugu",
115
+ "ml": "Malayalam", "kn": "Kannada", "gu": "Gujarati", "mr": "Marathi",
116
+ "pa": "Punjabi", "ne": "Nepali", "si": "Sinhala", "my": "Burmese",
117
+ "km": "Khmer", "lo": "Lao", "ka": "Georgian", "hy": "Armenian",
118
+ }
119
+
120
+
121
+ def prepare_image(image) -> Image.Image:
122
+ """Prepare image for processing"""
123
+ if isinstance(image, str):
124
+ image = Image.open(image)
125
+ elif hasattr(image, 'name'):
126
+ image = Image.open(image.name)
127
+
128
+ if image.mode != 'RGB':
129
+ image = image.convert('RGB')
130
+ return image
131
+
132
+
133
+ def draw_text_lines(image, text_lines, color=(0, 255, 0)):
134
+ """Draw text line bounding boxes"""
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
135
  draw = ImageDraw.Draw(image)
136
+ for line in text_lines:
137
+ if hasattr(line, 'bbox'):
138
+ bbox = line.bbox
139
+ if len(bbox) == 4:
140
+ draw.rectangle(bbox, outline=color, width=2)
 
 
 
 
 
 
 
141
  return image
142
 
143
+
144
+ def draw_layout_boxes(image, bboxes):
145
+ """Draw layout boxes with labels and colors"""
146
+ draw = ImageDraw.Draw(image)
147
+
148
+ for bbox in bboxes:
149
+ label = getattr(bbox, 'label', 'Text')
150
+ color = LAYOUT_LABELS.get(label, '#FFFFFF')
151
+
152
+ # Convert hex to RGB
153
+ rgb = tuple(int(color.lstrip('#')[i:i+2], 16) for i in (0, 2, 4))
154
+
155
+ box = bbox.bbox if hasattr(bbox, 'bbox') else bbox
156
+ if len(box) == 4:
157
+ draw.rectangle(box, outline=rgb, width=2)
158
+
159
+ # Draw label
160
+ label_text = label.replace('-', ' ').title()
161
+ draw.text((box[0], box[1] - 12), label_text, fill=rgb)
162
+
163
+ return image
164
+
165
+
166
+ def draw_table_cells(image, predictions):
167
+ """Draw table cells with row/column info"""
168
+ draw = ImageDraw.Draw(image)
169
+
170
+ # Draw rows in blue
171
+ for row in predictions.rows:
172
+ draw.rectangle(row.bbox, outline=(0, 0, 255), width=2)
173
+
174
+ # Draw columns in green
175
+ for col in predictions.cols:
176
+ draw.rectangle(col.bbox, outline=(0, 255, 0), width=2)
177
+
178
+ # Draw cells in red
179
+ for cell in predictions.cells:
180
+ draw.rectangle(cell.bbox, outline=(255, 0, 0), width=1)
181
+ # Draw cell text if available
182
+ if hasattr(cell, 'text') and cell.text:
183
+ draw.text((cell.bbox[0], cell.bbox[1]), cell.text[:10], fill=(100, 100, 100))
184
+
185
+ return image
186
+
187
+
188
+ @spaces.GPU(duration=120)
189
+ def process_ocr(image, languages: str, disable_math: bool = False):
190
+ """Run OCR on image"""
191
+ logger.info(f"Running OCR with languages: {languages}")
192
  try:
193
+ image = prepare_image(image)
194
+
195
+ # Parse languages
196
+ langs = [l.strip() for l in languages.split(',') if l.strip()]
197
+ if not langs:
198
+ langs = ['en']
199
+
200
+ # Get predictors
201
+ det_pred = get_detection_predictor()
202
+ rec_pred = get_recognition_predictor()
203
+
204
+ # Run OCR
205
+ predictions = rec_pred(
206
+ [image],
207
+ det_predictor=det_pred,
208
+ langs=[langs],
209
+ disable_math=disable_math
210
+ )
211
 
212
+ if not predictions or len(predictions) == 0:
213
+ return "", {}, None
214
 
215
+ pred = predictions[0]
216
+
217
+ # Extract text
218
+ text_lines = pred.text_lines if hasattr(pred, 'text_lines') else []
219
+ full_text = "\n".join([line.text for line in text_lines if hasattr(line, 'text')])
220
+
221
+ # Build JSON result
222
+ result = {
223
+ "text": full_text,
224
+ "languages": langs,
225
+ "num_lines": len(text_lines),
226
+ "lines": [
227
+ {
228
+ "text": line.text,
229
+ "confidence": round(line.confidence, 3) if hasattr(line, 'confidence') else 1.0,
230
+ "bbox": list(line.bbox) if hasattr(line, 'bbox') else []
231
+ }
232
+ for line in text_lines
233
+ ]
234
+ }
235
+
236
+ # Draw bounding boxes
237
+ img_with_boxes = draw_text_lines(image.copy(), text_lines)
238
+
239
+ return full_text, result, img_with_boxes
240
 
 
 
241
  except Exception as e:
242
+ logger.error(f"OCR Error: {e}", exc_info=True)
243
+ return f"Error: {str(e)}", {"error": str(e)}, None
244
 
245
+
246
+ @spaces.GPU(duration=60)
247
+ def process_detection(image):
248
+ """Run text detection on image"""
249
+ logger.info("Running text detection")
250
  try:
251
+ image = prepare_image(image)
252
+
253
+ det_pred = get_detection_predictor()
254
+ predictions = det_pred([image])
255
+
256
+ if not predictions or len(predictions) == 0:
257
+ return {}, None
258
+
259
+ pred = predictions[0]
260
+
261
+ # Build result
262
+ result = {
263
+ "num_lines": len(pred.bboxes) if hasattr(pred, 'bboxes') else 0,
264
+ "image_size": list(image.size),
265
+ "bboxes": [
266
+ {
267
+ "bbox": list(bbox.bbox) if hasattr(bbox, 'bbox') else list(bbox),
268
+ "confidence": round(bbox.confidence, 3) if hasattr(bbox, 'confidence') else 1.0
269
+ }
270
+ for bbox in (pred.bboxes if hasattr(pred, 'bboxes') else [])
271
+ ]
272
+ }
273
+
274
+ # Draw boxes
275
+ img_with_boxes = image.copy()
276
+ draw = ImageDraw.Draw(img_with_boxes)
277
+ for bbox in (pred.bboxes if hasattr(pred, 'bboxes') else []):
278
+ box = bbox.bbox if hasattr(bbox, 'bbox') else bbox
279
+ draw.rectangle(box, outline=(0, 255, 0), width=2)
280
+
281
+ return result, img_with_boxes
282
+
283
  except Exception as e:
284
+ logger.error(f"Detection Error: {e}", exc_info=True)
285
+ return {"error": str(e)}, None
286
 
287
+
288
+ @spaces.GPU(duration=60)
289
+ def process_layout(image):
290
+ """Run layout analysis on image"""
291
+ logger.info("Running layout analysis")
292
  try:
293
+ image = prepare_image(image)
294
+
295
+ layout_pred = get_layout_predictor()
296
+ predictions = layout_pred([image])
297
+
298
+ if not predictions or len(predictions) == 0:
299
+ return {}, None
300
+
301
+ pred = predictions[0]
302
+
303
+ # Count by label
304
+ label_counts = {}
305
+ for bbox in (pred.bboxes if hasattr(pred, 'bboxes') else []):
306
+ label = getattr(bbox, 'label', 'Unknown')
307
+ label_counts[label] = label_counts.get(label, 0) + 1
308
+
309
+ # Build result
310
+ result = {
311
+ "num_elements": len(pred.bboxes) if hasattr(pred, 'bboxes') else 0,
312
+ "label_counts": label_counts,
313
+ "elements": [
314
+ {
315
+ "label": getattr(bbox, 'label', 'Unknown'),
316
+ "confidence": round(getattr(bbox, 'confidence', 1.0), 3),
317
+ "position": getattr(bbox, 'position', -1),
318
+ "bbox": list(bbox.bbox) if hasattr(bbox, 'bbox') else []
319
+ }
320
+ for bbox in (pred.bboxes if hasattr(pred, 'bboxes') else [])
321
+ ]
322
+ }
323
+
324
+ # Draw layout boxes
325
+ img_with_boxes = draw_layout_boxes(image.copy(), pred.bboxes if hasattr(pred, 'bboxes') else [])
326
+
327
+ return result, img_with_boxes
328
+
329
  except Exception as e:
330
+ logger.error(f"Layout Error: {e}", exc_info=True)
331
+ return {"error": str(e)}, None
332
+
333
 
334
+ @spaces.GPU(duration=90)
335
+ def process_table(image):
336
+ """Run table recognition on image"""
337
+ logger.info("Running table recognition")
338
  try:
339
+ image = prepare_image(image)
340
+
341
+ table_pred = get_table_rec_predictor()
342
+ predictions = table_pred([image])
343
+
344
+ if not predictions or len(predictions) == 0:
345
+ return {}, None, ""
346
+
347
+ pred = predictions[0]
348
+
349
+ # Build markdown table
350
+ md_table = ""
351
+ if hasattr(pred, 'cells') and pred.cells:
352
+ # Find max row and col
353
+ max_row = max(c.row_id for c in pred.cells) if pred.cells else 0
354
+ max_col = max(c.col_id for c in pred.cells) if pred.cells else 0
355
+
356
+ # Create table data
357
+ table_data = [["" for _ in range(max_col + 1)] for _ in range(max_row + 1)]
358
+
359
+ for cell in pred.cells:
360
+ text = getattr(cell, 'text', '')
361
+ table_data[cell.row_id][cell.col_id] = text
362
+
363
+ # Build markdown
364
+ md_lines = []
365
+ for i, row in enumerate(table_data):
366
+ md_lines.append("| " + " | ".join(row) + " |")
367
+ if i == 0:
368
+ md_lines.append("| " + " | ".join(["---"] * len(row)) + " |")
369
+
370
+ md_table = "\n".join(md_lines)
371
+
372
+ # Build result
373
+ result = {
374
+ "num_rows": len(pred.rows) if hasattr(pred, 'rows') else 0,
375
+ "num_cols": len(pred.cols) if hasattr(pred, 'cols') else 0,
376
+ "num_cells": len(pred.cells) if hasattr(pred, 'cells') else 0,
377
+ "rows": [
378
+ {"row_id": r.row_id, "is_header": getattr(r, 'is_header', False)}
379
+ for r in (pred.rows if hasattr(pred, 'rows') else [])
380
+ ],
381
+ "cols": [
382
+ {"col_id": c.col_id, "is_header": getattr(c, 'is_header', False)}
383
+ for c in (pred.cols if hasattr(pred, 'cols') else [])
384
+ ]
385
+ }
386
+
387
+ # Draw table cells
388
+ img_with_boxes = draw_table_cells(image.copy(), pred)
389
+
390
+ return result, img_with_boxes, md_table
391
+
392
+ except Exception as e:
393
+ logger.error(f"Table Recognition Error: {e}", exc_info=True)
394
+ return {"error": str(e)}, None, ""
395
+
396
+
397
+ @spaces.GPU(duration=60)
398
+ def process_latex(image):
399
+ """Run LaTeX OCR on image (equation)"""
400
+ logger.info("Running LaTeX OCR")
401
+ try:
402
+ image = prepare_image(image)
403
+
404
+ texify_pred = get_texify_predictor()
405
+ predictions = texify_pred([image])
406
+
407
+ if not predictions or len(predictions) == 0:
408
+ return "", {}
409
+
410
+ pred = predictions[0]
411
+ latex = pred.text if hasattr(pred, 'text') else str(pred)
412
+
413
+ result = {
414
+ "latex": latex,
415
+ "markdown": f"$$\n{latex}\n$$"
416
+ }
417
+
418
+ return latex, result
419
+
420
  except Exception as e:
421
+ logger.error(f"LaTeX OCR Error: {e}", exc_info=True)
422
+ return f"Error: {str(e)}", {"error": str(e)}
423
+
424
+
425
+ # ============== GRADIO UI ==============
426
+
427
+ CSS = """
428
+ @import url('https://fonts.googleapis.com/css2?family=Outfit:wght@300;400;500;600;700;800&family=Fira+Code:wght@400;500&display=swap');
429
+
430
+ :root {
431
+ --bg: #0a0f1a;
432
+ --surf: #0f1629;
433
+ --card: #151d32;
434
+ --border: #1e2a45;
435
+ --border2: #2a3a5a;
436
+ --green: #10b981;
437
+ --blue: #3b82f6;
438
+ --text: #e2e8f0;
439
+ --muted: #64748b;
440
+ }
441
+
442
+ body, .gradio-container {
443
+ background: var(--bg) !important;
444
+ font-family: 'Outfit', sans-serif !important;
445
+ color: var(--text) !important;
446
+ }
447
+
448
+ .gradio-container::before {
449
+ content: '';
450
+ position: fixed; inset: 0; pointer-events: none; z-index: 0;
451
+ background: radial-gradient(ellipse 70% 50% at 50% -10%, rgba(16,185,129,0.08) 0%, transparent 65%);
452
+ }
453
+
454
+ .app-hero { padding: 40px 0 20px; text-align: center; }
455
+ .app-hero h1 {
456
+ font-size: 2.8rem; font-weight: 800; letter-spacing: -0.04em;
457
+ background: linear-gradient(135deg, #10b981, #06b6d4, #3b82f6);
458
+ -webkit-background-clip: text; -webkit-text-fill-color: transparent;
459
+ margin-bottom: 8px;
460
+ }
461
+ .app-hero .tagline { color: var(--muted); font-size: 1rem; }
462
+ .pills { display: flex; justify-content: center; gap: 8px; margin-top: 16px; flex-wrap: wrap; }
463
+ .pill {
464
+ background: var(--card); border: 1px solid var(--border2); border-radius: 100px;
465
+ padding: 5px 14px; font-size: 0.75rem; color: var(--muted); font-family: 'Fira Code', monospace;
466
+ }
467
+ .pill.green { color: var(--green); border-color: rgba(16,185,129,0.3); }
468
+
469
+ .tabs button { font-family: 'Outfit', sans-serif !important; font-weight: 500 !important; }
470
+
471
+ button.primary-btn {
472
+ background: linear-gradient(135deg, #10b981, #06b6d4) !important;
473
+ border: none !important; color: #000 !important; font-weight: 600 !important;
474
+ padding: 12px 24px !important;
475
+ }
476
+
477
+ .output-image img { border-radius: 8px; }
478
+
479
+ footer { display: none !important; }
480
+ """
481
 
+ # Language options for the dropdown
+ LANGUAGE_OPTIONS = [f"{code} - {name}" for code, name in sorted(LANGUAGES.items(), key=lambda x: x[1])]
+
+ with gr.Blocks(theme=gr.themes.Base(), css=CSS, title="Surya OCR Studio") as app:
+
+ gr.HTML("""
+ <div class="app-hero">
+ <h1>📄 Surya OCR Studio</h1>
+ <p class="tagline">Document OCR · Layout Analysis · Table Recognition · 90+ Languages</p>
+ <div class="pills">
+ <span class="pill green">ZeroGPU ⚡</span>
+ <span class="pill">90+ Languages</span>
+ <span class="pill">Layout Detection</span>
+ <span class="pill">Table Recognition</span>
+ <span class="pill">LaTeX OCR</span>
+ </div>
+ </div>
+ """)
 
+ with gr.Tabs() as tabs:
+
+ # ============ OCR TAB ============
+ with gr.TabItem("📝 OCR"):
+ gr.Markdown("### Optical Character Recognition\nExtract text from images in 90+ languages.")
+
+ with gr.Row():
+ with gr.Column(scale=2):
+ ocr_input = gr.Image(label="📄 Upload Image", type="pil", height=400)
+
+ with gr.Row():
+ ocr_langs = gr.Textbox(
+ label="Languages (comma-separated codes)",
+ value="en",
+ placeholder="en, pt, es, de, fr...",
+ info="Use language codes: en=English, pt=Portuguese, es=Spanish..."
+ )
+ ocr_disable_math = gr.Checkbox(label="Disable math detection", value=False)
+
+ ocr_btn = gr.Button("🚀 Run OCR", variant="primary", elem_classes=["primary-btn"])
+
+ with gr.Column(scale=3):
+ ocr_text = gr.Textbox(label="📝 Extracted Text", lines=12, show_copy_button=True)
+ ocr_json = gr.JSON(label="📊 Detailed Results")
+
+ ocr_image = gr.Image(label="🖼️ Detected Text Lines", elem_classes=["output-image"])
+
+ ocr_btn.click(
+ process_ocr,
+ [ocr_input, ocr_langs, ocr_disable_math],
+ [ocr_text, ocr_json, ocr_image]
+ )
+
+ # ============ TEXT DETECTION TAB ============
+ with gr.TabItem("🔍 Text Detection"):
+ gr.Markdown("### Text Line Detection\nDetect text lines in documents without OCR.")
+
+ with gr.Row():
+ det_input = gr.Image(label="📄 Upload Image", type="pil", height=400)
+ with gr.Column():
+ det_json = gr.JSON(label="📊 Detection Results")
+ det_btn = gr.Button("🔍 Detect Text Lines", variant="primary", elem_classes=["primary-btn"])
+
+ det_image = gr.Image(label="🖼️ Detected Lines", elem_classes=["output-image"])
+
+ det_btn.click(process_detection, [det_input], [det_json, det_image])
+
+ # ============ LAYOUT ANALYSIS TAB ============
+ with gr.TabItem("📊 Layout Analysis"):
+ gr.Markdown("### Document Layout Analysis\nIdentify document structure: titles, tables, figures, etc.")
+
+ with gr.Row():
+ layout_input = gr.Image(label="📄 Upload Image", type="pil", height=400)
+ with gr.Column():
+ layout_json = gr.JSON(label="📊 Layout Results")
+ layout_btn = gr.Button("📊 Analyze Layout", variant="primary", elem_classes=["primary-btn"])
+
+ layout_image = gr.Image(label="🖼️ Layout Elements", elem_classes=["output-image"])
+
+ # Legend
+ gr.Markdown("""
+ **Legend:**
+ 🟢 Text | 🔴 Title | 🟡 Section Header | 🔵 Table | 🟣 Figure/Picture | 🩷 Caption | 🔷 Header/Footer
+ """)
+
+ layout_btn.click(process_layout, [layout_input], [layout_json, layout_image])
+
+ # ============ TABLE RECOGNITION TAB ============
+ with gr.TabItem("📋 Table Recognition"):
+ gr.Markdown("### Table Recognition\nExtract table structure and convert to Markdown.")
+
+ with gr.Row():
+ table_input = gr.Image(label="📄 Upload Table Image", type="pil", height=400)
+ with gr.Column():
+ table_json = gr.JSON(label="📊 Table Structure")
+ table_btn = gr.Button("📋 Recognize Table", variant="primary", elem_classes=["primary-btn"])
+
+ table_image = gr.Image(label="🖼️ Table Cells", elem_classes=["output-image"])
+ table_md = gr.Textbox(label="📝 Markdown Output", lines=10, show_copy_button=True)
+
+ table_btn.click(process_table, [table_input], [table_json, table_image, table_md])
+
+ # ============ LATEX OCR TAB ============
+ with gr.TabItem("🔢 LaTeX OCR"):
+ gr.Markdown("### LaTeX Equation OCR\nConvert equation images to LaTeX code.\n\n**Tip:** Crop the image to just the equation for best results.")
+
+ with gr.Row():
+ latex_input = gr.Image(label="📄 Upload Equation Image", type="pil", height=300)
+ with gr.Column():
+ latex_code = gr.Textbox(label="🔢 LaTeX Code", lines=5, show_copy_button=True)
+ latex_json = gr.JSON(label="📊 Results")
+ latex_btn = gr.Button("🔢 Extract LaTeX", variant="primary", elem_classes=["primary-btn"])
+
+ latex_btn.click(process_latex, [latex_input], [latex_code, latex_json])
+
+ # ============ FOOTER ============
+ gr.Markdown("""
+ ---
+
+ ### ℹ️ About Surya OCR
+
+ | Feature | Description |
+ |---------|-------------|
+ | **OCR** | Text recognition in 90+ languages |
+ | **Detection** | Line-level text detection |
+ | **Layout** | Identify tables, figures, headers, etc. |
+ | **Tables** | Extract table structure to Markdown |
+ | **LaTeX** | Convert equations to LaTeX |
+
+ **Performance Tips:**
+ - Use higher resolution images for better accuracy
+ - For blurry text, try preprocessing (binarization, deskewing)
+ - Specify correct language codes for best OCR results
+
+ **Model:** [Surya OCR](https://github.com/datalab-to/surya) by datalab-to
+ **Space by:** [@artificialguybr](https://twitter.com/artificialguybr)
+ """)
+
 
  if __name__ == "__main__":
+ app.queue(max_size=20)
+ app.launch()
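The footer's preprocessing tip (binarization for blurry text) can be sketched in pure Python. This is an illustrative sketch only, not code from the app or from Surya; in practice the same idea would be applied to a Pillow image before passing it to the OCR predictor.

```python
# Sketch of the binarization tip from the footer: map grayscale pixel
# values (0-255) to pure black/white before OCR. Pure-Python illustration.
def binarize(pixels, threshold=128):
    """Map each grayscale value to 0 (black) or 255 (white)."""
    return [255 if p > threshold else 0 for p in pixels]

row = [12, 90, 130, 200, 255]
print(binarize(row))  # → [0, 0, 255, 255, 255]
```

Values at or below the threshold become black, everything else white; tuning the threshold (or using an adaptive method) is what makes faint scans legible to the recognizer.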
languages.json DELETED
@@ -1,95 +0,0 @@
- {
- "Afrikaans": "af",
- "Amharic": "am",
- "Arabic": "ar",
- "Assamese": "as",
- "Azerbaijani": "az",
- "Belarusian": "be",
- "Bulgarian": "bg",
- "Bengali": "bn",
- "Breton": "br",
- "Bosnian": "bs",
- "Catalan": "ca",
- "Czech": "cs",
- "Welsh": "cy",
- "Danish": "da",
- "German": "de",
- "Greek": "el",
- "English": "en",
- "Esperanto": "eo",
- "Spanish": "es",
- "Estonian": "et",
- "Basque": "eu",
- "Persian": "fa",
- "Finnish": "fi",
- "French": "fr",
- "Western Frisian": "fy",
- "Irish": "ga",
- "Scottish Gaelic": "gd",
- "Galician": "gl",
- "Gujarati": "gu",
- "Hausa": "ha",
- "Hebrew": "he",
- "Hindi": "hi",
- "Croatian": "hr",
- "Hungarian": "hu",
- "Armenian": "hy",
- "Indonesian": "id",
- "Icelandic": "is",
- "Italian": "it",
- "Japanese": "ja",
- "Javanese": "jv",
- "Georgian": "ka",
- "Kazakh": "kk",
- "Khmer": "km",
- "Kannada": "kn",
- "Korean": "ko",
- "Kurdish": "ku",
- "Kyrgyz": "ky",
- "Latin": "la",
- "Lao": "lo",
- "Lithuanian": "lt",
- "Latvian": "lv",
- "Malagasy": "mg",
- "Macedonian": "mk",
- "Malayalam": "ml",
- "Mongolian": "mn",
- "Marathi": "mr",
- "Malay": "ms",
- "Burmese": "my",
- "Nepali": "ne",
- "Dutch": "nl",
- "Norwegian": "no",
- "Oromo": "om",
- "Oriya": "or",
- "Punjabi": "pa",
- "Polish": "pl",
- "Pashto": "ps",
- "Portuguese": "pt",
- "Romanian": "ro",
- "Russian": "ru",
- "Sanskrit": "sa",
- "Sindhi": "sd",
- "Sinhala": "si",
- "Slovak": "sk",
- "Slovenian": "sl",
- "Somali": "so",
- "Albanian": "sq",
- "Serbian": "sr",
- "Sundanese": "su",
- "Swedish": "sv",
- "Swahili": "sw",
- "Tamil": "ta",
- "Telugu": "te",
- "Thai": "th",
- "Tagalog": "tl",
- "Turkish": "tr",
- "Uyghur": "ug",
- "Ukrainian": "uk",
- "Urdu": "ur",
- "Uzbek": "uz",
- "Vietnamese": "vi",
- "Xhosa": "xh",
- "Yiddish": "yi",
- "Chinese": "zh"
- }

requirements.txt CHANGED
@@ -1,3 +1,5 @@
- torch
  surya-ocr
- pillow
+ gradio>=4.0.0
+ pillow
+ torch
+ spaces
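The `LANGUAGE_OPTIONS` comprehension added in app.py (building `"code - Name"` entries sorted by language name) can be sketched in isolation. The three-entry `LANGUAGES` dict below is illustrative only, not the full 90+ mapping the app now defines inline after deleting languages.json.

```python
# Minimal sketch of the LANGUAGE_OPTIONS pattern from app.py.
# LANGUAGES here is a small illustrative subset (code -> name).
LANGUAGES = {"en": "English", "pt": "Portuguese", "de": "German"}

# Sort by language name so the dropdown reads alphabetically.
LANGUAGE_OPTIONS = [
    f"{code} - {name}"
    for code, name in sorted(LANGUAGES.items(), key=lambda x: x[1])
]
print(LANGUAGE_OPTIONS)  # → ['en - English', 'de - German', 'pt - Portuguese']

# Recovering the language code from a selected option:
code = LANGUAGE_OPTIONS[0].split(" - ", 1)[0]
print(code)  # → 'en'
```

Keeping the code as the prefix makes parsing the user's selection back to a Surya language code a one-line split.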