I managed to retrain your approach on my own dataset and it performs quite well! However, the runtime/inference speed seems to be slower compared with several other approaches (e.g. EAST) - especially when ported to a non-GPU setup. Do you have any hints/ideas on how to improve the inference speed? Could the model be retrained differently to better fit smaller inference scales?