While transcription is a mature industry for English and popular Western European languages, trained resources for other languages are scarce because demand for them is low. As a result, most LSPs cannot easily scale production at an affordable cost.
Our client needed to deliver output fast but did not have the luxury of training each transcriber individually. Even when they hired native speakers who could transcribe fairly accurately, the output still fell short of the client's quality standards. Transcription has a learning curve before workers can produce quality results, so hiring native speakers alone does not guarantee quality.
Furthermore, multilingual machine-training transcription work differs from regular transcription jobs: it requires transcribers to follow client-provided Word Domain Convention (WDC) guidelines, which are generally lengthy and complicated even for native speakers. For example, transcribers must apply annotation and speaker segmentation rules accurately in order to train machine transcription technologies.
Seamless integration with the client’s work processes
Tool-agnostic approach for work harmony and faster outputs
Hybrid workforce consisting of specialist annotators and native transcribers
Audio Bee’s trained resources, with prior experience in machine transcription guidelines, were perfectly suited to tackle this project from the get-go. We also have a strong understanding of the annotation and speaker segmentation rules such clients require, and were therefore able to provide quality output from the start.
The client saw quality output within the first week of the project, even though our resources first had to pass the client’s testing procedure, which caused delays. Our first-task submission time for the first few languages was as long as two weeks, but by the tenth language we had reduced it to 1–3 days, depending on task length and difficulty.
Our team understood the client’s feedback on our transcribers’ errors, communicated it to them, and did a much better job of preventing those errors in subsequent tasks. Our annotation team is our most experienced, since the same resources are used across multiple languages, so we were able to incorporate quality-improvement feedback very quickly. Every client has its own nuances, and our experience in training machine transcription technologies helps us understand them well; this client interpreted some rules differently, and we incorporated those interpretations into our tasks easily.
The client wanted us to keep increasing our output for a few languages they had a lot of data for, and they depended heavily on our team for delivery. Good resources for some of these languages were hard to find for multiple reasons, so we kept developing new ways to source them, and we kept improving our training processes to get new hires producing quality output quickly.