The Google Translate smartphone application shows an interesting use of augmented reality. For some languages, it can translate in real time the text captured by the camera and display the translation overlaid on the original text.
The interface is pretty simple to use. The user selects the source and target languages and hits the camera button. It is impressive how quickly the system recognizes characters, even from writing systems other than Latin, such as Cyrillic and Greek. The quality of the translation depends on how many words are recognized, since the translation algorithms use context to improve the result. To get good character recognition, it is important to hold the phone still. In any case, even when not all the words are recognized, the user can at least instantly get a clue about text that would otherwise be completely incomprehensible.
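The loop described above, recognize text in the frame, translate it, and draw the translation over the original, can be sketched roughly as follows. This is only an illustrative outline, not Google's actual implementation: the helper functions `recognize_text`, `translate`, and `overlay` are hypothetical stand-ins.

```python
# Hypothetical sketch of an AR translation loop.
# None of these helpers correspond to a real Google API.

def recognize_text(frame):
    """OCR step: return (text, bounding_box) pairs found in the frame."""
    return [("Привет", (10, 20, 80, 40))]  # stubbed detection

def translate(words, src="ru", dst="en"):
    """Translation step; real systems use surrounding words as context."""
    dictionary = {"Привет": "Hello"}  # toy lookup instead of a real model
    return [dictionary.get(w, w) for w in words]

def overlay(frame, boxes, translations):
    """Draw each translated string over the original text's bounding box."""
    for box, text in zip(boxes, translations):
        frame.append((box, text))  # a real app would render pixels here
    return frame

# One iteration of the camera loop:
frame = []  # stand-in for a camera image
detections = recognize_text(frame)
words = [text for text, _ in detections]
boxes = [box for _, box in detections]
frame = overlay(frame, boxes, translate(words))
print(frame)  # [((10, 20, 80, 40), 'Hello')]
```

In a real app this loop would run on every camera frame, which is why the computational power discussed below matters so much.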
This application can be really useful when we are in a foreign country where a language we do not know is spoken. It can be used to translate a whole restaurant menu at once, or a sign we find on the street, without having to type the text into the translator app.
An application like this shows that our mobile devices have finally reached enough computational power to cope with real-time data processing and visualization. This was the biggest obstacle for augmented reality until a few years ago: it was hard to find a mobile device powerful enough to compute both the tracking and the graphics decently in real time.
Nowadays, devices built specifically for AR have been developed, such as Google Glass and Microsoft's HoloLens, and in the future we could even have contact lenses able to display virtual content seamlessly within reality. I think AR technology will play a big role in the future, as long as it is well integrated. It has to be as unobtrusive as possible in order to help the common user with everyday tasks. An application like Google Translate would be really useful if integrated into such devices, but the user should keep a certain control over it. It could become annoying if the technology always intervened to help us, even when we do not need it. For example, if one day this were integrated into contact lenses, an artificial intelligence should be able to understand when the user needs a translation and when they do not.
Here are some examples of Google Translate's AR feature: