Clevercast Translate@Home now supports remote simultaneous interpretation for hybrid events. This means that simultaneous translation via T@H is available to two kinds of users:
- live stream viewers, who can watch the translated video stream in large numbers via our video player, which can be embedded on a site or platform of your choice
- event participants, at the event location or at home, who need to hear the translation in real time (with or without video)
Live stream viewers, who don’t participate in the event, benefit from using the HTTP Live Streaming (HLS) protocol. This protocol typically introduces a latency of about 20 seconds, which has a number of advantages:
- It is fully scalable and designed for global HD live streaming without interruptions. Global CDNs bring the live stream as close as possible to each viewer’s location.
- The latency allows the video player to buffer ahead, which ensures smooth streaming at all times, even with an unstable connection or changing bandwidth conditions.
- Viewer experience is optimal on all devices. Clevercast Player can choose between multiple resolutions (typically 1080p, 720p, 480p, 360p and 240p) and automatically switch to the most suitable one, depending on the screen size, bandwidth, GPU and CPU of each viewer (see the playback sketch below).
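For readers who want to see what this adaptive behaviour looks like in code, here is a minimal playback sketch using the open-source hls.js library. The manifest URL is a hypothetical placeholder and the snippet is purely illustrative; the embedded Clevercast Player handles all of this for you.

```typescript
import Hls from "hls.js";

// Hypothetical HLS manifest URL; in practice the embedded Clevercast Player
// resolves the correct stream for each viewer.
const manifestUrl = "https://cdn.example.com/live/event/master.m3u8";

const video = document.querySelector<HTMLVideoElement>("#player")!;

if (Hls.isSupported()) {
  const hls = new Hls(); // adaptive bitrate switching is enabled by default
  hls.loadSource(manifestUrl);
  hls.attachMedia(video);
  // Log when the player switches rendition (e.g. 1080p -> 720p) due to bandwidth.
  hls.on(Hls.Events.LEVEL_SWITCHED, (_event, data) => {
    console.log("Now playing quality level", data.level);
  });
} else if (video.canPlayType("application/vnd.apple.mpegurl")) {
  // Safari on iOS plays HLS natively, so no extra library is needed.
  video.src = manifestUrl;
}
```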
Event participants, on the other hand, need to receive the audio and video streams without any noticeable latency. This is possible by sending the incoming streams directly via the WebRTC protocol, which is suited for delivering video and audio to a limited number of participants in real time.
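To illustrate how a browser receives such a stream with sub-second delay, here is a simplified receive-only sketch using the standard WebRTC browser API. The signaling helper `exchangeWithServer` is hypothetical; Clevercast’s real-time players handle this entire exchange behind the scenes.

```typescript
// Simplified receive-only WebRTC sketch. Signaling is application-specific:
// `exchangeWithServer` is a hypothetical helper, not part of Clevercast or the browser.
declare function exchangeWithServer(
  offer: RTCSessionDescription
): Promise<RTCSessionDescriptionInit>;

const pc = new RTCPeerConnection();

// Ask to receive audio and video without sending anything back.
pc.addTransceiver("audio", { direction: "recvonly" });
pc.addTransceiver("video", { direction: "recvonly" });

// Play the incoming stream as soon as the tracks arrive.
pc.ontrack = (event) => {
  const player = document.querySelector<HTMLVideoElement>("#participant-player")!;
  player.srcObject = event.streams[0];
  void player.play();
};

// Standard offer/answer exchange with the media server.
const offer = await pc.createOffer();
await pc.setLocalDescription(offer);
const answer = await exchangeWithServer(pc.localDescription!);
await pc.setRemoteDescription(answer);
```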
How does it work?
Your interpreters use Translate@Home for remote simultaneous interpretation. Their translation is delivered both to event participants (in real time) and to live stream viewers (with a slight delay for smooth streaming).
Live stream viewers can watch the video and select their preferred translation via Clevercast Player, which you can embed on your site or a third-party platform. It doesn’t require authentication and comes with analytics about your viewers and their viewing behaviour. This works on all browsers and devices (iOS, Android).
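If you integrate the player on your own site, the embed usually amounts to placing an iframe on the page. The sketch below only shows the general idea with a hypothetical embed URL; use the actual embed code from your Clevercast account.

```typescript
// Generic iframe embed sketch. The URL below is a hypothetical placeholder:
// copy the actual embed code from your Clevercast account instead.
const embedUrl = "https://player.example.com/embed/your-event-id";

const iframe = document.createElement("iframe");
iframe.src = embedUrl;
iframe.allow = "autoplay; fullscreen"; // let the player autoplay and go fullscreen
iframe.allowFullscreen = true;
iframe.style.width = "100%";
iframe.style.setProperty("aspect-ratio", "16 / 9"); // responsive 16:9 player area
iframe.style.border = "0";

document.getElementById("player-container")?.appendChild(iframe);
```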
Event participants receive a private link for a certain language, which administrators can copy directly from Clevercast. When they open this link in a browser, they get direct access to a Clevercast player for that language. The player can be a video player (for participants at home) or an audio-only player (for participants listening to the translation at the event location). This works on most modern browsers and devices (iOS, Android).
Real-time transmission to event participants
If the ‘hybrid participants’ option is enabled for your plan, scroll down on the Audio Languages detail page (left-hand menu) to the ‘Real-time Participant Links’ panel. This panel contains a secure link per language for viewing and/or listening in real time. As an event manager, you only need to distribute these links to the relevant participants.
When the event starts, participants only need to open these links in a browser with sufficient WebRTC support (Chrome, Firefox, Safari, Edge) and press the play button. They automatically hear the event in the correct language, with or without video (depending on the selected link). Note that the number of real-time participants is limited; the maximum is determined by your plan.
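If you want to verify up front that a participant’s browser can open these links, a simple feature check along these lines is usually enough (an illustrative sketch, not part of Clevercast):

```typescript
// Rough feature check: the real-time players need WebRTC support in the browser.
function supportsWebRTC(): boolean {
  return typeof RTCPeerConnection !== "undefined";
}

if (!supportsWebRTC()) {
  console.warn(
    "Please open this link in a recent version of Chrome, Firefox, Safari or Edge."
  );
}
```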
Video and audio
A video + audio player is ideal for participants at home, for example speakers who are added to the live stream via production software like vMix or an in-browser studio like StreamYard. This allows them to see the event video in real time, while listening to the simultaneous translation in their own language.
Note: if a real-time participant already receives the video feed in real time (e.g. via Microsoft Teams), it may be better to send them the audio-only player instead. Since the audio-only player only carries the simultaneous interpretation, the participant won’t hear themselves speak. See the ‘Behaviour of the real-time players’ section below for more info.
Audio-only
An audio-only player is ideal for participants at the event location. When the event starts, they simply open the link on their smartphone, press the start button and listen to the translation (e.g. using headphones with a mini jack).
Behaviour of the real-time players
Since the video and audio-only players have different purposes, their behaviour is slightly different.
The real-time video player allows participants to hear the translations in their own language. When the interpreter is muted – which means that the current speaker is speaking their own language – they hear the floor audio instead. This also means that they will hear themselves when they speak, so they will either have to use a headset or mute the player while speaking.
The real-time audio-only player also allows participants to hear the translations in their own language. But when the interpreter is muted – which means that the current speaker is speaking their own language – they will hear nothing. Since they don’t hear the floor audio, they will not hear themselves speak.
The audio-only player can therefore also be useful in a remote setup where a participant already receives video and floor audio from the production (e.g. through WebRTC or MS Teams). Via our audio-only player, the participant can listen to real-time translations of other speakers. Note that, in this scenario, the participant has two audio sources and may have to reduce the floor audio when they want to listen to the translation.