This update includes a suite of features that radically reduce latency in Livepeer's livestreaming service and unlock a variety of new viewer experiences.
New Livestream Protocols (Open Beta)
We've enabled WebRTC as an output protocol for livestreaming. WebRTC delivers significantly lower latency than HLS, reducing the total latency in our pipeline to ~4 seconds.
To see the benefit of these upgrades, developers must either use the Livepeer Player or implement a player that supports WebRTC playback with HLS fallback logic.
This feature is in open beta and can be accessed by using the Livepeer Player at
@email@example.com. It will be incorporated into the embeddable player when we move it into the stable release.
Multiparticipant Livestreaming (Closed Beta)
We've shipped a closed beta implementation of multiparticipant livestreaming with sub-second latency. This feature is suitable for video calls and other realtime communication. For early access to this feature, please reach out to firstname.lastname@example.org.
While all core management actions (create room, add/remove participant, etc.) are contained within the Livepeer API, this implementation makes use of LiveKit's open-source stack, and we recommend that frontend developers use LiveKit's client SDKs, initializing them with room details provided by Livepeer's API.
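In practice, the handoff is small: the Livepeer API gives you a server URL and an access token for a participant, and those are exactly the arguments LiveKit's client expects. The `ParticipantDetails` field names below are assumptions for illustration, not the documented response schema.

```typescript
// Hypothetical shape of the room-participant details returned by
// Livepeer's API; field names are assumptions.
interface ParticipantDetails {
  joinUrl: string; // LiveKit server URL
  token: string;   // LiveKit access token for this participant
}

// Validate the details and return them in the order LiveKit's
// Room.connect(url, token) expects.
function toLivekitConnectArgs(details: ParticipantDetails): [string, string] {
  if (!details.joinUrl || !details.token) {
    throw new Error("missing room join details");
  }
  return [details.joinUrl, details.token];
}

// With livekit-client, connecting then looks like:
//   const room = new Room();
//   await room.connect(...toLivekitConnectArgs(details));
```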
Bringing It All Together
While these features enable unique viewing experiences on their own, they can be used together to create interactive experiences similar to Twitter Spaces or Instagram Live. With an additional API call, you can livestream a multiparticipant stream or call to a broader audience.
The resulting livestream can be used in conjunction with other livestream features like token-gated access control to build exclusive, community-based video features.
Notes and Limitations
--- In-browser streaming ---
We are currently developing in-browser streaming workflows using WebRTC. For early access, please send an email to email@example.com.
--- bframes ---
If your stream includes bframes, HLS will be used in lieu of WebRTC. This limitation does not affect RTSP playback.
If bframes are present in your stream, you will see a warning on the stream's page in the Livepeer Studio UI. The simplest way to turn off bframes is to choose the "main" profile for H.264 instead of the "high" profile in your broadcasting software. Alternatively, depending on the encoder, you can usually find a bframes count setting in the advanced settings and/or pass the string "bframes=0" in advanced options.
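For command-line broadcasters, a minimal sketch of an ffmpeg invocation with B-frames disabled looks like the following. The `-bf 0` flag is a real ffmpeg option (it caps the number of B-frames at zero, equivalent to x264's `bframes=0`); the input and ingest URL are placeholders.

```typescript
// Build the argument list for an ffmpeg command that streams to an
// RTMP ingest URL with B-frames disabled via `-bf 0`.
function ffmpegArgsNoBframes(input: string, rtmpUrl: string): string[] {
  return [
    "-i", input,
    "-c:v", "libx264",
    "-bf", "0",       // max B-frames = 0, keeps WebRTC playback available
    "-c:a", "aac",
    "-f", "flv",      // RTMP ingest expects FLV muxing
    rtmpUrl,
  ];
}

// Example: ffmpeg ${ffmpegArgsNoBframes("in.mp4", "rtmp://...").join(" ")}
```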
Improved Stream Health Monitoring
This update adds better monitoring of stream health and suggests remedies for common errors. Specifically, the Stream page in the Studio UI will show a warning banner and suggest solutions when a potential problem with a stream is detected. This information will also be available via API, and is a complement to existing stream health monitoring.
Enhanced Viewership API
This update includes an enhanced Viewership API that provides detailed information on viewer behavior and playback quality on your platform. These insights will help you optimize your application and provide valuable analytics to your creators.
All metrics can be queried by historical range, geographic location, browser / device type, and creator & viewer identifiers (if you’ve supplied them for a given video or viewing session).
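Assembling such a query amounts to building a set of filter and breakdown parameters. The parameter names below (`from`, `to`, `breakdownBy`, `creatorId`) are assumptions chosen to illustrate the dimensions described above, not the documented API surface.

```typescript
// Sketch: build a query string for a viewership metrics request.
// Parameter names are illustrative assumptions.
function buildViewershipQuery(opts: {
  from?: string;         // start of historical range, e.g. ISO date
  to?: string;           // end of historical range
  breakdownBy?: string[]; // dimensions, e.g. "country", "browser"
  creatorId?: string;    // creator identifier, if you supplied one
}): string {
  const params = new URLSearchParams();
  if (opts.from) params.set("from", opts.from);
  if (opts.to) params.set("to", opts.to);
  for (const dim of opts.breakdownBy ?? []) {
    params.append("breakdownBy[]", dim);
  }
  if (opts.creatorId) params.set("creatorId", opts.creatorId);
  return params.toString();
}
```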
For information on API usage and detailed descriptions of the data available, please refer to the general usage guide and the API Reference suitable to your use case:
- Retrieve Any Metric by Dimension
- Retrieve Creator-Specific Metrics
- Retrieve Views for public assets on dStorage
To segment data by creator or viewer id / address, you'll need to supply those identifiers when uploading and/or viewing. Support for segmenting livestreams by creator id will be added in an upcoming release.
In subsequent releases, we will offer SDK abstractions and visualization tools. In the meantime, please join our Discord for advice on best practices for visualizing and understanding viewership data.
VOD Ingestion Pipeline Improvements
We've recently released a set of performance and stability improvements to our VOD asset ingestion pipeline. These improvements will apply to all new assets.
Although we have retry mechanisms in place to ensure all assets are eventually processed successfully even when we hit temporary networking or other issues, having to begin processing an asset again comes with a cost in terms of the total "time to ready".
By rewriting the part of the pipeline that processes input files before sending them to be transcoded, we've reduced this retry rate from 7% of assets to less than 0.1%.
As part of the same rewrite, we've also increased the amount of parallelisation we're able to take advantage of and decreased the duration of this pre-processing step from several minutes for an hour-long file to under 30 seconds.
Transcode API
This update provides a new API for users that would like to utilise Livepeer's VOD transcoding capabilities directly within their application, while managing metadata and storage of the transcoded video themselves.
The API currently supports outputting to S3-compatible providers such as Storj or web3storage.
Documentation on the API can be found here.
We’ve also created guides with more detailed information about outputting to dStorage providers:
Webhook-Based Access Control
This update provides a webhook-based mechanism that developers can use to define playback policies that will control access to videos. With a webhook-based playback policy, it is possible to define arbitrary conditions under which a user will be allowed (or not allowed) to view a video.
In addition to enabling arbitrary access control logic (ACL), this capability allows developers to easily token-gate videos using the method that is most suitable to their workflow (e.g., Lit Protocol, Unlock Protocol, a bespoke implementation, etc.).
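The decision logic in such a webhook can be sketched as a pure function: inspect the payload, run your ownership check, and return an HTTP status (a 2xx response allows playback; anything else denies it). The payload shape and the `isTokenHolder` check below are assumptions for illustration, not the documented webhook schema.

```typescript
// Hypothetical playback-policy webhook payload; field names are
// assumptions.
interface PlaybackWebhookPayload {
  playbackId: string;
  // Custom context your frontend attached to the playback request,
  // e.g. a wallet address it has verified.
  context?: { walletAddress?: string };
}

type TokenCheck = (wallet: string) => boolean;

// Decide the HTTP status the webhook should return: 200 allows
// playback, 403 denies it.
function decidePlayback(
  payload: PlaybackWebhookPayload,
  isTokenHolder: TokenCheck
): number {
  const wallet = payload.context?.walletAddress;
  if (!wallet) return 403; // no proof of ownership supplied
  return isTokenHolder(wallet) ? 200 : 403;
}
```

Your HTTP handler would call `decidePlayback` and set the response status accordingly; the token check itself could be a Lit Protocol or Unlock Protocol lookup, or any bespoke query.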
We've created several guides to illustrate suggested usage of playback policies:
- VOD Access Control using Livepeer.js SDK
- Livestream Access Control using Livepeer.js SDK
- Webhook Reference for all other developers
In addition, we've created a sample application using Lit Protocol key management to illustrate a simple token-gating use case.
JS SDK Improvements
This update ships bug fixes and quality-of-life improvements to the JS SDK. It addresses a common piece of feedback we've heard from developers about the experience of uploading video to IPFS using the SDK: creating a video and uploading it to IPFS is now seamless.
It also unlocks the ability to autoplay without forcing mute. This works only in certain conditions where the site is considered "trusted" and the user has interacted with the site; see the Chrome and Safari docs for further details on when this is allowed. We recommend testing on your site to ensure that the media will autoplay under the conditions in which you expect the user to engage with your content.
If you use Livepeer.js, please upgrade to the latest version. This release:
- Adds the ability to autoplay videos without forcing mute.
- Adds IPFS upload on the creation of an asset, so no subsequent calls need to be made to upload to IPFS.
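The one-call flow can be sketched as a request body that opts into IPFS storage at creation time, so no follow-up call is needed. The exact field names (`storage: { ipfs: true }`) are assumptions based on the behavior described above, not a confirmed schema.

```typescript
// Hypothetical asset-creation body that requests IPFS storage up
// front; field names are illustrative assumptions.
interface AssetCreateBody {
  name: string;
  storage?: { ipfs: boolean };
}

function createAssetBody(name: string, uploadToIpfs: boolean): AssetCreateBody {
  const body: AssetCreateBody = { name };
  if (uploadToIpfs) {
    body.storage = { ipfs: true }; // single call: create + pin to IPFS
  }
  return body;
}
```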
MP4 Renditions for Short-Form Video
This update provides an optimized playback experience for short-form videos under two minutes in length by providing MP4 renditions. We have also improved our preloading logic and updated recommendations for building performant video experiences with the Livepeer Player in React or React Native.
By enabling streamlined file downloads and simpler preloading, this update drastically improves time-to-first-frame (TTFF) for short videos. Our internal tests have demonstrated sub-second TTFF for non-cached video and near-instant TTFF for videos that have been cached by the CDN.
If you are using Livepeer.js, please upgrade to the latest version.
- We will now generate MP4 renditions for VOD uploads under two minutes in length, in addition to HLS renditions.
- The Livepeer React and React Native players have been updated to prefer MP4 renditions when available. If you are using your own player, we recommend that you implement logic to do the same.
- We've added a new usePlayerList hook to the React Native player to make it simpler to use the player in a list.
- When retrieving playback info, the Livepeer API response will include MP4 renditions in the source array when available.
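For custom players, the prefer-MP4 logic from the list above can be sketched as a small selector over the playback-info source array. The `hrn` labels below are assumptions about how renditions are named, not the documented schema.

```typescript
// Hypothetical entry in the playback-info `source` array; the `hrn`
// (human-readable name) labels are assumptions.
interface MediaSource {
  hrn: string; // e.g. "MP4" or "HLS (TS)"
  url: string;
}

// Prefer an MP4 rendition when one is available (short-form videos),
// otherwise fall back to HLS.
function preferMp4(sources: MediaSource[]): MediaSource | undefined {
  return (
    sources.find((s) => s.hrn.toUpperCase().includes("MP4")) ??
    sources.find((s) => s.hrn.toUpperCase().includes("HLS"))
  );
}
```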
We’ve created an example application to demonstrate how you can use dynamic preloading to create a performant short-form video experience.