This update allows builders to easily retrieve thumbnails for ongoing livestreams, as well as on-demand assets. Thumbnails are frequently used as poster images when displaying livestreams or assets within an application, and can also enable timeline hover previews (i.e., hovering over the player's timeline bar and seeing a preview image for a portion of the video).
For livestreams, we provide a single updating thumbnail URL - it will return the first frame of the most recent segment of video.
For on-demand assets we provide a way to retrieve a list of thumbnails for your video assets. Thumbnails will be generated as part of the asset processing step, with each “segment” (roughly 2-3 seconds) of video resulting in one image. These images are stored in a WebVTT file that is suitable for use in timeline hover previews in addition to retrieval of single images.
Thumbnail URLs for both Live and VOD assets will be accessible through the existing playback info API endpoint.
For more information, please refer to the guides linked below.
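As a rough sketch of how the WebVTT file can power hover previews, the snippet below parses cues (a timestamp range followed by an image URL) into objects a player UI can look up by playhead position. The cue layout and image filenames here are illustrative, not the exact format Livepeer emits:

```typescript
// Hypothetical example: parsing a thumbnails WebVTT file into cue objects
// usable for timeline hover previews. Filenames below are illustrative.
interface ThumbnailCue {
  start: string; // cue start timestamp
  end: string;   // cue end timestamp
  url: string;   // thumbnail image for this span of the video
}

function parseThumbnailVtt(vtt: string): ThumbnailCue[] {
  const cues: ThumbnailCue[] = [];
  const lines = vtt.split("\n").map((l) => l.trim());
  for (let i = 0; i < lines.length; i++) {
    // A cue timing line looks like "00:00:00.000 --> 00:00:02.000"
    const match = lines[i].match(/^(\S+)\s+-->\s+(\S+)$/);
    if (match && lines[i + 1]) {
      cues.push({ start: match[1], end: match[2], url: lines[i + 1] });
    }
  }
  return cues;
}

const sample = [
  "WEBVTT",
  "",
  "00:00:00.000 --> 00:00:02.000",
  "keyframes_0.jpg",
  "",
  "00:00:02.000 --> 00:00:04.000",
  "keyframes_1.jpg",
].join("\n");

console.log(parseThumbnailVtt(sample).length); // 2
```

A player would then pick the cue whose range contains the hovered time and render its image above the timeline bar.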
This update allows builders to easily create clips of ongoing livestreams. Currently, builders can create clips from the most recent N seconds of a given stream, or clip arbitrary sections of a long-running stream, such as a particular session from a live-streamed conference.
This release includes:

To access these features:
This release of clipping is an expressive API to enable sophisticated livestream clipping workflows. By specifying arbitrary start and end times from a given viewer's perspective, application developers can create clips of ongoing livestreams to capture and share exciting moments.
These updates only allow clipping of livestreams; asset clipping will be included in a future release.
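To illustrate the "most recent N seconds" workflow, here is a minimal sketch of deriving start/end timestamps for a clip request from the viewer's current wall-clock time. The payload field names are illustrative assumptions, not the exact Livepeer API schema:

```typescript
// Hypothetical sketch: computing a clip request for the last N seconds of a
// livestream. Field names (playbackId, startTime, endTime) are illustrative.
interface ClipRequest {
  playbackId: string;
  startTime: number; // Unix timestamp, ms
  endTime: number;   // Unix timestamp, ms
}

function clipLastNSeconds(
  playbackId: string,
  nowMs: number,
  seconds: number
): ClipRequest {
  return {
    playbackId,
    startTime: nowMs - seconds * 1000,
    endTime: nowMs,
  };
}

// Clip the last 30 seconds as seen by this viewer right now.
const req = clipLastNSeconds("abcd1234", 1_700_000_060_000, 30);
console.log(req.endTime - req.startTime); // 30000
```

Clipping an arbitrary section (e.g. a conference session) is the same idea with explicitly chosen start and end timestamps instead of an offset from "now".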
With this update, builders and creators can change multistream targets at any time before or during an ongoing stream.
The previous implementation of multistream was unstable and did not allow builders to change multistream targets mid-stream. The API surface is unchanged, and developers do not need to modify existing integrations to access these features.
With this update, builders and viewers can expect 95-99% faster time-to-ready for VOD assets, so that videos are playable as soon as file upload completes. This is particularly important for recordings of live events (where it's important to make the recording available as soon as possible) and for social applications (where fast uploads improve the posting experience).
Builders do not need to take any action to access these improvements. Under the hood, source video is made playable as soon as upload completes. In the background, video processing (including transcoding) continues as usual, and transcoded renditions are made available when ready.
For rapid processing of assets that will also be archived on IPFS or Arweave, we strongly encourage either (1) uploading to Livepeer with the IPFS storage option enabled, or (2) uploading the raw file to Livepeer via `useCreateAsset` or the API prior to archiving on dStorage, rather than passing the IPFS / Arweave gateway URL. The gateway URL will work, but may incur longer-than-usual processing time.
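As a rough sketch, option (1) amounts to opting into IPFS storage in the upload request body. The field names below are illustrative assumptions, not the exact Livepeer API schema:

```json
{
  "name": "my-video.mp4",
  "storage": {
    "ipfs": true
  }
}
```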
With this update, builders and viewers can expect drastically improved latency for browser-based applications, with a target range of 3-5s for transcoded renditions and sub-second for the source.
To access this feature, builders must either:

If you choose to use another player, we strongly recommend that you incorporate fallback logic, whereby the player reverts to HLS in cases where network conditions may not support WebRTC.
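A minimal sketch of that fallback logic is shown below: prefer a WebRTC source when the environment supports it, otherwise select HLS. The source shapes are illustrative assumptions; a production player would also fall back at runtime when the WebRTC connection fails or degrades:

```typescript
// Hypothetical sketch of player source selection with HLS fallback.
interface PlaybackSource {
  protocol: "webrtc" | "hls";
  url: string;
}

function pickSource(
  sources: PlaybackSource[],
  webrtcSupported: boolean
): PlaybackSource | undefined {
  if (webrtcSupported) {
    const webrtc = sources.find((s) => s.protocol === "webrtc");
    if (webrtc) return webrtc; // low-latency path
  }
  // Fall back to HLS when WebRTC is unsupported or unavailable.
  return sources.find((s) => s.protocol === "hls");
}
```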
WebRTC playback is not implemented in the Livepeer React Native player. This is tracked as follow-up work.
This update allows you to upload encrypted videos and decrypt videos at play time, to better preserve privacy. This allows builders to create private assets on public-by-default dStorage solutions. We've designed this feature to be used in conjunction with Livepeer's Playback Policies so that you can fully control playback permissions - if you specify that an upload is encrypted,
you will be required to select a playback policy (which can be public if desired).
At launch, this feature supports video content encrypted using SubtleCrypto.encrypt with the AES-CBC algorithm, which is also the encryption used by other web3 protocols like Lit. It can be implemented in other environments with regular AES-CBC encryption using PKCS#7 padding.
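As a sketch of the non-browser path, the snippet below performs the equivalent AES-CBC round trip with Node's crypto module; OpenSSL's `aes-256-cbc` applies PKCS#7 padding by default, matching SubtleCrypto's AES-CBC behavior. Key management and how the key is shared with Livepeer are out of scope here:

```typescript
// Sketch: AES-CBC encryption with PKCS#7 padding using Node's crypto module,
// equivalent to SubtleCrypto.encrypt({ name: "AES-CBC", iv }, key, data).
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

function encryptAesCbc(plaintext: Buffer, key: Buffer, iv: Buffer): Buffer {
  const cipher = createCipheriv("aes-256-cbc", key, iv);
  return Buffer.concat([cipher.update(plaintext), cipher.final()]);
}

function decryptAesCbc(ciphertext: Buffer, key: Buffer, iv: Buffer): Buffer {
  const decipher = createDecipheriv("aes-256-cbc", key, iv);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]);
}

const key = randomBytes(32); // 256-bit content key
const iv = randomBytes(16);  // 128-bit initialization vector
const video = Buffer.from("raw video bytes here");
const encrypted = encryptAesCbc(video, key, iv);
console.log(decryptAesCbc(encrypted, key, iv).equals(video)); // true
```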
After encrypting and storing your videos, you'll be able to:
To help you get started, we've created the following resources:
This update allows you to view minutes used for transcode, storage, and delivery in your dashboard and via API. If you have supplied a `creatorId` for a livestream or VOD asset, you can query creator-specific usage.
The launch of usage tracking coincides with the rollout of new pricing plans for Livepeer Inc's hosted gateway, and is intended to help builders make more informed decisions about their usage.
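To show what creator-level reporting could look like on the client side, here is a sketch that totals minutes per creator from usage records. The record fields are illustrative assumptions, not the exact API response schema:

```typescript
// Hypothetical sketch: aggregating per-creator minutes from usage records.
interface UsageRecord {
  creatorId: string;
  transcodeMinutes: number;
  deliveryMinutes: number;
  storageMinutes: number;
}

function totalMinutesByCreator(records: UsageRecord[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const r of records) {
    const sum = r.transcodeMinutes + r.deliveryMinutes + r.storageMinutes;
    totals.set(r.creatorId, (totals.get(r.creatorId) ?? 0) + sum);
  }
  return totals;
}
```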
This update includes a suite of features that radically reduce latency in Livepeer's livestreaming service and unlock a variety of new viewer experiences.
New Livestream Protocols (Open Beta)
We've enabled WebRTC as an output protocol for livestreaming. This protocol is significantly more performant than HLS and reduces the total latency in our pipeline to ~4 seconds.
To see the benefit of these upgrades, developers must either use the Livepeer Player, or implement a player with support for WebRTC playback
and HLS fallback logic.
This feature is in open beta, and can be accessed by using the Livepeer Player at
@email@example.com. It will be incorporated into the embeddable player when we move it into the stable release.
Multiparticipant Livestreaming (Closed Beta)
We've shipped a closed beta implementation of multiparticipant livestreaming with sub-second latency. This feature is suitable for video calls and other realtime communication. For early access to this feature, please reach out to firstname.lastname@example.org.
While all core management actions (create room, add/remove participant, etc.) are handled through the Livepeer API, this implementation is built on Livekit's open-source stack, and we recommend that frontend developers use Livekit's Client SDKs, initializing them with room details provided by Livepeer's API.
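As a sketch of the handoff, the snippet below validates room-join details returned by the backend before passing them to a Livekit client. The response field names (`url`, `token`) are assumptions for illustration; check the API reference for the exact shape:

```typescript
// Hypothetical sketch: extracting Livekit connection details from a
// room-join response. Field names are illustrative.
interface JoinInfo {
  url: string;   // Livekit websocket URL
  token: string; // participant access token
}

function parseJoinResponse(body: string): JoinInfo {
  const data = JSON.parse(body) as Partial<JoinInfo>;
  if (!data.url || !data.token) {
    throw new Error("missing url or token in join response");
  }
  return { url: data.url, token: data.token };
}

// With livekit-client, the details would then be used roughly like:
//   const room = new Room();
//   await room.connect(info.url, info.token);
```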
Bringing It All Together
While these features enable unique viewing experiences on their own, they can be used together to create interactive experiences similar to Twitter Spaces or Instagram Live. With an additional API call, you can broadcast a multiparticipant stream or call to a broader audience.
The resulting livestream can be used in conjunction with other livestream features like token-gated access control to build exclusive, community-based video features.
Notes and Limitations
--- In-browser streaming ---
We are currently developing in-browser streaming workflows using WebRTC. For early access, please send an email to email@example.com.
--- bframes ---
If your stream includes bframes, HLS will be used in lieu of WebRTC. This limitation does not affect RTSP playback.
If bframes are present in your stream, you will see a warning on the stream's page in the Livepeer Studio UI. The simplest way to turn off bframes is to choose the "main" profile for H264 instead of the "high" profile in your broadcasting software. Alternatively, you can usually find a bframes count setting in the advanced settings and/or pass the string "bframes=0" in advanced options (depending on the encoder used).
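For reference, with an x264-based encoder the relevant advanced options might look like the following (option names vary by encoder; treat these as illustrative and verify against your software's documentation):

```
profile=main
bframes=0
```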
This feature adds better monitoring of stream health, and suggests remedies for common errors. Specifically, the Stream page in the Studio UI will show a warning banner and suggest solutions when a potential problem with a stream is detected. This information will also be available via API, and is a complement to existing stream health monitoring.
This update includes an enhanced Viewership API that provides detailed information on viewer behavior and playback quality on your platform. These insights will help you optimize your application and provide valuable analytics to your creators.
All metrics can be queried by historical range, geographic location, browser / device type, and creator & viewer identifiers (if you’ve supplied them for a given video or viewing session).
For information on API usage and detailed descriptions of the data available, please refer to the general usage guide and the API Reference suitable to your use case:

To segment data by creator or viewer id / address, you'll need to supply it when uploading and/or viewing. Support for segmenting livestreams by creator id will be added in an upcoming release.
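As a sketch of filtered querying, the helper below builds a query string from the filter dimensions described above. The parameter names are illustrative assumptions, not the exact API schema:

```typescript
// Hypothetical sketch: building a viewership query string from filters.
// Parameter names are illustrative.
interface ViewershipQuery {
  from?: string;          // ISO 8601 start of historical range
  to?: string;            // ISO 8601 end of historical range
  creatorId?: string;     // segment by creator identifier
  breakdownBy?: string[]; // e.g. ["country", "browser"]
}

function toQueryString(q: ViewershipQuery): string {
  const params = new URLSearchParams();
  if (q.from) params.set("from", q.from);
  if (q.to) params.set("to", q.to);
  if (q.creatorId) params.set("creatorId", q.creatorId);
  for (const b of q.breakdownBy ?? []) params.append("breakdownBy[]", b);
  return params.toString();
}
```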
In subsequent releases, we will offer SDK abstractions and visualization tools. In the meantime, please join our Discord for advice on best practices for visualizing and understanding viewership data.