First release

Owen Quinlan 2021-07-02 19:29:34 +10:00
commit fa6c85266e
2339 changed files with 761050 additions and 0 deletions

node_modules/@videojs/http-streaming/docs/README.md generated vendored Normal file
@@ -0,0 +1,50 @@
# Overview
This project supports both [HLS][hls] and [MPEG-DASH][dash] playback in the video.js player. This document is intended as a primer for anyone interested in contributing or just better understanding how bits from a server get turned into video on their display.
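For example, everything described below is exercised simply by pointing a video.js player at an HLS or DASH source. A minimal sketch (the element id and URLs are placeholders):
```js
// Minimal usage sketch; VHS handles both of these source types.
const player = videojs('my-player');

// HLS
player.src({
  src: 'https://example.com/master.m3u8',
  type: 'application/x-mpegURL'
});

// or DASH
player.src({
  src: 'https://example.com/manifest.mpd',
  type: 'application/dash+xml'
});
```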
## HTTP Live Streaming
[HLS][apple-hls-intro] has two primary characteristics that distinguish it from other video formats:
- Delivered over HTTP(S): it uses the standard application protocol of the web to deliver all its data
- Segmented: longer videos are broken up into smaller chunks which can be downloaded independently and switched between at runtime
A standard HLS stream consists of a *Master Playlist* which references one or more *Media Playlists*. Each Media Playlist contains one or more sequential video segments. All these components form a logical hierarchy that informs the player of the different quality levels of the video available and how to address the individual segments of video at each of those levels:
![HLS Format](images/hls-format.png)
HLS streams can be delivered in two different modes: a "static" mode for videos that can be played back from any point, often referred to as video-on-demand (VOD); or a "live" mode where later portions of the video become available as time goes by. In the static mode, the Master and Media playlists are fixed. The player is guaranteed that the set of video segments referenced by those playlists will not change over time.
Live mode can work in one of two ways. For truly live events, the most common configuration is for each individual Media Playlist to only include the latest video segment and a small number of consecutive previous segments. In this mode, the player may be able to seek backwards a short time in the video but probably not all the way back to the beginning. In the other live configuration, new video segments can be appended to the Media Playlists but older segments are never removed. This configuration allows the player to seek back to the beginning of the stream at any time during the broadcast and transitions seamlessly to the static stream type when the event finishes.
If you're interested in a more in-depth treatment of the HLS format, check out [Apple's documentation][apple-hls-intro] and the IETF [Draft Specification][hls-spec].
## Dynamic Adaptive Streaming over HTTP
Similar to HLS, [DASH][dash-wiki] content is segmented and delivered over HTTP(S).
A DASH stream consists of a *Media Presentation Description* (MPD) that describes segment metadata such as timing information, URLs, resolution, and bitrate. Each segment can contain either ISO base media file format (e.g., MP4) or MPEG-2 TS data. Typically, the MPD will describe the various *Representations* that map to collections of segments at different bitrates to allow bitrate selection. These Representations can be organized as a SegmentList, SegmentTemplate, SegmentBase, or SegmentTimeline.
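If you're curious what an MPD turns into in code, the [mpd-parser](https://github.com/videojs/mpd-parser) library (shown as mpdParser in the sequence diagrams elsewhere in these docs) can be used standalone. A rough sketch, with a placeholder URL:
```js
import { parse } from 'mpd-parser';

async function loadMpd(url) {
  const res = await fetch(url);
  // the result contains `playlists` (the video Representations) and
  // `mediaGroups` (audio/subtitle renditions)
  return parse(await res.text(), { manifestUri: url });
}

loadMpd('https://example.com/dash.mpd').then((manifest) => {
  manifest.playlists.forEach((p) => {
    console.log(p.attributes.BANDWIDTH, p.attributes.RESOLUTION);
  });
});
```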
DASH streams can be delivered in both video-on-demand (VOD) and live streaming modes. In the VOD case, the MPD describes all the segments and representations available, and the player can choose which representation to play based on its capabilities.
Live mode is accomplished using the ISOBMFF Live profile if the segments are in ISOBMFF. There are a few different ways to set up the MPD, including but not limited to updating the MPD after an interval of time, using *Periods*, or using the *availabilityTimeOffset* field. A few examples of this are provided by the [DASH Reference Client][dash-if-reference-client]. The MPD will provide enough information for the player to play back the live stream and seek back as far as is specified in the MPD.
If you're interested in a more in-depth description of MPEG-DASH, check out [MDN's tutorial on setting up DASH][mdn-dash-tut] or the [DASHIF Guidelines][dash-if-guide].
# Further Documentation
- [Architecture](arch.md)
- [Glossary](glossary.md)
- [Adaptive Bitrate Switching](bitrate-switching.md)
- [Multiple Alternative Audio Tracks](multiple-alternative-audio-tracks.md)
- [reloadSourceOnError](reload-source-on-error.md)
# Helpful Tools
- [FFmpeg](http://trac.ffmpeg.org/wiki/CompilationGuide)
- [Thumbcoil](http://thumb.co.il/): web-based video inspector
[hls]: /docs/intro.md#http-live-streaming
[dash]: /docs/intro.md#dynamic-adaptive-streaming-over-http
[apple-hls-intro]: https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/Introduction/Introduction.html
[hls-spec]: https://datatracker.ietf.org/doc/draft-pantos-http-live-streaming/
[dash-wiki]: https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP
[dash-if-reference-client]: https://reference.dashif.org/dash.js/
[mdn-dash-tut]: https://developer.mozilla.org/en-US/Apps/Fundamentals/Audio_and_video_delivery/Setting_up_adaptive_streaming_media_sources
[dash-if-guide]: http://dashif.org/guidelines/

node_modules/@videojs/http-streaming/docs/arch.md generated vendored Normal file
@@ -0,0 +1,28 @@
## HLS Project Overview
This project has three primary duties:
1. Download and parse playlist files
1. Implement the [HTMLVideoElement](https://html.spec.whatwg.org/multipage/embedded-content.html#the-video-element) interface
1. Feed content bits to a SourceBuffer by downloading and transmuxing video segments
### Playlist Management
The [playlist loader](../src/playlist-loader.js) handles all of the details of requesting, parsing, updating, and switching playlists at runtime. Its operation is described by this state diagram:
![Playlist Loader States](images/playlist-loader-states.png)
During VOD playback, the loader will move quickly to the HAVE_METADATA state and then stay there unless a quality switch request sends it to SWITCHING_MEDIA while it fetches an alternate playlist. The loader enters the HAVE_CURRENT_METADATA state when a live stream is detected and it's time to refresh the current media playlist to find out about new video segments.
### HLS Tech
Currently, the HLS project integrates with [video.js](http://www.videojs.com/) as a [tech](https://github.com/videojs/video.js/blob/master/docs/guides/tech.md). That means it's responsible for providing an interface that closely mirrors the `<video>` element. You can see that implementation in [videojs-http-streaming.js](../src/videojs-http-streaming.js), the primary entry point of the project.
### Transmuxing
Most browsers don't have support for the file type that HLS video segments are stored in. To get HLS playing back on those browsers, contrib-hls strings together a number of technologies:
1. The [NetStream](http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/NetStream.html) in the [video.js SWF](https://github.com/videojs/video-js-swf) has a special mode of operation that allows binary video data packaged as an [FLV](http://en.wikipedia.org/wiki/Flash_Video) to be provided directly
1. [videojs-contrib-media-sources](https://github.com/videojs/videojs-contrib-media-sources) provides an abstraction layer over the SWF that operates like a [Media Source](https://w3c.github.io/media-source/#mediasource)
1. A pure JavaScript transmuxer that repackages HLS segments as FLVs
Transmuxing is the process of transforming media stored in one container format into another container without modifying the underlying media data. If that last sentence doesn't make any sense to you, check out the [Introduction to Media](media.md) for more details.
### Buffer Management
Buffering in contrib-hls is driven by two functions in videojs-hls.js: fillBuffer() and drainBuffer(). During its operation, contrib-hls periodically calls fillBuffer() which determines when more video data is required and begins a segment download if so. Meanwhile, drainBuffer() is invoked periodically during playback to process incoming segments and append them onto the [SourceBuffer](http://w3c.github.io/media-source/#sourcebuffer). In conjunction with a goal buffer length, this producer-consumer relationship drives the buffering behavior of contrib-hls.
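Reduced to a minimal, self-contained sketch, that producer-consumer relationship looks something like this (all names and numbers below are illustrative stand-ins, not the real contrib-hls internals):
```js
const GOAL_BUFFER_LENGTH = 30; // seconds of forward buffer to maintain
const SEGMENT_DURATION = 10;   // assumed segment length, in seconds

let forwardBufferSeconds = 0;  // seconds buffered ahead of the playhead
const downloadedSegments = []; // segments fetched but not yet appended

// producer: determine whether more video data is required and "download" it
function fillBuffer() {
  const pending = downloadedSegments.length * SEGMENT_DURATION;
  if (forwardBufferSeconds + pending < GOAL_BUFFER_LENGTH) {
    downloadedSegments.push({ duration: SEGMENT_DURATION }); // stand-in for a segment request
  }
}

// consumer: process a completed download and "append" it
function drainBuffer() {
  const segment = downloadedSegments.shift();
  if (segment) {
    forwardBufferSeconds += segment.duration; // stand-in for SourceBuffer.appendBuffer()
  }
}

// contrib-hls invokes both periodically; the interval here is arbitrary
setInterval(fillBuffer, 500);
setInterval(drainBuffer, 500);
```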

@@ -0,0 +1,44 @@
# Adaptive Switching Behavior
The HLS tech tries to ensure the highest-quality viewing experience
possible, given the available bandwidth and encodings. This doesn't
always mean using the highest-bitrate rendition available: if the player
is 300px by 150px, it would be a big waste of bandwidth to download a 4k
stream. By default, the player attempts to load the highest-bitrate
variant that is less than the most recently detected segment bandwidth,
with one condition: if there are multiple variants with dimensions greater
than the current player size, it will only switch up one size greater
than the current player size.
If you're the visual type, the whole process is illustrated
below. Whenever a new segment is downloaded, we calculate the download
bitrate based on the size of the segment and the time it took to
download:
![New bitrate info is available](images/bitrate-switching-1.png)
First, we filter out all the renditions that have a higher bitrate
than the new measurement:
![Bitrate filtering](images/bitrate-switching-2.png)
Then we get rid of any renditions that are bigger than the current
player dimensions:
![Resolution filtering](images/bitrate-switching-3.png)
We don't want a significant quality drop just because your player is
one pixel too small, so we add back in the next highest
resolution. The highest bitrate rendition that remains is the one that
gets used:
![Final selection](images/bitrate-switching-4.png)
If it turns out no rendition is acceptable based on the filtering
described above, the first encoding listed in the master playlist will
be used.
If you'd like your player to use a different set of priorities, it's
possible to completely replace the rendition selection logic. For
instance, you could always choose the most appropriate rendition by
resolution, even though this might mean more stalls during playback.
See the documentation on `player.vhs.selectPlaylist` for more details.
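As a sketch, a resolution-only selector might look like the following. The playlist shape (`attributes.RESOLUTION`, `attributes.BANDWIDTH`) follows the master playlist object VHS exposes, but the heuristic itself is made up for illustration:
```js
// Always prefer the rendition whose width is closest to the player's,
// ignoring measured bandwidth entirely.
player.vhs.selectPlaylist = () => {
  const playlists = player.vhs.playlists.master.playlists;
  const playerWidth = player.currentWidth();

  return playlists
    .filter((playlist) => playlist.attributes.RESOLUTION)
    .sort((a, b) => {
      const aDelta = Math.abs(a.attributes.RESOLUTION.width - playerWidth);
      const bDelta = Math.abs(b.attributes.RESOLUTION.width - playerWidth);
      return aDelta - bDelta;
    })[0];
};
```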

@@ -0,0 +1,218 @@
# Creating Content
## Commands for creating test streams
### Streams with EXT-X-PROGRAM-DATE-TIME for testing seekToProgramTime and convertToProgramTime
The test stream below is generated with ffmpeg's lavfi virtual input device and its testsrc source. The flags used:
* `-f lavfi -i testsrc=...` generates a test video stream
* `-g 300` sets the GOP size to 300 (the keyframe interval; at 30fps, one keyframe every 10 seconds)
* `-f hls` sets the format to HLS (creates an m3u8 and TS segments)
* `-hls_time 10` sets the goal segment size to 10 seconds
* `-hls_list_size 20` sets the number of segments in the m3u8 file to 20
* `-hls_flags program_date_time` sets `#EXT-X-PROGRAM-DATE-TIME` on each segment
```
ffmpeg \
-f lavfi \
-i testsrc=duration=200:size=1280x720:rate=30 \
-g 300 \
-f hls \
-hls_time 10 \
-hls_list_size 20 \
-hls_flags program_date_time \
stream.m3u8
```
## Commands used for segments in `test/segments` dir
### video.ts
Copy only the first two video frames, leave out audio.
```
$ ffmpeg -i index0.ts -vframes 2 -an -vcodec copy video.ts
```
### videoOneSecond.ts
Blank video for 1 second, MMS-Small resolution, start at 0 PTS/DTS, 2 frames per second
```
$ ffmpeg -f lavfi -i color=c=black:s=128x96:r=2:d=1 -muxdelay 0 -c:v libx264 videoOneSecond.ts
```
### videoOneSecond1.ts through videoOneSecond4.ts
Same as videoOneSecond.ts, but follows timing in sequence, with videoOneSecond.ts acting as the 0 index. Each segment starts at the second that its index indicates (e.g., videoOneSecond2.ts has a start time of 2 seconds).
```
$ ffmpeg -i videoOneSecond.ts -muxdelay 0 -output_ts_offset 1 -vcodec copy videoOneSecond1.ts
$ ffmpeg -i videoOneSecond.ts -muxdelay 0 -output_ts_offset 2 -vcodec copy videoOneSecond2.ts
$ ffmpeg -i videoOneSecond.ts -muxdelay 0 -output_ts_offset 3 -vcodec copy videoOneSecond3.ts
$ ffmpeg -i videoOneSecond.ts -muxdelay 0 -output_ts_offset 4 -vcodec copy videoOneSecond4.ts
```
### audio.ts
Copy only the first two audio frames, leave out video.
```
$ ffmpeg -i index0.ts -aframes 2 -vn -acodec copy audio.ts
```
### videoMinOffset.ts
video.ts but with an offset of 0
```
$ ffmpeg -i video.ts -muxpreload 0 -muxdelay 0 -vcodec copy videoMinOffset.ts
```
### audioMinOffset.ts
audio.ts but with an offset of 0. Note that muxed.ts is used because ffmpeg didn't like
the use of audio.ts
```
$ ffmpeg -i muxed.ts -muxpreload 0 -muxdelay 0 -acodec copy -vn audioMinOffset.ts
```
### videoMaxOffset.ts
This segment offsets content such that it ends at exactly the max timestamp before a rollover occurs. It uses the max timestamp of 2^33 (8589934592) minus the segment duration of 6006 (0.066733 seconds) in order not to roll over mid-segment, and divides the value by 90,000 to convert it from media time to seconds.
(2^33 - 6006) / 90,000 = 95443.6509556
```
$ ffmpeg -i videoMinOffset.ts -muxdelay 95443.6509556 -muxpreload 95443.6509556 -output_ts_offset 95443.6509556 -vcodec copy videoMaxOffset.ts
```
### audioMaxOffset.ts
This segment offsets content such that it ends at exactly the max timestamp before a rollover occurs. It uses the max timestamp of 2^33 (8589934592) minus the segment duration of 11520 (0.128000 seconds) in order not to roll over mid-segment, and divides the value by 90,000 to convert it from media time to seconds.
(2^33 - 11520) / 90,000 = 95443.5896889
```
$ ffmpeg -i audioMinOffset.ts -muxdelay 95443.5896889 -muxpreload 95443.5896889 -output_ts_offset 95443.5896889 -acodec copy audioMaxOffset.ts
```
### videoLargeOffset.ts
This segment offsets content by the rollover threshold of 2^32 (4294967296) found in the rollover handling of mux.js, adds 1 to ensure there aren't any cases where there's an equal match, then divides the value by 90,000 to convert it from media time to seconds.
(2^32 + 1) / 90,000 = 47721.8588556
```
$ ffmpeg -i videoMinOffset.ts -muxdelay 47721.8588556 -muxpreload 47721.8588556 -output_ts_offset 47721.8588556 -vcodec copy videoLargeOffset.ts
```
### audioLargeOffset.ts
This segment offsets content by the rollover threshold of 2^32 (4294967296) found in the rollover handling of mux.js, adds 1 to ensure there aren't any cases where there's an equal match, then divides the value by 90,000 to convert it from media time to seconds.
(2^32 + 1) / 90,000 = 47721.8588556
```
$ ffmpeg -i audioMinOffset.ts -muxdelay 47721.8588556 -muxpreload 47721.8588556 -output_ts_offset 47721.8588556 -acodec copy audioLargeOffset.ts
```
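These offsets can be sanity-checked with a few lines of JavaScript (90,000 is the MPEG-TS 90 kHz timestamp clock):
```js
const MEDIA_CLOCK = 90000; // MPEG-TS timestamps tick at 90 kHz

console.log((2 ** 33 - 6006) / MEDIA_CLOCK);  // 95443.6509556  (videoMaxOffset)
console.log((2 ** 33 - 11520) / MEDIA_CLOCK); // 95443.5896889  (audioMaxOffset)
console.log((2 ** 32 + 1) / MEDIA_CLOCK);     // 47721.8588556  (video/audioLargeOffset)
```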
### videoLargeOffset2.ts
This takes videoLargeOffset.ts and adds the duration of videoLargeOffset.ts (6006 / 90,000 = 0.066733 seconds) to its offset so that this segment can act as the second in one continuous stream.
47721.8588556 + 0.066733 = 47721.9255886
```
$ ffmpeg -i videoLargeOffset.ts -muxdelay 47721.9255886 -muxpreload 47721.9255886 -output_ts_offset 47721.9255886 -vcodec copy videoLargeOffset2.ts
```
### audioLargeOffset2.ts
This takes audioLargeOffset.ts and adds the duration of audioLargeOffset.ts (11520 / 90,000 = 0.128 seconds) to its offset so that this segment can act as the second in one continuous stream.
47721.8588556 + 0.128 = 47721.9868556
```
$ ffmpeg -i audioLargeOffset.ts -muxdelay 47721.9868556 -muxpreload 47721.9868556 -output_ts_offset 47721.9868556 -acodec copy audioLargeOffset2.ts
```
### caption.ts
Copy the first two frames of video out of a ts segment that already includes CEA-608 captions.
`ffmpeg -i index0.ts -vframes 2 -an -vcodec copy caption.ts`
### id3.ts
Copy only the first five frames of video, leave out audio.
`ffmpeg -i index0.ts -vframes 5 -an -vcodec copy smaller.ts`
Create an ID3 tag using [id3taggenerator][apple_streaming_tools]:
`id3taggenerator -text "{\"id\":1, \"data\": \"id3\"}" -o tag.id3`
Create a file `macro.txt` with the following:
`0 id3 tag.id3`
Run [mediafilesegmenter][apple_streaming_tools] with the small video segment and macro file, to produce a new segment with ID3 tags inserted at the specified times.
`mediafilesegmenter -start-segments-with-iframe --target-duration=1 --meta-macro-file=macro.txt -s -A smaller.ts`
### mp4Video.mp4
Copy only the first two video frames, leave out audio.
movflags:
* frag\_keyframe: "Start a new fragment at each video keyframe."
* empty\_moov: "Write an initial moov atom directly at the start of the file, without describing any samples in it."
* omit\_tfhd\_offset: "Do not write any absolute base\_data\_offset in tfhd atoms. This avoids tying fragments to absolute byte positions in the file/streams." (see also: https://www.w3.org/TR/mse-byte-stream-format-isobmff/#movie-fragment-relative-addressing)
```
$ ffmpeg -i file.mp4 -movflags frag_keyframe+empty_moov+omit_tfhd_offset -vframes 2 -an -vcodec copy mp4Video.mp4
```
### mp4Audio.mp4
Copy only the first two audio frames, leave out video.
movflags:
* frag\_keyframe: "Start a new fragment at each video keyframe."
* empty\_moov: "Write an initial moov atom directly at the start of the file, without describing any samples in it."
* omit\_tfhd\_offset: "Do not write any absolute base\_data\_offset in tfhd atoms. This avoids tying fragments to absolute byte positions in the file/streams." (see also: https://www.w3.org/TR/mse-byte-stream-format-isobmff/#movie-fragment-relative-addressing)
```
$ ffmpeg -i file.mp4 -movflags frag_keyframe+empty_moov+omit_tfhd_offset -aframes 2 -vn -acodec copy mp4Audio.mp4
```
### mp4VideoInit.mp4 and mp4AudioInit.mp4
Using DASH as the format type (`-f`) will lead to two init segments, one for video and one for audio. Using HLS will lead to a single joined init segment.
Renamed from .m4s to .mp4.
```
$ ffmpeg -i input.mp4 -f dash out.mpd
```
### webmVideoInit.webm and webmVideo.webm
```
$ cat mp4VideoInit.mp4 mp4Video.mp4 > video.mp4
$ ffmpeg -i video.mp4 -dash_segment_type webm -c:v libvpx-vp9 -f dash output.mpd
$ mv init-stream0.webm webmVideoInit.webm
$ mv chunk-stream0-00001.webm webmVideo.webm
```
## Other useful commands
### Joined (audio and video) initialization segment (for HLS)
Using DASH as the format type (`-f`) will lead to two init segments, one for video and one for audio. Using HLS will lead to a single joined init segment.
Note that `-hls_fmp4_init_filename` defaults to init.mp4, but is specified here for readability.
Without specifying fmp4 for `-hls_segment_type`, ffmpeg defaults to ts.
```
$ ffmpeg -i input.mp4 -f hls -hls_fmp4_init_filename init.mp4 -hls_segment_type fmp4 out.m3u8
```
[apple_streaming_tools]: https://developer.apple.com/documentation/http_live_streaming/about_apple_s_http_live_streaming_tools

@@ -0,0 +1,87 @@
# DASH Playlist Loader
## Purpose
The [DashPlaylistLoader][dpl] (DPL) is responsible for requesting MPDs, parsing them, and keeping track of the media "playlists" associated with the MPD. The [DPL] is used with a [SegmentLoader][sl] to load fmp4 fragments from a DASH source.
## Basic Responsibilities
1. To request an MPD.
2. To parse an MPD into a format [videojs-http-streaming][vhs] can understand.
3. To refresh MPDs according to their minimumUpdatePeriod.
4. To allow selection of a specific media stream.
5. To sync the client clock with a server clock according to the UTCTiming node.
6. To refresh a live MPD for changes.
## Design
The [DPL] is written to be as similar as possible to the [PlaylistLoader][pl]. This means that the majority of the public API for these two classes is the same, and so are the states they go through and the events they trigger.
### States
![DashPlaylistLoader States](images/dash-playlist-loader-states.nomnoml.svg)
- `HAVE_NOTHING` the state before the MPD is received and parsed.
- `HAVE_MASTER` the state after the MPD has been parsed but before a media stream is set up.
- `HAVE_METADATA` the state after a media stream is set up.
### API
- `load()` this will either start or kick the loader during playback.
- `start()` this will start the [DPL] and request the MPD.
- `parseMasterXml()` this will parse the MPD manifest and return the result.
- `media()` this will return the currently active media stream or set a new active media stream.
### Events
- `loadedplaylist` signals the setup of a master playlist, representing the DASH source as a whole, from the MPD; or a media playlist, representing a media stream.
- `loadedmetadata` signals initial setup of a media stream.
- `minimumUpdatePeriod` signals that an update period has ended and the MPD must be requested again.
- `playlistunchanged` signals that no changes have been made to an MPD.
- `mediaupdatetimeout` signals that a live MPD and media stream must be refreshed.
- `mediachanging` signals that the currently active media stream is going to be changed.
- `mediachange` signals that the new media stream has been updated.
### Interaction with Other Modules
![DPL with MPC and MG](images/dash-playlist-loader-mpc-mg-sequence.plantuml.png)
### Special Features
There are a few features of [DPL] that are different from [PL] due to fundamental differences between HLS and DASH standards.
#### MinimumUpdatePeriod
This is a time period specified in the MPD after which the MPD should be re-requested and parsed. There could be any number of changes to the MPD between these update periods.
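A hypothetical sketch of that refresh loop (`fetchAndParseMpd` is a stand-in, not a real VHS function):
```js
function startMpdRefresh(url, minimumUpdatePeriodMs, fetchAndParseMpd) {
  // re-request and re-parse the MPD once per update period
  return setInterval(() => fetchAndParseMpd(url), minimumUpdatePeriodMs);
}

// e.g., refresh a live MPD every 4 seconds (parsing omitted)
const timer = startMpdRefresh('https://example.com/live.mpd', 4000, (url) =>
  fetch(url).then((res) => res.text())
);
```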
#### SyncClientServerClock
There is a UTCTiming node in the MPD that allows the client clock to be synced with a clock on the server. This may affect the results of parsing the MPD.
#### Requesting `sidx` Boxes
To be filled out.
### Previous Behavior
Until version 1.9.0 of [VHS], we thought that [DPL] could skip the `HAVE_NOTHING` and `HAVE_MASTER` states, as no other XHR requests are needed once the MPD has been downloaded and parsed. However, this is incorrect as there are some Presentations that signal the use of a "Segment Index box" or `sidx`. This `sidx` references specific byte ranges in a file that could contain media or potentially other `sidx` boxes.
A DASH MPD that describes a `sidx` is therefore similar to an HLS master manifest, in that the MPD contains references to something that must be requested and parsed first before references to media segments can be obtained. With this in mind, it was necessary to update the initialization and state transitions of [DPL] to allow further XHR requests to be made after the initial request for the MPD.
### Current Behavior
In [this PR](https://github.com/videojs/http-streaming/pull/386), the [DPL] was updated to go through the `HAVE_NOTHING` and `HAVE_MASTER` states before arriving at `HAVE_METADATA`. If the MPD does not contain `sidx` boxes, then this transition happens quickly after `load()` is called, spending little time in the `HAVE_MASTER` state.
The initial media selection for `masterPlaylistLoader` is made in the `loadedplaylist` handler located in [MasterPlaylistController][mpc]. We now use `hasPendingRequest` to determine whether to automatically select a media playlist for the `masterPlaylistLoader` as a fallback in case one is not selected by [MPC]. The child [DPL]s are created with a media playlist passed in as an argument, so this fallback is not necessary for them. Instead, that media playlist is saved and auto-selected once we enter the `HAVE_MASTER` state.
The `updateMaster` method will return `null` if no updates are found.
The `selectinitialmedia` event is not triggered until an audioPlaylistLoader (which for DASH is always a child [DPL]) has a media playlist. This is signaled by triggering `loadedmetadata` on the respective [DPL]. This event is used to initialize the [Representations API][representations] and set up EME (see [contrib-eme]).
[dpl]: ../src/dash-playlist-loader.js
[sl]: ../src/segment-loader.js
[vhs]: intro.md
[pl]: ../src/playlist-loader.js
[mpc]: ../src/master-playlist-controller.js
[representations]: ../README.md#hlsrepresentations
[contrib-eme]: https://github.com/videojs/videojs-contrib-eme

node_modules/@videojs/http-streaming/docs/glossary.md generated vendored Normal file
@@ -0,0 +1,23 @@
# Glossary
**Playlist**: This is a representation of an HLS or DASH manifest.
**Media Playlist**: This is a manifest that represents a single rendition or media stream of the source.
**Master Playlist Controller**: This acts as the main controller for the playback engine. It interacts with the SegmentLoaders, PlaylistLoaders, PlaybackWatcher, etc.
**Playlist Loader**: This will request the source and load the master manifest. It is also instructed by the ABR algorithm to load a media playlist, or wraps a media playlist if one is provided as the source. There are more details about the playlist loader [here](./arch.md).
**DASH Playlist Loader**: This will do as the PlaylistLoader does, but for DASH sources. It also handles DASH-specific functionality, such as refreshing the MPD according to the minimumUpdatePeriod and synchronizing to a server clock.
**Segment Loader**: This determines which segment should be loaded, requests it via the Media Segment Loader and passes the result to the Source Updater.
**Media Segment Loader**: This requests a given segment, decrypts the segment if necessary, and returns it to the Segment Loader.
**Source Updater**: This manages the browser's [SourceBuffers](https://developer.mozilla.org/en-US/docs/Web/API/SourceBuffer). It appends decrypted segment bytes provided by the Segment Loader to the corresponding Source Buffer.
**ABR (Adaptive Bitrate) Algorithm**: This concept is described in more detail [here](https://en.wikipedia.org/wiki/Adaptive_bitrate_streaming). Our chosen ABR algorithm is referenced by [selectPlaylist](../README.md#hlsselectplaylist) and is described more [here](./bitrate-switching.md).
**Playback Watcher**: This attempts to resolve common playback stalls caused by improper seeking, gaps in content, and browser issues.
**Sync Controller**: This will attempt to create a mapping between the segment index and a display time on the player.

node_modules/@videojs/http-streaming/docs/hlse.md generated vendored Normal file
@@ -0,0 +1,20 @@
# Encrypted HTTP Live Streaming
The [HLS spec](http://tools.ietf.org/html/draft-pantos-http-live-streaming-13#section-6.2.3) requires segments to be encrypted with AES-128 in CBC mode with PKCS7 padding. You can encrypt data to that specification with a combination of [OpenSSL](https://www.openssl.org/) and the [pkcs7 utility](https://github.com/brightcove/pkcs7). From the command-line:
```sh
# encrypt the text "hello" into a file
# since this is for testing, skip the key salting so the output is stable
# using -nosalt outside of testing is a terrible idea!
echo -n "hello" | pkcs7 | \
openssl enc -aes-128-cbc -nopad -nosalt -K $KEY -iv $IV > hello.encrypted
# xxd is a handy way of translating binary into a format easily consumed by
# javascript
xxd -i hello.encrypted
```
Later, you can decrypt it:
```sh
openssl enc -d -nopad -aes-128-cbc -K $KEY -iv $IV
```
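In the browser, the same decryption can be performed with the Web Crypto API. This is only an illustration of the underlying operation; VHS uses its own decrypter ([aes-decrypter](https://github.com/videojs/aes-decrypter)):
```js
// encryptedBytes, keyBytes (16 bytes) and ivBytes (16 bytes) are Uint8Arrays
async function decryptSegment(encryptedBytes, keyBytes, ivBytes) {
  const key = await crypto.subtle.importKey(
    'raw', keyBytes, { name: 'AES-CBC' }, false, ['decrypt']
  );

  // Web Crypto strips the PKCS7 padding as part of decryption
  const plain = await crypto.subtle.decrypt(
    { name: 'AES-CBC', iv: ivBytes }, key, encryptedBytes
  );

  return new Uint8Array(plain);
}
```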

Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.


@@ -0,0 +1,12 @@
<svg width="228" height="310" version="1.1" baseProfile="full" viewbox="0 0 228 310" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events" style="font-weight:bold; font-size:10pt; font-family:'Arial', Helvetica, sans-serif;;stroke-width:2;stroke-linejoin:round;stroke-linecap:round"><text x="134" y="111" style="font-weight:normal;">load()</text>
<path d="M114 81 L114 105 L114 129 L114 129 " style="stroke:#33322E;fill:none;stroke-dasharray:none;"></path>
<path d="M110.8 121 L114 125 L117.2 121 L114 129 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<text x="134" y="211" style="font-weight:normal;">media()</text>
<path d="M114 181 L114 205 L114 229 L114 229 " style="stroke:#33322E;fill:none;stroke-dasharray:none;"></path>
<path d="M110.8 221 L114 225 L117.2 221 L114 229 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<rect x="37" y="30" height="50" width="154" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="57" y="60" style="">HAVE_NOTHING</text>
<rect x="40" y="130" height="50" width="149" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="60" y="160" style="">HAVE_MASTER</text>
<rect x="30" y="230" height="50" width="168" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="50" y="260" style="">HAVE_METADATA</text></svg>


@@ -0,0 +1,125 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
width="744.09448819"
height="1052.3622047"
id="svg2"
version="1.1"
inkscape:version="0.48.2 r9819"
sodipodi:docname="New document 1">
<defs
id="defs4" />
<sodipodi:namedview
id="base"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
inkscape:zoom="0.74898074"
inkscape:cx="405.31989"
inkscape:cy="721.1724"
inkscape:document-units="px"
inkscape:current-layer="layer1"
showgrid="false"
inkscape:window-width="1165"
inkscape:window-height="652"
inkscape:window-x="0"
inkscape:window-y="0"
inkscape:window-maximized="0" />
<metadata
id="metadata7">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<dc:title></dc:title>
</cc:Work>
</rdf:RDF>
</metadata>
<g
inkscape:label="Layer 1"
inkscape:groupmode="layer"
id="layer1">
<g
id="g3832">
<g
transform="translate(-80,0)"
id="g3796">
<rect
style="fill:none;stroke:#000000;stroke-width:4.99253178;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
id="rect3756"
width="195.00757"
height="75.007133"
x="57.496265"
y="302.08554" />
<text
xml:space="preserve"
style="font-size:39.94025421px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
x="80.563461"
y="353.93951"
id="text3758"
sodipodi:linespacing="125%"
transform="scale(0.99841144,1.0015911)"><tspan
sodipodi:role="line"
id="tspan3760"
x="80.563461"
y="353.93951">Header</tspan></text>
</g>
<g
transform="translate(-80,0)"
id="g3801">
<text
xml:space="preserve"
style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
x="278.44489"
y="354.50266"
id="text3762"
sodipodi:linespacing="125%"><tspan
sodipodi:role="line"
id="tspan3764"
x="278.44489"
y="354.50266">Raw Bitstream Payload (RBSP)</tspan></text>
<rect
style="fill:none;stroke:#000000;stroke-width:5;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
id="rect3768"
width="660.63977"
height="75"
x="252.5"
y="302.09293" />
</g>
</g>
<text
xml:space="preserve"
style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
x="10.078175"
y="432.12851"
id="text3806"
sodipodi:linespacing="125%"><tspan
sodipodi:role="line"
id="tspan3808"
x="10.078175"
y="432.12851">1 byte</tspan></text>
<text
xml:space="preserve"
style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
x="-31.193787"
y="252.32137"
id="text3810"
sodipodi:linespacing="125%"><tspan
sodipodi:role="line"
id="tspan3812"
x="-31.193787"
y="252.32137">H264 Network Abstraction Layer (NAL) Unit</tspan></text>
</g>
</svg>


Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.


@@ -0,0 +1,26 @@
<svg width="304" height="610" version="1.1" baseProfile="full" viewbox="0 0 304 610" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events" style="font-weight:bold; font-size:10pt; font-family:'Arial', Helvetica, sans-serif;;stroke-width:2;stroke-linejoin:round;stroke-linecap:round"><text x="172" y="111" style="font-weight:normal;">load()</text>
<path d="M152 81 L152 105 L152 129 L152 129 " style="stroke:#33322E;fill:none;stroke-dasharray:none;"></path>
<path d="M148.8 121 L152 125 L155.2 121 L152 129 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<text x="172" y="211" style="font-weight:normal;">media()</text>
<path d="M152 181 L152 205 L152 229 L152 229 " style="stroke:#33322E;fill:none;stroke-dasharray:none;"></path>
<path d="M148.8 221 L152 225 L155.2 221 L152 229 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<text x="172" y="311" style="font-weight:normal;">media()/ start()</text>
<path d="M152 281 L152 305 L152 329 L152 329 " style="stroke:#33322E;fill:none;stroke-dasharray:none;"></path>
<path d="M148.8 321 L152 325 L155.2 321 L152 329 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<path d="M152 381 L152 405 L152 429 L152 429 " style="stroke:#33322E;fill:none;stroke-dasharray:4 4;"></path>
<path d="M148.8 421 L152 425 L155.2 421 L152 429 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<path d="M155.2 389 L152 385 L148.8 389 L152 381 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<path d="M152 481 L152 505 L152 529 L152 529 " style="stroke:#33322E;fill:none;stroke-dasharray:4 4;"></path>
<path d="M148.8 521 L152 525 L155.2 521 L152 529 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<path d="M155.2 489 L152 485 L148.8 489 L152 481 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<rect x="75" y="30" height="50" width="154" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="95" y="60" style="">HAVE_NOTHING</text>
<rect x="78" y="130" height="50" width="149" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="98" y="160" style="">HAVE_MASTER</text>
<rect x="56" y="230" height="50" width="192" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="76.3" y="260" style="">SWITCHING_MEDIA</text>
<rect x="68" y="330" height="50" width="168" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="88" y="360" style="">HAVE_METADATA</text>
<text x="67" y="460" style="font-weight:normal;font-style:italic;">mediaupdatetimeout</text>
<rect x="30" y="530" height="50" width="244" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="50" y="560" style="">HAVE_CURRENT_METADATA</text></svg>


Binary file not shown.


Binary file not shown.

@@ -0,0 +1,119 @@
@startuml
header DashPlaylistLoader sequences
title DashPlaylistLoader sequences: Master Manifest with Alternate Audio
Participant "MasterPlaylistController" as MPC #red
Participant "MasterDashPlaylistLoader" as MPL #blue
Participant "mainSegmentLoader" as SL #blue
Participant "AudioDashPlaylistLoader" as APL #green
Participant "audioSegmentLoader" as ASL #green
Participant "external server" as ext #brown
Participant "mpdParser" as parser #orange
Participant "mediaGroups" as MG #purple
Participant Tech #lightblue
== Initialization ==
MPC -> MPL : construct MasterPlaylistLoader
MPC -> MPL: load()
== Requesting Master Manifest ==
MPL -> MPL : start()
MPL -> ext: xhr request for master manifest
ext -> MPL : response with master manifest
MPL -> parser: parse manifest
parser -> MPL: object representing manifest
note over MPL #lightblue: trigger 'loadedplaylist'
== Requesting Video Manifest ==
note over MPL #lightblue: handling loadedplaylist
MPL -> MPL: media(x)
alt if no sidx
note over MPL #lightgray: zero delay to fake network request
else if sidx
break
MPL -> ext: request sidx
end
end
note over MPL #lightblue: trigger 'loadedmetadata' on master loader [T1]
note over MPL #lightblue: handling 'loadedmetadata'
opt vod and preload !== 'none'
MPL -> SL: playlist()
MPL -> SL: load()
end
== Initializing Media Groups, Choosing Active Tracks ==
MPL -> MG: setupMediaGroups()
MG -> MG: initialize()
== Initializing Alternate Audio Loader ==
MG -> APL: create child playlist loader for alt audio
MG -> MG: activeGroup and audio variant selected
MG -> MG: enable activeTrack, onTrackChanged()
MG -> ASL: reset audio segment loader
== Requesting Alternate Audio Manifest ==
MG -> MG: startLoaders()
MG -> APL: load()
APL -> APL: start()
APL -> APL: zero delay to fake network request
break finish pending tasks
MG -> Tech: add audioTrack
MPL -> MPC: setupSourceBuffers_()
MPL -> MPC: setupFirstPlay()
loop mainSegmentLoader.monitorBufferTick_()
SL -> ext: requests media segments
ext -> SL: response with media segment bytes
end
end
APL -> APL: zero delay over
APL -> APL: media(x)
alt if no sidx
note over APL #lightgray: zero delay to fake network request
else if sidx
break
MPL -> ext: request sidx
end
end
== Requesting Alternate Audio Segments ==
note over APL #lightblue: trigger 'loadedplaylist'
note over APL #lightblue: handling 'loadedplaylist'
APL -> ASL: playlist()
note over ASL #lightblue: trigger 'loadedmetadata' [T2]
note over APL #lightblue: handling 'loadedmetadata'
APL -> ASL: playlist()
APL -> ASL: load()
loop audioSegmentLoader.monitorBufferTick_()
ASL -> ext: requests media segments
ext -> ASL: response with media segment bytes
end
@enduml

@@ -0,0 +1,21 @@
#title: DASH Playlist Loader States
#arrowSize: 0.5
#bendSize: 1
#direction: down
#gutter: 10
#edgeMargin: 1
#edges: rounded
#fillArrows: false
#font: Arial
#fontSize: 10
#leading: 1
#lineWidth: 2
#padding: 20
#spacing: 50
#stroke: #33322E
#zoom: 1
#.label: align=center visual=none italic
[HAVE_NOTHING] load()-> [HAVE_MASTER]
[HAVE_MASTER] media()-> [HAVE_METADATA]

Binary file not shown.

Binary file not shown.

Binary file not shown.

@@ -0,0 +1,246 @@
@startuml
header PlaylistLoader sequences
title PlaylistLoader sequences: Master Manifest and Alternate Audio
Participant "MasterPlaylistController" as MPC #red
Participant "MasterPlaylistLoader" as MPL #blue
Participant "mainSegmentLoader" as SL #blue
Participant "AudioPlaylistLoader" as APL #green
Participant "audioSegmentLoader" as ASL #green
Participant "external server" as ext #brown
Participant "m3u8Parser" as parser #orange
Participant "mediaGroups" as MG #purple
Participant Tech #lightblue
== Initialization ==
group MasterPlaylistController.constructor()
MPC -> MPL : setting up MasterPlaylistLoader
note left #lightyellow
sets up mediaupdatetimeout
handler for live playlist staleness
end note
note over MPL #lightgray: state = 'HAVE_NOTHING'
MPC -> MPL: load()
end
group MasterPlaylistLoader.load()
MPL -> MPL : start()
note left #lightyellow: not started yet
== Requesting Master Manifest ==
group start()
note over MPL #lightgray: started = true
MPL -> ext: xhr request for master manifest
ext -> MPL : response with master manifest
MPL -> parser: parse master manifest
parser -> MPL: object representing manifest
MPL -> MPL: set loader's master playlist
note over MPL #lightgray: state = 'HAVE_MASTER'
note over MPL #lightblue: trigger 'loadedplaylist' on master loader
== Requesting Video Manifest ==
group 'loadedplaylist' handler
note over MPL #lightblue: handling loadedplaylist
MPL -> MPL : media()
note left #lightgray: select initial (video) playlist
note over MPL #lightyellow: state = 'SWITCHING_MEDIA'
group media()
MPL -> ext : request child manifest
ext -> MPL: child manifest returned
MPL -> MPL: haveMetadata()
note over MPL #lightyellow: state = 'HAVE_METADATA'
group haveMetadata()
MPL -> parser: parse child manifest
parser -> MPL: object representing the child manifest
note over MPL #lightyellow
update master and media playlists
end note
opt live
MPL -> MPL: setup mediaupdatetimeout
end
note over MPL #lightblue
trigger 'loadedplaylist' on master loader.
This does not end up requesting segments
at this point.
end note
group MasterPlaylistLoader 'loadedplaylist' handler
MPL -> MPL : setup durationchange handler
end
end
== Requesting Video Segments ==
note over MPL #lightblue: trigger 'loadedmetadata'
group 'loadedmetadata' handler
note over MPL #lightblue: handling 'loadedmetadata'
opt vod and preload !== 'none'
MPL -> SL: playlist()
note over SL #lightyellow: updates playlist
MPL -> SL: load()
note right #lightgray
This does nothing as mimeTypes
have not been set yet.
end note
end
MPL -> MG: setupMediaGroups()
== Initializing Media Groups, Choosing Active Tracks ==
group MediaGroups.setupMediaGroups()
group initialize()
MG -> APL: create child playlist loader for alt audio
note over APL #lightyellow: state = 'HAVE_NOTHING'
note left #lightgray
setup 'loadedmetadata' and 'loadedplaylist' listeners
on child alt audio playlist loader
end note
MG -> Tech: add audioTracks
end
MG -> MG: activeGroup and audio variant selected
MG -> MG: enable activeTrack, onTrackChanged()
note left #lightgray
There is no activePlaylistLoader at this point,
but there is an audio playlistLoader
end note
group onTrackChanged()
MG -> SL: reset mainSegmentLoader
note left #lightgray: Clears buffer, aborts all inflight requests
== Requesting Alternate Audio Manifest ==
MG -> MG: startLoaders()
group startLoaders()
note over MG #lightyellow
activePlaylistLoader = AudioPlaylistLoader
end note
MG -> APL: load()
end
group AudioPlaylistLoader.load()
APL -> APL: start()
group alt start()
note over APL #lightyellow: started = true
APL -> ext: request alt audio media manifest
break MasterPlaylistLoader 'loadedmetadata' handler
MPL -> MPC: setupSourceBuffers()
note left #lightgray
This will set mimeType.
Segments can be loaded from now on.
end note
MPL -> MPC: setupFirstPlay()
note left #lightgray
Immediate exit since the player
is paused
end note
end
ext -> APL: responds with child manifest
APL -> parser: parse child manifest
parser -> APL: object representing child manifest returned
note over APL #lightyellow: state = 'HAVE_MASTER'
note left #lightgray: Infer a master playlist
APL -> APL: haveMetadata()
note over APL #lightyellow: state = 'HAVE_METADATA'
group haveMetadata()
APL -> parser: parsing the child manifest again
parser -> APL: returning object representing child manifest
note over APL #lightyellow
update master and media references
end note
== Requesting Alternate Audio Segments ==
note over APL #lightblue: trigger 'loadedplaylist'
group 'loadedplaylist' handler
note over APL #lightblue: handling 'loadedplaylist'
APL -> ASL: playlist()
note over ASL #lightyellow: set playlist
end
end
note over APL #lightblue: trigger 'loadedmetadata'
group 'loadedmetadata' handler
note over APL #lightblue: handling 'loadedmetadata'
APL -> ASL: playlist()
APL -> ASL: load()
loop audioSegmentLoader.load()
ASL -> ext: requests media segments
ext -> ASL: response with media segment bytes
end
end
end
end
end
end
end
end
end
end
@enduml

@@ -0,0 +1,114 @@
@startuml
header PlaylistLoader sequences
title PlaylistLoader sequences: Master Manifest and Alternate Audio
Participant "MasterPlaylistController" as MPC #red
Participant "MasterPlaylistLoader" as MPL #blue
Participant "mainSegmentLoader" as SL #blue
Participant "AudioPlaylistLoader" as APL #green
Participant "audioSegmentLoader" as ASL #green
Participant "external server" as ext #brown
Participant "m3u8Parser" as parser #orange
Participant "mediaGroups" as MG #purple
Participant Tech #lightblue
== Initialization ==
MPC -> MPL : construct MasterPlaylistLoader
MPC -> MPL: load()
MPL -> MPL : start()
== Requesting Master Manifest ==
MPL -> ext: xhr request for master manifest
ext -> MPL : response with master manifest
MPL -> parser: parse master manifest
parser -> MPL: object representing manifest
note over MPL #lightblue: trigger 'loadedplaylist'
== Requesting Video Manifest ==
note over MPL #lightblue: handling loadedplaylist
MPL -> MPL : media()
MPL -> ext : request child manifest
ext -> MPL: child manifest returned
MPL -> parser: parse child manifest
parser -> MPL: object representing the child manifest
note over MPL #lightblue: trigger 'loadedplaylist'
note over MPL #lightblue: handling 'loadedplaylist'
MPL -> SL: playlist()
MPL -> SL: load()
== Requesting Video Segments ==
note over MPL #lightblue: trigger 'loadedmetadata'
note over MPL #lightblue: handling 'loadedmetadata'
opt vod and preload !== 'none'
MPL -> SL: playlist()
MPL -> SL: load()
end
MPL -> MG: setupMediaGroups()
== Initializing Media Groups, Choosing Active Tracks ==
MG -> APL: create child playlist loader for alt audio
MG -> MG: activeGroup and audio variant selected
MG -> MG: enable activeTrack, onTrackChanged()
MG -> SL: reset mainSegmentLoader
== Requesting Alternate Audio Manifest ==
MG -> MG: startLoaders()
MG -> APL: load()
APL -> APL: start()
APL -> ext: request alt audio media manifest
break finish pending tasks
MG -> Tech: add audioTracks
MPL -> MPC: setupSourceBuffers()
MPL -> MPC: setupFirstPlay()
loop on monitorBufferTick
SL -> ext: requests media segments
ext -> SL: response with media segment bytes
end
end
ext -> APL: responds with child manifest
APL -> parser: parse child manifest
parser -> APL: object representing child manifest returned
== Requesting Alternate Audio Segments ==
note over APL #lightblue: trigger 'loadedplaylist'
note over APL #lightblue: handling 'loadedplaylist'
APL -> ASL: playlist()
note over APL #lightblue: trigger 'loadedmetadata'
note over APL #lightblue: handling 'loadedmetadata'
APL -> ASL: playlist()
APL -> ASL: load()
loop audioSegmentLoader.load()
ASL -> ext: requests media segments
ext -> ASL: response with media segment bytes
end
@enduml

@@ -0,0 +1,25 @@
#title: Playlist Loader States
#arrowSize: 0.5
#bendSize: 1
#direction: down
#gutter: 10
#edgeMargin: 1
#edges: rounded
#fillArrows: false
#font: Arial
#fontSize: 10
#leading: 1
#lineWidth: 2
#padding: 20
#spacing: 50
#stroke: #33322E
#zoom: 1
#.label: align=center visual=none italic
[HAVE_NOTHING] load()-> [HAVE_MASTER]
[HAVE_MASTER] media()-> [SWITCHING_MEDIA]
[SWITCHING_MEDIA] media()/ start()-> [HAVE_METADATA]
[HAVE_METADATA] <--> [<label> mediaupdatetimeout]
[<label> mediaupdatetimeout] <--> [HAVE_CURRENT_METADATA]

@@ -0,0 +1,13 @@
@startuml
state "Download Segment" as DL
state "Prepare for Append" as PfA
[*] -> DL
DL -> PfA
PfA : transmux (if needed)
PfA -> Append
Append : MSE source buffer
Append -> [*]
@enduml

Binary file not shown.


@@ -0,0 +1,57 @@
@startuml
participant SegmentLoader order 1
participant "media-segment-request" order 2
participant "videojs-contrib-media-sources" order 3
participant mux.js order 4
participant "Native Source Buffer" order 5
SegmentLoader -> "media-segment-request" : mediaSegmentRequest(...)
group Request
"media-segment-request" -> SegmentLoader : doneFn(...)
note left
At end of all requests
(key/segment/init segment)
end note
SegmentLoader -> SegmentLoader : handleSegment(...)
note left
"Probe" (parse) segment for
timing and track information
end note
SegmentLoader -> "videojs-contrib-media-sources" : append to "fake" source buffer
note left
Source buffer here is a
wrapper around native buffers
end note
group Transmux
"videojs-contrib-media-sources" -> mux.js : postMessage(...setAudioAppendStart...)
note left
Used for checking for overlap when
prefixing audio with silence.
end note
"videojs-contrib-media-sources" -> mux.js : postMessage(...alignGopsWith...)
note left
Used for aligning gops when overlapping
content (switching renditions) to fix
some browser glitching.
end note
"videojs-contrib-media-sources" -> mux.js : postMessage(...push...)
note left
Pushes bytes into the transmuxer pipeline.
end note
"videojs-contrib-media-sources" -> mux.js : postMessage(...flush...)
"mux.js" -> "videojs-contrib-media-sources" : postMessage(...data...)
"videojs-contrib-media-sources" -> "Native Source Buffer" : append
"Native Source Buffer" -> "videojs-contrib-media-sources" : //updateend//
"videojs-contrib-media-sources" -> SegmentLoader : handleUpdateEnd(...)
end
end
SegmentLoader -> SegmentLoader : handleUpdateEnd_()
note left
Saves segment timing info
and starts next request.
end note
@enduml

Binary file not shown.


@@ -0,0 +1,29 @@
@startuml
state "Request Segment" as RS
state "Partial Response (1)" as PR1
state "..." as DDD
state "Partial Response (n)" as PRN
state "Prepare for Append (1)" as PfA1
state "Prepare for Append (n)" as PfAN
state "Append (1)" as A1
state "Append (n)" as AN
[*] -> RS
RS --> PR1
PR1 --> DDD
DDD --> PRN
PR1 -> PfA1
PfA1 : transmux (if needed)
PfA1 -> A1
A1 : MSE source buffer
PRN -> PfAN
PfAN : transmux (if needed)
PfAN -> AN
AN : MSE source buffer
AN --> [*]
@enduml

Binary file not shown.


node_modules/@videojs/http-streaming/docs/lhls/index.md generated vendored Normal file
@@ -0,0 +1,109 @@
# LHLS
### Table of Contents
* [Background](#background)
* [Current Support for LHLS in VHS](#current-support-for-lhls-in-vhs)
* [Request Segment Pieces](#request-segment-pieces)
* [Transmux and Append Segment Pieces](#transmux-and-append-segment-pieces)
* [videojs-contrib-media-sources background](#videojs-contrib-media-sources-background)
* [Transmux Before Append](#transmux-before-append)
* [Transmux Within media-segment-request](#transmux-within-media-segment-request)
* [mux.js](#muxjs)
* [The New Flow](#the-new-flow)
* [Resources](#resources)
### Background
LHLS stands for Low-Latency HLS (see [Periscope's post](https://medium.com/@periscopecode/introducing-lhls-media-streaming-eb6212948bef)). It's meant to be used for ultra-low-latency live streaming, where a server can send pieces of a segment before the segment is done being written, and the player can append those pieces to its buffer, allowing sub-segment-duration latency from true live.
In order to support LHLS, a few components are required:
* A server that supports [chunked transfer encoding](https://en.wikipedia.org/wiki/Chunked_transfer_encoding).
* A client that can:
* request segment pieces
* transmux segment pieces (for browsers that don't natively support the media type)
* append segment pieces
### Current Support for LHLS in VHS
At the moment, VHS doesn't support any of the client requirements. It waits until a request has completed, and the transmuxer expects full segments.
Current flow:
![current flow](./current-flow.plantuml.png)
Expected flow:
![expected flow](./expected-flow.plantuml.png)
### Request Segment Pieces
The first change was to request pieces of a segment. There are a few approaches to accomplish this:
* [Range Requests](https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests)
* requires server support
* more round trips
* [Fetch](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API)
* limited browser support
* doesn't support aborts
* [Plain text MIME type](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Sending_and_Receiving_Binary_Data)
* slightly non-standard
* incurs a cost of converting from string to bytes
*Plain text MIME type* was chosen because of its wide support. It provides a mechanism to access progressive bytes downloaded on [XMLHttpRequest progress events](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequestEventTarget/onprogress).
This change was made in [media-segment-request](https://github.com/videojs/http-streaming/blob/master/src/media-segment-request.js).
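The technique looks roughly like this (a sketch; the real implementation lives in media-segment-request, and `handleProgressBytes` is a hypothetical hook):
```js
const xhr = new XMLHttpRequest();

xhr.open('GET', 'https://example.com/segment.ts');
// expose the raw bytes through responseText without charset mangling
xhr.overrideMimeType('text/plain; charset=x-user-defined');

xhr.addEventListener('progress', () => {
  // responseText contains every byte received so far
  const text = xhr.responseText;
  const bytes = new Uint8Array(text.length);

  for (let i = 0; i < text.length; i++) {
    bytes[i] = text.charCodeAt(i) & 0xff; // the low byte is the raw octet
  }

  // handleProgressBytes(bytes); // hand the new bytes to the transmuxer
});

xhr.send();
```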
### Transmux and Append Segment Pieces
Getting the progress bytes is easy. Supporting partial transmuxing and appending is harder.
Current flow:
![current transmux and append flow](./current-transmux-and-append-flow.plantuml.png)
In order to support partial transmuxing and appending in the current flow, videojs-contrib-media-sources would have to get more complicated.
##### videojs-contrib-media-sources background
Browsers, via MSE source buffers, only support a limited set of media types. For most browsers, this means MP4/fragmented MP4. HLS uses TS segments (it also supports fragmented MP4, but that case is less common). This is why transmuxing is necessary.
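You can see this directly with MSE's type probing (the codec strings are examples):
```js
// fMP4 is broadly supported by MSE; raw TS is not, hence the transmux step.
MediaSource.isTypeSupported('video/mp4; codecs="avc1.4d401e, mp4a.40.2"'); // true in most browsers
MediaSource.isTypeSupported('video/mp2t; codecs="avc1.4d401e, mp4a.40.2"'); // false in most browsers
```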
Just like Video.js is a wrapper around the browser video element, bridging compatibility and adding support to extend features, videojs-contrib-media-sources provides support for more media types across different browsers by building in a transmuxer.
Not only did videojs-contrib-media-sources allow us to transmux TS to FMP4, but it also allowed us to transmux TS to FLV for Flash support.
Over time, the complexity of logic grew in videojs-contrib-media-sources, and it coupled tightly with videojs-contrib-hls and videojs-http-streaming, firing events to communicate between the two.
Once Flash support was moved to a distinct Flash module, [via flashls](https://github.com/brightcove/videojs-flashls-source-handler), it was decided to move the videojs-contrib-media-sources logic into VHS, and to remove coupled logic by using only the native source buffers (instead of the wrapper) and transmuxing somewhere within VHS before appending.
##### Transmux Before Append
As the LHLS work started, and videojs-contrib-media-sources needed more logic, the native media source [abstraction leaked](https://en.wikipedia.org/wiki/Leaky_abstraction), adding non-standard functions to work around limitations. In addition, the logic in videojs-contrib-media-sources required more conditional paths, leading to more confusing code.
It was decided that it would be easier to do the transmux before append work in the process of adding support for LHLS. This was widely considered a *good decision*, and provided a means of reducing tech debt while adding in a new feature.
##### Transmux Within media-segment-request
Work started by moving transmuxing into segment-loader; however, we quickly realized that media-segment-request provided a better home.
media-segment-request already handled decrypting segments. If it handled transmuxing as well, then segment-loader could stick with only deciding which segment to request, getting bytes as FMP4, and appending them.
The transmuxing logic moved to a new module called segment-transmuxer, which wrapped around the [WebWorker](https://developer.mozilla.org/en-US/docs/Web/API/Worker/Worker) that wrapped around mux.js (the transmuxer itself).
##### mux.js
While most of the [mux.js pipeline](https://github.com/videojs/mux.js/blob/master/docs/diagram.png) supports pushing pieces of data (and should support LHLS by default), its "flushes," which send transmuxed data back to the caller, expected full segments.
Much of the pipeline was reused; however, the top-level audio and video segment streams, as well as the entry point, were rewritten so that instead of providing a full segment on each flush, each frame of video is provided individually (audio frames still flush as a group). The new concept of partial flushes was added to the pipeline to handle this case.
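For reference, the full-segment flow uses mux.js's documented transmuxer API roughly like this; the partial-flush pipeline keeps the same `push()` entry point but emits data as frames become available:
```js
// Full-segment transmux sketch; `tsSegmentBytes` is a placeholder Uint8Array.
const transmuxer = new muxjs.mp4.Transmuxer();

transmuxer.on('data', (segment) => {
  // segment.initSegment and segment.data are ready for an MSE source buffer
});

transmuxer.push(tsSegmentBytes); // push TS bytes as they become available
transmuxer.flush();              // collate complete frames and emit fmp4 data
```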
##### The New Flow
One benefit to transmuxing before appending is the possibility of extracting track and timing information from the segments. Previously, this required a separate parsing step to happen on the full segment. Now, it is included in the transmuxing pipeline, and comes back to us on separate callbacks.
![new segment loader sequence](./new-segment-loader-sequence.plantuml.png)
### Resources
* https://medium.com/@periscopecode/introducing-lhls-media-streaming-eb6212948bef
* https://github.com/jordicenzano/webserver-chunked-growingfiles

@@ -0,0 +1,118 @@
@startuml
participant SegmentLoader order 1
participant "media-segment-request" order 2
participant XMLHttpRequest order 3
participant "segment-transmuxer" order 4
participant mux.js order 5
SegmentLoader -> "media-segment-request" : mediaSegmentRequest(...)
"media-segment-request" -> XMLHttpRequest : request for segment/key/init segment
group Request
XMLHttpRequest -> "media-segment-request" : //segment progress//
note over "media-segment-request" #moccasin
If handling partial data,
tries to transmux new
segment bytes.
end note
"media-segment-request" -> SegmentLoader : progressFn(...)
note left
Forwards "progress" events from
the XML HTTP Request.
end note
group Transmux
"media-segment-request" -> "segment-transmuxer" : transmux(...)
"segment-transmuxer" -> mux.js : postMessage(...setAudioAppendStart...)
note left
Used for checking for overlap when
prefixing audio with silence.
end note
"segment-transmuxer" -> mux.js : postMessage(...alignGopsWith...)
note left
Used for aligning gops when overlapping
content (switching renditions) to fix
some browser glitching.
end note
"segment-transmuxer" -> mux.js : postMessage(...push...)
note left
Pushes bytes into the transmuxer pipeline.
end note
"segment-transmuxer" -> mux.js : postMessage(...partialFlush...)
note left #moccasin
Collates any complete frame data
from partial segment and
caches remainder.
end note
"segment-transmuxer" -> mux.js : postMessage(...flush...)
note left
Collates any complete frame data
from segment, caches only data
required between segments.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...trackinfo...)
"segment-transmuxer" -> "media-segment-request" : onTrackInfo(...)
"media-segment-request" -> SegmentLoader : trackInfoFn(...)
note left
Gets whether the segment
has audio and/or video.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...audioTimingInfo...)
"segment-transmuxer" -> "media-segment-request" : onAudioTimingInfo(...)
"mux.js" -> "segment-transmuxer" : postMessage(...videoTimingInfo...)
"segment-transmuxer" -> "media-segment-request" : onVideoTimingInfo(...)
"media-segment-request" -> SegmentLoader : timingInfoFn(...)
note left
Gets the audio/video
start/end times.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...caption...)
"segment-transmuxer" -> "media-segment-request" : onCaptions(...)
"media-segment-request" -> SegmentLoader : captionsFn(...)
note left
Gets captions from transmux.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...id3Frame...)
"segment-transmuxer" -> "media-segment-request" : onId3(...)
"media-segment-request" -> SegmentLoader : id3Fn(...)
note left
Gets metadata from transmux.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...data...)
"segment-transmuxer" -> "media-segment-request" : onData(...)
"media-segment-request" -> SegmentLoader : dataFn(...)
note left
Gets an fmp4 segment
ready to be appended.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...done...)
note left
Gathers GOP info, and calls
done callback.
end note
"segment-transmuxer" -> "media-segment-request" : onDone(...)
"media-segment-request" -> SegmentLoader : doneFn(...)
note left
Queues callbacks on source
buffer queue to wait for
appends to complete.
end note
end
XMLHttpRequest -> "media-segment-request" : //segment request finished//
end
SegmentLoader -> SegmentLoader : handleAppendsDone_()
note left
Saves segment timing info
and starts next request.
end note
@enduml
```
# Transmux Before Append Changes
## Overview
In moving our transmuxing stage from after append (to a virtual source buffer from videojs-contrib-media-sources) to before append (to a native source buffer), some changes were required, while others simply made the logic simpler. What follows are details on some of the changes made, why they were made, and what impact they will have.
### Source Buffer Creation
In a pre-TBA (transmux before append) world, videojs-contrib-media-sources' source buffers provided an abstraction around the native source buffers. They also required a bit more information than the native buffers: for instance, they used full mime types, instead of relying on the codec information alone, when creating the source buffers. This provided the container types, which let the virtual source buffer know whether the media needed to be transmuxed. In a post-TBA world, the container type is no longer required, so only the codec strings are passed along.
In terms of when the source buffers are created: in the post-TBA world, creation is delayed until we are sure we have all of the information we need. This means that we don't create the native source buffers until the PMT is parsed from the main media. Even if the content is demuxed, we only need to parse the main media, since, for now, we don't rely on codec information from the segment itself, and instead use the manifest-provided codec info or default codecs. While we could create the source buffers earlier if the codec information is provided in the manifest, delaying provides a simpler, single code path, and more opportunity for us to be flexible with how much codec info is provided by the manifest's CODECS attribute. While the HLS specification requires this information, other formats may not, and we have seen content that plays fine but does not adhere to the strict rules of providing all necessary codec information.
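As a sketch of what that creation looks like against the bare Media Source Extensions API (the codec strings here are placeholders; the real values come from the manifest or defaults):

```js
const mediaSource = new MediaSource();

mediaSource.addEventListener('sourceopen', () => {
  // the appended data is always fmp4 after transmuxing, so only the codec
  // strings vary; no source container type needs to be communicated
  const videoBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.4d401f"');
  const audioBuffer = mediaSource.addSourceBuffer('audio/mp4; codecs="mp4a.40.2"');
});
```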
### Appending Init Segments
Previously, init segments were handled by videojs-contrib-media-sources for TS segments and segment-loader for FMP4 segments.
videojs-contrib-media-sources and TS:

* video segments
  * append the video init segment returned from the transmuxer with every segment
* audio segments
  * append the audio init segment returned from the transmuxer only in the following cases:
    * first append
    * after timestampOffset is set
    * audio track events: change/addtrack/removetrack
    * 'mediachange' event

segment-loader and FMP4:

* if segment.map is set:
  * save (cache) the init segment after the request finished
  * append the init segment directly to the source buffer if the segment loader's activeInitSegmentId doesn't match the segment.map generated init segment ID
With the transmux before append and LHLS changes, we only append video init segments on changes as well. This is more important with LHLS, as prepending an init segment before every frame of video would be wasteful.
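A condensed sketch of the fmp4 behavior above; `initSegmentId` is a hypothetical stand-in for however the loader derives an ID from `segment.map`:

```js
// hypothetical ID derivation from the map (init segment) information
const initSegmentId = (map) => `${map.uri}_${map.byterange ? map.byterange.offset : 0}`;

const appendInitSegmentIfNeeded = (segment, sourceBuffer, loaderState) => {
  if (!segment.map) {
    return;
  }

  const id = initSegmentId(segment.map);

  // only append when the init segment differs from the active one
  if (loaderState.activeInitSegmentId !== id) {
    // segment.map.bytes were cached when the request finished
    sourceBuffer.appendBuffer(segment.map.bytes);
    loaderState.activeInitSegmentId = id;
  }
};
```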
### Test Changes
Some tests were removed because they were no longer relevant after the change to creating source buffers later. For instance, `waits for both main and audio loaders to finish before calling endOfStream if main loader starting media is unknown` can no longer be tested by waiting for an audio loader response and checking for end of stream, as the test will time out: MasterPlaylistController now waits for track info from the main loader before the source buffers are created. That condition is checked elsewhere.
This doc is just a stub right now. Check back later for updates.
# General
When we talk about video, we normally think about it as one monolithic thing. If you ponder it for a moment though, you'll realize it's actually two distinct sorts of information that are presented to the viewer in tandem: a series of pictures and a sequence of audio samples. The temporal nature of audio and video is shared but the techniques used to efficiently transmit them are very different and necessitate a lot of the complexity in video file formats. Bundling up these (at least) two streams into a single package is the first of many issues introduced by the need to serialize video data and is solved by meta-formats called _containers_.
Container formats are probably the most recognizable of the video components because they get the honor of determining the file extension. You've probably heard of MP4, MOV, and WMV, all of which are container formats. Containers specify how to serialize audio, video, and metadata streams into a sequential series of bits and how to unpack them for decoding. A container is basically a box that can hold video information and timed media data:
![Containers](images/containers.png)
- codecs
- containers, multiplexing
# MPEG2-TS
![MPEG2-TS Structure](images/mp2t-structure.png)
![MPEG2-TS Packet Types](images/mp2t-packet-types.png)
- streaming vs storage
- program table
- program map table
- history, context
# H.264
- NAL units
- Annex B vs MP4 elementary stream
- access unit -> sample
# MP4
- origins: quicktime
# Media Source Extensions Notes
A collection of findings experimenting with Media Source Extensions on
Chrome 36.
* Specifying an audio and video codec when creating a source buffer
but passing in an initialization segment with only a video track
results in a decode error
## ISO Base Media File Format (BMFF)
### Init Segment
A working initialization segment is outlined below. It may be possible
to trim this structure down further.
- `ftyp`
- `moov`
  - `mvhd`
  - `trak`
    - `tkhd`
    - `mdia`
      - `mdhd`
      - `hdlr`
      - `minf`
  - `mvex`
### Media Segment
The structure of a minimal media segment that actually encapsulates
movie data is outlined below:
- `moof`
  - `mfhd`
  - `traf`
    - `tfhd`
    - `tfdt`
    - `trun` containing samples
- `mdat`
### Structure
```
sample: time {number}, data {array}
chunk: samples {array}
track: samples {array}
segment: moov {box}, mdats {array} | moof {box}, mdats {array}, data {array}

track
  chunk
    sample

movie fragment -> track fragment -> [samples]
```
### Sample Data Offsets
Movie-fragment Relative Addressing: all trun data offsets are relative
to the containing moof (?).
Without default-base-is-moof, the base data offset for each trun in
trafs after the first is the *end* of the previous traf.
#### iso5/DASH Style
```
moof
 |- traf (default-base-is-moof)
 | |- trun_0 <size of moof> + 0
 | `- trun_1 <size of moof> + 100
 `- traf (default-base-is-moof)
    `- trun_2 <size of moof> + 300
mdat
 |- samples_for_trun_0 (100 bytes)
 |- samples_for_trun_1 (200 bytes)
 `- samples_for_trun_2
```
#### Single Track Style
```
moof
 `- traf
    `- trun_0 <size of moof> + 0
mdat
 `- samples_for_trun_0
```
# Multiple Alternative Audio Tracks
## General
m3u8 manifests with multiple audio streams will have those streams added to `video.js` in an `AudioTrackList`. The `AudioTrackList` can be accessed using `player.audioTracks()` or `tech.audioTracks()`.
## Mapping m3u8 metadata to AudioTracks
The mapping between `AudioTrack` and the parsed m3u8 file is fairly straightforward. The table below shows the mapping:
| m3u8 | AudioTrack |
|---------|------------|
| label | label |
| lang | language |
| default | enabled |
| ??? | kind |
| ??? | id |
As you can see, m3u8s do not have a property for `AudioTrack.id`, which means that we let `video.js` randomly generate the `id` for each `AudioTrack`. This has no real impact on any part of the system as we do not use the `id` anywhere.
The other property that does not have a mapping in the m3u8 is `AudioTrack.kind`. It was decided that we would set `kind` to `main` when `default` is `true`, and to `alternative` otherwise, unless the track has `characteristics` which include `public.accessibility.describes-video`, in which case we set it to `main-desc`. (Note that the `main-desc` kind indicates that the track is a mix of the main track and description, so it can be played *instead* of the main track; a track with kind `description` *only* has the description, not the main track.)
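That rule is small enough to sketch directly (property names mirror the parsed m3u8 fields used in this doc):

```js
const audioTrackKind = (properties) => {
  let kind = properties.default ? 'main' : 'alternative';

  if (properties.characteristics &&
      properties.characteristics.indexOf('public.accessibility.describes-video') >= 0) {
    kind = 'main-desc';
  }

  return kind;
};
```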
Below is a basic example of a mapping
m3u8 layout
``` JavaScript
{
  'media-group-1': [{
    'audio-track-1': {
      default: true,
      lang: 'eng'
    },
    'audio-track-2': {
      default: false,
      lang: 'fr'
    },
    'audio-track-3': {
      default: false,
      lang: 'eng',
      characteristics: 'public.accessibility.describes-video'
    }
  }]
}
```
Corresponding AudioTrackList when media-group-1 is used (before any tracks have been changed)
``` JavaScript
[{
  label: 'audio-track-1',
  enabled: true,
  language: 'eng',
  kind: 'main',
  id: 'random'
}, {
  label: 'audio-track-2',
  enabled: false,
  language: 'fr',
  kind: 'alternative',
  id: 'random'
}, {
  label: 'audio-track-3',
  enabled: false,
  language: 'eng',
  kind: 'main-desc',
  id: 'random'
}]
```
## Startup (how tracks are added and used)
> AudioTrack & AudioTrackList live in video.js
1. `HLS` creates a `MasterPlaylistController` and watches for the `loadedmetadata` event
1. `HLS` parses the m3u8 using the `MasterPlaylistController`
1. `MasterPlaylistController` creates a `PlaylistLoader` for the master m3u8
1. `MasterPlaylistController` creates `PlaylistLoader`s for every audio playlist
1. `MasterPlaylistController` creates a `SegmentLoader` for the main m3u8
1. `MasterPlaylistController` creates a `SegmentLoader` for a potential audio playlist
1. `HLS` sees the `loadedmetadata` and finds the currently selected MediaGroup and all the metadata
1. `HLS` removes all `AudioTrack`s from the `AudioTrackList`
1. `HLS` creates `AudioTrack`s for the MediaGroup and adds them to the `AudioTrackList`
1. `HLS` calls `MasterPlaylistController`'s `useAudio` with no arguments (causing it to use the currently enabled audio)
1. `MasterPlaylistController` turns off the current audio `PlaylistLoader` if it is on
1. `MasterPlaylistController` maps the `label` to the `PlaylistLoader` containing the audio
1. `MasterPlaylistController` turns on that `PlaylistLoader` and the corresponding `SegmentLoader` (master or audio only)
1. `MediaSource`/`mux.js` determine how to mux
## How tracks are switched
> AudioTrack & AudioTrackList live in video.js
1. `HLS` is setup to watch for the `changed` event on the `AudioTrackList`
1. User selects a new `AudioTrack` from a menu (where only one track can be enabled)
1. `AudioTrackList` enables the new `AudioTrack` and disables all others
1. `AudioTrackList` triggers a `changed` event
1. `HLS` sees the `changed` event and finds the newly enabled `AudioTrack`
1. `HLS` sends the `label` for the new `AudioTrack` to `MasterPlaylistController`'s `useAudio` function
1. `MasterPlaylistController` turns off the current audio `PlaylistLoader` if it is on
1. `MasterPlaylistController` maps the `label` to the `PlaylistLoader` containing the audio
1. `MasterPlaylistController` turns on that `PlaylistLoader` and the corresponding `SegmentLoader` (master or audio only)
1. `MediaSource`/`mux.js` determine how to mux
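From the player's side, step 2 above is just enabling a different track on the list. For example:

```js
// enabling a track fires 'changed' on the AudioTrackList, which kicks off
// the loader switching described in the steps above
const tracks = player.audioTracks();

for (let i = 0; i < tracks.length; i++) {
  if (tracks[i].language === 'fr') {
    tracks[i].enabled = true;
  }
}
```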
# How to get player time from program time
NOTE: See the doc on [Program Time to Player Time](program-time-to-player-time.md) for definitions and an overview of the conversion process.
## Overview
To convert a program time to a player time, the following steps must be taken:
1. Find the right segment by sequentially searching through the playlist until the program time requested is >= the EXT-X-PROGRAM-DATE-TIME of the segment, and < the EXT-X-PROGRAM-DATE-TIME of the following segment (or the end of the playlist is reached).
2. Determine the segment's start and end player times.
To accomplish #2, the segment must be downloaded and transmuxed (right now only TS segments are handled, and TS is always transmuxed to FMP4). This yields the start and end times after transmuxer modifications. These are the times that the source buffer will receive and report for the segment's newly created MP4 fragment.
Since there isn't a simple code path for downloading a segment without appending, the easiest approach is to seek to the estimated start time of that segment using the playlist duration calculation function. Because this process is not always accurate (manifest timing values are almost never accurate), a few seeks may be required to accurately seek into that segment.
If all goes well, and the target segment is downloaded and transmuxed, the player time may be found by taking the difference between the requested program time and the EXT-X-PROGRAM-DATE-TIME of the segment, then adding that difference to `segment.videoTimingInfo.transmuxedPresentationStart`.
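Assuming the target segment has been downloaded and transmuxed, that last step transcribes to roughly the following (a sketch of the description above, not the exact VHS implementation):

```js
// programTime is a Date; the return value is a player time in seconds
const programTimeToPlayerTime = (programTime, segment) => {
  const offsetFromSegmentStart =
    (programTime.getTime() - segment.dateTimeObject.getTime()) / 1000;

  return segment.videoTimingInfo.transmuxedPresentationStart + offsetFromSegmentStart;
};
```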
# Playlist Loader
## Purpose
The [PlaylistLoader][pl] (PL) is responsible for requesting m3u8s, parsing them, and keeping track of the media "playlists" associated with the manifest. The PL is used with a [SegmentLoader][sl] to load ts or fmp4 fragments from an HLS source.
## Basic Responsibilities
1. To request an m3u8.
2. To parse an m3u8 into a format [videojs-http-streaming][vhs] can understand.
3. To allow selection of a specific media stream.
4. To refresh a live master m3u8 for changes.
## Design
### States
![PlaylistLoader States](images/playlist-loader-states.nomnoml.svg)
- `HAVE_NOTHING` the state before the m3u8 is received and parsed.
- `HAVE_MASTER` the state before a media manifest is parsed and setup but after the master manifest has been parsed and setup.
- `HAVE_METADATA` the state after a media stream is setup.
- `SWITCHING_MEDIA` the intermediary state we go through while changing to a newly selected media playlist.
- `HAVE_CURRENT_METADATA` a temporary state after requesting a refresh of the live manifest and before receiving the update.
### API
- `load()` this will either start or kick the loader during playback.
- `start()` this will start the [PL] and request the m3u8.
- `media()` this will return the currently active media stream or set a new active media stream.
### Events
- `loadedplaylist` signals the setup of a master playlist, representing the HLS source as a whole, from the m3u8; or a media playlist, representing a media stream.
- `loadedmetadata` signals initial setup of a media stream.
- `playlistunchanged` signals that no changes have been made to an m3u8.
- `mediaupdatetimeout` signals that a live m3u8 and media stream must be refreshed.
- `mediachanging` signals that the currently active media stream is going to be changed.
- `mediachange` signals that the new media stream has been updated.
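Putting the API and events together, usage might look like the sketch below; the constructor arguments are elided since they depend on internal setup, and `loader.master` is assumed to hold the parsed master playlist.

```js
const loader = new PlaylistLoader(/* src, vhs, options */);

loader.on('loadedplaylist', () => {
  // master playlist parsed; select a media playlist to set up
  loader.media(loader.master.playlists[0]);
});

loader.on('loadedmetadata', () => {
  // a media playlist is set up; segment loading can begin
});

loader.load();
```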
### Interaction with Other Modules
![PL with MPC and MG](images/playlist-loader-mpc-mg-sequence.plantuml.png)
[pl]: ../src/playlist-loader.js
[sl]: ../src/segment-loader.js
[vhs]: intro.md
# How to get program time from player time
## Definitions
NOTE: All times referenced in seconds unless otherwise specified.
*Player Time*: any time that can be gotten/set from player.currentTime() (e.g., any time within player.seekable().start(0) to player.seekable().end(0)).<br />
*Stream Time*: any time within one of the stream's segments. Used by video frames (e.g., dts, pts, base media decode time). While these times natively use clock values, throughout the document the times are referenced in seconds.<br />
*Program Time*: any time referencing the real world (e.g., EXT-X-PROGRAM-DATE-TIME).<br />
*Start of Segment*: the pts (presentation timestamp) value of the first frame in a segment.<br />
## Overview
In order to convert from a *player time* to a *stream time*, an "anchor point" is required to match up a *player time*, *stream time*, and *program time*.
Two anchor points that are usable are the time since the start of a new timeline (e.g., the time since the last discontinuity or start of the stream), and the start of a segment. Because, in our requirements for this conversion, each segment is tagged with its *program time* in the form of an [EXT-X-PROGRAM-DATE-TIME tag](https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.2.6), using the segment start as the anchor point is the easiest solution. It's the closest potential anchor point to the time to convert, and it doesn't require us to track time changes across segments (e.g., trimmed or prepended content).
Those time changes are the result of the transmuxer, which can add/remove content in order to keep the content playable (without gaps or other breaking changes between segments), particularly when a segment doesn't start with a key frame.
In order to make use of the segment start, and to calculate the offset between the segment start and the time to convert, a few properties are needed:
1. The start of the segment before transmuxing
1. Time changes made to the segment during transmuxing
1. The start of the segment after transmuxing
While the start of the segment before and after transmuxing is trivial to retrieve, getting the time changes made during transmuxing is more complicated, as we must account for any trimming, prepending, and gap filling made during the transmux stage. However, the required use-case only needs the position of a video frame, allowing us to ignore any changes made to the audio timeline (because VHS uses video as the timeline of truth), as well as a couple of the video modifications.
What follows are the changes made to a video stream by the transmuxer that could alter the timeline, and if they must be accounted for in the conversion:
* Keyframe Pulling
  * Used when: the segment doesn't start with a keyframe.
  * Impact: the keyframe with the lowest dts value in the segment is "pulled" back to the first dts value in the segment, and all frames in-between are dropped.
  * Need to account in time conversion? No. If a keyframe is pulled, and frames before it are dropped, then the segment will maintain the same segment duration, and the viewer is only seeing the keyframe during that period.
* GOP Fusion
  * Used when: the segment doesn't start with a keyframe.
  * Impact: if GOPs were saved from previous segment appends, the last GOP will be prepended to the segment.
  * Need to account in time conversion? Yes. The segment is artificially extended, so while it shouldn't impact the stream time itself (since it will overlap with content already appended), it will impact the post transmux start of segment.
* GOPS to Align With
  * Used when: switching renditions, or appending segments with overlapping GOPs (intersecting time ranges).
  * Impact: GOPs in the segment will be dropped until there are no overlapping GOPs with previous segments.
  * Need to account in time conversion? No. So long as we aren't switching renditions, and the content is sane enough to not contain overlapping GOPs, this should not have a meaningful impact.
Among the changes, with only GOP Fusion having an impact, the task is simplified. Instead of accounting for any changes to the video stream, only those from GOP Fusion should be accounted for. Since GOP fusion will potentially only prepend frames to the segment, we just need the number of seconds prepended to the segment when offsetting the time. As such, we can add the following properties to each segment:
```
segment: {
  // calculated start of segment from either end of previous segment or end of last buffer
  // (in stream time)
  start,
  ...
  videoTimingInfo: {
    // number of seconds prepended by GOP fusion
    transmuxerPrependedSeconds,
    // start of transmuxed segment (in player time)
    transmuxedPresentationStart
  }
}
```
## The Formula
With the properties listed above, calculating a *program time* from a *player time* is given as follows:
```
const playerTimeToProgramTime = (playerTime, segment) => {
  if (!segment.dateTimeObject) {
    // Can't convert without an "anchor point" for the program time (i.e., a time that can
    // be used to map the start of a segment with a real world time).
    return null;
  }

  const transmuxerPrependedSeconds = segment.videoTimingInfo.transmuxerPrependedSeconds;
  const transmuxedStart = segment.videoTimingInfo.transmuxedPresentationStart;

  // get the start of the content from before old content is prepended
  const startOfSegment = transmuxedStart + transmuxerPrependedSeconds;
  const offsetFromSegmentStart = playerTime - startOfSegment;

  return new Date(segment.dateTimeObject.getTime() + offsetFromSegmentStart * 1000);
};
```
## Examples
```
// Program Times:
// segment1: 2018-11-10T00:00:30.1Z => 2018-11-10T00:00:32.1Z
// segment2: 2018-11-10T00:00:32.1Z => 2018-11-10T00:00:34.1Z
// segment3: 2018-11-10T00:00:34.1Z => 2018-11-10T00:00:36.1Z
//
// Player Times:
// segment1: 0 => 2
// segment2: 2 => 4
// segment3: 4 => 6
const segment1 = {
  dateTimeObject: new Date('2018-11-10T00:00:30.1Z'),
  videoTimingInfo: {
    transmuxerPrependedSeconds: 0,
    transmuxedPresentationStart: 0
  }
};

playerTimeToProgramTime(0.1, segment1);
// startOfSegment = 0 + 0 = 0
// offsetFromSegmentStart = 0.1 - 0 = 0.1
// return 2018-11-10T00:00:30.1Z + 0.1 = 2018-11-10T00:00:30.2Z

const segment2 = {
  dateTimeObject: new Date('2018-11-10T00:00:32.1Z'),
  videoTimingInfo: {
    transmuxerPrependedSeconds: 0.3,
    transmuxedPresentationStart: 1.7
  }
};

playerTimeToProgramTime(2.5, segment2);
// startOfSegment = 1.7 + 0.3 = 2
// offsetFromSegmentStart = 2.5 - 2 = 0.5
// return 2018-11-10T00:00:32.1Z + 0.5 = 2018-11-10T00:00:32.6Z

const segment3 = {
  dateTimeObject: new Date('2018-11-10T00:00:34.1Z'),
  videoTimingInfo: {
    transmuxerPrependedSeconds: 0.2,
    transmuxedPresentationStart: 3.8
  }
};

playerTimeToProgramTime(4, segment3);
// startOfSegment = 3.8 + 0.2 = 4
// offsetFromSegmentStart = 4 - 4 = 0
// return 2018-11-10T00:00:34.1Z + 0 = 2018-11-10T00:00:34.1Z
```
## Transmux Before Append Changes
Even though segment timing values are retained for transmux before append, the formula does not need to change, as all that matters for calculation is the offset from the transmuxed segment start, which can then be applied to the stream time start of segment, or the program time start of segment.
## Getting the Right Segment
In order to make use of the above calculation, the right segment must be chosen for a given player time. The right segment may be found by using the times of the segment after transmuxing (as the start/end pts/dts values then reflect the player time it should slot into in the source buffer). These are included in `videoTimingInfo` as `transmuxedPresentationStart` and `transmuxedPresentationEnd`.
Although there may be a small amount of overlap due to `transmuxerPrependedSeconds`, as long as the search is sequential from the beginning of the playlist to the end, the right segment will be found, as the prepended times will only come from content from prior segments.
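That sequential search could be sketched as:

```js
// search from the beginning of the playlist using post-transmux times
const findSegmentForPlayerTime = (playerTime, playlist) => {
  for (const segment of playlist.segments) {
    const timingInfo = segment.videoTimingInfo;

    if (timingInfo &&
        playerTime >= timingInfo.transmuxedPresentationStart &&
        playerTime <= timingInfo.transmuxedPresentationEnd) {
      return segment;
    }
  }

  // the right segment hasn't been downloaded/transmuxed yet
  return null;
};
```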
# Using the reloadSourceOnError Plugin
Call the plugin to activate it:
```js
player.reloadSourceOnError()
```
Now if the player encounters a fatal error during playback, it will automatically
attempt to reload the current source. If the error was caused by a transient
browser or networking problem, this can allow playback to continue with a minimum
of disruption to your viewers.
The plugin will only restart your player once in a 30 second time span so that your
player doesn't get into a reload loop if it encounters non-transient errors. You
can tweak the amount of time required between restarts by adjusting the
`errorInterval` option.
If your video URLs are time-sensitive, the original source could be invalid by the
time an error occurs. If that's the case, you can provide a `getSource` callback
to regenerate a valid source object. In your callback, the `this` keyword is a
reference to the player that errored. The first argument to `getSource` is a
function. Invoke that function and pass in your new source object when you're ready.
```js
player.reloadSourceOnError({
  // getSource allows you to override the source object used when an error occurs
  getSource: function(reload) {
    console.log('Reloading because of an error');

    // call reload() with a fresh source object
    // you can do this step asynchronously if you want (but the error dialog will
    // show up while you're waiting)
    reload({
      src: 'https://example.com/index.m3u8?token=abc123ef789',
      type: 'application/x-mpegURL'
    });
  },

  // errorInterval specifies the minimum amount of seconds that must pass before
  // another reload will be attempted
  errorInterval: 5
});
```
# Supported Features
## Browsers
Any browser that supports [MSE] (media source extensions). See
https://caniuse.com/#feat=mediasource
Note that browsers with native HLS support may play content with the native player, unless
the [overrideNative] option is used. Some notable browsers with native HLS players are:
* Safari (macOS and iOS)
* Chrome Android
* Firefox Android
However, due to the limited features offered by some of the native players, the only
browser on which VHS defaults to using the native player is Safari (macOS and iOS).
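For example, opting out of the native player follows the [overrideNative] documentation:

```js
// use VHS everywhere, even where a native HLS player is available
const player = videojs('example-player', {
  html5: {
    vhs: {
      overrideNative: true
    },
    nativeAudioTracks: false,
    nativeVideoTracks: false
  }
});
```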
## Streaming Formats and Media Types
### Streaming Formats
VHS aims to be mostly streaming format agnostic. So long as the manifest can be parsed to
a common JSON representation, VHS should be able to play it. However, due to some large
differences between the major streaming formats (HLS and DASH), some format specific code
is included in VHS. If you have another format you would like supported, please reach out
to us (e.g., file an issue).
* [HLS] (HTTP Live Streaming)
* [MPEG-DASH] (Dynamic Adaptive Streaming over HTTP)
### Media Container Formats
* [TS] (MPEG Transport Stream)
* [MP4] (MPEG-4 Part 14: MP4, M4A, M4V, M4S, MPA), ISOBMFF
* [AAC] (Advanced Audio Coding)
### Codecs
If the content is packaged in an [MP4] container, then any codec supported by the browser
is supported. If the content is packaged in a [TS] container, then the codec must be
supported by [the transmuxer]. The following codecs are supported by the transmuxer:
* [AVC] (Advanced Video Coding, h.264)
* [AVC1] (Advanced Video Coding, h.264)
* [HE-AAC] (High Efficiency Advanced Audio Coding, mp4a.40.5)
* LC-AAC (Low Complexity Advanced Audio Coding, mp4a.40.2)
## General Notable Features
The following is a list of some, but not all, common streaming features supported by VHS.
It is meant to highlight some common use cases (and provide for easy searching), but is
not meant to serve as an exhaustive list.
* VOD (video on demand)
* LIVE
* Multiple audio tracks
* Timed [ID3] Metadata is automatically translated into HTML5 metadata text tracks
* Cross-domain credentials support with [CORS]
* Any browser supported resolution (e.g., 4k)
* Any browser supported framerate (e.g., 60fps)
* [DRM] via [videojs-contrib-eme]
* Audio only (non DASH)
* Video only (non DASH)
* In-manifest [WebVTT] subtitles are automatically translated into standard HTML5 subtitle
tracks
* [AES-128] segment encryption
## Notable Missing Features
Note that the following features have not yet been implemented, or may work but are not
currently supported, in browsers that do not rely on the native player. For browsers that
use the native player (e.g., Safari for HLS), please refer to their documentation.
### Container Formats
* [WebM]
* [WAV]
* [MP3]
* [OGG]
### Codecs
If the content is packaged within an [MP4] container and the browser supports the codec, it
will play. However, the following are some codecs that are not routinely tested, or are not
supported when packaged within [TS].
* [MP3]
* [Vorbis]
* [WAV]
* [FLAC]
* [Opus]
* [VP8]
* [VP9]
* [Dolby Vision] (DVHE)
* [Dolby Digital] Audio (AC-3)
* [Dolby Digital Plus] (E-AC-3)
### HLS Missing Features
Note: features for low latency HLS in the [2nd edition of HTTP Live Streaming] are on the
roadmap, but not currently available.
VHS strives to support all of the features in the HLS specification, however, some have
not yet been implemented. VHS currently supports everything in the
[HLS specification v7, revision 23], except the following:
* Use of [EXT-X-MAP] with [TS] segments
  * [EXT-X-MAP] is currently supported for [MP4] segments, but not yet for TS
* I-Frame playlists via [EXT-X-I-FRAMES-ONLY] and [EXT-X-I-FRAME-STREAM-INF]
* [MP3] Audio
* [Dolby Digital] Audio (AC-3)
* [Dolby Digital Plus] Audio (E-AC-3)
* KEYFORMATVERSIONS of [EXT-X-KEY]
* [EXT-X-DATERANGE]
* [EXT-X-SESSION-DATA]
* [EXT-X-SESSION-KEY]
* [EXT-X-INDEPENDENT-SEGMENTS]
* Use of [EXT-X-START] (value parsed but not used)
* Alternate video via [EXT-X-MEDIA] of type video
* ASSOC-LANGUAGE in [EXT-X-MEDIA]
* CHANNELS in [EXT-X-MEDIA]
* Use of AVERAGE-BANDWIDTH in [EXT-X-STREAM-INF] (value parsed but not used)
* Use of FRAME-RATE in [EXT-X-STREAM-INF] (value parsed but not used)
* Use of HDCP-LEVEL in [EXT-X-STREAM-INF]
* SAMPLE-AES segment encryption
In the event of encoding changes within a playlist (see
https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-6.3.3), the
behavior will depend on the browser.
### DASH Missing Features
DASH support is more recent than HLS support in VHS, however, VHS strives to achieve as
complete compatibility as possible with the DASH spec. The following are some notable
features in the DASH specification that are not yet implemented in VHS:
Note that many of the following are parsed by [mpd-parser] but are either not yet used, or
simply take on their default values (in the case where they have valid defaults).
* Audio and video only streams
* Audio rendition switching
  * Each video rendition is paired with an audio rendition for the duration of playback.
* MPD
  * @id
  * @profiles
  * @availabilityStartTime
  * @availabilityEndTime
  * @minBufferTime
  * @maxSegmentDuration
  * @maxSubsegmentDuration
  * ProgramInformation
  * Metrics
* Period
  * @xlink:href
  * @xlink:actuate
  * @id
  * @duration
    * Normally used for determining the PeriodStart of the next period; VHS instead relies on segment durations to determine the timing of each segment and timeline
  * @bitstreamSwitching
  * Subset
* AdaptationSet
  * @xlink:href
  * @xlink:actuate
  * @id
  * @group
  * @par (picture aspect ratio)
  * @minBandwidth
  * @maxBandwidth
  * @minWidth
  * @maxWidth
  * @minHeight
  * @maxHeight
  * @minFrameRate
  * @maxFrameRate
  * @segmentAlignment
  * @bitstreamSwitching
  * @subsegmentAlignment
  * @subsegmentStartsWithSAP
  * Accessibility
  * Rating
  * Viewpoint
  * ContentComponent
* Representation
  * @id (used for SegmentTemplate but not exposed otherwise)
  * @qualityRanking
  * @dependencyId (dependent representation)
  * @mediaStreamStructureId
  * SubRepresentation
* CommonAttributesElements (for AdaptationSet, Representation and SubRepresentation elements)
  * @profiles
  * @sar
  * @frameRate
  * @audioSamplingRate
  * @segmentProfiles
  * @maximumSAPPeriod
  * @startWithSAP
  * @maxPlayoutRate
  * @codingDependency
  * @scanType
  * FramePacking
  * AudioChannelConfiguration
* SegmentBase
  * @presentationTimeOffset
  * @indexRangeExact
  * RepresentationIndex
  * MultipleSegmentBaseInformation elements
* SegmentList
  * @xlink:href
  * @xlink:actuate
  * MultipleSegmentBaseInformation
  * SegmentURL
    * @index
    * @indexRange
* SegmentTemplate
  * MultipleSegmentBaseInformation
  * @index
  * @bitstreamSwitching
* BaseURL
  * @serviceLocation
* Template-based Segment URL construction
  * Live DASH assets that use $Time$ in a SegmentTemplate, and also have a SegmentTimeline where only the first S has a t and the rest only have a d, do not update on playlist refreshes. See: https://github.com/videojs/http-streaming#dash-assets-with-time-interpolation-and-segmenttimelines-with-no-t
* ContentComponent elements
  * Right now manifests are assumed to have a single content component, with the properties described directly on the AdaptationSet element
* SubRepresentation elements
* Subset elements
* Early Available Periods (may work, but has not been tested)
* Access to subsegments via a subsegment index ('ssix')
* The @profiles attribute is ignored (best support for all profiles is attempted, without consideration of the specific profile). For descriptions of profiles, see section 8 of the DASH spec.
* Construction of byte range URLs via a BaseURL byteRange template (Annex E.2)
* Multiperiod content where the representation sets are not the same across periods
* In the event that an S element has a t attribute that is greater than what is expected, it is not treated as a discontinuity, but instead retains its segment value, and may result in a gap in the content
[MSE]: https://www.w3.org/TR/media-source/
[HLS]: https://en.wikipedia.org/wiki/HTTP_Live_Streaming
[MPEG-DASH]: https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP
[TS]: https://en.wikipedia.org/wiki/MPEG_transport_stream
[MP4]: https://en.wikipedia.org/wiki/MPEG-4_Part_14
[AAC]: https://en.wikipedia.org/wiki/Advanced_Audio_Coding
[AVC]: https://en.wikipedia.org/wiki/Advanced_Video_Coding
[AVC1]: https://en.wikipedia.org/wiki/Advanced_Video_Coding
[HE-AAC]: https://en.wikipedia.org/wiki/High-Efficiency_Advanced_Audio_Coding
[ID3]: https://en.wikipedia.org/wiki/ID3
[CORS]: https://en.wikipedia.org/wiki/Cross-origin_resource_sharing
[DRM]: https://en.wikipedia.org/wiki/Digital_rights_management
[WebVTT]: https://www.w3.org/TR/webvtt1/
[AES-128]: https://en.wikipedia.org/wiki/Advanced_Encryption_Standard
[WebM]: https://en.wikipedia.org/wiki/WebM
[WAV]: https://en.wikipedia.org/wiki/WAV
[MP3]: https://en.wikipedia.org/wiki/MP3
[OGG]: https://en.wikipedia.org/wiki/Ogg
[Vorbis]: https://en.wikipedia.org/wiki/Vorbis
[FLAC]: https://en.wikipedia.org/wiki/FLAC
[Opus]: https://en.wikipedia.org/wiki/Opus_(audio_format)
[VP8]: https://en.wikipedia.org/wiki/VP8
[VP9]: https://en.wikipedia.org/wiki/VP9
[overrideNative]: https://github.com/videojs/http-streaming#overridenative
[the transmuxer]: https://github.com/videojs/mux.js
[videojs-contrib-eme]: https://github.com/videojs/videojs-contrib-eme
[2nd edition of HTTP Live Streaming]: https://tools.ietf.org/html/draft-pantos-hls-rfc8216bis-07.html
[HLS specification v7, revision 23]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23
[EXT-X-MAP]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.2.5
[EXT-X-STREAM-INF]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.4.2
[EXT-X-SESSION-DATA]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.4.4
[EXT-X-DATERANGE]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.2.7
[EXT-X-KEY]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.2.7
[EXT-X-I-FRAMES-ONLY]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.3.6
[EXT-X-I-FRAME-STREAM-INF]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.4.3
[EXT-X-SESSION-KEY]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.4.5
[EXT-X-INDEPENDENT-SEGMENTS]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.5.1
[EXT-X-START]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.5.2
[EXT-X-MEDIA]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.4.1
[Dolby Vision]: https://en.wikipedia.org/wiki/High-dynamic-range_video#Dolby_Vision
[Dolby Digital]: https://en.wikipedia.org/wiki/Dolby_Digital
[Dolby Digital Plus]: https://en.wikipedia.org/wiki/Dolby_Digital_Plus
[mpd-parser]: https://github.com/videojs/mpd-parser
# Troubleshooting Guide
## Other troubleshooting guides
For issues around data embedded into media segments (e.g., 608 captions), see the [mux.js troubleshooting guide](https://github.com/videojs/mux.js/blob/master/docs/troubleshooting.md).
## Tools
### Thumbcoil
Thumbcoil is a video inspector tool that can unpackage various media containers and inspect the bitstreams therein. Thumbcoil runs entirely within your browser so that none of your video data is ever transmitted to a server.
http://thumb.co.il<br/>
http://beta.thumb.co.il<br/>
https://github.com/videojs/thumbcoil<br/>
## Table of Contents
- [Content plays on Mac but not on Windows](#content-plays-on-mac-but-not-windows)
- ["No compatible source was found" on IE11 Win 7](#no-compatible-source-was-found-on-ie11-win-7)
- [CORS: No Access-Control-Allow-Origin header](#cors-no-access-control-allow-origin-header)
- [Desktop Safari/iOS Safari/Android Chrome/Edge exhibit different behavior from other browsers](#desktop-safariios-safariandroid-chromeedge-exhibit-different-behavior-from-other-browsers)
- [MEDIA_ERR_DECODE error on Desktop Safari](#media_err_decode-error-on-desktop-safari)
- [Network requests are still being made while paused](#network-requests-are-still-being-made-while-paused)
## Content plays on Mac but not Windows
Some browsers may not be able to play audio sample rates higher than 48 kHz. See https://docs.microsoft.com/en-gb/windows/desktop/medfound/aac-decoder#format-constraints
Potential solution: re-encode with a Windows supported audio sample rate
## "No compatible source was found" on IE11 Win 7
videojs-http-streaming does not support Flash HLS playback (like the videojs-contrib-hls plugin does)
Solution: include the flashls source handler: https://github.com/brightcove/videojs-flashls-source-handler#usage
## CORS: No Access-Control-Allow-Origin header
If you see an error along the lines of
```
XMLHttpRequest cannot load ... No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin ... is therefore not allowed access.
```
you need to properly configure CORS on your server: https://github.com/videojs/http-streaming#hosting-considerations
## Desktop Safari/iOS Safari/Android Chrome/Edge exhibit different behavior from other browsers
Some browsers support native playback of certain streaming formats. By default, we defer to the native players. However, this means that features specific to videojs-http-streaming will not be available.
On Edge and mobile Chrome, 608 captions, ID3 tags, or live streaming may not work as expected with native playback; it is recommended that `overrideNative` be used on those platforms if necessary.
Solution: use videojs-http-streaming based playback on those devices: https://github.com/videojs/http-streaming#overridenative
## MEDIA_ERR_DECODE error on Desktop Safari
This error may occur for a number of reasons, and is particularly common for misconfigured content. One instance of misconfiguration is if the source manifest has `CLOSED-CAPTIONS=NONE` and an external text track is loaded into the player. Safari does not allow the inclusion of any captions if the manifest indicates that captions will not be provided.
Solution: remove `CLOSED-CAPTIONS=NONE` from the manifest
## Network requests are still being made while paused
There are a couple of cases where network requests will still be made by VHS when the video is paused.
1) If the forward buffer (buffered content ahead of the playhead) has not reached the `GOAL_BUFFER_LENGTH`. For instance, if the playhead is at time 10 seconds, the buffered range goes from 5 seconds to 20 seconds, and the `GOAL_BUFFER_LENGTH` is set to 30 seconds, then segments will continue to be requested, even while paused, until the buffer ends at a time greater than or equal to 10 seconds (current time) + 30 seconds (`GOAL_BUFFER_LENGTH`) = 40 seconds. This is expected behavior in order to provide a better playback experience; a sketch of tuning this goal follows after this list.
2) If the stream is LIVE, then the manifest will continue to be refreshed even while paused. This is because it is easier to keep playback in sync if we receive manifest updates consistently.
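For case 1, the forward buffer goal is tunable; a sketch, assuming the `videojs.Vhs` export described in the VHS README:

```js
// lower the forward buffer goal (default 30 seconds) so fewer segments
// are requested while paused; this trades off rebuffering protection
videojs.Vhs.GOAL_BUFFER_LENGTH = 15;
```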