First release

This commit is contained in:
Owen Quinlan 2021-07-02 19:29:34 +10:00
commit fa6c85266e
2339 changed files with 761050 additions and 0 deletions

node_modules/@videojs/http-streaming/CHANGELOG.md generated vendored Normal file

<a name="2.9.1"></a>
## [2.9.1](https://github.com/videojs/http-streaming/compare/v2.9.0...v2.9.1) (2021-06-22)
### Bug Fixes
* actually default maxPlaylistRetries to Infinity ([#1142](https://github.com/videojs/http-streaming/issues/1142)) ([4428e3a](https://github.com/videojs/http-streaming/commit/4428e3a)), closes [#1098](https://github.com/videojs/http-streaming/issues/1098)
* don't decay average bandwidth value if system bandwidth did not change ([#1137](https://github.com/videojs/http-streaming/issues/1137)) ([c22749b](https://github.com/videojs/http-streaming/commit/c22749b))
* ts segments that don't define all streams in the first pmt ([#1144](https://github.com/videojs/http-streaming/issues/1144)) ([36a8be4](https://github.com/videojs/http-streaming/commit/36a8be4))
### Tests
* moving average should not decay without new data ([#1141](https://github.com/videojs/http-streaming/issues/1141)) ([55726af](https://github.com/videojs/http-streaming/commit/55726af)), closes [#1137](https://github.com/videojs/http-streaming/issues/1137)
<a name="2.9.0"></a>
# [2.9.0](https://github.com/videojs/http-streaming/compare/v2.8.2...v2.9.0) (2021-06-11)
### Features
* Add support for encrypted init segments ([#1132](https://github.com/videojs/http-streaming/issues/1132)) ([4449ed5](https://github.com/videojs/http-streaming/commit/4449ed5))
* allow clients to limit the number of times a playlist attempts to reload following an error ([#1098](https://github.com/videojs/http-streaming/issues/1098)) ([44905d4](https://github.com/videojs/http-streaming/commit/44905d4))
* Caption services (608/708) metadata ([#1138](https://github.com/videojs/http-streaming/issues/1138)) ([39782c6](https://github.com/videojs/http-streaming/commit/39782c6)), closes [draft-pantos-hls-rfc8216bis-08#section-4](https://datatracker.ietf.org/doc/html/draft-pantos-hls-rfc8216bis-08#section-4) [videojs/mpd-parser#131](https://github.com/videojs/mpd-parser/issues/131)
* do fast rendition changes on fullscreen changes and user actions ([#1074](https://github.com/videojs/http-streaming/issues/1074)) ([5405c18](https://github.com/videojs/http-streaming/commit/5405c18))
* stats for timeToLoadedData, appendsToLoadedData, mainAppendsToLoadedData, audioAppendsToLoadedData, and mediaAppends ([#1106](https://github.com/videojs/http-streaming/issues/1106)) ([3124fbc](https://github.com/videojs/http-streaming/commit/3124fbc))
* Use ll-hls query directives: segment skipping and requesting a specific segment/part ([#1079](https://github.com/videojs/http-streaming/issues/1079)) ([458be2c](https://github.com/videojs/http-streaming/commit/458be2c))
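The retry limit added in [#1098](https://github.com/videojs/http-streaming/issues/1098) is exposed as the `maxPlaylistRetries` option (per the 2.9.1 fix above, it defaults to `Infinity`). A minimal sketch, assuming a standard video.js setup with the option passed through the tech's `vhs` options:

```js
var player = videojs('example-player', {
  html5: {
    vhs: {
      // stop reloading a playlist after three consecutive errors
      // instead of retrying forever (the Infinity default)
      maxPlaylistRetries: 3
    }
  }
});
```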
### Bug Fixes
* add part level sync points, fix LL hls sync issues, add part timing info ([#1125](https://github.com/videojs/http-streaming/issues/1125)) ([ee5841d](https://github.com/videojs/http-streaming/commit/ee5841d))
* Append valid syncRequests, better sync request choice, less getMediaInfoForTime rounding ([#1127](https://github.com/videojs/http-streaming/issues/1127)) ([ce03f66](https://github.com/videojs/http-streaming/commit/ce03f66))
### Chores
* fix coverage ci run ([#1135](https://github.com/videojs/http-streaming/issues/1135)) ([82b6781](https://github.com/videojs/http-streaming/commit/82b6781))
<a name="2.8.2"></a>
## [2.8.2](https://github.com/videojs/http-streaming/compare/v2.8.1...v2.8.2) (2021-05-20)
### Bug Fixes
* add tests for data uri, fix data uri in demo page ([#1133](https://github.com/videojs/http-streaming/issues/1133)) ([0be51eb](https://github.com/videojs/http-streaming/commit/0be51eb))
<a name="2.8.1"></a>
## [2.8.1](https://github.com/videojs/http-streaming/compare/v2.8.0...v2.8.1) (2021-05-19)
### Bug Fixes
* add master referenced id/uri for audio playlists. Add playlists to hls media groups ([#1124](https://github.com/videojs/http-streaming/issues/1124)) ([740d2ee](https://github.com/videojs/http-streaming/commit/740d2ee))
* m3u8-parser/eme updates ([#1131](https://github.com/videojs/http-streaming/issues/1131)) ([29ece75](https://github.com/videojs/http-streaming/commit/29ece75))
* only append/request init segments when they change ([#1128](https://github.com/videojs/http-streaming/issues/1128)) ([a4af004](https://github.com/videojs/http-streaming/commit/a4af004))
* set audio status on loaders when setting up media groups ([#1126](https://github.com/videojs/http-streaming/issues/1126)) ([a44f984](https://github.com/videojs/http-streaming/commit/a44f984))
### Chores
* update vhs utils to 3.0.1 ([#1123](https://github.com/videojs/http-streaming/issues/1123)) ([552b012](https://github.com/videojs/http-streaming/commit/552b012))
<a name="2.8.0"></a>
# [2.8.0](https://github.com/videojs/http-streaming/compare/v2.7.1...v2.8.0) (2021-04-28)
### Features
* add initialBandwidth option at the tech level ([#1122](https://github.com/videojs/http-streaming/issues/1122)) ([2071008](https://github.com/videojs/http-streaming/commit/2071008))
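A minimal sketch of the new option; per the "tech level" wording above, it is assumed here to sit directly on the `html5` options rather than nested under `vhs`:

```js
var player = videojs('example-player', {
  html5: {
    // seed the initial bandwidth estimate (bits per second) so the
    // first rendition choice is not based on the built-in default
    initialBandwidth: 5000000
  }
});
```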
### Bug Fixes
* don't clear DASH minimum update period timeout on pause of a media loader ([#1118](https://github.com/videojs/http-streaming/issues/1118)) ([82ff4f5](https://github.com/videojs/http-streaming/commit/82ff4f5))
* null check sidx on sidxmapping, check that end > start on remove ([#1121](https://github.com/videojs/http-streaming/issues/1121)) ([92f1333](https://github.com/videojs/http-streaming/commit/92f1333))
### Code Refactoring
* drop support for the partial muxer and handlePartial ([#1119](https://github.com/videojs/http-streaming/issues/1119)) ([ab305f8](https://github.com/videojs/http-streaming/commit/ab305f8))
* offload mp4/ts probe to the web worker ([#1117](https://github.com/videojs/http-streaming/issues/1117)) ([3c9f721](https://github.com/videojs/http-streaming/commit/3c9f721))
* segment/part choice and add more logging around the choice ([#1097](https://github.com/videojs/http-streaming/issues/1097)) ([b8a5aa5](https://github.com/videojs/http-streaming/commit/b8a5aa5))
<a name="2.7.1"></a>
## [2.7.1](https://github.com/videojs/http-streaming/compare/v2.7.0...v2.7.1) (2021-04-09)
### Bug Fixes
* experimentalLLHLS option should always be passed ([#1114](https://github.com/videojs/http-streaming/issues/1114)) ([684fd08](https://github.com/videojs/http-streaming/commit/684fd08))
### Chores
* dont run tests on chromium ([#1116](https://github.com/videojs/http-streaming/issues/1116)) ([c2154d7](https://github.com/videojs/http-streaming/commit/c2154d7))
<a name="2.7.0"></a>
# [2.7.0](https://github.com/videojs/http-streaming/compare/v2.6.4...v2.7.0) (2021-04-06)
### Features
* Add EXT-X-PART support behind a flag for LL-HLS ([#1055](https://github.com/videojs/http-streaming/issues/1055)) ([b33e109](https://github.com/videojs/http-streaming/commit/b33e109))
* mark Video.js as a peer dependency ([#1111](https://github.com/videojs/http-streaming/issues/1111)) ([99480d5](https://github.com/videojs/http-streaming/commit/99480d5))
* support serverControl and preloadSegment behind experimentalLLHLS flag ([#1078](https://github.com/videojs/http-streaming/issues/1078)) ([fa1b6b5](https://github.com/videojs/http-streaming/commit/fa1b6b5))
* usage and logging on rendition change with reasons ([#1088](https://github.com/videojs/http-streaming/issues/1088)) ([1b990f1](https://github.com/videojs/http-streaming/commit/1b990f1))
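The LL-HLS features above sit behind the `experimentalLLHLS` flag (see also the 2.7.1 fix below). A minimal sketch of opting in:

```js
var player = videojs('example-player', {
  html5: {
    vhs: {
      // opt in to EXT-X-PART, serverControl and preloadSegment handling
      experimentalLLHLS: true
    }
  }
});
```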
### Bug Fixes
* audio only media group playlists, audio group playlists, and audio switches for audio only ([#1100](https://github.com/videojs/http-streaming/issues/1100)) ([6d83de3](https://github.com/videojs/http-streaming/commit/6d83de3))
* better time to first frame for live playlists ([#1105](https://github.com/videojs/http-streaming/issues/1105)) ([1e94680](https://github.com/videojs/http-streaming/commit/1e94680))
* catch remove errors, remove all data on QUOTA_EXCEEDED ([#1101](https://github.com/videojs/http-streaming/issues/1101)) ([86f77fe](https://github.com/videojs/http-streaming/commit/86f77fe))
* Only add sidxMapping on successful sidx request and parse. ([#1099](https://github.com/videojs/http-streaming/issues/1099)) ([de0b55b](https://github.com/videojs/http-streaming/commit/de0b55b)), closes [#1107](https://github.com/videojs/http-streaming/issues/1107)
* support automatic configuration of audio and video only DRM sources ([#1090](https://github.com/videojs/http-streaming/issues/1090)) ([9b116ce](https://github.com/videojs/http-streaming/commit/9b116ce))
### Chores
* never skip main ci runs ([#1108](https://github.com/videojs/http-streaming/issues/1108)) ([b2d2c91](https://github.com/videojs/http-streaming/commit/b2d2c91))
* turn checkWatch back on for rollup ([87947fc](https://github.com/videojs/http-streaming/commit/87947fc))
* update to mux.js@5.11.0 ([#1109](https://github.com/videojs/http-streaming/issues/1109)) ([af5841c](https://github.com/videojs/http-streaming/commit/af5841c))
<a name="2.6.4"></a>
## [2.6.4](https://github.com/videojs/http-streaming/compare/v2.6.3...v2.6.4) (2021-03-12)
### Bug Fixes
* Monitor playback for stalls due to gaps in the beginning of stream when a new source is loaded ([#1087](https://github.com/videojs/http-streaming/issues/1087)) ([64a1f35](https://github.com/videojs/http-streaming/commit/64a1f35))
* retry appends on QUOTA_EXCEEDED_ERR ([#1093](https://github.com/videojs/http-streaming/issues/1093)) ([008aeaf](https://github.com/videojs/http-streaming/commit/008aeaf))
### Chores
* Get test coverage working again with mock/sync worker ([#1094](https://github.com/videojs/http-streaming/issues/1094)) ([035e8c0](https://github.com/videojs/http-streaming/commit/035e8c0))
* pin CI to ubuntu 18.04 ([#1091](https://github.com/videojs/http-streaming/issues/1091)) ([01ca182](https://github.com/videojs/http-streaming/commit/01ca182))
<a name="2.6.3"></a>
## [2.6.3](https://github.com/videojs/http-streaming/compare/v2.6.2...v2.6.3) (2021-03-05)
### Bug Fixes
* **playback-watcher:** Skip over playback gaps that occur in the beginning of streams ([#1085](https://github.com/videojs/http-streaming/issues/1085)) ([ccd9352](https://github.com/videojs/http-streaming/commit/ccd9352))
* Add exclude reason and skip duplicate playlist-unchanged ([#1082](https://github.com/videojs/http-streaming/issues/1082)) ([0dceb5b](https://github.com/videojs/http-streaming/commit/0dceb5b))
* prevent changing undefined baseStartTime to NaN ([#1086](https://github.com/videojs/http-streaming/issues/1086)) ([43aa69a](https://github.com/videojs/http-streaming/commit/43aa69a))
* update to mux.js 5.10.0 ([#1089](https://github.com/videojs/http-streaming/issues/1089)) ([1cfdab6](https://github.com/videojs/http-streaming/commit/1cfdab6))
### Chores
* ie 11 demo fixes ([0760d45](https://github.com/videojs/http-streaming/commit/0760d45))
* use deferred scripts for faster demo startup ([#1083](https://github.com/videojs/http-streaming/issues/1083)) ([c348174](https://github.com/videojs/http-streaming/commit/c348174))
<a name="2.6.2"></a>
## [2.6.2](https://github.com/videojs/http-streaming/compare/v2.6.1...v2.6.2) (2021-02-24)
### Bug Fixes
* update to mux.js@5.9.2 and mpd-parser@0.15.4 ([#1081](https://github.com/videojs/http-streaming/issues/1081)) ([f5c060f](https://github.com/videojs/http-streaming/commit/f5c060f))
### Tests
* add playback-min as a unit test type ([#1077](https://github.com/videojs/http-streaming/issues/1077)) ([327a572](https://github.com/videojs/http-streaming/commit/327a572))
<a name="2.6.1"></a>
## [2.6.1](https://github.com/videojs/http-streaming/compare/v2.6.0...v2.6.1) (2021-02-19)
### Bug Fixes
* allow buffer removes when there's no current media info in loader ([#1070](https://github.com/videojs/http-streaming/issues/1070)) ([97ab712](https://github.com/videojs/http-streaming/commit/97ab712))
* live dash segment changes should be considered a playlist update ([#1065](https://github.com/videojs/http-streaming/issues/1065)) ([1ce7838](https://github.com/videojs/http-streaming/commit/1ce7838))
* sometimes subtitlesTrack_.cues is null ([#1073](https://github.com/videojs/http-streaming/issues/1073)) ([6778ca1](https://github.com/videojs/http-streaming/commit/6778ca1))
* unbreak the minified build by updating rollup-plugin-worker-factory ([#1072](https://github.com/videojs/http-streaming/issues/1072)) ([e583b26](https://github.com/videojs/http-streaming/commit/e583b26))
### Chores
* mirror player.src on the demo page using sourceset ([#1071](https://github.com/videojs/http-streaming/issues/1071)) ([fee7309](https://github.com/videojs/http-streaming/commit/fee7309))
### Documentation
* **README:** fix useBandwidthFromLocalStorage and limitRenditionByPlayerDimensions ([#1075](https://github.com/videojs/http-streaming/issues/1075)) ([cf2efcb](https://github.com/videojs/http-streaming/commit/cf2efcb))
<a name="2.6.0"></a>
# [2.6.0](https://github.com/videojs/http-streaming/compare/v2.5.0...v2.6.0) (2021-02-11)
### Features
* allow xhr override globally, for super advanced use cases only ([#1059](https://github.com/videojs/http-streaming/issues/1059)) ([6279675](https://github.com/videojs/http-streaming/commit/6279675))
* expose m3u8-parser logging in debug log ([#1048](https://github.com/videojs/http-streaming/issues/1048)) ([0e8bd4b](https://github.com/videojs/http-streaming/commit/0e8bd4b))
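A minimal sketch of the global xhr override, assuming a hypothetical logging wrapper; properties attached to the stock function (e.g. `beforeRequest`) are copied over so existing hooks keep working:

```js
var originalXhr = videojs.Vhs.xhr;

// replace the XHR function VHS uses for manifest/segment requests
videojs.Vhs.xhr = function(options, callback) {
  console.log('VHS requesting:', options.uri); // hypothetical logging
  return originalXhr(options, callback);
};

// preserve properties carried by the stock function, such as beforeRequest
Object.assign(videojs.Vhs.xhr, originalXhr);
```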
### Bug Fixes
* do not request manifests until play when preload is none ([#1060](https://github.com/videojs/http-streaming/issues/1060)) ([49249d5](https://github.com/videojs/http-streaming/commit/49249d5)), closes [#126](https://github.com/videojs/http-streaming/issues/126)
* store `transmuxQueue` and `currentTransmux` on `transmuxer` instead of globally ([#1045](https://github.com/videojs/http-streaming/issues/1045)) ([a34b4da](https://github.com/videojs/http-streaming/commit/a34b4da))
* use a separate ProgramDateTime mapping to player time per timeline ([#1063](https://github.com/videojs/http-streaming/issues/1063)) ([5e9b4f1](https://github.com/videojs/http-streaming/commit/5e9b4f1))
* wait for endedtimeline event from transmuxer when reaching the end of a timeline ([#1058](https://github.com/videojs/http-streaming/issues/1058)) ([b01ab72](https://github.com/videojs/http-streaming/commit/b01ab72))
### Chores
* add legacy avc source ([#1050](https://github.com/videojs/http-streaming/issues/1050)) ([b34a770](https://github.com/videojs/http-streaming/commit/b34a770))
* add pdt test sources ([#1067](https://github.com/videojs/http-streaming/issues/1067)) ([112148b](https://github.com/videojs/http-streaming/commit/112148b))
* better worker build and synchronous web worker ([#1033](https://github.com/videojs/http-streaming/issues/1033)) ([f0732af](https://github.com/videojs/http-streaming/commit/f0732af))
### Documentation
* sample-aes encryption isn't currently supported ([#923](https://github.com/videojs/http-streaming/issues/923)) ([30f9b14](https://github.com/videojs/http-streaming/commit/30f9b14))
### Tests
* for IE11, add colon to timezone in Date strings of PDT mapping tests ([#1068](https://github.com/videojs/http-streaming/issues/1068)) ([f81c5a9](https://github.com/videojs/http-streaming/commit/f81c5a9))
<a name="2.5.0"></a>
# [2.5.0](https://github.com/videojs/http-streaming/compare/v2.4.2...v2.5.0) (2021-01-20)
### Features
* add flag to turn off 708 captions ([#1047](https://github.com/videojs/http-streaming/issues/1047)) ([ab5b4dc](https://github.com/videojs/http-streaming/commit/ab5b4dc))
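Assuming the flag landed as the `parse708captions` vhs option, a minimal sketch of turning it off:

```js
var player = videojs('example-player', {
  html5: {
    vhs: {
      // skip CEA-708 caption parsing entirely
      parse708captions: false
    }
  }
});
```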
### Chores
* update @videojs/vhs-utils to v3.0.0 ([#1036](https://github.com/videojs/http-streaming/issues/1036)) ([b072c93](https://github.com/videojs/http-streaming/commit/b072c93))
### Tests
* clear segment transmuxer in media segment request tests ([#1043](https://github.com/videojs/http-streaming/issues/1043)) ([83057a8](https://github.com/videojs/http-streaming/commit/83057a8))
* don't show QUnit UI in regular test runs ([#1044](https://github.com/videojs/http-streaming/issues/1044)) ([25c7f64](https://github.com/videojs/http-streaming/commit/25c7f64))
<a name="2.4.2"></a>
## [2.4.2](https://github.com/videojs/http-streaming/compare/v2.4.1...v2.4.2) (2021-01-07)
### Bug Fixes
* handle rollover and don't set wrong timing info for segments with high PTS/DTS values ([#1040](https://github.com/videojs/http-streaming/issues/1040)) ([9919b85](https://github.com/videojs/http-streaming/commit/9919b85))
<a name="2.4.1"></a>
## [2.4.1](https://github.com/videojs/http-streaming/compare/v2.4.0...v2.4.1) (2020-12-22)
### Bug Fixes
* if a playlist was last requested less than half target duration, delay retry ([#1038](https://github.com/videojs/http-streaming/issues/1038)) ([2e237ee](https://github.com/videojs/http-streaming/commit/2e237ee))
* programmatically create Config getters/setters ([8454da5](https://github.com/videojs/http-streaming/commit/8454da5))
### Chores
* **demo:** clear type on manual source change ([#1030](https://github.com/videojs/http-streaming/issues/1030)) ([d39276d](https://github.com/videojs/http-streaming/commit/d39276d))
* mark many more sources as working ([#1035](https://github.com/videojs/http-streaming/issues/1035)) ([904153f](https://github.com/videojs/http-streaming/commit/904153f))
* move playback tests to a separate ci run ([#1028](https://github.com/videojs/http-streaming/issues/1028)) ([f1d9f6e](https://github.com/videojs/http-streaming/commit/f1d9f6e))
* remove replace and update packages ([#1031](https://github.com/videojs/http-streaming/issues/1031)) ([0976212](https://github.com/videojs/http-streaming/commit/0976212))
<a name="2.4.0"></a>
# [2.4.0](https://github.com/videojs/http-streaming/compare/v2.3.0...v2.4.0) (2020-12-07)
### Features
* **playback watcher:** Configurable live seekable window ([#997](https://github.com/videojs/http-streaming/issues/997)) ([ad5c270](https://github.com/videojs/http-streaming/commit/ad5c270))
* log on mislabeled segment durations for HLS ([#1010](https://github.com/videojs/http-streaming/issues/1010)) ([4109a7f](https://github.com/videojs/http-streaming/commit/4109a7f))
* update to mux.js 5.7.0 ([#1014](https://github.com/videojs/http-streaming/issues/1014)) ([5f14909](https://github.com/videojs/http-streaming/commit/5f14909)), closes [#1001](https://github.com/videojs/http-streaming/issues/1001) [#909](https://github.com/videojs/http-streaming/issues/909)
### Bug Fixes
* abort all loaders on earlyabort ([#965](https://github.com/videojs/http-streaming/issues/965)) ([e7cb63a](https://github.com/videojs/http-streaming/commit/e7cb63a))
* don't save bandwidth and throughput for really small segments ([#1024](https://github.com/videojs/http-streaming/issues/1024)) ([a29e241](https://github.com/videojs/http-streaming/commit/a29e241))
* filter out unsupported subtitles for dash ([#962](https://github.com/videojs/http-streaming/issues/962)) ([124834a](https://github.com/videojs/http-streaming/commit/124834a))
* keep running the minimumUpdatePeriod unless cancelled or changed ([#1016](https://github.com/videojs/http-streaming/issues/1016)) ([f7b528c](https://github.com/videojs/http-streaming/commit/f7b528c))
* prevent double source buffer ready on IE11 ([#1015](https://github.com/videojs/http-streaming/issues/1015)) ([b1c2969](https://github.com/videojs/http-streaming/commit/b1c2969))
* remove duplicate cues with same time interval and text ([#1005](https://github.com/videojs/http-streaming/issues/1005)) ([6db2b6a](https://github.com/videojs/http-streaming/commit/6db2b6a))
* support tracks with id 0 for fmp4 playlists ([#1018](https://github.com/videojs/http-streaming/issues/1018)) ([bf63692](https://github.com/videojs/http-streaming/commit/bf63692))
* Wait for EME initialization before appending content ([#1002](https://github.com/videojs/http-streaming/issues/1002)) ([93132b7](https://github.com/videojs/http-streaming/commit/93132b7))
* when changing renditions over a discontinuity, don't use buffered end as segment start ([#1023](https://github.com/videojs/http-streaming/issues/1023)) ([40caa45](https://github.com/videojs/http-streaming/commit/40caa45))
* **experimentalBufferBasedABR:** start ABR timer on main playlist load ([#1026](https://github.com/videojs/http-streaming/issues/1026)) ([27de9a5](https://github.com/videojs/http-streaming/commit/27de9a5)), closes [#1025](https://github.com/videojs/http-streaming/issues/1025)
### Chores
* add multiple soon-to-work sources ([#1007](https://github.com/videojs/http-streaming/issues/1007)) ([030469f](https://github.com/videojs/http-streaming/commit/030469f))
* don't run tests on release ([#1006](https://github.com/videojs/http-streaming/issues/1006)) ([d13b737](https://github.com/videojs/http-streaming/commit/d13b737))
* skip duplicate ci workflows ([#1021](https://github.com/videojs/http-streaming/issues/1021)) ([20cc4a3](https://github.com/videojs/http-streaming/commit/20cc4a3))
* switch from travis to github actions for ci ([#989](https://github.com/videojs/http-streaming/issues/989)) ([c9b195b](https://github.com/videojs/http-streaming/commit/c9b195b))
* **demo page:** add an overrideNative button (default on) ([#1027](https://github.com/videojs/http-streaming/issues/1027)) ([197daab](https://github.com/videojs/http-streaming/commit/197daab))
### Code Refactoring
* Add a better distinction between master and child dash loaders ([#992](https://github.com/videojs/http-streaming/issues/992)) ([56592bc](https://github.com/videojs/http-streaming/commit/56592bc))
* add sidx segments to playlist object instead of re-parsing xml ([#994](https://github.com/videojs/http-streaming/issues/994)) ([e41f856](https://github.com/videojs/http-streaming/commit/e41f856))
* unify sidx/master/error request logic ([#998](https://github.com/videojs/http-streaming/issues/998)) ([fe57e60](https://github.com/videojs/http-streaming/commit/fe57e60))
### Tests
* fix tests on firefox 83 ([#1004](https://github.com/videojs/http-streaming/issues/1004)) ([00d9b1d](https://github.com/videojs/http-streaming/commit/00d9b1d))
<a name="2.3.0"></a>
# [2.3.0](https://github.com/videojs/http-streaming/compare/v2.2.0...v2.3.0) (2020-11-05)
### Features
* add experimental buffer based ABR ([#886](https://github.com/videojs/http-streaming/issues/886)) ([a05d032](https://github.com/videojs/http-streaming/commit/a05d032))
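The flag name, `experimentalBufferBasedABR`, appears in the scoped fixes below; a minimal sketch of opting in:

```js
var player = videojs('example-player', {
  html5: {
    vhs: {
      // select renditions based on buffer fullness instead of the
      // default bandwidth/player-size heuristics
      experimentalBufferBasedABR: true
    }
  }
});
```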
### Bug Fixes
* appendsdone abort and handle multiple id3 sections. ([#971](https://github.com/videojs/http-streaming/issues/971)) ([329d50a](https://github.com/videojs/http-streaming/commit/329d50a))
* check tech error before pause loaders ([#969](https://github.com/videojs/http-streaming/issues/969)) ([0c7b2cb](https://github.com/videojs/http-streaming/commit/0c7b2cb))
* inline json version ([#967](https://github.com/videojs/http-streaming/issues/967)) ([326ce1c](https://github.com/videojs/http-streaming/commit/326ce1c))
* **experimentalBufferBasedABR:** call selectPlaylist and change media on an interval ([#978](https://github.com/videojs/http-streaming/issues/978)) ([200c87b](https://github.com/videojs/http-streaming/commit/200c87b)), closes [#886](https://github.com/videojs/http-streaming/issues/886) [#966](https://github.com/videojs/http-streaming/issues/966) [#964](https://github.com/videojs/http-streaming/issues/964)
* only prevent audio group creation if no other playlists are using it ([#981](https://github.com/videojs/http-streaming/issues/981)) ([645e979](https://github.com/videojs/http-streaming/commit/645e979))
* **playback-watcher:** ignore subtitles ([#980](https://github.com/videojs/http-streaming/issues/980)) ([ca7655e](https://github.com/videojs/http-streaming/commit/ca7655e))
### Chores
* **package:** update aes-decrypter, m3u8 and mpd parser for vhs-utils ([#988](https://github.com/videojs/http-streaming/issues/988)) ([c31dee2](https://github.com/videojs/http-streaming/commit/c31dee2))
### Tests
* **playback-watcher:** subtitle test refactor ([#986](https://github.com/videojs/http-streaming/issues/986)) ([0f66d8e](https://github.com/videojs/http-streaming/commit/0f66d8e)), closes [#980](https://github.com/videojs/http-streaming/issues/980)
<a name="2.2.0"></a>
# [2.2.0](https://github.com/videojs/http-streaming/compare/v2.1.0...v2.2.0) (2020-09-25)
### Features
* default handleManifestRedirect to true ([#927](https://github.com/videojs/http-streaming/issues/927)) ([556321f](https://github.com/videojs/http-streaming/commit/556321f))
* support MPD.Location ([#926](https://github.com/videojs/http-streaming/issues/926)) ([c4a43d7](https://github.com/videojs/http-streaming/commit/c4a43d7))
* Update minimumUpdatePeriod handling ([#942](https://github.com/videojs/http-streaming/issues/942)) ([8648e76](https://github.com/videojs/http-streaming/commit/8648e76))
### Bug Fixes
* audio groups with the same uri as media do not count ([#952](https://github.com/videojs/http-streaming/issues/952)) ([3927c0c](https://github.com/videojs/http-streaming/commit/3927c0c))
* dash manifest not refreshed if only some playlists are updated ([#949](https://github.com/videojs/http-streaming/issues/949)) ([31d3441](https://github.com/videojs/http-streaming/commit/31d3441))
* detect demuxed video underflow gaps ([#948](https://github.com/videojs/http-streaming/issues/948)) ([d0ef298](https://github.com/videojs/http-streaming/commit/d0ef298))
* MPD not refreshed if minimumUpdatePeriod is 0 ([#954](https://github.com/videojs/http-streaming/issues/954)) ([3a0682f](https://github.com/videojs/http-streaming/commit/3a0682f)), closes [#942](https://github.com/videojs/http-streaming/issues/942)
* noop vtt segment loader handle data ([#959](https://github.com/videojs/http-streaming/issues/959)) ([d1dcd7b](https://github.com/videojs/http-streaming/commit/d1dcd7b))
* report the correct buffered regardless of playlist change ([#950](https://github.com/videojs/http-streaming/issues/950)) ([043ccc6](https://github.com/videojs/http-streaming/commit/043ccc6))
* Throw a player error when trying to play DRM content without eme ([#938](https://github.com/videojs/http-streaming/issues/938)) ([ce4d6fd](https://github.com/videojs/http-streaming/commit/ce4d6fd))
* use playlist NAME when available as its ID ([#929](https://github.com/videojs/http-streaming/issues/929)) ([2269464](https://github.com/videojs/http-streaming/commit/2269464))
* use TIME_FUDGE_FACTOR rather than rounding by decimal digits ([#881](https://github.com/videojs/http-streaming/issues/881)) ([7eb112d](https://github.com/videojs/http-streaming/commit/7eb112d))
### Chores
* **package:** remove engine check in pkcs7 ([#947](https://github.com/videojs/http-streaming/issues/947)) ([89392fa](https://github.com/videojs/http-streaming/commit/89392fa))
* mark angel one dash subs as broken ([#956](https://github.com/videojs/http-streaming/issues/956)) ([56a0970](https://github.com/videojs/http-streaming/commit/56a0970))
* mediaConfig_ -> startingMediaInfo_, startingMedia_ -> currentMediaInfo_ ([#953](https://github.com/videojs/http-streaming/issues/953)) ([8801d1c](https://github.com/videojs/http-streaming/commit/8801d1c))
* playlist selector logging ([#921](https://github.com/videojs/http-streaming/issues/921)) ([ccdbaef](https://github.com/videojs/http-streaming/commit/ccdbaef))
* update m3u8-parser to v4.4.3 ([#928](https://github.com/videojs/http-streaming/issues/928)) ([af5b4ee](https://github.com/videojs/http-streaming/commit/af5b4ee))
### Reverts
* fix: use playlist NAME when available as its ID ([#929](https://github.com/videojs/http-streaming/issues/929)) ([#957](https://github.com/videojs/http-streaming/issues/957)) ([fe8376b](https://github.com/videojs/http-streaming/commit/fe8376b))
<a name="2.1.0"></a>
# [2.1.0](https://github.com/videojs/http-streaming/compare/v2.0.0...v2.1.0) (2020-07-28)
### Features
* Easier manual playlist switching, add codecs to renditions ([#850](https://github.com/videojs/http-streaming/issues/850)) ([f60fa1f](https://github.com/videojs/http-streaming/commit/f60fa1f))
* exclude all incompatible browser/muxer codecs ([#903](https://github.com/videojs/http-streaming/issues/903)) ([2d0f0d7](https://github.com/videojs/http-streaming/commit/2d0f0d7))
* expose canChangeType on the VHS property ([#911](https://github.com/videojs/http-streaming/issues/911)) ([a4ab285](https://github.com/videojs/http-streaming/commit/a4ab285))
* let back buffer be configurable ([8c96e6c](https://github.com/videojs/http-streaming/commit/8c96e6c))
* Support codecs switching when possible via sourceBuffer.changeType ([#841](https://github.com/videojs/http-streaming/issues/841)) ([267cc34](https://github.com/videojs/http-streaming/commit/267cc34))
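Manual switching goes through the representations API; a minimal sketch, assuming a player with a loaded source:

```js
// the rendition list is exposed on the tech's vhs property
var representations = player
  .tech({ IWillNotUseThisInPlugins: true })
  .vhs.representations();

// cap playback at 720p by disabling taller renditions
representations.forEach(function(rep) {
  rep.enabled(rep.height <= 720);
});
```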
### Bug Fixes
* always append init segment after trackinfo change ([#913](https://github.com/videojs/http-streaming/issues/913)) ([ea3650a](https://github.com/videojs/http-streaming/commit/ea3650a))
* cleanup mediasource listeners on dispose ([#871](https://github.com/videojs/http-streaming/issues/871)) ([e50f4c9](https://github.com/videojs/http-streaming/commit/e50f4c9))
* do not try to use unsupported audio ([#896](https://github.com/videojs/http-streaming/issues/896)) ([7711b26](https://github.com/videojs/http-streaming/commit/7711b26))
* do not use remove source buffer on ie 11 ([#904](https://github.com/videojs/http-streaming/issues/904)) ([1ab0f07](https://github.com/videojs/http-streaming/commit/1ab0f07))
* do not wait for audio appends for muxed segments ([#894](https://github.com/videojs/http-streaming/issues/894)) ([406cbcd](https://github.com/videojs/http-streaming/commit/406cbcd))
* Fixed issue with MPEG-Dash MPD Playlist Finalisation during Live Play. ([#874](https://github.com/videojs/http-streaming/issues/874)) ([c807930](https://github.com/videojs/http-streaming/commit/c807930))
* handle null return value from CaptionParser.parse ([#890](https://github.com/videojs/http-streaming/issues/890)) ([7b8fff2](https://github.com/videojs/http-streaming/commit/7b8fff2)), closes [#863](https://github.com/videojs/http-streaming/issues/863)
* have reloadSourceOnError get src from player ([#893](https://github.com/videojs/http-streaming/issues/893)) ([1e50bc5](https://github.com/videojs/http-streaming/commit/1e50bc5)), closes [videojs/video.js#6744](https://github.com/videojs/video.js/issues/6744)
* initialize EME for all playlists and PSSH values ([#872](https://github.com/videojs/http-streaming/issues/872)) ([e0e497f](https://github.com/videojs/http-streaming/commit/e0e497f))
* more conservative stalled download check, better logging ([#884](https://github.com/videojs/http-streaming/issues/884)) ([615e77f](https://github.com/videojs/http-streaming/commit/615e77f))
* pause/abort loaders before an exclude, preventing bad appends ([#902](https://github.com/videojs/http-streaming/issues/902)) ([c9126e1](https://github.com/videojs/http-streaming/commit/c9126e1))
* stop alt loaders on main mediachanging to prevent append race ([#895](https://github.com/videojs/http-streaming/issues/895)) ([8690c78](https://github.com/videojs/http-streaming/commit/8690c78))
* Support aac data with or without id3 tags by using mux.js@5.6.6 ([#899](https://github.com/videojs/http-streaming/issues/899)) ([9c742ce](https://github.com/videojs/http-streaming/commit/9c742ce))
* Use revokeObjectURL dispose for created MSE blob urls ([#849](https://github.com/videojs/http-streaming/issues/849)) ([ca73cac](https://github.com/videojs/http-streaming/commit/ca73cac))
* Wait for sourceBuffer creation so drm setup uses valid codecs ([#878](https://github.com/videojs/http-streaming/issues/878)) ([f879563](https://github.com/videojs/http-streaming/commit/f879563))
### Chores
* Add vhs & mpc (vhs.masterPlaylistController_) to window of index.html ([#875](https://github.com/videojs/http-streaming/issues/875)) ([bab61d6](https://github.com/videojs/http-streaming/commit/bab61d6))
* **demo:** add a representations selector to the demo page ([#901](https://github.com/videojs/http-streaming/issues/901)) ([0a54ae2](https://github.com/videojs/http-streaming/commit/0a54ae2))
* fix Tears of Steel PlayReady on the demo page ([#915](https://github.com/videojs/http-streaming/issues/915)) ([29a10d0](https://github.com/videojs/http-streaming/commit/29a10d0))
* keep window vhs/mpc up to date on source switch ([#883](https://github.com/videojs/http-streaming/issues/883)) ([3ba85fd](https://github.com/videojs/http-streaming/commit/3ba85fd))
* update DASH stream urls ([#918](https://github.com/videojs/http-streaming/issues/918)) ([902c2a5](https://github.com/videojs/http-streaming/commit/902c2a5))
* update local video.js ([#876](https://github.com/videojs/http-streaming/issues/876)) ([c2cc9aa](https://github.com/videojs/http-streaming/commit/c2cc9aa))
* use playready license server ([#916](https://github.com/videojs/http-streaming/issues/916)) ([6728837](https://github.com/videojs/http-streaming/commit/6728837))
### Code Refactoring
* remove duplicate bufferIntersection code in util/buffer.js ([#880](https://github.com/videojs/http-streaming/issues/880)) ([0ca43bd](https://github.com/videojs/http-streaming/commit/0ca43bd))
* simplify setupEmeOptions and add tests ([#869](https://github.com/videojs/http-streaming/issues/869)) ([e3921ed](https://github.com/videojs/http-streaming/commit/e3921ed))
<a name="2.0.0"></a>
# [2.0.0](https://github.com/videojs/http-streaming/compare/v2.0.0-rc.2...v2.0.0) (2020-06-16)
### Features
* add external vhs properties and deprecate hls and dash references ([#859](https://github.com/videojs/http-streaming/issues/859)) ([22af0b2](https://github.com/videojs/http-streaming/commit/22af0b2))
* Use VHS playback on any non-Safari browser ([#843](https://github.com/videojs/http-streaming/issues/843)) ([225d127](https://github.com/videojs/http-streaming/commit/225d127))
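A minimal sketch of the renamed property access; the old `hls` and `dash` references still work here but are deprecated:

```js
// preferred from 2.0.0 onward
var vhs = player.tech({ IWillNotUseThisInPlugins: true }).vhs;

// e.g. read the current bandwidth estimate from the stats object
console.log(vhs.stats.bandwidth);
```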
### Chores
* fix demo page on firefox, always use vhs on safari ([#851](https://github.com/videojs/http-streaming/issues/851)) ([d567b7d](https://github.com/videojs/http-streaming/commit/d567b7d))
* **stats:** update vhs usage in the stats page ([#867](https://github.com/videojs/http-streaming/issues/867)) ([4dda42a](https://github.com/videojs/http-streaming/commit/4dda42a))
### Code Refactoring
* Move caption parser to webworker, saving 5732 bytes and offloading work ([#863](https://github.com/videojs/http-streaming/issues/863)) ([491d194](https://github.com/videojs/http-streaming/commit/491d194))
* remove aes-decrypter objects from Hls, saving 1415 gzipped bytes ([#860](https://github.com/videojs/http-streaming/issues/860)) ([a4f8302](https://github.com/videojs/http-streaming/commit/a4f8302))
### Documentation
* add supported features doc ([#848](https://github.com/videojs/http-streaming/issues/848)) ([38f5860](https://github.com/videojs/http-streaming/commit/38f5860))
### Reverts
* "fix: Use middleware and a wrapped function for seeking instead of relying on unreliable 'seeking' events ([#161](https://github.com/videojs/http-streaming/issues/161))"([#856](https://github.com/videojs/http-streaming/issues/856)) ([1165f8e](https://github.com/videojs/http-streaming/commit/1165f8e))
### BREAKING CHANGES
* The Hls object which was exposed on videojs no longer has Decrypter, AsyncStream, and decrypt from aes-decrypter.
<a name="1.10.2"></a>
## [1.10.2](https://github.com/videojs/http-streaming/compare/v1.10.1...v1.10.2) (2019-05-13)
### Bug Fixes
* clear the blacklist for other playlists if final rendition errors ([#479](https://github.com/videojs/http-streaming/issues/479)) ([fe3b378](https://github.com/videojs/http-streaming/commit/fe3b378)), closes [#396](https://github.com/videojs/http-streaming/issues/396) [#471](https://github.com/videojs/http-streaming/issues/471)
* **development:** rollup watch, via `npm run watch`, should work for es/cjs ([#484](https://github.com/videojs/http-streaming/issues/484)) ([ad6f292](https://github.com/videojs/http-streaming/commit/ad6f292))
* **HLSe:** slice keys properly on IE11 ([#506](https://github.com/videojs/http-streaming/issues/506)) ([681cd6f](https://github.com/videojs/http-streaming/commit/681cd6f))
* **package:** update mpd-parser to version 0.8.1 🚀 ([#490](https://github.com/videojs/http-streaming/issues/490)) ([a49ad3a](https://github.com/videojs/http-streaming/commit/a49ad3a))
* **package:** update mux.js to version 5.1.2 🚀 ([#477](https://github.com/videojs/http-streaming/issues/477)) ([57a38e9](https://github.com/videojs/http-streaming/commit/57a38e9)), closes [#503](https://github.com/videojs/http-streaming/issues/503) [#504](https://github.com/videojs/http-streaming/issues/504)
* **source-updater:** run callbacks after setting timestampOffset ([#480](https://github.com/videojs/http-streaming/issues/480)) ([6ecf859](https://github.com/videojs/http-streaming/commit/6ecf859))
* livestream timeout issues ([#469](https://github.com/videojs/http-streaming/issues/469)) ([cf3fafc](https://github.com/videojs/http-streaming/commit/cf3fafc)), closes segment#16 segment#15
* remove both vttjs listeners to prevent leaking one of them ([#495](https://github.com/videojs/http-streaming/issues/495)) ([1db1e72](https://github.com/videojs/http-streaming/commit/1db1e72))
### Performance Improvements
* don't enable captionParser for audio or subtitle loaders ([#487](https://github.com/videojs/http-streaming/issues/487)) ([358877f](https://github.com/videojs/http-streaming/commit/358877f))
<a name="1.10.1"></a>
## [1.10.1](https://github.com/videojs/http-streaming/compare/v1.10.0...v1.10.1) (2019-04-16)
### Bug Fixes
* **dash-playlist-loader:** clear out timers on dispose ([#472](https://github.com/videojs/http-streaming/issues/472)) ([2f1c222](https://github.com/videojs/http-streaming/commit/2f1c222))
### Reverts
* "fix: clear the blacklist for other playlists if final rendition errors ([#396](https://github.com/videojs/http-streaming/issues/396))" ([#471](https://github.com/videojs/http-streaming/issues/471)) ([dd55028](https://github.com/videojs/http-streaming/commit/dd55028))
<a name="1.10.0"></a>
# [1.10.0](https://github.com/videojs/http-streaming/compare/v1.9.3...v1.10.0) (2019-04-12)
### Features
* add option to cache encryption keys in the player ([#446](https://github.com/videojs/http-streaming/issues/446)) ([599b94d](https://github.com/videojs/http-streaming/commit/599b94d)), closes [#140](https://github.com/videojs/http-streaming/issues/140)
* add support for dash manifests describing sidx boxes ([#455](https://github.com/videojs/http-streaming/issues/455)) ([80dde16](https://github.com/videojs/http-streaming/commit/80dde16))
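A minimal sketch of the key-caching option; in the 1.x series the tech options were namespaced under `hls`:

```js
var player = videojs('example-player', {
  html5: {
    hls: {
      // reuse AES-128 keys across segments instead of re-requesting them
      cacheEncryptionKeys: true
    }
  }
});
```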
### Bug Fixes
* clear the blacklist for other playlists if final rendition errors ([#396](https://github.com/videojs/http-streaming/issues/396)) ([6e6c8c2](https://github.com/videojs/http-streaming/commit/6e6c8c2))
* on dispose, don't call abort on SourceBuffer until after remove() has finished ([3806750](https://github.com/videojs/http-streaming/commit/3806750))
### Documentation
* **README:** update broken link to full docs ([#440](https://github.com/videojs/http-streaming/issues/440)) ([fbd615c](https://github.com/videojs/http-streaming/commit/fbd615c))
<a name="1.9.3"></a>
## [1.9.3](https://github.com/videojs/http-streaming/compare/v1.9.2...v1.9.3) (2019-03-21)
### Bug Fixes
* **id3:** ignore unsupported id3 frames ([#437](https://github.com/videojs/http-streaming/issues/437)) ([7040b7d](https://github.com/videojs/http-streaming/commit/7040b7d)), closes [videojs/video.js#5823](https://github.com/videojs/video.js/issues/5823)
### Documentation
* add diagrams for playlist loaders ([#426](https://github.com/videojs/http-streaming/issues/426)) ([52201f9](https://github.com/videojs/http-streaming/commit/52201f9))
<a name="1.9.2"></a>
## [1.9.2](https://github.com/videojs/http-streaming/compare/v1.9.1...v1.9.2) (2019-03-14)
### Bug Fixes
* expose `custom` segment property in the segment metadata track ([#429](https://github.com/videojs/http-streaming/issues/429)) ([17510da](https://github.com/videojs/http-streaming/commit/17510da))
<a name="1.9.1"></a>
## [1.9.1](https://github.com/videojs/http-streaming/compare/v1.9.0...v1.9.1) (2019-03-05)
### Bug Fixes
* fix for streams that would occasionally never fire an `ended` event ([fc09926](https://github.com/videojs/http-streaming/commit/fc09926))
* Fix video playback freezes caused by not using absolute current time ([#401](https://github.com/videojs/http-streaming/issues/401)) ([957ecfd](https://github.com/videojs/http-streaming/commit/957ecfd))
* only fire seekablechange when values of seekable ranges actually change ([#415](https://github.com/videojs/http-streaming/issues/415)) ([a4c056e](https://github.com/videojs/http-streaming/commit/a4c056e))
* Prevent infinite buffering at the start of looped video on edge ([#392](https://github.com/videojs/http-streaming/issues/392)) ([b6d1b97](https://github.com/videojs/http-streaming/commit/b6d1b97))
### Code Refactoring
* align DashPlaylistLoader closer to PlaylistLoader states ([#386](https://github.com/videojs/http-streaming/issues/386)) ([5d80fe7](https://github.com/videojs/http-streaming/commit/5d80fe7))
<a name="1.9.0"></a>
# [1.9.0](https://github.com/videojs/http-streaming/compare/v1.8.0...v1.9.0) (2019-02-07)
### Features
* Use exposed transmuxer time modifications for more accurate conversion between program and player times ([#371](https://github.com/videojs/http-streaming/issues/371)) ([41df5c0](https://github.com/videojs/http-streaming/commit/41df5c0))
### Bug Fixes
* m3u8 playlist is not updating when only endList changes ([#373](https://github.com/videojs/http-streaming/issues/373)) ([c7d1306](https://github.com/videojs/http-streaming/commit/c7d1306))
* Prevent exceptions from being thrown by the MediaSource ([#389](https://github.com/videojs/http-streaming/issues/389)) ([8c06366](https://github.com/videojs/http-streaming/commit/8c06366))
### Chores
* Update mux.js to the latest version 🚀 ([#397](https://github.com/videojs/http-streaming/issues/397)) ([38ec2a5](https://github.com/videojs/http-streaming/commit/38ec2a5))
### Tests
* added test for playlist not updating when only endList changes ([#394](https://github.com/videojs/http-streaming/issues/394)) ([39d0be2](https://github.com/videojs/http-streaming/commit/39d0be2))
<a name="1.8.0"></a>
# [1.8.0](https://github.com/videojs/http-streaming/compare/v1.7.0...v1.8.0) (2019-01-10)
### Features
* expose custom M3U8 mapper API ([#325](https://github.com/videojs/http-streaming/issues/325)) ([609beb3](https://github.com/videojs/http-streaming/commit/609beb3))
### Bug Fixes
* **id3:** cuechange event not being triggered on audio-only HLS streams ([#334](https://github.com/videojs/http-streaming/issues/334)) ([bab70fd](https://github.com/videojs/http-streaming/commit/bab70fd)), closes [#130](https://github.com/videojs/http-streaming/issues/130)
<a name="1.7.0"></a>
# [1.7.0](https://github.com/videojs/http-streaming/compare/v1.6.0...v1.7.0) (2019-01-04)
### Features
* expose custom M3U8 parser API ([#331](https://github.com/videojs/http-streaming/issues/331)) ([b0643a4](https://github.com/videojs/http-streaming/commit/b0643a4))
<a name="1.6.0"></a>
# [1.6.0](https://github.com/videojs/http-streaming/compare/v1.5.1...v1.6.0) (2018-12-21)
### Features
* Add allowSeeksWithinUnsafeLiveWindow property ([#320](https://github.com/videojs/http-streaming/issues/320)) ([74b28e8](https://github.com/videojs/http-streaming/commit/74b28e8))
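The property is set on the source object rather than in the player options; a minimal sketch with a hypothetical stream URL:

```js
player.src({
  src: 'https://example.com/live/playlist.m3u8', // hypothetical URL
  type: 'application/x-mpegURL',
  // allow seeks between seekable end and the live point without
  // the playback watcher snapping the playhead back
  allowSeeksWithinUnsafeLiveWindow: true
});
```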
### Chores
* add clock.ticks to now async operations in tests ([#315](https://github.com/videojs/http-streaming/issues/315)) ([895c86a](https://github.com/videojs/http-streaming/commit/895c86a))
### Documentation
* Add README entry on DRM and videojs-contrib-eme ([#307](https://github.com/videojs/http-streaming/issues/307)) ([93b6167](https://github.com/videojs/http-streaming/commit/93b6167))
<a name="1.5.1"></a>
## [1.5.1](https://github.com/videojs/http-streaming/compare/v1.5.0...v1.5.1) (2018-12-06)
### Bug Fixes
* added missing manifest information on to segments (EXT-X-PROGRAM-DATE-TIME) ([#236](https://github.com/videojs/http-streaming/issues/236)) ([a35dd09](https://github.com/videojs/http-streaming/commit/a35dd09))
* remove player props on dispose to stop middleware ([#229](https://github.com/videojs/http-streaming/issues/229)) ([cd13f9f](https://github.com/videojs/http-streaming/commit/cd13f9f))
### Documentation
* add dash to package.json description ([#267](https://github.com/videojs/http-streaming/issues/267)) ([3296c68](https://github.com/videojs/http-streaming/commit/3296c68))
* add documentation for reloadSourceOnError ([#266](https://github.com/videojs/http-streaming/issues/266)) ([7448b37](https://github.com/videojs/http-streaming/commit/7448b37))
<a name="1.5.0"></a>
# [1.5.0](https://github.com/videojs/http-streaming/compare/v1.4.2...v1.5.0) (2018-11-13)
### Features
* Add useBandwidthFromLocalStorage option ([#275](https://github.com/videojs/http-streaming/issues/275)) ([60c88ae](https://github.com/videojs/http-streaming/commit/60c88ae))
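A minimal sketch of opting in (the option defaults to off):

```js
var player = videojs('example-player', {
  html5: {
    hls: {
      // restore the bandwidth and throughput measured in a previous
      // session instead of starting from the hard-coded default
      useBandwidthFromLocalStorage: true
    }
  }
});
```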
### Bug Fixes
* don't wait for requests to finish when encountering an error in media-segment-request ([#286](https://github.com/videojs/http-streaming/issues/286)) ([970e3ce](https://github.com/videojs/http-streaming/commit/970e3ce))
* throttle final playlist reloads when using DASH ([#277](https://github.com/videojs/http-streaming/issues/277)) ([1c2887a](https://github.com/videojs/http-streaming/commit/1c2887a))
<a name="1.4.2"></a>
## [1.4.2](https://github.com/videojs/http-streaming/compare/v1.4.1...v1.4.2) (2018-11-01)
### Chores
* pin to node 8 for now ([#279](https://github.com/videojs/http-streaming/issues/279)) ([f900dc4](https://github.com/videojs/http-streaming/commit/f900dc4))
* update mux.js to 5.0.1 ([#282](https://github.com/videojs/http-streaming/issues/282)) ([af6ee4f](https://github.com/videojs/http-streaming/commit/af6ee4f))
<a name="1.4.1"></a>
## [1.4.1](https://github.com/videojs/http-streaming/compare/v1.4.0...v1.4.1) (2018-10-25)
### Bug Fixes
* **subtitles:** set default property if default and autoselect are both enabled ([#239](https://github.com/videojs/http-streaming/issues/239)) ([ee594e5](https://github.com/videojs/http-streaming/commit/ee594e5))
<a name="1.4.0"></a>
# [1.4.0](https://github.com/videojs/http-streaming/compare/v1.3.1...v1.4.0) (2018-10-24)
### Features
* limited experimental DASH multiperiod support ([#268](https://github.com/videojs/http-streaming/issues/268)) ([a213807](https://github.com/videojs/http-streaming/commit/a213807))
* smoothQualityChange flag ([#235](https://github.com/videojs/http-streaming/issues/235)) ([0e4fdf9](https://github.com/videojs/http-streaming/commit/0e4fdf9))
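A minimal sketch of the flag, assuming it is passed like the other tech-level hls options:

```js
var player = videojs('example-player', {
  html5: {
    hls: {
      // switch renditions without flushing the buffer, trading
      // switch immediacy for playback smoothness
      smoothQualityChange: true
    }
  }
});
```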
### Bug Fixes
* immediately setup EME if available ([#263](https://github.com/videojs/http-streaming/issues/263)) ([7577e90](https://github.com/videojs/http-streaming/commit/7577e90))
<a name="1.3.1"></a>
## [1.3.1](https://github.com/videojs/http-streaming/compare/v1.3.0...v1.3.1) (2018-10-15)
### Bug Fixes
* ensure content loops ([#259](https://github.com/videojs/http-streaming/issues/259)) ([26300df](https://github.com/videojs/http-streaming/commit/26300df))
<a name="1.3.0"></a>
# [1.3.0](https://github.com/videojs/http-streaming/compare/v1.2.6...v1.3.0) (2018-10-05)
### Features
* add an option to ignore player size in selection logic ([#238](https://github.com/videojs/http-streaming/issues/238)) ([7ae42b1](https://github.com/videojs/http-streaming/commit/7ae42b1))
### Documentation
* Update CONTRIBUTING.md ([#242](https://github.com/videojs/http-streaming/issues/242)) ([9d83e9d](https://github.com/videojs/http-streaming/commit/9d83e9d))
<a name="1.2.6"></a>
## [1.2.6](https://github.com/videojs/http-streaming/compare/v1.2.5...v1.2.6) (2018-09-21)
### Bug Fixes
* stutter after fast quality change in IE/Edge ([#213](https://github.com/videojs/http-streaming/issues/213)) ([2c0d9b2](https://github.com/videojs/http-streaming/commit/2c0d9b2))
### Documentation
* update issue template to link to the troubleshooting guide ([#215](https://github.com/videojs/http-streaming/issues/215)) ([413f0e8](https://github.com/videojs/http-streaming/commit/413f0e8))
* update README notes for video.js 7 ([#200](https://github.com/videojs/http-streaming/issues/200)) ([d68ce0c](https://github.com/videojs/http-streaming/commit/d68ce0c))
* update troubleshooting guide for Edge/mobile Chrome ([#216](https://github.com/videojs/http-streaming/issues/216)) ([21e5335](https://github.com/videojs/http-streaming/commit/21e5335))
<a name="1.2.5"></a>
## [1.2.5](https://github.com/videojs/http-streaming/compare/v1.2.4...v1.2.5) (2018-08-24)
### Bug Fixes
* fix replay functionality ([#204](https://github.com/videojs/http-streaming/issues/204)) ([fd6be83](https://github.com/videojs/http-streaming/commit/fd6be83))
<a name="1.2.4"></a>
## [1.2.4](https://github.com/videojs/http-streaming/compare/v1.2.3...v1.2.4) (2018-08-13)
### Bug Fixes
* Remove buffered data on fast quality switches ([#113](https://github.com/videojs/http-streaming/issues/113)) ([bc94fbb](https://github.com/videojs/http-streaming/commit/bc94fbb))
<a name="1.2.3"></a>
## [1.2.3](https://github.com/videojs/http-streaming/compare/v1.2.2...v1.2.3) (2018-08-09)
### Chores
* link to minified example in main page ([#189](https://github.com/videojs/http-streaming/issues/189)) ([15a7f92](https://github.com/videojs/http-streaming/commit/15a7f92))
* use netlify for easier testing ([#188](https://github.com/videojs/http-streaming/issues/188)) ([d2e0d35](https://github.com/videojs/http-streaming/commit/d2e0d35))
<a name="1.2.2"></a>
## [1.2.2](https://github.com/videojs/http-streaming/compare/v1.2.1...v1.2.2) (2018-08-07)
### Bug Fixes
* typeof minification ([#182](https://github.com/videojs/http-streaming/issues/182)) ([7c68335](https://github.com/videojs/http-streaming/commit/7c68335))
* Use middleware and a wrapped function for seeking instead of relying on unreliable 'seeking' events ([#161](https://github.com/videojs/http-streaming/issues/161)) ([6c68761](https://github.com/videojs/http-streaming/commit/6c68761))
### Chores
* add logo ([#184](https://github.com/videojs/http-streaming/issues/184)) ([a55626c](https://github.com/videojs/http-streaming/commit/a55626c))
### Documentation
* add note for Safari captions error ([#174](https://github.com/videojs/http-streaming/issues/174)) ([7b03530](https://github.com/videojs/http-streaming/commit/7b03530))
### Tests
* add support for real segments in tests ([#178](https://github.com/videojs/http-streaming/issues/178)) ([2b07fca](https://github.com/videojs/http-streaming/commit/2b07fca))
<a name="1.2.1"></a>
## [1.2.1](https://github.com/videojs/http-streaming/compare/v1.2.0...v1.2.1) (2018-07-17)
### Bug Fixes
* convert non-latin characters in IE ([#157](https://github.com/videojs/http-streaming/issues/157)) ([17678fb](https://github.com/videojs/http-streaming/commit/17678fb))
<a name="1.2.0"></a>
# [1.2.0](https://github.com/videojs/http-streaming/compare/v1.1.0...v1.2.0) (2018-07-16)
### Features
* **captions:** write in-band captions from DASH fmp4 segments to the textTrack API ([#108](https://github.com/videojs/http-streaming/issues/108)) ([7c11911](https://github.com/videojs/http-streaming/commit/7c11911))
### Chores
* add welcome bot config from video.js ([#150](https://github.com/videojs/http-streaming/issues/150)) ([922cfee](https://github.com/videojs/http-streaming/commit/922cfee))
<a name="1.1.0"></a>
# [1.1.0](https://github.com/videojs/http-streaming/compare/v1.0.2...v1.1.0) (2018-06-06)
### Features
* Utilize option to override native on tech ([#76](https://github.com/videojs/http-streaming/issues/76)) ([5c7ab4c](https://github.com/videojs/http-streaming/commit/5c7ab4c))
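A minimal sketch of overriding native playback; the native track options must be disabled alongside the flag:

```js
var player = videojs('example-player', {
  html5: {
    hls: {
      // use VHS even where the browser could play HLS natively
      overrideNative: true
    },
    nativeAudioTracks: false,
    nativeVideoTracks: false
  }
});
```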
### Chores
* update tests and pages for video.js 7 ([#102](https://github.com/videojs/http-streaming/issues/102)) ([d6f5005](https://github.com/videojs/http-streaming/commit/d6f5005))
<a name="1.0.2"></a>
## [1.0.2](https://github.com/videojs/http-streaming/compare/v1.0.1...v1.0.2) (2018-05-17)
### Bug Fixes
* make project Video.js 7 ready ([#92](https://github.com/videojs/http-streaming/issues/92)) ([decad87](https://github.com/videojs/http-streaming/commit/decad87))
* make sure that es build is babelified ([#97](https://github.com/videojs/http-streaming/issues/97)) ([5f0428d](https://github.com/videojs/http-streaming/commit/5f0428d))
### Documentation
* update documentation with a glossary and intro page, added DASH background ([#94](https://github.com/videojs/http-streaming/issues/94)) ([4b0fde9](https://github.com/videojs/http-streaming/commit/4b0fde9))
<a name="1.0.1"></a>
## [1.0.1](https://github.com/videojs/http-streaming/compare/v1.0.0...v1.0.1) (2018-04-12)
### Bug Fixes
* minified build ([#84](https://github.com/videojs/http-streaming/issues/84)) ([2402ac6](https://github.com/videojs/http-streaming/commit/2402ac6))
<a name="1.0.0"></a>
# [1.0.0](https://github.com/videojs/http-streaming/compare/v0.9.0...v1.0.0) (2018-04-10)
### Chores
* sync videojs-contrib-hls updates ([#75](https://github.com/videojs/http-streaming/issues/75)) ([9223588](https://github.com/videojs/http-streaming/commit/9223588))
* update the aes-decrypter ([#71](https://github.com/videojs/http-streaming/issues/71)) ([27ed914](https://github.com/videojs/http-streaming/commit/27ed914))
### Documentation
* update docs for overrideNative ([#77](https://github.com/videojs/http-streaming/issues/77)) ([98ca6d3](https://github.com/videojs/http-streaming/commit/98ca6d3))
* update known issues for fmp4 captions ([#79](https://github.com/videojs/http-streaming/issues/79)) ([c418301](https://github.com/videojs/http-streaming/commit/c418301))
<a name="0.9.0"></a>
# [0.9.0](https://github.com/videojs/http-streaming/compare/v0.8.0...v0.9.0) (2018-03-30)
### Features
* support in-manifest DRM data ([#60](https://github.com/videojs/http-streaming/issues/60)) ([a1cad82](https://github.com/videojs/http-streaming/commit/a1cad82))
<a name="0.8.0"></a>
# [0.8.0](https://github.com/videojs/http-streaming/compare/v0.7.2...v0.8.0) (2018-03-30)
### Code Refactoring
* export corrections ([#68](https://github.com/videojs/http-streaming/issues/68)) ([aab3b90](https://github.com/videojs/http-streaming/commit/aab3b90))
* use rollup for build ([#69](https://github.com/videojs/http-streaming/issues/69)) ([c28c25c](https://github.com/videojs/http-streaming/commit/c28c25c))
# 0.7.0
* feat: Live support for DASH
# 0.6.1
* use webwackify for webworkers to support webpack bundle ([#50](https://github.com/videojs/http-streaming/pull/45))
# 0.5.3
* fix: program date time handling ([#45](https://github.com/videojs/http-streaming/pull/45))
* update m3u8-parser to v4.2.0
* use segment program date time info
* feat: Adding support for segments in Period and Representation ([#47](https://github.com/videojs/http-streaming/pull/47))
* wait for both main and audio loaders for endOfStream if main starting media unknown ([#44](https://github.com/videojs/http-streaming/pull/44))
# 0.5.2
* add debug logging statement for seekable updates ([#40](https://github.com/videojs/http-streaming/pull/40))
# 0.5.1
* Fix audio only streams with EXT-X-MEDIA tags ([#34](https://github.com/videojs/http-streaming/pull/34))
* Merge videojs-contrib-hls master into http-streaming master ([#35](https://github.com/videojs/http-streaming/pull/35))
* Update sinon to 1.10.3
* Update videojs-contrib-quality-levels to ^2.0.4
* Fix test for event handler cleanup on dispose by calling event handling methods
* fix: Don't reset eme options ([#32](https://github.com/videojs/http-streaming/pull/32))
# 0.5.0
* update mpd-parser to support more segment list types ([#27](https://github.com/videojs/http-streaming/issues/27))
# 0.4.0
* Removed Flash support ([#15](https://github.com/videojs/http-streaming/issues/15))
* Blacklist playlists not supported by browser media source before initial selection ([#17](https://github.com/videojs/http-streaming/issues/17))
# 0.3.1
* Skip flash-based source handler with DASH sources ([#14](https://github.com/videojs/http-streaming/issues/14))
# 0.3.0
* Added additional properties to the stats object ([#10](https://github.com/videojs/http-streaming/issues/10))
# 0.2.1
* Updated the mpd-parser to fix IE11 DASH support ([#12](https://github.com/videojs/http-streaming/issues/12))
# 0.2.0
* Initial DASH Support ([#8](https://github.com/videojs/http-streaming/issues/8))
# 0.1.0
* Initial release, based on [videojs-contrib-hls 5.12.2](https://github.com/videojs/videojs-contrib-hls)

30
node_modules/@videojs/http-streaming/CONTRIBUTING.md generated vendored Normal file
View file

@ -0,0 +1,30 @@
# CONTRIBUTING
We welcome contributions from everyone!
## Getting Started
Make sure you have Node.js 8 or higher and npm installed.
1. Fork this repository and clone your fork
1. Install dependencies: `npm install`
1. Run a development server: `npm start`
### Making Changes
Refer to the [video.js plugin conventions][conventions] for more detail on best practices and tooling for video.js plugin authorship.
When you've made your changes, push your commit(s) to your fork and issue a pull request against the original repository.
### Running Tests
Testing is a crucial part of any software project. For all but the most trivial changes (typos, etc.), test cases are expected. Tests are run in actual browsers using [Karma][karma].
- In all available and supported browsers: `npm test`
- In a specific browser: `npm run test:chrome`, `npm run test:firefox`, etc.
- While development server is running (`npm start`), navigate to [`http://localhost:9999/test/`][local]
[karma]: http://karma-runner.github.io/
[local]: http://localhost:9999/test/
[conventions]: https://github.com/videojs/generator-videojs-plugin/blob/master/docs/conventions.md

49
node_modules/@videojs/http-streaming/LICENSE generated vendored Normal file
View file

@ -0,0 +1,49 @@
Copyright Brightcove, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
The AES decryption implementation in this project is derived from the
Stanford Javascript Cryptography Library
(http://bitwiseshiftleft.github.io/sjcl/). That work is covered by the
following copyright and permission notice:
Copyright 2009-2010 Emily Stark, Mike Hamburg, Dan Boneh.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
The views and conclusions contained in the software and documentation
are those of the authors and should not be interpreted as representing
official policies, either expressed or implied, of the authors.

1000
node_modules/@videojs/http-streaming/README.md generated vendored Normal file

File diff suppressed because it is too large Load diff

File diff suppressed because it is too large Load diff

File diff suppressed because it is too large Load diff

File diff suppressed because it is too large Load diff

File diff suppressed because it is too large Load diff

File diff suppressed because one or more lines are too long

50
node_modules/@videojs/http-streaming/docs/README.md generated vendored Normal file
View file

@ -0,0 +1,50 @@
# Overview
This project supports both [HLS][hls] and [MPEG-DASH][dash] playback in the video.js player. This document is intended as a primer for anyone interested in contributing or just better understanding how bits from a server get turned into video on their display.
## HTTP Live Streaming
[HLS][apple-hls-intro] has two primary characteristics that distinguish it from other video formats:
- Delivered over HTTP(S): it uses the standard application protocol of the web to deliver all its data
- Segmented: longer videos are broken up into smaller chunks which can be downloaded independently and switched between at runtime
A standard HLS stream consists of a *Master Playlist* which references one or more *Media Playlists*. Each Media Playlist contains one or more sequential video segments. All these components form a logical hierarchy that informs the player of the different quality levels of the video available and how to address the individual segments of video at each of those levels:
![HLS Format](images/hls-format.png)
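For concreteness, a minimal Master Playlist and one of its Media Playlists might look like the following (URIs, bandwidths, and durations are illustrative only):
```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/media.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
high/media.m3u8
```
```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment0.ts
#EXTINF:10.0,
segment1.ts
#EXT-X-ENDLIST
```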
HLS streams can be delivered in two different modes: a "static" mode for videos that can be played back from any point, often referred to as video-on-demand (VOD); or a "live" mode where later portions of the video become available as time goes by. In the static mode, the Master and Media playlists are fixed. The player is guaranteed that the set of video segments referenced by those playlists will not change over time.
Live mode can work in one of two ways. For truly live events, the most common configuration is for each individual Media Playlist to only include the latest video segment and a small number of consecutive previous segments. In this mode, the player may be able to seek backwards a short time in the video but probably not all the way back to the beginning. In the other live configuration, new video segments can be appended to the Media Playlists but older segments are never removed. This configuration allows the player to seek back to the beginning of the stream at any time during the broadcast and transitions seamlessly to the static stream type when the event finishes.
If you're interested in a more in-depth treatment of the HLS format, check out [Apple's documentation][apple-hls-intro] and the IETF [Draft Specification][hls-spec].
## Dynamic Adaptive Streaming over HTTP
Similar to HLS, [DASH][dash-wiki] content is segmented and is delivered over HTTP(S).
A DASH stream consists of a *Media Presentation Description* (MPD) that describes segment metadata such as timing information, URLs, resolution, and bitrate. Each segment can contain either ISO base media file format (e.g. MP4) or MPEG-2 TS data. Typically, the MPD will describe the various *Representations* that map to collections of segments at different bitrates to allow bitrate selection. These Representations can be organized as a SegmentList, SegmentTemplate, SegmentBase, or SegmentTimeline.
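As a rough illustration, an abridged MPD using a SegmentTemplate might look like this (all values are made up for the example):
```
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static" mediaPresentationDuration="PT60S">
  <Period>
    <AdaptationSet mimeType="video/mp4">
      <SegmentTemplate initialization="init-$RepresentationID$.mp4"
                       media="chunk-$RepresentationID$-$Number$.m4s" duration="4"/>
      <Representation id="360p" bandwidth="800000" width="640" height="360"/>
      <Representation id="720p" bandwidth="2500000" width="1280" height="720"/>
    </AdaptationSet>
  </Period>
</MPD>
```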
DASH streams can be delivered in both video-on-demand (VOD) and live streaming modes. In the VOD case, the MPD describes all the segments and representations available, and the player can choose which representation to play based on its capabilities.
Live mode is accomplished using the ISOBMFF Live profile if the segments are in ISOBMFF. There are a few different ways to set up the MPD, including but not limited to updating the MPD after an interval of time, using *Periods*, or using the *availabilityTimeOffset* field. A few examples of this are provided by the [DASH Reference Client][dash-if-reference-client]. The MPD will provide enough information for the player to play back the live stream and seek back as far as is specified in the MPD.
If you're interested in a more in-depth description of MPEG-DASH, check out [MDN's tutorial on setting up DASH][mdn-dash-tut] or the [DASHIF Guidelines][dash-if-guide].
# Further Documentation
- [Architecture](arch.md)
- [Glossary](glossary.md)
- [Adaptive Bitrate Switching](bitrate-switching.md)
- [Multiple Alternative Audio Tracks](multiple-alternative-audio-tracks.md)
- [reloadSourceOnError](reload-source-on-error.md)
# Helpful Tools
- [FFmpeg](http://trac.ffmpeg.org/wiki/CompilationGuide)
- [Thumbcoil](http://thumb.co.il/): web based video inspector
[hls]: /docs/intro.md#http-live-streaming
[dash]: /docs/intro.md#dynamic-adaptive-streaming-over-http
[apple-hls-intro]: https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/Introduction/Introduction.html
[hls-spec]: https://datatracker.ietf.org/doc/draft-pantos-http-live-streaming/
[dash-wiki]: https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP
[dash-if-reference-client]: https://reference.dashif.org/dash.js/
[mdn-dash-tut]: https://developer.mozilla.org/en-US/Apps/Fundamentals/Audio_and_video_delivery/Setting_up_adaptive_streaming_media_sources
[dash-if-guide]: http://dashif.org/guidelines/

28
node_modules/@videojs/http-streaming/docs/arch.md generated vendored Normal file
View file

@ -0,0 +1,28 @@
## HLS Project Overview
This project has three primary duties:
1. Download and parse playlist files
1. Implement the [HTMLVideoElement](https://html.spec.whatwg.org/multipage/embedded-content.html#the-video-element) interface
1. Feed content bits to a SourceBuffer by downloading and transmuxing video segments
### Playlist Management
The [playlist loader](../src/playlist-loader.js) handles all of the details of requesting, parsing, updating, and switching playlists at runtime. Its operation is described by this state diagram:
![Playlist Loader States](images/playlist-loader-states.png)
During VOD playback, the loader will move quickly to the HAVE_METADATA state and then stay there unless a quality switch request sends it to SWITCHING_MEDIA while it fetches an alternate playlist. The loader enters the HAVE_CURRENT_METADATA state when a live stream is detected and it is time to refresh the current media playlist to find out about new video segments.
### HLS Tech
Currently, the HLS project integrates with [video.js](http://www.videojs.com/) as a [tech](https://github.com/videojs/video.js/blob/master/docs/guides/tech.md). That means it's responsible for providing an interface that closely mirrors the `<video>` element. You can see that implementation in [videojs-http-streaming.js](../src/videojs-http-streaming.js), the primary entry point of the project.
### Transmuxing
Most browsers don't have support for the file type that HLS video segments are stored in. To get HLS playing back on those browsers, contrib-hls strings together a number of technologies:
1. The [Netstream](http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/NetStream.html) in [video.js SWF](https://github.com/videojs/video-js-swf) has a special mode of operation that allows binary video data packaged as an [FLV](http://en.wikipedia.org/wiki/Flash_Video) to be provided directly
1. [videojs-contrib-media-sources](https://github.com/videojs/videojs-contrib-media-sources) provides an abstraction layer over the SWF that operates like a [Media Source](https://w3c.github.io/media-source/#mediasource)
1. A pure javascript transmuxer that repackages HLS segments as FLVs
Transmuxing is the process of transforming media stored in one container format into another container without modifying the underlying media data. If that last sentence doesn't make any sense to you, check out the [Introduction to Media](media.md) for more details.
### Buffer Management
Buffering in contrib-hls is driven by two functions in videojs-hls.js: fillBuffer() and drainBuffer(). During its operation, contrib-hls periodically calls fillBuffer(), which determines when more video data is required and begins a segment download if so. Meanwhile, drainBuffer() is invoked periodically during playback to process incoming segments and append them onto the [SourceBuffer](http://w3c.github.io/media-source/#sourcebuffer). In conjunction with a goal buffer length, this producer-consumer relationship drives the buffering behavior of contrib-hls.
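As a purely illustrative sketch of that producer-consumer relationship (the names mirror the prose above and the segment I/O is injected; nothing here is the actual source):
```js
// Illustrative only: a goal buffer length gates downloads (producer), while
// finished downloads are appended to the source buffer (consumer).
function createBufferLoop({ bufferedAheadSeconds, downloadNextSegment, takeDownloadedSegment, appendToSourceBuffer }) {
  const GOAL_BUFFER_LENGTH = 30; // seconds of forward buffer to aim for
  let downloading = false;

  return {
    fillBuffer() {
      if (!downloading && bufferedAheadSeconds() < GOAL_BUFFER_LENGTH) {
        downloading = true;
        downloadNextSegment(() => (downloading = false));
      }
    },
    drainBuffer() {
      const segment = takeDownloadedSegment();

      if (segment) {
        appendToSourceBuffer(segment);
      }
    }
  };
}
```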

View file

@ -0,0 +1,44 @@
# Adaptive Switching Behavior
The HLS tech tries to ensure the highest-quality viewing experience
possible, given the available bandwidth and encodings. This doesn't
always mean using the highest-bitrate rendition available-- if the player
is 300px by 150px, it would be a big waste of bandwidth to download a 4k
stream. By default, the player attempts to load the highest-bitrate
variant that is less than the most recently detected segment bandwidth,
with one condition: if there are multiple variants with dimensions greater
than the current player size, it will only switch up one size greater
than the current player size.
If you're the visual type, the whole process is illustrated
below. Whenever a new segment is downloaded, we calculate the download
bitrate based on the size of the segment and the time it took to
download:
![New bitrate info is available](images/bitrate-switching-1.png)
First, we filter out all the renditions that have a higher bitrate
than the new measurement:
![Bitrate filtering](images/bitrate-switching-2.png)
Then we get rid of any renditions that are bigger than the current
player dimensions:
![Resolution filtering](images/bitrate-switching-3.png)
We don't want a significant quality drop just because your player is
one pixel too small, so we add back in the next highest
resolution. The highest bitrate rendition that remains is the one that
gets used:
![Final selection](images/bitrate-switching-4.png)
If it turns out no rendition is acceptable based on the filtering
described above, the first encoding listed in the master playlist will
be used.
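A condensed sketch of those selection steps (not the actual VHS source; each playlist is assumed to expose `attributes.BANDWIDTH` and `attributes.RESOLUTION` with `width`/`height`):
```js
function selectRendition(playlists, measuredBandwidth, playerWidth, playerHeight) {
  const pixels = (p) => p.attributes.RESOLUTION.width * p.attributes.RESOLUTION.height;

  // 1. keep renditions whose bitrate is below the measured bandwidth
  const byBandwidth = playlists.filter((p) => p.attributes.BANDWIDTH < measuredBandwidth);

  // 2. drop renditions with dimensions larger than the player...
  const fits = byBandwidth.filter((p) =>
    p.attributes.RESOLUTION.width <= playerWidth &&
    p.attributes.RESOLUTION.height <= playerHeight);

  // 3. ...but add back the next size up, so being one pixel too small
  // doesn't cause a large quality drop
  const nextSizeUp = byBandwidth
    .filter((p) => !fits.includes(p))
    .sort((a, b) => pixels(a) - pixels(b))[0];
  const candidates = nextSizeUp ? fits.concat(nextSizeUp) : fits;

  // 4. the highest-bitrate rendition remaining wins; otherwise fall back
  // to the first rendition listed in the master playlist
  candidates.sort((a, b) => b.attributes.BANDWIDTH - a.attributes.BANDWIDTH);
  return candidates[0] || playlists[0];
}
```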
If you'd like your player to use a different set of priorities, it's
possible to completely replace the rendition selection logic. For
instance, you could always choose the most appropriate rendition by
resolution, even though this might mean more stalls during playback.
See the documentation on `player.vhs.selectPlaylist` for more details.
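For example, a hedged sketch of a resolution-first replacement (assuming the parsed master playlist is exposed at `player.vhs.playlists.master`):
```js
// A sketch only: always pick the smallest rendition that advertises a
// resolution, regardless of measured bandwidth.
player.vhs.selectPlaylist = function() {
  const playlists = player.vhs.playlists.master.playlists
    .filter((playlist) => playlist.attributes && playlist.attributes.RESOLUTION);

  // sort ascending by pixel count and take the smallest
  playlists.sort((a, b) =>
    (a.attributes.RESOLUTION.width * a.attributes.RESOLUTION.height) -
    (b.attributes.RESOLUTION.width * b.attributes.RESOLUTION.height));

  return playlists[0];
};
```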

View file

@ -0,0 +1,218 @@
# Creating Content
## Commands for creating tests streams
### Streams with EXT-X-PROGRAM-DATE-TIME for testing seekToProgramTime and convertToProgramTime
lavfi and testsrc are used to generate a test stream in ffmpeg:
* `-g 300` sets the GOP size to 300 (the keyframe interval; at 30fps, one keyframe every 10 seconds)
* `-f hls` sets the format to HLS (creates an m3u8 and TS segments)
* `-hls_time 10` sets the goal segment size to 10 seconds
* `-hls_list_size 20` sets the number of segments in the m3u8 file to 20
* `-hls_flags program_date_time` sets `#EXT-X-PROGRAM-DATE-TIME` on each segment
```
ffmpeg \
-f lavfi \
-i testsrc=duration=200:size=1280x720:rate=30 \
-g 300 \
-f hls \
-hls_time 10 \
-hls_list_size 20 \
-hls_flags program_date_time \
stream.m3u8
```
## Commands used for segments in `test/segments` dir
### video.ts
Copy only the first two video frames, leave out audio.
```
$ ffmpeg -i index0.ts -vframes 2 -an -vcodec copy video.ts
```
### videoOneSecond.ts
Blank video for 1 second, MMS-Small resolution, start at 0 PTS/DTS, 2 frames per second
```
$ ffmpeg -f lavfi -i color=c=black:s=128x96:r=2:d=1 -muxdelay 0 -c:v libx264 videoOneSecond.ts
```
### videoOneSecond1.ts through videoOneSecond4.ts
Same as videoOneSecond.ts, but follows timing in sequence, with videoOneSecond.ts acting as the 0 index. Each segment starts at the second that its index indicates (e.g., videoOneSecond2.ts has a start time of 2 seconds).
```
$ ffmpeg -i videoOneSecond.ts -muxdelay 0 -output_ts_offset 1 -vcodec copy videoOneSecond1.ts
$ ffmpeg -i videoOneSecond.ts -muxdelay 0 -output_ts_offset 2 -vcodec copy videoOneSecond2.ts
$ ffmpeg -i videoOneSecond.ts -muxdelay 0 -output_ts_offset 3 -vcodec copy videoOneSecond3.ts
$ ffmpeg -i videoOneSecond.ts -muxdelay 0 -output_ts_offset 4 -vcodec copy videoOneSecond4.ts
```
### audio.ts
Copy only the first two audio frames, leave out video.
```
$ ffmpeg -i index0.ts -aframes 2 -vn -acodec copy audio.ts
```
### videoMinOffset.ts
video.ts but with an offset of 0
```
$ ffmpeg -i video.ts -muxpreload 0 -muxdelay 0 -vcodec copy videoMinOffset.ts
```
### audioMinOffset.ts
audio.ts but with an offset of 0. Note that muxed.ts is used because ffmpeg didn't like
the use of audio.ts
```
$ ffmpeg -i muxed.ts -muxpreload 0 -muxdelay 0 -acodec copy -vn audioMinOffset.ts
```
### videoMaxOffset.ts
This segment offsets content such that it ends at exactly the max timestamp before a rollover occurs. It uses the max timestamp of 2^33 (8589934592) minus the segment duration of 6006 (0.066733 seconds), so that the rollover does not happen mid-segment, and divides the value by 90,000 to convert it from media time to seconds.
(2^33 - 6006) / 90,000 = 95443.6509556
```
$ ffmpeg -i videoMinOffset.ts -muxdelay 95443.6509556 -muxpreload 95443.6509556 -output_ts_offset 95443.6509556 -vcodec copy videoMaxOffset.ts
```
### audioMaxOffset.ts
This segment offsets content such that it ends at exactly the max timestamp before a rollover occurs. It uses the max timestamp of 2^33 (8589934592) minus the segment duration of 11520 (0.128000 seconds), so that the rollover does not happen mid-segment, and divides the value by 90,000 to convert it from media time to seconds.
(2^33 - 11520) / 90,000 = 95443.5896889
```
$ ffmpeg -i audioMinOffset.ts -muxdelay 95443.5896889 -muxpreload 95443.5896889 -output_ts_offset 95443.5896889 -acodec copy audioMaxOffset.ts
```
### videoLargeOffset.ts
This segment offsets content by the rollover threshold of 2^32 (4294967296) found in the rollover handling of mux.js, adds 1 to ensure there aren't any cases where there's an equal match, then divides the value by 90,000 to convert it from media time to seconds.
(2^32 + 1) / 90,000 = 47721.8588556
```
$ ffmpeg -i videoMinOffset.ts -muxdelay 47721.8588556 -muxpreload 47721.8588556 -output_ts_offset 47721.8588556 -vcodec copy videoLargeOffset.ts
```
### audioLargeOffset.ts
This segment offsets content by the rollover threshold of 2^32 (4294967296) found in the rollover handling of mux.js, adds 1 to ensure there aren't any cases where there's an equal match, then divides the value by 90,000 to convert it from media time to seconds.
(2^32 + 1) / 90,000 = 47721.8588556
```
$ ffmpeg -i audioMinOffset.ts -muxdelay 47721.8588556 -muxpreload 47721.8588556 -output_ts_offset 47721.8588556 -acodec copy audioLargeOffset.ts
```
### videoLargeOffset2.ts
This takes videoLargeOffset.ts and adds the duration of videoLargeOffset.ts (6006 / 90,000 = 0.066733 seconds) to its offset so that this segment can act as the second in one continuous stream.
47721.8588556 + 0.066733 = 47721.9255886
```
$ ffmpeg -i videoLargeOffset.ts -muxdelay 47721.9255886 -muxpreload 47721.9255886 -output_ts_offset 47721.9255886 -vcodec copy videoLargeOffset2.ts
```
### audioLargeOffset2.ts
This takes audioLargeOffset.ts and adds the duration of audioLargeOffset.ts (11520 / 90,000 = 0.128 seconds) to its offset so that this segment can act as the second in one continuous stream.
47721.8588556 + 0.128 = 47721.9868556
```
$ ffmpeg -i audioLargeOffset.ts -muxdelay 47721.9868556 -muxpreload 47721.9868556 -output_ts_offset 47721.9868556 -acodec copy audioLargeOffset2.ts
```
### caption.ts
Copy the first two frames of video out of a ts segment that already includes CEA-608 captions.
`ffmpeg -i index0.ts -vframes 2 -an -vcodec copy caption.ts`
### id3.ts
Copy only the first five frames of video, leave out audio.
`ffmpeg -i index0.ts -vframes 5 -an -vcodec copy smaller.ts`
Create an ID3 tag using [id3taggenerator][apple_streaming_tools]:
`id3taggenerator -text "{\"id\":1, \"data\": \"id3\"}" -o tag.id3`
Create a file `macro.txt` with the following:
`0 id3 tag.id3`
Run [mediafilesegmenter][apple_streaming_tools] with the small video segment and macro file, to produce a new segment with ID3 tags inserted at the specified times.
`mediafilesegmenter -start-segments-with-iframe --target-duration=1 --meta-macro-file=macro.txt -s -A smaller.ts`
### mp4Video.mp4
Copy only the first two video frames, leave out audio.
movflags:
* frag\_keyframe: "Start a new fragment at each video keyframe."
* empty\_moov: "Write an initial moov atom directly at the start of the file, without describing any samples in it."
* omit\_tfhd\_offset: "Do not write any absolute base\_data\_offset in tfhd atoms. This avoids tying fragments to absolute byte positions in the file/streams." (see also: https://www.w3.org/TR/mse-byte-stream-format-isobmff/#movie-fragment-relative-addressing)
```
$ ffmpeg -i file.mp4 -movflags frag_keyframe+empty_moov+omit_tfhd_offset -vframes 2 -an -vcodec copy mp4Video.mp4
```
### mp4Audio.mp4
Copy only the first two audio frames, leave out video.
movflags:
* frag\_keyframe: "Start a new fragment at each video keyframe."
* empty\_moov: "Write an initial moov atom directly at the start of the file, without describing any samples in it."
* omit\_tfhd\_offset: "Do not write any absolute base\_data\_offset in tfhd atoms. This avoids tying fragments to absolute byte positions in the file/streams." (see also: https://www.w3.org/TR/mse-byte-stream-format-isobmff/#movie-fragment-relative-addressing)
```
$ ffmpeg -i file.mp4 -movflags frag_keyframe+empty_moov+omit_tfhd_offset -aframes 2 -vn -acodec copy mp4Audio.mp4
```
### mp4VideoInit.mp4 and mp4AudioInit.mp4
Using DASH as the format type (-f) will lead to two init segments, one for video and one for audio. Using HLS will lead to one joined init segment.
Renamed from .m4s to .mp4
```
$ ffmpeg -i input.mp4 -f dash out.mpd
```
### webmVideoInit.webm and webmVideo.webm
```
$ cat mp4VideoInit.mp4 mp4Video.mp4 > video.mp4
$ ffmpeg -i video.mp4 -dash_segment_type webm -c:v libvpx-vp9 -f dash output.mpd
$ mv init-stream0.webm webmVideoInit.webm
$ mv chunk-stream0-00001.webm webmVideo.webm
```
## Other useful commands
### Joined (audio and video) initialization segment (for HLS)
Using DASH as the format type (-f) will lead to two init segments, one for video and one for audio. Using HLS will lead to one joined init segment.
Note that -hls\_fmp4\_init\_filename defaults to init.mp4, but is here for readability.
Without specifying fmp4 for hls\_segment\_type, ffmpeg defaults to ts.
```
$ ffmpeg -i input.mp4 -f hls -hls_fmp4_init_filename init.mp4 -hls_segment_type fmp4 out.m3u8
```
[apple_streaming_tools]: https://developer.apple.com/documentation/http_live_streaming/about_apple_s_http_live_streaming_tools

View file

@ -0,0 +1,87 @@
# DASH Playlist Loader
## Purpose
The [DashPlaylistLoader][dpl] (DPL) is responsible for requesting MPDs, parsing them, and keeping track of the media "playlists" associated with the MPD. The [DPL] is used with a [SegmentLoader][sl] to load fmp4 fragments from a DASH source.
## Basic Responsibilities
1. To request an MPD.
2. To parse an MPD into a format [videojs-http-streaming][vhs] can understand.
3. To refresh MPDs according to their minimumUpdatePeriod.
4. To allow selection of a specific media stream.
5. To sync the client clock with a server clock according to the UTCTiming node.
6. To refresh a live MPD for changes.
## Design
The [DPL] is written to be as similar as possible to the [PlaylistLoader][pl]. This means that the majority of the public API for these two classes is the same, as are the states they go through and the events that they trigger.
### States
![DashPlaylistLoader States](images/dash-playlist-loader-states.nomnoml.svg)
- `HAVE_NOTHING` the state before the MPD is received and parsed.
- `HAVE_MASTER` the state after the MPD has been parsed but before a media stream is set up.
- `HAVE_METADATA` the state after a media stream is set up.
### API
- `load()` this will either start or kick the loader during playback.
- `start()` this will start the [DPL] and request the MPD.
- `parseMasterXml()` this will parse the MPD manifest and return the result.
- `media()` this will return the currently active media stream or set a new active media stream.
### Events
- `loadedplaylist` signals the setup of a master playlist, representing the DASH source as a whole, from the MPD; or a media playlist, representing a media stream.
- `loadedmetadata` signals initial setup of a media stream.
- `minimumUpdatePeriod` signals that an update period has ended and the MPD must be requested again.
- `playlistunchanged` signals that no changes have been made to an MPD.
- `mediaupdatetimeout` signals that a live MPD and media stream must be refreshed.
- `mediachanging` signals that the currently active media stream is going to be changed.
- `mediachange` signals that the new media stream has been updated.
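Taken together, a hedged sketch of how another module might drive this API (`loader` is assumed to be an already-constructed [DPL]; this is internal VHS code, not a public interface):
```js
loader.on('loadedplaylist', () => {
  // first trigger: the MPD has been parsed into a master playlist;
  // selecting a media stream moves the loader toward HAVE_METADATA
  if (!loader.media()) {
    loader.media(loader.master.playlists[0]);
  }
});

loader.on('loadedmetadata', () => {
  // a media stream is set up; segment loading can begin
});

loader.load(); // HAVE_NOTHING -> request and parse the MPD
```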
### Interaction with Other Modules
![DPL with MPC and MG](images/dash-playlist-loader-mpc-mg-sequence.plantuml.png)
### Special Features
There are a few features of [DPL] that are different from [PL] due to fundamental differences between HLS and DASH standards.
#### MinimumUpdatePeriod
This is a time period specified in the MPD after which the MPD should be re-requested and parsed. There could be any number of changes to the MPD between these update periods.
#### SyncClientServerClock
There is a UTCTiming node in the MPD that allows the client clock to be synced with a clock on the server. This may affect the results of parsing the MPD.
#### Requesting `sidx` Boxes
To be filled out.
### Previous Behavior
Until version 1.9.0 of [VHS], we thought that [DPL] could skip the `HAVE_NOTHING` and `HAVE_MASTER` states, as no other XHR requests are needed once the MPD has been downloaded and parsed. However, this is incorrect as there are some Presentations that signal the use of a "Segment Index box" or `sidx`. This `sidx` references specific byte ranges in a file that could contain media or potentially other `sidx` boxes.
A DASH MPD that describes a `sidx` is therefore similar to an HLS master manifest, in that the MPD contains references to something that must be requested and parsed first before references to media segments can be obtained. With this in mind, it was necessary to update the initialization and state transitions of [DPL] to allow further XHR requests to be made after the initial request for the MPD.
### Current Behavior
In [this PR](https://github.com/videojs/http-streaming/pull/386), the [DPL] was updated to go through the `HAVE_NOTHING` and `HAVE_MASTER` states before arriving at `HAVE_METADATA`. If the MPD does not contain `sidx` boxes, then this transition happens quickly after `load()` is called, spending little time in the `HAVE_MASTER` state.
The initial media selection for `masterPlaylistLoader` is made in the `loadedplaylist` handler located in [MasterPlaylistController][mpc]. We now use `hasPendingRequest` to determine whether to automatically select a media playlist for the `masterPlaylistLoader` as a fallback in case one is not selected by [MPC]. The child [DPL]s are created with a media playlist passed in as an argument, so this fallback is not necessary for them. Instead, that media playlist is saved and auto-selected once we enter the `HAVE_MASTER` state.
The `updateMaster` method will return `null` if no updates are found.
The `selectinitialmedia` event is not triggered until an audioPlaylistLoader (which for DASH is always a child [DPL]) has a media playlist. This is signaled by triggering `loadedmetadata` on the respective [DPL]. This event is used to initialize the [Representations API][representations] and setup EME (see [contrib-eme]).
[dpl]: ../src/dash-playlist-loader.js
[sl]: ../src/segment-loader.js
[vhs]: intro.md
[pl]: ../src/playlist-loader.js
[mpc]: ../src/master-playlist-controller.js
[representations]: ../README.md#hlsrepresentations
[contrib-eme]: https://github.com/videojs/videojs-contrib-eme

23
node_modules/@videojs/http-streaming/docs/glossary.md generated vendored Normal file
View file

@ -0,0 +1,23 @@
# Glossary
**Playlist**: This is a representation of an HLS or DASH manifest.
**Media Playlist**: This is a manifest that represents a single rendition or media stream of the source.
**Master Playlist Controller**: This acts as the main controller for the playback engine. It interacts with the SegmentLoaders, PlaylistLoaders, PlaybackWatcher, etc.
**Playlist Loader**: This will request the source and load the master manifest. It is also instructed by the ABR algorithm to load a media playlist, or wraps a media playlist if one is provided as the source. There are more details about the playlist loader [here](./arch.md).
**DASH Playlist Loader**: This will do as the PlaylistLoader does, but for DASH sources. It also handles DASH-specific functionality, such as refreshing the MPD according to the minimumUpdatePeriod and synchronizing to a server clock.
**Segment Loader**: This determines which segment should be loaded, requests it via the Media Segment Loader and passes the result to the Source Updater.
**Media Segment Loader**: This requests a given segment, decrypts the segment if necessary, and returns it to the Segment Loader.
**Source Updater**: This manages the browser's [SourceBuffers](https://developer.mozilla.org/en-US/docs/Web/API/SourceBuffer). It appends decrypted segment bytes provided by the Segment Loader to the corresponding Source Buffer.
**ABR (Adaptive Bitrate) Algorithm**: This concept is described in more detail [here](https://en.wikipedia.org/wiki/Adaptive_bitrate_streaming). Our chosen ABR algorithm is referenced by [selectPlaylist](../README.md#hlsselectplaylist) and is described more [here](./bitrate-switching.md).
**Playback Watcher**: This attempts to resolve common playback stalls caused by improper seeking, gaps in content, and browser issues.
**Sync Controller**: This will attempt to create a mapping between the segment index and a display time on the player.

20
node_modules/@videojs/http-streaming/docs/hlse.md generated vendored Normal file
View file

@ -0,0 +1,20 @@
# Encrypted HTTP Live Streaming
The [HLS spec](http://tools.ietf.org/html/draft-pantos-http-live-streaming-13#section-6.2.3) requires segments to be encrypted with AES-128 in CBC mode with PKCS7 padding. You can encrypt data to that specification with a combination of [OpenSSL](https://www.openssl.org/) and the [pkcs7 utility](https://github.com/brightcove/pkcs7). From the command-line:
```sh
# encrypt the text "hello" into a file
# since this is for testing, skip the key salting so the output is stable
# using -nosalt outside of testing is a terrible idea!
echo -n "hello" | pkcs7 | \
openssl enc -aes-128-cbc -nopad -nosalt -K $KEY -iv $IV > hello.encrypted
# xxd is a handy way of translating binary into a format easily consumed by
# javascript
xxd -i hello.encrypted
```
Later, you can decrypt it:
```sh
openssl enc -d -nopad -aes-128-cbc -K $KEY -iv $IV
```
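On the client, decryption is handled by the [aes-decrypter](https://github.com/videojs/aes-decrypter) package this project uses. A minimal sketch (the key and IV words below are placeholders, not real secrets):
```js
const { Decrypter } = require('aes-decrypter');

// the encrypted bytes from above, plus the 128-bit key and IV expressed
// as four 32-bit words each
const encryptedBytes = new Uint8Array([/* contents of hello.encrypted */]);
const key = new Uint32Array([0x01020304, 0x05060708, 0x090a0b0c, 0x0d0e0f10]);
const iv = new Uint32Array([0, 0, 0, 0]);

new Decrypter(encryptedBytes, key, iv, (err, decryptedBytes) => {
  // decryptedBytes is a Uint8Array containing the decrypted payload
  console.log(decryptedBytes);
});
```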

Binary file not shown.

After

Width:  |  Height:  |  Size: 32 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 38 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 44 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 42 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 12 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 169 KiB

View file

@ -0,0 +1,12 @@
<svg width="228" height="310" version="1.1" baseProfile="full" viewbox="0 0 228 310" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events" style="font-weight:bold; font-size:10pt; font-family:'Arial', Helvetica, sans-serif;;stroke-width:2;stroke-linejoin:round;stroke-linecap:round"><text x="134" y="111" style="font-weight:normal;">load()</text>
<path d="M114 81 L114 105 L114 129 L114 129 " style="stroke:#33322E;fill:none;stroke-dasharray:none;"></path>
<path d="M110.8 121 L114 125 L117.2 121 L114 129 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<text x="134" y="211" style="font-weight:normal;">media()</text>
<path d="M114 181 L114 205 L114 229 L114 229 " style="stroke:#33322E;fill:none;stroke-dasharray:none;"></path>
<path d="M110.8 221 L114 225 L117.2 221 L114 229 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<rect x="37" y="30" height="50" width="154" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="57" y="60" style="">HAVE_NOTHING</text>
<rect x="40" y="130" height="50" width="149" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="60" y="160" style="">HAVE_MASTER</text>
<rect x="30" y="230" height="50" width="168" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="50" y="260" style="">HAVE_METADATA</text></svg>

After

Width:  |  Height:  |  Size: 1.4 KiB

View file

@ -0,0 +1,125 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
width="744.09448819"
height="1052.3622047"
id="svg2"
version="1.1"
inkscape:version="0.48.2 r9819"
sodipodi:docname="New document 1">
<defs
id="defs4" />
<sodipodi:namedview
id="base"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
inkscape:zoom="0.74898074"
inkscape:cx="405.31989"
inkscape:cy="721.1724"
inkscape:document-units="px"
inkscape:current-layer="layer1"
showgrid="false"
inkscape:window-width="1165"
inkscape:window-height="652"
inkscape:window-x="0"
inkscape:window-y="0"
inkscape:window-maximized="0" />
<metadata
id="metadata7">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<dc:title></dc:title>
</cc:Work>
</rdf:RDF>
</metadata>
<g
inkscape:label="Layer 1"
inkscape:groupmode="layer"
id="layer1">
<g
id="g3832">
<g
transform="translate(-80,0)"
id="g3796">
<rect
style="fill:none;stroke:#000000;stroke-width:4.99253178;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
id="rect3756"
width="195.00757"
height="75.007133"
x="57.496265"
y="302.08554" />
<text
xml:space="preserve"
style="font-size:39.94025421px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
x="80.563461"
y="353.93951"
id="text3758"
sodipodi:linespacing="125%"
transform="scale(0.99841144,1.0015911)"><tspan
sodipodi:role="line"
id="tspan3760"
x="80.563461"
y="353.93951">Header</tspan></text>
</g>
<g
transform="translate(-80,0)"
id="g3801">
<text
xml:space="preserve"
style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
x="278.44489"
y="354.50266"
id="text3762"
sodipodi:linespacing="125%"><tspan
sodipodi:role="line"
id="tspan3764"
x="278.44489"
y="354.50266">Raw Bitstream Payload (RBSP)</tspan></text>
<rect
style="fill:none;stroke:#000000;stroke-width:5;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
id="rect3768"
width="660.63977"
height="75"
x="252.5"
y="302.09293" />
</g>
</g>
<text
xml:space="preserve"
style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
x="10.078175"
y="432.12851"
id="text3806"
sodipodi:linespacing="125%"><tspan
sodipodi:role="line"
id="tspan3808"
x="10.078175"
y="432.12851">1 byte</tspan></text>
<text
xml:space="preserve"
style="font-size:40px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
x="-31.193787"
y="252.32137"
id="text3810"
sodipodi:linespacing="125%"><tspan
sodipodi:role="line"
id="tspan3812"
x="-31.193787"
y="252.32137">H264 Network Abstraction Layer (NAL) Unit</tspan></text>
</g>
</svg>

After

Width:  |  Height:  |  Size: 4.3 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 21 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 22 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 19 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 162 KiB

View file

@ -0,0 +1,26 @@
<svg width="304" height="610" version="1.1" baseProfile="full" viewbox="0 0 304 610" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events" style="font-weight:bold; font-size:10pt; font-family:'Arial', Helvetica, sans-serif;;stroke-width:2;stroke-linejoin:round;stroke-linecap:round"><text x="172" y="111" style="font-weight:normal;">load()</text>
<path d="M152 81 L152 105 L152 129 L152 129 " style="stroke:#33322E;fill:none;stroke-dasharray:none;"></path>
<path d="M148.8 121 L152 125 L155.2 121 L152 129 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<text x="172" y="211" style="font-weight:normal;">media()</text>
<path d="M152 181 L152 205 L152 229 L152 229 " style="stroke:#33322E;fill:none;stroke-dasharray:none;"></path>
<path d="M148.8 221 L152 225 L155.2 221 L152 229 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<text x="172" y="311" style="font-weight:normal;">media()/ start()</text>
<path d="M152 281 L152 305 L152 329 L152 329 " style="stroke:#33322E;fill:none;stroke-dasharray:none;"></path>
<path d="M148.8 321 L152 325 L155.2 321 L152 329 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<path d="M152 381 L152 405 L152 429 L152 429 " style="stroke:#33322E;fill:none;stroke-dasharray:4 4;"></path>
<path d="M148.8 421 L152 425 L155.2 421 L152 429 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<path d="M155.2 389 L152 385 L148.8 389 L152 381 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<path d="M152 481 L152 505 L152 529 L152 529 " style="stroke:#33322E;fill:none;stroke-dasharray:4 4;"></path>
<path d="M148.8 521 L152 525 L155.2 521 L152 529 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<path d="M155.2 489 L152 485 L148.8 489 L152 481 Z" style="stroke:#33322E;fill:#33322E;stroke-dasharray:none;"></path>
<rect x="75" y="30" height="50" width="154" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="95" y="60" style="">HAVE_NOTHING</text>
<rect x="78" y="130" height="50" width="149" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="98" y="160" style="">HAVE_MASTER</text>
<rect x="56" y="230" height="50" width="192" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="76.3" y="260" style="">SWITCHING_MEDIA</text>
<rect x="68" y="330" height="50" width="168" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="88" y="360" style="">HAVE_METADATA</text>
<text x="67" y="460" style="font-weight:normal;font-style:italic;">mediaupdatetimeout</text>
<rect x="30" y="530" height="50" width="244" style="stroke:#33322E;fill:#eee8d5;stroke-dasharray:none;"></rect>
<text x="50" y="560" style="">HAVE_CURRENT_METADATA</text></svg>

After

Width:  |  Height:  |  Size: 2.8 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 58 KiB

Binary file not shown.

View file

@ -0,0 +1,119 @@
@startuml
header DashPlaylistLoader sequences
title DashPlaylistLoader sequences: Master Manifest with Alternate Audio
Participant "MasterPlaylistController" as MPC #red
Participant "MasterDashPlaylistLoader" as MPL #blue
Participant "mainSegmentLoader" as SL #blue
Participant "AudioDashPlaylistLoader" as APL #green
Participant "audioSegmentLoader" as ASL #green
Participant "external server" as ext #brown
Participant "mpdParser" as parser #orange
Participant "mediaGroups" as MG #purple
Participant Tech #lightblue
== Initialization ==
MPC -> MPL : construct MasterPlaylistLoader
MPC -> MPL: load()
== Requesting Master Manifest ==
MPL -> MPL : start()
MPL -> ext: xhr request for master manifest
ext -> MPL : response with master manifest
MPL -> parser: parse manifest
parser -> MPL: object representing manifest
note over MPL #lightblue: trigger 'loadedplaylist'
== Requesting Video Manifest ==
note over MPL #lightblue: handling loadedplaylist
MPL -> MPL: media(x)
alt if no sidx
note over MPL #lightgray: zero delay to fake network request
else if sidx
break
MPL -> ext: request sidx
end
end
note over MPL #lightblue: trigger 'loadedmetadata' on master loader [T1]
note over MPL #lightblue: handling 'loadedmetadata'
opt vod and preload !== 'none'
MPL -> SL: playlist()
MPL -> SL: load()
end
== Initializing Media Groups, Choosing Active Tracks ==
MPL -> MG: setupMediaGroups()
MG -> MG: initialize()
== Initializing Alternate Audio Loader ==
MG -> APL: create child playlist loader for alt audio
MG -> MG: activeGroup and audio variant selected
MG -> MG: enable activeTrack, onTrackChanged()
MG -> ASL: reset audio segment loader
== Requesting Alternate Audio Manifest ==
MG -> MG: startLoaders()
MG -> APL: load()
APL -> APL: start()
APL -> APL: zero delay to fake network request
break finish pending tasks
MG -> Tech: add audioTrack
MPL -> MPC: setupSourceBuffers_()
MPL -> MPC: setupFirstPlay()
loop mainSegmentLoader.monitorBufferTick_()
SL -> ext: requests media segments
ext -> SL: response with media segment bytes
end
end
APL -> APL: zero delay over
APL -> APL: media(x)
alt if no sidx
note over APL #lightgray: zero delay to fake network request
else if sidx
break
MPL -> ext: request sidx
end
end
== Requesting Alternate Audio Segments ==
note over APL #lightblue: trigger 'loadedplaylist'
note over APL #lightblue: handling 'loadedplaylist'
APL -> ASL: playlist()
note over ASL #lightblue: trigger 'loadedmetadata' [T2]
note over APL #lightblue: handling 'loadedmetadata'
APL -> ASL: playlist()
APL -> ASL: load()
loop audioSegmentLoader.monitorBufferTick_()
ASL -> ext: requests media segments
ext -> ASL: response with media segment bytes
end
@enduml

View file

@ -0,0 +1,21 @@
#title: DASH Playlist Loader States
#arrowSize: 0.5
#bendSize: 1
#direction: down
#gutter: 10
#edgeMargin: 1
#edges: rounded
#fillArrows: false
#font: Arial
#fontSize: 10
#leading: 1
#lineWidth: 2
#padding: 20
#spacing: 50
#stroke: #33322E
#zoom: 1
#.label: align=center visual=none italic
[HAVE_NOTHING] load()-> [HAVE_MASTER]
[HAVE_MASTER] media()-> [HAVE_METADATA]

Binary file not shown.

Binary file not shown.

Binary file not shown.

View file

@ -0,0 +1,246 @@
@startuml
header PlaylistLoader sequences
title PlaylistLoader sequences: Master Manifest and Alternate Audio
Participant "MasterPlaylistController" as MPC #red
Participant "MasterPlaylistLoader" as MPL #blue
Participant "mainSegmentLoader" as SL #blue
Participant "AudioPlaylistLoader" as APL #green
Participant "audioSegmentLoader" as ASL #green
Participant "external server" as ext #brown
Participant "m3u8Parser" as parser #orange
Participant "mediaGroups" as MG #purple
Participant Tech #lightblue
== Initialization ==
group MasterPlaylistController.constructor()
MPC -> MPL : setting up MasterPlaylistLoader
note left #lightyellow
sets up mediaupdatetimeout
handler for live playlist staleness
end note
note over MPL #lightgray: state = 'HAVE_NOTHING'
MPC -> MPL: load()
end
group MasterPlaylistLoader.load()
MPL -> MPL : start()
note left #lightyellow: not started yet
== Requesting Master Manifest ==
group start()
note over MPL #lightgray: started = true
MPL -> ext: xhr request for master manifest
ext -> MPL : response with master manifest
MPL -> parser: parse master manifest
parser -> MPL: object representing manifest
MPL -> MPL: set loader's master playlist
note over MPL #lightgray: state = 'HAVE_MASTER'
note over MPL #lightblue: trigger 'loadedplaylist' on master loader
== Requesting Video Manifest ==
group 'loadedplaylist' handler
note over MPL #lightblue: handling loadedplaylist
MPL -> MPL : media()
note left #lightgray: select initial (video) playlist
note over MPL #lightyellow: state = 'SWITCHING_MEDIA'
group media()
MPL -> ext : request child manifest
ext -> MPL: child manifest returned
MPL -> MPL: haveMetadata()
note over MPL #lightyellow: state = 'HAVE_METADATA'
group haveMetadata()
MPL -> parser: parse child manifest
parser -> MPL: object representing the child manifest
note over MPL #lightyellow
update master and media playlists
end note
opt live
MPL -> MPL: setup mediaupdatetimeout
end
note over MPL #lightblue
trigger 'loadedplaylist' on master loader.
This does not end up requesting segments
at this point.
end note
group MasterPlaylistLoader 'loadedplaylist' handler
MPL -> MPL : setup durationchange handler
end
end
== Requesting Video Segments ==
note over MPL #lightblue: trigger 'loadedmetadata'
group 'loadedmetadata' handler
note over MPL #lightblue: handling 'loadedmetadata'
opt vod and preload !== 'none'
MPL -> SL: playlist()
note over SL #lightyellow: updates playlist
MPL -> SL: load()
note right #lightgray
This does nothing as mimeTypes
have not been set yet.
end note
end
MPL -> MG: setupMediaGroups()
== Initializing Media Groups, Choosing Active Tracks ==
group MediaGroups.setupMediaGroups()
group initialize()
MG -> APL: create child playlist loader for alt audio
note over APL #lightyellow: state = 'HAVE_NOTHING'
note left #lightgray
setup 'loadedmetadata' and 'loadedplaylist' listeners
on child alt audio playlist loader
end note
MG -> Tech: add audioTracks
end
MG -> MG: activeGroup and audio variant selected
MG -> MG: enable activeTrack, onTrackChanged()
note left #lightgray
There is no activePlaylistLoader at this point,
but there is an audio playlistLoader
end note
group onTrackChanged()
MG -> SL: reset mainSegmentLoader
note left #lightgray: Clears buffer, aborts all inflight requests
== Requesting Alternate Audio Manifest ==
MG -> MG: startLoaders()
group startLoaders()
note over MG #lightyellow
activePlaylistLoader = AudioPlaylistLoader
end note
MG -> APL: load()
end
group AudioPlaylistLoader.load()
APL -> APL: start()
group alt start()
note over APL #lightyellow: started = true
APL -> ext: request alt audio media manifest
break MasterPlaylistLoader 'loadedmetadata' handler
MPL -> MPC: setupSourceBuffers()
note left #lightgray
This will set mimeType.
Segments can be loaded from now on.
end note
MPL -> MPC: setupFirstPlay()
note left #lightgray
Immediate exit since the player
is paused
end note
end
ext -> APL: responds with child manifest
APL -> parser: parse child manifest
parser -> APL: object representing child manifest returned
note over APL #lightyellow: state = 'HAVE_MASTER'
note left #lightgray: Infer a master playlist
APL -> APL: haveMetadata()
note over APL #lightyellow: state = 'HAVE_METADATA'
group haveMetadata()
APL -> parser: parsing the child manifest again
parser -> APL: returning object representing child manifest
note over APL #lightyellow
update master and media references
end note
== Requesting Alternate Audio Segments ==
note over APL #lightblue: trigger 'loadedplaylist'
group 'loadedplaylist' handler
note over APL #lightblue: handling 'loadedplaylist'
APL -> ASL: playlist()
note over ASL #lightyellow: set playlist
end
end
note over APL #lightblue: trigger 'loadedmetadata'
group 'loadedmetadata' handler
note over APL #lightblue: handling 'loadedmetadata'
APL -> ASL: playlist()
APL -> ASL: load()
loop audioSegmentLoader.load()
ASL -> ext: requests media segments
ext -> ASL: response with media segment bytes
end
end
end
end
end
end
end
end
end
end
@enduml

View file

@ -0,0 +1,114 @@
@startuml
header PlaylistLoader sequences
title PlaylistLoader sequences: Master Manifest and Alternate Audio
Participant "MasterPlaylistController" as MPC #red
Participant "MasterPlaylistLoader" as MPL #blue
Participant "mainSegmentLoader" as SL #blue
Participant "AudioPlaylistLoader" as APL #green
Participant "audioSegmentLoader" as ASL #green
Participant "external server" as ext #brown
Participant "m3u8Parser" as parser #orange
Participant "mediaGroups" as MG #purple
Participant Tech #lightblue
== Initialization ==
MPC -> MPL : construct MasterPlaylistLoader
MPC -> MPL: load()
MPL -> MPL : start()
== Requesting Master Manifest ==
MPL -> ext: xhr request for master manifest
ext -> MPL : response with master manifest
MPL -> parser: parse master manifest
parser -> MPL: object representing manifest
note over MPL #lightblue: trigger 'loadedplaylist'
== Requesting Video Manifest ==
note over MPL #lightblue: handling loadedplaylist
MPL -> MPL : media()
MPL -> ext : request child manifest
ext -> MPL: child manifest returned
MPL -> parser: parse child manifest
parser -> MPL: object representing the child manifest
note over MPL #lightblue: trigger 'loadedplaylist'
note over MPL #lightblue: handling 'loadedplaylist'
MPL -> SL: playlist()
MPL -> SL: load()
== Requesting Video Segments ==
note over MPL #lightblue: trigger 'loadedmetadata'
note over MPL #lightblue: handling 'loadedmetadata'
opt vod and preload !== 'none'
MPL -> SL: playlist()
MPL -> SL: load()
end
MPL -> MG: setupMediaGroups()
== Initializing Media Groups, Choosing Active Tracks ==
MG -> APL: create child playlist loader for alt audio
MG -> MG: activeGroup and audio variant selected
MG -> MG: enable activeTrack, onTrackChanged()
MG -> SL: reset mainSegmentLoader
== Requesting Alternate Audio Manifest ==
MG -> MG: startLoaders()
MG -> APL: load()
APL -> APL: start()
APL -> ext: request alt audio media manifest
break finish pending tasks
MG -> Tech: add audioTracks
MPL -> MPC: setupSourceBuffers()
MPL -> MPC: setupFirstPlay()
loop on monitorBufferTick
SL -> ext: requests media segments
ext -> SL: response with media segment bytes
end
end
ext -> APL: responds with child manifest
APL -> parser: parse child manifest
parser -> APL: object representing child manifest returned
== Requesting Alternate Audio Segments ==
note over APL #lightblue: trigger 'loadedplaylist'
note over APL #lightblue: handling 'loadedplaylist'
APL -> ASL: playlist()
note over APL #lightblue: trigger 'loadedmetadata'
note over APL #lightblue: handling 'loadedmetadata'
APL -> ASL: playlist()
APL -> ASL: load()
loop audioSegmentLoader.load()
ASL -> ext: requests media segments
ext -> ASL: response with media segment bytes
end
@enduml

View file

@ -0,0 +1,25 @@
#title: Playlist Loader States
#arrowSize: 0.5
#bendSize: 1
#direction: down
#gutter: 10
#edgeMargin: 1
#edges: rounded
#fillArrows: false
#font: Arial
#fontSize: 10
#leading: 1
#lineWidth: 2
#padding: 20
#spacing: 50
#stroke: #33322E
#zoom: 1
#.label: align=center visual=none italic
[HAVE_NOTHING] load()-> [HAVE_MASTER]
[HAVE_MASTER] media()-> [SWITCHING_MEDIA]
[SWITCHING_MEDIA] media()/ start()-> [HAVE_METADATA]
[HAVE_METADATA] <--> [<label> mediaupdatetimeout]
[<label> mediaupdatetimeout] <--> [HAVE_CURRENT_METADATA]

View file

@ -0,0 +1,13 @@
@startuml
state "Download Segment" as DL
state "Prepare for Append" as PfA
[*] -> DL
DL -> PfA
PfA : transmux (if needed)
PfA -> Append
Append : MSE source buffer
Append -> [*]
@enduml

Binary file not shown.

After

Width:  |  Height:  |  Size: 9.1 KiB

View file

@ -0,0 +1,57 @@
@startuml
participant SegmentLoader order 1
participant "media-segment-request" order 2
participant "videojs-contrib-media-sources" order 3
participant mux.js order 4
participant "Native Source Buffer" order 5
SegmentLoader -> "media-segment-request" : mediaSegmentRequest(...)
group Request
"media-segment-request" -> SegmentLoader : doneFn(...)
note left
At end of all requests
(key/segment/init segment)
end note
SegmentLoader -> SegmentLoader : handleSegment(...)
note left
"Probe" (parse) segment for
timing and track information
end note
SegmentLoader -> "videojs-contrib-media-sources" : append to "fake" source buffer
note left
Source buffer here is a
wrapper around native buffers
end note
group Transmux
"videojs-contrib-media-sources" -> mux.js : postMessage(...setAudioAppendStart...)
note left
Used for checking for overlap when
prefixing audio with silence.
end note
"videojs-contrib-media-sources" -> mux.js : postMessage(...alignGopsWith...)
note left
Used for aligning gops when overlapping
content (switching renditions) to fix
some browser glitching.
end note
"videojs-contrib-media-sources" -> mux.js : postMessage(...push...)
note left
Pushes bytes into the transmuxer pipeline.
end note
"videojs-contrib-media-sources" -> mux.js : postMessage(...flush...)
"mux.js" -> "videojs-contrib-media-sources" : postMessage(...data...)
"videojs-contrib-media-sources" -> "Native Source Buffer" : append
"Native Source Buffer" -> "videojs-contrib-media-sources" : //updateend//
"videojs-contrib-media-sources" -> SegmentLoader : handleUpdateEnd(...)
end
end
SegmentLoader -> SegmentLoader : handleUpdateEnd_()
note left
Saves segment timing info
and starts next request.
end note
@enduml

Binary file not shown.

After

Width:  |  Height:  |  Size: 65 KiB

View file

@ -0,0 +1,29 @@
@startuml
state "Request Segment" as RS
state "Partial Response (1)" as PR1
state "..." as DDD
state "Partial Response (n)" as PRN
state "Prepare for Append (1)" as PfA1
state "Prepare for Append (n)" as PfAN
state "Append (1)" as A1
state "Append (n)" as AN
[*] -> RS
RS --> PR1
PR1 --> DDD
DDD --> PRN
PR1 -> PfA1
PfA1 : transmux (if needed)
PfA1 -> A1
A1 : MSE source buffer
PRN -> PfAN
PfAN : transmux (if needed)
PfAN -> AN
AN : MSE source buffer
AN --> [*]
@enduml

Binary file not shown.

After

Width:  |  Height:  |  Size: 22 KiB

109
node_modules/@videojs/http-streaming/docs/lhls/index.md generated vendored Normal file
View file

@ -0,0 +1,109 @@
# LHLS
### Table of Contents
* [Background](#background)
* [Current Support for LHLS in VHS](#current-support-for-lhls-in-vhs)
* [Request a Segment in Pieces](#request-a-segment-in-pieces)
* [Transmux and Append Segment Pieces](#transmux-and-append-segment-pieces)
* [videojs-contrib-media-sources background](#videojs-contrib-media-sources-background)
* [Transmux Before Append](#transmux-before-append)
* [Transmux Within media-segment-request](#transmux-within-media-segment-request)
* [mux.js](#muxjs)
* [The New Flow](#the-new-flow)
* [Resources](#resources)
### Background
LHLS stands for Low-Latency HLS (see [Periscope's post](https://medium.com/@periscopecode/introducing-lhls-media-streaming-eb6212948bef)). It's meant to be used for ultra-low-latency live streaming, where a server can send pieces of a segment before the segment is done being written, and the player can append those pieces to the browser, allowing sub-segment-duration latency from true live.
In order to support LHLS, a few components are required:
* A server that supports [chunked transfer encoding](https://en.wikipedia.org/wiki/Chunked_transfer_encoding).
* A client that can:
* request segment pieces
* transmux segment pieces (for browsers that don't natively support the media type)
* append segment pieces
### Current Support for LHLS in VHS
At the moment, VHS doesn't support any of the client requirements: it waits until a request has completed, and its transmuxer expects full segments.
Current flow:
![current flow](./current-flow.plantuml.png)
Expected flow:
![expected flow](./expected-flow.plantuml.png)
### Request a Segment in Pieces
The first change was to request pieces of a segment. There are a few approaches to accomplish this:
* [Range Requests](https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests)
* requires server support
* more round trips
* [Fetch](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API)
* limited browser support
* doesn't support aborts
* [Plain text MIME type](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Sending_and_Receiving_Binary_Data)
* slightly non-standard
* incurs a cost of converting from string to bytes
*Plain text MIME type* was chosen because of its wide support. It provides a mechanism to access progressive bytes downloaded on [XMLHttpRequest progress events](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequestEventTarget/onprogress).
This change was made in [media-segment-request](https://github.com/videojs/http-streaming/blob/master/src/media-segment-request.js).
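A minimal sketch of the technique, following the MDN approach linked above (this is not the actual media-segment-request code):
```js
const segmentUrl = 'https://example.com/segment0.ts'; // placeholder URL
const xhr = new XMLHttpRequest();

xhr.open('GET', segmentUrl);
// keep the browser from mangling the bytes while treating them as text
xhr.overrideMimeType('text/plain; charset=x-user-defined');

xhr.addEventListener('progress', () => {
  // responseText grows as the download progresses; mask each char code
  // back down to a byte to recover the binary data received so far
  const text = xhr.responseText;
  const bytes = new Uint8Array(text.length);

  for (let i = 0; i < text.length; i++) {
    bytes[i] = text.charCodeAt(i) & 0xff;
  }
  // hand `bytes` off for partial transmuxing and appending
});

xhr.send();
```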
### Transmux and Append Segment Pieces
Getting the progress bytes is easy. Supporting partial transmuxing and appending is harder.
Current flow:
![current transmux and append flow](./current-transmux-and-append-flow.plantuml.png)
In order to support partial transmuxing and appending in the current flow, videojs-contrib-media-sources would have to get more complicated.
#### videojs-contrib-media-sources background
Browsers, via MSE source buffers, only support a limited set of media types. For most browsers, this means MP4/fragmented MP4. HLS uses TS segments (it also supports fragmented MP4, but that case is less common). This is why transmuxing is necessary.
Just like Video.js is a wrapper around the browser video element, bridging compatibility and adding support to extend features, videojs-contrib-media-sources provides support for more media types across different browsers by building in a transmuxer.
Not only did videojs-contrib-media-sources allow us to transmux TS to FMP4, but it also allowed us to transmux TS to FLV for flash support.
Over time, the complexity of logic grew in videojs-contrib-media-sources, and it coupled tightly with videojs-contrib-hls and videojs-http-streaming, firing events to communicate between the two.
Once flash support was moved to a distinct flash module, [via flashls](https://github.com/brightcove/videojs-flashls-source-handler), it was decided to move the videojs-contrib-media-sources logic into VHS, and to remove coupled logic by using only the native source buffers (instead of the wrapper) and transmuxing somewhere within VHS before appending.
##### Transmux Before Append
As the LHLS work started, and videojs-contrib-media-sources needed more logic, the native media source [abstraction leaked](https://en.wikipedia.org/wiki/Leaky_abstraction), adding non-standard functions to work around limitations. In addition, the logic in videojs-contrib-media-sources required more conditional paths, leading to more confusing code.
It was decided that it would be easier to do the transmux before append work in the process of adding support for LHLS. This was widely considered a *good decision*, and provided a means of reducing tech debt while adding in a new feature.
##### Transmux Within media-segment-request
Work started by moving transmuxing into segment-loader; however, we quickly realized that media-segment-request provided a better home.
media-segment-request already handled decrypting segments. If it handled transmuxing as well, then segment-loader could stick with only deciding which segment to request, getting bytes as FMP4, and appending them.
The transmuxing logic moved to a new module called segment-transmuxer, which wrapped around the [WebWorker](https://developer.mozilla.org/en-US/docs/Web/API/Worker/Worker) that wrapped around mux.js (the transmuxer itself).
##### mux.js
While most of the [mux.js pipeline](https://github.com/videojs/mux.js/blob/master/docs/diagram.png) supports pushing pieces of data (and should support LHLS by default), the "flushes" it uses to send transmuxed data back to the caller expected full segments.
Much of the pipeline was reused; however, the top-level audio and video segment streams, as well as the entry point, were rewritten so that instead of providing a full segment on flushes, each frame of video is provided individually (audio frames still flush as a group). The new concept of partial flushes was added to the pipeline to handle this case.
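To make the shape of this concrete, here is a loose sketch of the worker messaging involved. The message formats and the worker file name are illustrative assumptions, not the exact shapes used by segment-transmuxer and the worker around mux.js:
```js
// hypothetical worker bundle name
const transmuxWorker = new Worker('transmuxer.worker.js');

// push each chunk of bytes from a progress event into the pipeline
const pushBytes = (bytes) => {
  transmuxWorker.postMessage({ action: 'push', data: bytes.buffer }, [bytes.buffer]);
};

// on progress: collate any complete frames parsed so far, cache the remainder
const partialFlush = () => transmuxWorker.postMessage({ action: 'partialFlush' });

// on request completion: flush everything, keeping only cross-segment state
const flush = () => transmuxWorker.postMessage({ action: 'flush' });
```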
##### The New Flow
One benefit to transmuxing before appending is the possibility of extracting track and timing information from the segments. Previously, this required a separate parsing step to happen on the full segment. Now, it is included in the transmuxing pipeline, and comes back to us on separate callbacks.
![new segment loader sequence](./new-segment-loader-sequence.plantuml.png)
### Resources
* https://medium.com/@periscopecode/introducing-lhls-media-streaming-eb6212948bef
* https://github.com/jordicenzano/webserver-chunked-growingfiles

View file

@ -0,0 +1,118 @@
@startuml
participant SegmentLoader order 1
participant "media-segment-request" order 2
participant XMLHttpRequest order 3
participant "segment-transmuxer" order 4
participant "mux.js" order 5
SegmentLoader -> "media-segment-request" : mediaSegmentRequest(...)
"media-segment-request" -> XMLHttpRequest : request for segment/key/init segment
group Request
XMLHttpRequest -> "media-segment-request" : //segment progress//
note over "media-segment-request" #moccasin
If handling partial data,
tries to transmux new
segment bytes.
end note
"media-segment-request" -> SegmentLoader : progressFn(...)
note left
Forwards "progress" events from
the XML HTTP Request.
end note
group Transmux
"media-segment-request" -> "segment-transmuxer" : transmux(...)
"segment-transmuxer" -> mux.js : postMessage(...setAudioAppendStart...)
note left
Used for checking for overlap when
prefixing audio with silence.
end note
"segment-transmuxer" -> mux.js : postMessage(...alignGopsWith...)
note left
Used for aligning gops when overlapping
content (switching renditions) to fix
some browser glitching.
end note
"segment-transmuxer" -> mux.js : postMessage(...push...)
note left
Pushes bytes into the transmuxer pipeline.
end note
"segment-transmuxer" -> mux.js : postMessage(...partialFlush...)
note left #moccasin
Collates any complete frame data
from partial segment and
caches remainder.
end note
"segment-transmuxer" -> mux.js : postMessage(...flush...)
note left
Collates any complete frame data
from segment, caches only data
required between segments.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...trackinfo...)
"segment-transmuxer" -> "media-segment-request" : onTrackInfo(...)
"media-segment-request" -> SegmentLoader : trackInfoFn(...)
note left
Gets whether the segment
has audio and/or video.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...audioTimingInfo...)
"segment-transmuxer" -> "media-segment-request" : onAudioTimingInfo(...)
"mux.js" -> "segment-transmuxer" : postMessage(...videoTimingInfo...)
"segment-transmuxer" -> "media-segment-request" : onVideoTimingInfo(...)
"media-segment-request" -> SegmentLoader : timingInfoFn(...)
note left
Gets the audio/video
start/end times.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...caption...)
"segment-transmuxer" -> "media-segment-request" : onCaptions(...)
"media-segment-request" -> SegmentLoader : captionsFn(...)
note left
Gets captions from transmux.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...id3Frame...)
"segment-transmuxer" -> "media-segment-request" : onId3(...)
"media-segment-request" -> SegmentLoader : id3Fn(...)
note left
Gets metadata from transmux.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...data...)
"segment-transmuxer" -> "media-segment-request" : onData(...)
"media-segment-request" -> SegmentLoader : dataFn(...)
note left
Gets an fmp4 segment
ready to be appended.
end note
"mux.js" -> "segment-transmuxer" : postMessage(...done...)
note left
Gathers GOP info, and calls
done callback.
end note
"segment-transmuxer" -> "media-segment-request" : onDone(...)
"media-segment-request" -> SegmentLoader : doneFn(...)
note left
Queues callbacks on source
buffer queue to wait for
appends to complete.
end note
end
XMLHttpRequest -> "media-segment-request" : //segment request finished//
end
SegmentLoader -> SegmentLoader : handleAppendsDone_()
note left
Saves segment timing info
and starts next request.
end note
@enduml


View file

@ -0,0 +1,36 @@
# Transmux Before Append Changes
## Overview
In moving our transmuxing stage from after append (to a virtual source buffer from videojs-contrib-media-sources) to before appending (to a native source buffer), some changes were required, while others simply made the logic simpler. What follows are details on some of the changes made, why they were made, and what impact they will have.
### Source Buffer Creation
In a pre-TBA (transmux before append) world, videojs-contrib-media-sources' source buffers provided an abstraction around the native source buffers. They also required a bit more information than the native buffers: for instance, when creating the source buffers, they used the full MIME types instead of simply relying on the codec information. This provided the container types, which let the virtual source buffer know whether the media needed to be transmuxed. In a post-TBA world, the container type is no longer required, therefore only the codec strings are passed along.
In terms of when the source buffers are created, in the post-TBA world their creation is delayed until we are sure we have all of the information we need. This means that we don't create the native source buffers until the PMT is parsed from the main media. Even if the content is demuxed, we only need to parse the main media, since, for now, we don't rely on codec information from the segment itself, and instead use the manifest-provided codec info or default codecs. While we could create the source buffers earlier if the codec information is provided in the manifest, delaying provides a single, simpler code path and more opportunity for us to be flexible with how much codec info the manifest attribute provides. While the HLS specification requires this information, other formats may not, and we have seen content that plays fine but does not adhere to the strict rules of providing all necessary codec information.
### Appending Init Segments
Previously, init segments were handled by videojs-contrib-media-sources for TS segments and segment-loader for FMP4 segments.
videojs-contrib-media-sources and TS:
* video segments
* append the video init segment returned from the transmuxer with every segment
* audio segments
* append the audio init segment returned from the transmuxer only in the following cases:
* first append
* after timestampOffset is set
* audio track events: change/addtrack/removetrack
* 'mediachange' event
segment-loader and FMP4:
* if segment.map is set:
* save (cache) the init segment after the request finished
* append the init segment directly to the source buffer if the segment loader's activeInitSegmentId doesn't match the segment.map generated init segment ID
With the transmux before append and LHLS changes, we only append video init segments on changes as well. This is more important with LHLS, as prepending an init segment before every frame of video would be wasteful.
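A simplified sketch of that behavior for video, assuming the caller tracks the active init segment id and that the byte arrays are `Uint8Array`s (the names here are illustrative, not VHS's actual internals):
```js
let activeVideoInitSegmentId = null;

const concatBytes = (a, b) => {
  const out = new Uint8Array(a.byteLength + b.byteLength);

  out.set(a, 0);
  out.set(b, a.byteLength);
  return out;
};

const appendVideoSegment = (sourceBuffer, initSegment, initSegmentId, segmentBytes) => {
  let bytes = segmentBytes;

  if (initSegmentId !== activeVideoInitSegmentId) {
    // prepend the init segment only when it differs from the last one appended
    bytes = concatBytes(initSegment, segmentBytes);
    activeVideoInitSegmentId = initSegmentId;
  }

  // a single appendBuffer call avoids appending while the buffer is updating
  sourceBuffer.appendBuffer(bytes);
};
```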
### Test Changes
Some tests were removed because they were no longer relevant after the change to creating source buffers later. For instance, `waits for both main and audio loaders to finish before calling endOfStream if main loader starting media is unknown` can no longer be tested by waiting for an audio loader response and checking for end of stream, as the test will time out: MasterPlaylistController will wait for track info from the main loader before the source buffers are created. That condition is checked elsewhere.

29
node_modules/@videojs/http-streaming/docs/media.md generated vendored Normal file
View file

@ -0,0 +1,29 @@
This doc is just a stub right now. Check back later for updates.
# General
When we talk about video, we normally think about it as one monolithic thing. If you ponder it for a moment though, you'll realize it's actually two distinct sorts of information that are presented to the viewer in tandem: a series of pictures and a sequence of audio samples. The temporal nature of audio and video is shared but the techniques used to efficiently transmit them are very different and necessitate a lot of the complexity in video file formats. Bundling up these (at least) two streams into a single package is the first of many issues introduced by the need to serialize video data and is solved by meta-formats called _containers_.
Container formats are probably the most recognizable of the video components because they get the honor of determining the file extension. You've probably heard of MP4, MOV, and WMV, all of which are container formats. Containers specify how to serialize audio, video, and metadata streams into a sequential series of bits, and how to unpack them for decoding. A container is basically a box that can hold video information and timed media data:
![Containers](images/containers.png)
- codecs
- containers, multiplexing
# MPEG2-TS
![MPEG2-TS Structure](images/mp2t-structure.png)
![MPEG2-TS Packet Types](images/mp2t-packet-types.png)
- streaming vs storage
- program table
- program map table
- history, context
# H.264
- NAL units
- Annex B vs MP4 elementary stream
- access unit -> sample
# MP4
- origins: quicktime

75
node_modules/@videojs/http-streaming/docs/mse.md generated vendored Normal file
View file

@ -0,0 +1,75 @@
# Media Source Extensions Notes
A collection of findings experimenting with Media Source Extensions on
Chrome 36.
* Specifying an audio and video codec when creating a source buffer
but passing in an initialization segment with only a video track
results in a decode error
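For illustration, a minimal sketch of that failing scenario; `videoOnlyInitSegment` is a hypothetical `Uint8Array` whose `moov` declares a single video `trak`:
```js
const video = document.querySelector('video');
const mediaSource = new MediaSource();

video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', () => {
  // declares both an AVC video track and an AAC audio track
  const sourceBuffer =
    mediaSource.addSourceBuffer('video/mp4; codecs="avc1.4d400d, mp4a.40.2"');

  // appending an init segment that only contains a video track
  // produced a decode error in this scenario
  sourceBuffer.appendBuffer(videoOnlyInitSegment);
});
```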
## ISO Base Media File Format (BMFF)
### Init Segment
A working initialization segment is outlined below. It may be possible
to trim this structure down further.
- `ftyp`
- `moov`
- `mvhd`
- `trak`
- `tkhd`
- `mdia`
- `mdhd`
- `hdlr`
- `minf`
- `mvex`
### Media Segment
The structure of a minimal media segment that actually encapsulates
movie data is outlined below:
- `moof`
- `mfhd`
- `traf`
- `tfhd`
- `tfdt`
- `trun` containing samples
- `mdat`
### Structure
```
sample:  time {number}, data {array}
chunk:   samples {array}
track:   samples {array}
segment: moov {box}, mdats {array} | moof {box}, mdats {array}, data {array}

track
  chunk
    sample

movie fragment -> track fragment -> [samples]
```
### Sample Data Offsets
Movie-fragment Relative Addressing: all trun data offsets are relative
to the containing moof (?).
Without default-base-is-moof, the base data offset for each trun in
trafs after the first is the *end* of the previous traf.
#### iso5/DASH Style
```
moof
 |- traf (default-base-is-moof)
 |   |- trun_0 <size of moof> + 0
 |   `- trun_1 <size of moof> + 100
 `- traf (default-base-is-moof)
     `- trun_2 <size of moof> + 300
mdat
 |- samples_for_trun_0 (100 bytes)
 |- samples_for_trun_1 (200 bytes)
 `- samples_for_trun_2
```
#### Single Track Style
```
moof
 `- traf
     `- trun_0 <size of moof> + 0
mdat
 `- samples_for_trun_0
```

View file

@ -0,0 +1,96 @@
# Multiple Alternative Audio Tracks
## General
m3u8 manifests with multiple audio streams will have those streams added to `video.js` in an `AudioTrackList`. The `AudioTrackList` can be accessed using `player.audioTracks()` or `tech.audioTracks()`.
## Mapping m3u8 metadata to AudioTracks
The mapping between `AudioTrack` and the parsed m3u8 file is fairly straightforward. The table below shows the mapping:
| m3u8 | AudioTrack |
|---------|------------|
| label | label |
| lang | language |
| default | enabled |
| ??? | kind |
| ??? | id |
As you can see, m3u8s do not have a property for `AudioTrack.id`, which means that we let `video.js` randomly generate the ids for `AudioTrack`s. This has no real impact on any part of the system, as we do not use the `id` anywhere.
The other property that does not have a mapping in the m3u8 is `AudioTrack.kind`. It was decided that we would set the `kind` to `main` when `default` is set to `true` and in other cases we set it to `alternative` unless the track has `characteristics` which include `public.accessibility.describes-video`, in which case we set it to `main-desc` (note that this `kind` indicates that the track is a mix of the main track and description, so it can be played *instead* of the main track; a track with kind `description` *only* has the description, not the main track).
Below is a basic example of a mapping
m3u8 layout
``` JavaScript
{
'media-group-1': [{
'audio-track-1': {
default: true,
lang: 'eng'
},
'audio-track-2': {
default: false,
lang: 'fr'
},
'audio-track-3': {
default: false,
lang: 'eng',
characteristics: 'public.accessibility.describes-video'
}
}]
}
```
Corresponding AudioTrackList when media-group-1 is used (before any tracks have been changed)
``` JavaScript
[{
label: 'audio-tracks-1',
enabled: true,
language: 'eng',
kind: 'main',
id: 'random'
}, {
label: 'audio-tracks-2',
enabled: false,
language: 'fr',
kind: 'alternative',
id: 'random'
}, {
label: 'audio-tracks-3',
enabled: false,
language: 'eng',
kind: 'main-desc',
id: 'random'
}]
```
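A sketch of the `kind` selection logic described above, where `properties` stands in for the parsed attributes of one EXT-X-MEDIA entry (the names are illustrative, not VHS's exact internals):
``` JavaScript
const audioTrackKind = (properties) => {
  let kind = properties.default ? 'main' : 'alternative';

  if (properties.characteristics &&
      properties.characteristics.indexOf('public.accessibility.describes-video') >= 0) {
    // a mix of main + description, playable instead of the main track
    kind = 'main-desc';
  }

  return kind;
};
```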
## Startup (how tracks are added and used)
> AudioTrack & AudioTrackList live in video.js
1. `HLS` creates a `MasterPlaylistController` and watches for the `loadedmetadata` event
1. `HLS` parses the m3u8 using the `MasterPlaylistController`
1. `MasterPlaylistController` creates a `PlaylistLoader` for the master m3u8
1. `MasterPlaylistController` creates `PlaylistLoader`s for every audio playlist
1. `MasterPlaylistController` creates a `SegmentLoader` for the main m3u8
1. `MasterPlaylistController` creates a `SegmentLoader` for a potential audio playlist
1. `HLS` sees the `loadedmetadata` and finds the currently selected MediaGroup and all the metadata
1. `HLS` removes all `AudioTrack`s from the `AudioTrackList`
1. `HLS` creates `AudioTrack`s for the MediaGroup and adds them to the `AudioTrackList`
1. `HLS` calls `MasterPlaylistController`s `useAudio` with no arguments (causes it to use the currently enabled audio)
1. `MasterPlaylistController` turns off the current audio `PlaylistLoader` if it is on
1. `MasterPlaylistController` maps the `label` to the `PlaylistLoader` containing the audio
1. `MasterPlaylistController` turns on that `PlaylistLoader` and the corresponding `SegmentLoader` (master or audio only)
1. `MediaSource`/`mux.js` determine how to mux
## How tracks are switched
> AudioTrack & AudioTrackList live in video.js
1. `HLS` is setup to watch for the `changed` event on the `AudioTrackList`
1. User selects a new `AudioTrack` from a menu (where only one track can be enabled)
1. `AudioTrackList` enables the new `AudioTrack` and disables all others
1. `AudioTrackList` triggers a `changed` event
1. `HLS` sees the `changed` event and finds the newly enabled `AudioTrack`
1. `HLS` sends the `label` for the new `AudioTrack` to `MasterPlaylistController`s `useAudio` function
1. `MasterPlaylistController` turns off the current audio `PlaylistLoader` if it is on
1. `MasterPlaylistController` maps the `label` to the `PlaylistLoader` containing the audio
1. `MasterPlaylistController` turns on that `PlaylistLoader` and the corresponding `SegmentLoader` (master or audio only)
1. `MediaSource`/`mux.js` determine how to mux
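The same flow can be triggered programmatically. A minimal sketch, assuming a `player` instance with such a source loaded:
``` JavaScript
const audioTrackList = player.audioTracks();

for (let i = 0; i < audioTrackList.length; i++) {
  const track = audioTrackList[i];

  if (track.language === 'fr') {
    // enabling one track makes the AudioTrackList disable the others
    // and fire the `changed` event described above
    track.enabled = true;
  }
}
```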

View file

@ -0,0 +1,16 @@
# How to get player time from program time
NOTE: See the doc on [Program Time to Player Time](program-time-to-player-time.md) for definitions and an overview of the conversion process.
## Overview
To convert a program time to a player time, the following steps must be taken:
1. Find the right segment by sequentially searching through the playlist until the program time requested is >= the EXT-X-PROGRAM-DATE-TIME of the segment, and < the EXT-X-PROGRAM-DATE-TIME of the following segment (or the end of the playlist is reached).
2. Determine the segment's start and end player times.
To accomplish #2, the segment must be downloaded and transmuxed (right now only TS segments are handled, and TS is always transmuxed to FMP4). This obtains the start and end times after transmuxer modifications. These are the times that the source buffer will receive and report for the segment's newly created MP4 fragment.
Since there isn't a simple code path for downloading a segment without appending, the easiest approach is to seek to the estimated start time of that segment using the playlist duration calculation function. Because this process is not always accurate (manifest timing values are almost never accurate), a few seeks may be required to accurately seek into that segment.
If all goes well, and the target segment is downloaded and transmuxed, the player time may be found by taking the difference between the requested program time and the EXT-X-PROGRAM-DATE-TIME of the segment, then adding that difference to `segment.videoTimingInfo.transmuxedPresentationStart`.
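Put together, a sketch of that final step, assuming the correct segment has already been downloaded and transmuxed and that `programTime` is a `Date`:
```
const programTimeToPlayerTime = (programTime, segment) => {
  if (!segment.dateTimeObject || !segment.videoTimingInfo) {
    // no EXT-X-PROGRAM-DATE-TIME anchor, or no transmux timing info yet
    return null;
  }

  // seconds between the requested program time and the segment's date-time
  const offsetFromSegmentStart =
    (programTime.getTime() - segment.dateTimeObject.getTime()) / 1000;

  return segment.videoTimingInfo.transmuxedPresentationStart + offsetFromSegmentStart;
};
```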

View file

@ -0,0 +1,47 @@
# Playlist Loader
## Purpose
The [PlaylistLoader][pl] (PL) is responsible for requesting m3u8s, parsing them, and keeping track of the media "playlists" associated with the manifest. The [PL] is used with a [SegmentLoader][sl] to load ts or fmp4 fragments from an HLS source.
## Basic Responsibilities
1. To request an m3u8.
2. To parse an m3u8 into a format [videojs-http-streaming][vhs] can understand.
3. To allow selection of a specific media stream.
4. To refresh a live master m3u8 for changes.
## Design
### States
![PlaylistLoader States](images/playlist-loader-states.nomnoml.svg)
- `HAVE_NOTHING` the state before the m3u8 is received and parsed.
- `HAVE_MASTER` the state before a media manifest is parsed and set up, but after the master manifest has been parsed and set up.
- `HAVE_METADATA` the state after a media stream is set up.
- `SWITCHING_MEDIA` the intermediary state we go through while changing to a newly selected media playlist.
- `HAVE_CURRENT_METADATA` a temporary state after requesting a refresh of the live manifest and before receiving the update.
### API
- `load()` this will either start or kick the loader during playback.
- `start()` this will start the [PL] and request the m3u8.
- `media()` this will return the currently active media stream or set a new active media stream.
### Events
- `loadedplaylist` signals the setup of a master playlist, representing the HLS source as a whole, from the m3u8; or a media playlist, representing a media stream.
- `loadedmetadata` signals initial setup of a media stream.
- `playlistunchanged` signals that no changes have been made to a m3u8.
- `mediaupdatetimeout` signals that a live m3u8 and media stream must be refreshed.
- `mediachanging` signals that the currently active media stream is going to be changed.
- `mediachange` signals that the new media stream has been updated.
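A minimal usage sketch tying the API and events together, assuming `loader` is a [PL] instance (the `master.playlists` property name reflects the loader's parsed master manifest):
```
loader.on('loadedplaylist', () => {
  // the master (or media) playlist has been parsed; select a media playlist
  loader.media(loader.master.playlists[0]);
});

loader.on('loadedmetadata', () => {
  // a media stream is set up; segment loading can begin
});

loader.load();
```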
### Interaction with Other Modules
![PL with MPC and MG](images/playlist-loader-mpc-mg-sequence.plantuml.png)
[pl]: ../src/playlist-loader.js
[sl]: ../src/segment-loader.js
[vhs]: intro.md

View file

@ -0,0 +1,141 @@
# How to get program time from player time
## Definitions
NOTE: All times referenced in seconds unless otherwise specified.
*Player Time*: any time that can be gotten/set from player.currentTime() (e.g., any time between player.seekable().start(0) and player.seekable().end(0)).<br />
*Stream Time*: any time within one of the stream's segments. Used by video frames (e.g., dts, pts, base media decode time). While these times natively use clock values, throughout the document the times are referenced in seconds.<br />
*Program Time*: any time referencing the real world (e.g., EXT-X-PROGRAM-DATE-TIME).<br />
*Start of Segment*: the pts (presentation timestamp) value of the first frame in a segment.<br />
## Overview
In order to convert from a *player time* to a *stream time*, an "anchor point" is required to match up a *player time*, *stream time*, and *program time*.
Two anchor points that are usable are the time since the start of a new timeline (e.g., the time since the last discontinuity or start of the stream), and the start of a segment. Because, in our requirements for this conversion, each segment is tagged with its *program time* in the form of an [EXT-X-PROGRAM-DATE-TIME tag](https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.2.6), using the segment start as the anchor point is the easiest solution. It's the closest potential anchor point to the time to convert, and it doesn't require us to track time changes across segments (e.g., trimmed or prepended content).
Those time changes are the result of the transmuxer, which can add/remove content in order to keep the content playable (without gaps or other breaking changes between segments), particularly when a segment doesn't start with a key frame.
In order to make use of the segment start, and to calculate the offset between the segment start and the time to convert, a few properties are needed:
1. The start of the segment before transmuxing
1. Time changes made to the segment during transmuxing
1. The start of the segment after transmuxing
While the start of the segment before and after transmuxing is trivial to retrieve, getting the time changes made during transmuxing is more complicated, as we must account for any trimming, prepending, and gap filling made during the transmux stage. However, the required use-case only needs the position of a video frame, allowing us to ignore any changes made to the audio timeline (because VHS uses video as the timeline of truth), as well as a couple of the video modifications.
What follows are the changes made to a video stream by the transmuxer that could alter the timeline, and if they must be accounted for in the conversion:
* Keyframe Pulling
* Used when: the segment doesn't start with a keyframe.
* Impact: the keyframe with the lowest dts value in the segment is "pulled" back to the first dts value in the segment, and all frames in-between are dropped.
* Need to account in time conversion? No. If a keyframe is pulled, and frames before it are dropped, then the segment will maintain the same segment duration, and the viewer is only seeing the keyframe during that period.
* GOP Fusion
* Used when: the segment doesn't start with a keyframe.
* Impact: if GOPs were saved from previous segment appends, the last GOP will be prepended to the segment.
* Need to account in time conversion? Yes. The segment is artificially extended, so while it shouldn't impact the stream time itself (since it will overlap with content already appended), it will impact the post transmux start of segment.
* GOPS to Align With
* Used when: switching renditions, or appending segments with overlapping GOPs (intersecting time ranges).
* Impact: GOPs in the segment will be dropped until there are no overlapping GOPs with previous segments.
* Need to account in time conversion? No. So long as we aren't switching renditions, and the content is sane enough to not contain overlapping GOPs, this should not have a meaningful impact.
Among the changes, with only GOP Fusion having an impact, the task is simplified. Instead of accounting for any changes to the video stream, only those from GOP Fusion should be accounted for. Since GOP fusion will potentially only prepend frames to the segment, we just need the number of seconds prepended to the segment when offsetting the time. As such, we can add the following properties to each segment:
```
segment: {
// calculated start of segment from either end of previous segment or end of last buffer
// (in stream time)
start,
...
videoTimingInfo: {
// number of seconds prepended by GOP fusion
transmuxerPrependedSeconds,
// start of transmuxed segment (in player time)
transmuxedPresentationStart
}
}
```
## The Formula
With the properties listed above, calculating a *program time* from a *player time* is given as follows:
```
const playerTimeToProgramTime = (playerTime, segment) => {
if (!segment.dateTimeObject) {
// Can't convert without an "anchor point" for the program time (i.e., a time that can
// be used to map the start of a segment with a real world time).
return null;
}
const transmuxerPrependedSeconds = segment.videoTimingInfo.transmuxerPrependedSeconds;
const transmuxedStart = segment.videoTimingInfo.transmuxedPresentationStart;
// get the start of the content from before old content is prepended
const startOfSegment = transmuxedStart + transmuxerPrependedSeconds;
const offsetFromSegmentStart = playerTime - startOfSegment;
return new Date(segment.dateTimeObject.getTime() + offsetFromSegmentStart * 1000);
};
```
## Examples
```
// Program Times:
// segment1: 2018-11-10T00:00:30.1Z => 2018-11-10T00:00:32.1Z
// segment2: 2018-11-10T00:00:32.1Z => 2018-11-10T00:00:34.1Z
// segment3: 2018-11-10T00:00:34.1Z => 2018-11-10T00:00:36.1Z
//
// Player Times:
// segment1: 0 => 2
// segment2: 2 => 4
// segment3: 4 => 6
const segment1 = {
dateTimeObject: 2018-11-10T00:00:30.1Z
videoTimingInfo: {
transmuxerPrependedSeconds: 0,
transmuxedPresentationStart: 0
}
};
playerTimeToProgramTime(0.1, segment1);
// startOfSegment = 0 + 0 = 0
// offsetFromSegmentStart = 0.1 - 0 = 0.1
// return 2018-11-10T00:00:30.1Z + 0.1 = 2018-11-10T00:00:30.2Z
const segment2 = {
dateTimeObject: 2018-11-10T00:00:32.1Z
videoTimingInfo: {
transmuxerPrependedSeconds: 0.3,
transmuxedPresentationStart: 1.7
}
};
playerTimeToProgramTime(2.5, segment2);
// startOfSegment = 1.7 + 0.3 = 2
// offsetFromSegmentStart = 2.5 - 2 = 0.5
// return 2018-11-10T00:00:32.1Z + 0.5 = 2018-11-10T00:00:32.6Z
const segment3 = {
dateTimeObject: 2018-11-10T00:00:34.1Z
videoTimingInfo: {
transmuxerPrependedSeconds: 0.2,
transmuxedPresentationStart: 3.8
}
};
playerTimeToProgramTime(4, segment3);
// startOfSegment = 3.8 + 0.2 = 4
// offsetFromSegmentStart = 4 - 4 = 0
// return 2018-11-10T00:00:34.1Z + 0 = 2018-11-10T00:00:34.1Z
```
## Transmux Before Append Changes
Even though segment timing values are retained for transmux before append, the formula does not need to change, as all that matters for calculation is the offset from the transmuxed segment start, which can then be applied to the stream time start of segment, or the program time start of segment.
## Getting the Right Segment
In order to make use of the above calculation, the right segment must be chosen for a given player time. The right segment may be found by simply using the segment's post-transmux times (the start/end pts/dts values then reflect the player time range it occupies in the source buffer). These are included in `videoTimingInfo` as `transmuxedPresentationStart` and `transmuxedPresentationEnd`.
Although there may be a small amount of overlap due to `transmuxerPrependedSeconds`, as long as the search is sequential from the beginning of the playlist to the end, the right segment will be found, as the prepended times will only come from content from prior segments.
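A sketch of that sequential search, using the `videoTimingInfo` properties described above:
```
const findSegmentForPlayerTime = (playerTime, playlist) => {
  for (let i = 0; i < playlist.segments.length; i++) {
    const segment = playlist.segments[i];
    const timingInfo = segment.videoTimingInfo;

    // skip segments that haven't been transmuxed yet
    if (!timingInfo) {
      continue;
    }

    // searching from the start of the playlist means any prepended overlap
    // resolves to the earlier (correct) segment
    if (playerTime >= timingInfo.transmuxedPresentationStart &&
        playerTime < timingInfo.transmuxedPresentationEnd) {
      return segment;
    }
  }

  return null;
};
```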

View file

@ -0,0 +1,43 @@
# Using the reloadSourceOnError Plugin
Call the plugin to activate it:
```js
player.reloadSourceOnError()
```
Now if the player encounters a fatal error during playback, it will automatically
attempt to reload the current source. If the error was caused by a transient
browser or networking problem, this can allow playback to continue with a minimum
of disruption to your viewers.
The plugin will only restart your player once in a 30 second time span so that your
player doesn't get into a reload loop if it encounters non-transient errors. You
can tweak the amount of time required between restarts by adjusting the
`errorInterval` option.
If your video URLs are time-sensitive, the original source could be invalid by the
time an error occurs. If that's the case, you can provide a `getSource` callback
to regenerate a valid source object. In your callback, the `this` keyword is a
reference to the player that errored. The first argument to `getSource` is a
function. Invoke that function and pass in your new source object when you're ready.
```js
player.reloadSourceOnError({
// getSource allows you to override the source object used when an error occurs
getSource: function(reload) {
console.log('Reloading because of an error');
// call reload() with a fresh source object
// you can do this step asynchronously if you want (but the error dialog will
// show up while you're waiting)
reload({
src: 'https://example.com/index.m3u8?token=abc123ef789',
type: 'application/x-mpegURL'
});
},
// errorInterval specifies the minimum amount of seconds that must pass before
// another reload will be attempted
errorInterval: 5
});
```

View file

@ -0,0 +1,289 @@
# Supported Features
## Browsers
Any browser that supports [MSE] (media source extensions). See
https://caniuse.com/#feat=mediasource
Note that browsers with native HLS support may play content with the native player, unless
the [overrideNative] option is used. Some notable browsers with native HLS players are:
* Safari (macOS and iOS)
* Chrome Android
* Firefox Android
However, due to the limited features offered by some of the native players, the only
browser on which VHS defaults to using the native player is Safari (macOS and iOS).
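To opt out of the native player on those browsers, the [overrideNative] option can be set when creating the player. A minimal sketch (the player id is hypothetical):
```js
const player = videojs('my-player', {
  html5: {
    vhs: {
      overrideNative: true
    },
    // the native track options must be disabled alongside overrideNative
    nativeAudioTracks: false,
    nativeVideoTracks: false
  }
});
```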
## Streaming Formats and Media Types
### Streaming Formats
VHS aims to be mostly streaming format agnostic. So long as the manifest can be parsed to
a common JSON representation, VHS should be able to play it. However, due to some large
differences between the major streaming formats (HLS and DASH), some format specific code
is included in VHS. If you have another format you would like supported, please reach out
to us (e.g., file an issue).
* [HLS] (HTTP Live Streaming)
* [MPEG-DASH] (Dynamic Adaptive Streaming over HTTP)
### Media Container Formats
* [TS] (MPEG Transport Stream)
* [MP4] (MPEG-4 Part 14: MP4, M4A, M4V, M4S, MPA), ISOBMFF
* [AAC] (Advanced Audio Coding)
### Codecs
If the content is packaged in an [MP4] container, then any codec supported by the browser
is supported. If the content is packaged in a [TS] container, then the codec must be
supported by [the transmuxer]. The following codecs are supported by the transmuxer:
* [AVC] (Advanced Video Coding, h.264)
* [AVC1] (Advanced Video Coding, h.265)
* [HE-AAC] (High Efficiency Advanced Audio Coding, mp4a.40.5)
* LC-AAC (Low Complexity Advanced Audio Coding, mp4a.40.2)
## General Notable Features
The following is a list of some, but not all, common streaming features supported by VHS.
It is meant to highlight some common use cases (and provide for easy searching), but is
not meant to serve as an exhaustive list.
* VOD (video on demand)
* LIVE
* Multiple audio tracks
* Timed [ID3] Metadata is automatically translated into HTML5 metadata text tracks
* Cross-domain credentials support with [CORS]
* Any browser supported resolution (e.g., 4k)
* Any browser supported framerate (e.g., 60fps)
* [DRM] via [videojs-contrib-eme]
* Audio only (non DASH)
* Video only (non DASH)
* In-manifest [WebVTT] subtitles are automatically translated into standard HTML5 subtitle
tracks
* [AES-128] segment encryption
## Notable Missing Features
Note that the following features have not yet been implemented or may work but are not
currently supported in browsers that do not rely on the native player. For browsers that
use the native player (e.g., Safari for HLS), please refer to their documentation.
### Container Formats
* [WebM]
* [WAV]
* [MP3]
* [OGG]
### Codecs
If the content is packaged within an [MP4] container and the browser supports the codec, it
will play. However, the following are some codecs that are not routinely tested, or are not
supported when packaged within [TS].
* [MP3]
* [Vorbis]
* [WAV]
* [FLAC]
* [Opus]
* [VP8]
* [VP9]
* [Dolby Vision] (DVHE)
* [Dolby Digital] Audio (AC-3)
* [Dolby Digital Plus] (E-AC-3)
### HLS Missing Features
Note: features for low latency HLS in the [2nd edition of HTTP Live Streaming] are on the
roadmap, but not currently available.
VHS strives to support all of the features in the HLS specification, however, some have
not yet been implemented. VHS currently supports everything in the
[HLS specification v7, revision 23], except the following:
* Use of [EXT-X-MAP] with [TS] segments
* [EXT-X-MAP] is currently supported for [MP4] segments, but not yet for TS
* I-Frame playlists via [EXT-X-I-FRAMES-ONLY] and [EXT-X-I-FRAME-STREAM-INF]
* [MP3] Audio
* [Dolby Digital] Audio (AC-3)
* [Dolby Digital Plus] Audio (E-AC-3)
* KEYFORMATVERSIONS of [EXT-X-KEY]
* [EXT-X-DATERANGE]
* [EXT-X-SESSION-DATA]
* [EXT-X-SESSION-KEY]
* [EXT-X-INDEPENDENT-SEGMENTS]
* Use of [EXT-X-START] (value parsed but not used)
* Alternate video via [EXT-X-MEDIA] of type video
* ASSOC-LANGUAGE in [EXT-X-MEDIA]
* CHANNELS in [EXT-X-MEDIA]
* Use of AVERAGE-BANDWIDTH in [EXT-X-STREAM-INF] (value parsed but not used)
* Use of FRAME-RATE in [EXT-X-STREAM-INF] (value parsed but not used)
* Use of HDCP-LEVEL in [EXT-X-STREAM-INF]
* SAMPLE-AES segment encryption
In the event of encoding changes within a playlist (see
https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-6.3.3), the
behavior will depend on the browser.
### DASH Missing Features
DASH support is more recent than HLS support in VHS, however, VHS strives to achieve as
complete compatibility as possible with the DASH spec. The following are some notable
features in the DASH specification that are not yet implemented in VHS:
Note that many of the following are parsed by [mpd-parser] but are either not yet used, or
simply take on their default values (in the case where they have valid defaults).
* Audio and video only streams
* Audio rendition switching
* Each video rendition is paired with an audio rendition for the duration of playback.
* MPD
* @id
* @profiles
* @availabilityStartTime
* @availabilityEndTime
* @minBufferTime
* @maxSegmentDuration
* @maxSubsegmentDuration
* ProgramInformation
* Metrics
* Period
* @xlink:href
* @xlink:actuate
* @id
* @duration
* Normally used for determining the PeriodStart of the next period; VHS instead relies
on segment durations to determine timing of each segment and timeline
* @bitstreamSwitching
* Subset
* AdaptationSet
* @xlink:href
* @xlink:actuate
* @id
* @group
* @par (picture aspect ratio)
* @minBandwidth
* @maxBandwidth
* @minWidth
* @maxWidth
* @minHeight
* @maxHeight
* @minFrameRate
* @maxFrameRate
* @segmentAlignment
* @bitstreamSwitching
* @subsegmentAlignment
* @subsegmentStartsWithSAP
* Accessibility
* Rating
* Viewpoint
* ContentComponent
* Representation
* @id (used for SegmentTemplate but not exposed otherwise)
* @qualityRanking
* @dependencyId (dependent representation)
* @mediaStreamStructureId
* SubRepresentation
* CommonAttributesElements (for AdaptationSet, Representation and SubRepresentation elements)
* @profiles
* @sar
* @frameRate
* @audioSamplingRate
* @segmentProfiles
* @maximumSAPPeriod
* @startWithSAP
* @maxPlayoutRate
* @codingDependency
* @scanType
* FramePacking
* AudioChannelConfiguration
* SegmentBase
* @presentationTimeOffset
* @indexRangeExact
* RepresentationIndex
* MultipleSegmentBaseInformation elements
* SegmentList
* @xlink:href
* @xlink:actuate
* MultipleSegmentBaseInformation
* SegmentURL
* @index
* @indexRange
* SegmentTemplate
* MultipleSegmentBaseInformation
* @index
* @bitstreamSwitching
* BaseURL
* @serviceLocation
* Template-based Segment URL construction
* Live DASH assets that use $Time$ in a SegmentTemplate, and also have a SegmentTimeline
where only the first S has a t and the rest only have a d do not update on playlist
refreshes
See: https://github.com/videojs/http-streaming#dash-assets-with-time-interpolation-and-segmenttimelines-with-no-t
* ContentComponent elements
* Right now manifests are assumed to have a single content component, with the properties
described directly on the AdaptationSet element
* SubRepresentation elements
* Subset elements
* Early Available Periods (may work, but has not been tested)
* Access to subsegments via a subsegment index ('ssix')
* The @profiles attribute is ignored (best support for all profiles is attempted, without
consideration of the specific profile). For descriptions on profiles, see section 8 of
the DASH spec.
* Construction of byte range URLs via a BaseURL byteRange template (Annex E.2)
* Multiperiod content where the representation sets are not the same across periods
* In the event that an S element has a t attribute that is greater than what is expected,
it is not treated as a discontinuity, but instead retains its segment value, and may
result in a gap in the content
[MSE]: https://www.w3.org/TR/media-source/
[HLS]: https://en.wikipedia.org/wiki/HTTP_Live_Streaming
[MPEG-DASH]: https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP
[TS]: https://en.wikipedia.org/wiki/MPEG_transport_stream
[MP4]: https://en.wikipedia.org/wiki/MPEG-4_Part_14
[AAC]: https://en.wikipedia.org/wiki/Advanced_Audio_Coding
[AVC]: https://en.wikipedia.org/wiki/Advanced_Video_Coding
[AVC1]: https://en.wikipedia.org/wiki/Advanced_Video_Coding
[HE-AAC]: https://en.wikipedia.org/wiki/High-Efficiency_Advanced_Audio_Coding
[ID3]: https://en.wikipedia.org/wiki/ID3
[CORS]: https://en.wikipedia.org/wiki/Cross-origin_resource_sharing
[DRM]: https://en.wikipedia.org/wiki/Digital_rights_management
[WebVTT]: https://www.w3.org/TR/webvtt1/
[AES-128]: https://en.wikipedia.org/wiki/Advanced_Encryption_Standard
[WebM]: https://en.wikipedia.org/wiki/WebM
[WAV]: https://en.wikipedia.org/wiki/WAV
[MP3]: https://en.wikipedia.org/wiki/MP3
[OGG]: https://en.wikipedia.org/wiki/Ogg
[Vorbis]: https://en.wikipedia.org/wiki/Vorbis
[FLAC]: https://en.wikipedia.org/wiki/FLAC
[Opus]: https://en.wikipedia.org/wiki/Opus_(audio_format)
[VP8]: https://en.wikipedia.org/wiki/VP8
[VP9]: https://en.wikipedia.org/wiki/VP9
[overrideNative]: https://github.com/videojs/http-streaming#overridenative
[the transmuxer]: https://github.com/videojs/mux.js
[videojs-contrib-eme]: https://github.com/videojs/videojs-contrib-eme
[2nd edition of HTTP Live Streaming]: https://tools.ietf.org/html/draft-pantos-hls-rfc8216bis-07.html
[HLS specification v7, revision 23]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23
[EXT-X-MAP]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.2.5
[EXT-X-STREAM-INF]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.4.2
[EXT-X-SESSION-DATA]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.4.4
[EXT-X-DATERANGE]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.2.7
[EXT-X-KEY]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.2.7
[EXT-X-I-FRAMES-ONLY]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.3.6
[EXT-X-I-FRAME-STREAM-INF]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.4.3
[EXT-X-SESSION-KEY]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.4.5
[EXT-X-INDEPENDENT-SEGMENTS]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.5.1
[EXT-X-START]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.5.2
[EXT-X-MEDIA]: https://tools.ietf.org/html/draft-pantos-http-live-streaming-23#section-4.3.4.1
[Dolby Vision]: https://en.wikipedia.org/wiki/High-dynamic-range_video#Dolby_Vision
[Dolby Digital]: https://en.wikipedia.org/wiki/Dolby_Digital
[Dolby Digital Plus]: https://en.wikipedia.org/wiki/Dolby_Digital_Plus
[mpd-parser]: https://github.com/videojs/mpd-parser

View file

@ -0,0 +1,67 @@
# Troubleshooting Guide
## Other troubleshooting guides
For issues around data embedded into media segments (e.g., 608 captions), see the [mux.js troubleshooting guide](https://github.com/videojs/mux.js/blob/master/docs/troubleshooting.md).
## Tools
### Thumbcoil
Thumbcoil is a video inspector tool that can unpackage various media containers and inspect the bitstreams therein. Thumbcoil runs entirely within your browser so that none of your video data is ever transmitted to a server.
http://thumb.co.il<br/>
http://beta.thumb.co.il<br/>
https://github.com/videojs/thumbcoil<br/>
## Table of Contents
- [Content plays on Mac but not on Windows](#content-plays-on-mac-but-not-windows)
- ["No compatible source was found" on IE11 Win 7](#no-compatible-source-was-found-on-ie11-win-7)
- [CORS: No Access-Control-Allow-Origin header](#cors-no-access-control-allow-origin-header)
- [Desktop Safari/iOS Safari/Android Chrome/Edge exhibit different behavior from other browsers](#desktop-safariios-safariandroid-chromeedge-exhibit-different-behavior-from-other-browsers)
- [MEDIA_ERR_DECODE error on Desktop Safari](#media_err_decode-error-on-desktop-safari)
- [Network requests are still being made while paused](#network-requests-are-still-being-made-while-paused)
## Content plays on Mac but not Windows
Some browsers may not be able to play audio sample rates higher than 48 kHz. See https://docs.microsoft.com/en-gb/windows/desktop/medfound/aac-decoder#format-constraints
Potential solution: re-encode with a Windows supported audio sample rate
## "No compatible source was found" on IE11 Win 7
videojs-http-streaming does not support Flash HLS playback (like the videojs-contrib-hls plugin does)
Solution: include the FlashLS source handler https://github.com/brightcove/videojs-flashls-source-handler#usage
## CORS: No Access-Control-Allow-Origin header
If you see an error along the lines of
```
XMLHttpRequest cannot load ... No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin ... is therefore not allowed access.
```
you need to properly configure CORS on your server: https://github.com/videojs/http-streaming#hosting-considerations
## Desktop Safari/iOS Safari/Android Chrome/Edge exhibit different behavior from other browsers
Some browsers support native playback of certain streaming formats. By default, we defer to the native players. However, this means that features specific to videojs-http-streaming will not be available.
On Edge and mobile Chrome, 608 captions, ID3 tags, or live streaming may not work as expected with native playback; it is recommended that `overrideNative` be used on those platforms if necessary.
Solution: use videojs-http-streaming based playback on those devices: https://github.com/videojs/http-streaming#overridenative
## MEDIA_ERR_DECODE error on Desktop Safari
This error may occur for a number of reasons, and is particularly common with misconfigured content. One instance of misconfiguration is a source manifest with `CLOSED-CAPTIONS=NONE` while an external text track is loaded into the player. Safari does not allow the inclusion of any captions if the manifest indicates that captions will not be provided.
Solution: remove `CLOSED-CAPTIONS=NONE` from the manifest
## Network requests are still being made while paused
There are a couple of cases where network requests will still be made by VHS when the video is paused.
1) If the forward buffer (buffered content ahead of the playhead) has not reached the GOAL\_BUFFER\_LENGTH. For instance, if the playhead is at time 10 seconds, the buffered range goes from 5 seconds to 20 seconds, and the GOAL\_BUFFER\_LENGTH is set to 30 seconds, then segments will continue to be requested, even while paused, until the buffer ends at a time greater than or equal to 10 seconds (current time) + 30 seconds (GOAL\_BUFFER\_LENGTH) = 40 seconds. This is expected behavior in order to provide a better playback experience (a sketch for tuning this value follows this list).
2) If the stream is LIVE, then the manifest will continue to be refreshed even while paused. This is because it is easier to keep playback in sync if we receive manifest updates consistently.
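If buffering that far ahead is undesirable, the goal buffer length can be lowered globally (a sketch; 30 seconds is the default):
```js
// lower how far ahead of the playhead VHS will buffer, in seconds
videojs.Vhs.GOAL_BUFFER_LENGTH = 15;
```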

139
node_modules/@videojs/http-streaming/index.html generated vendored Normal file
View file

@ -0,0 +1,139 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>videojs-http-streaming Demo</title>
<link rel="icon" href="logo.svg">
<link href="node_modules/video.js/dist/video-js.css" rel="stylesheet">
<link href="node_modules/videojs-http-source-selector/dist/videojs-http-source-selector.css" rel="stylesheet">
<style>
body {
font-family: Arial, sans-serif;
margin: 20px;
}
.info {
background-color: #eee;
border: thin solid #333;
border-radius: 3px;
padding: 0 5px;
margin: 20px 0;
}
label {
display: block;
width: 700px;
width: fit-content;
margin-top: 4px;
}
.options label {
background-color: hsl(0, 0%, 90%);
padding: 0.25em;
margin: 0.25em;
}
input[type=url], select {
min-width: 600px;
}
#preload {
min-width: auto;
}
h3 {
margin-bottom: 5px;
}
#keysystems {
display: block;
}
</style>
</head>
<body>
<div id="player-fixture">
</div>
<label>Representations</label>
<select id='representations'></select>
<h3>Options</h3>
<div class="options">
<label>
<input id=minified type="checkbox">
Minified VHS (reloads player)
</label>
<label>
<input id=sync-workers type="checkbox">
Synchronous Web Workers (reloads player)
</label>
<label>
<input id=liveui type="checkbox">
Enable the live UI (reloads player)
</label>
<label>
<input id=debug type="checkbox">
Debug Logging
</label>
<label>
<input id=muted type="checkbox">
Muted
</label>
<label>
<input id=autoplay type="checkbox">
Autoplay
</label>
<label>
<input id=llhls type="checkbox">
[EXPERIMENTAL] Enables support for ll-hls (reloads player)
</label>
<label>
<input id=buffer-water type="checkbox">
[EXPERIMENTAL] Use Buffer Level for ABR (reloads player)
</label>
<label>
<input id=override-native type="checkbox" checked>
Override Native (reloads player)
</label>
<label>
<input id=mirror-source type="checkbox" checked>
Mirror sources from player.src (reloads player, uses EXPERIMENTAL sourceset option)
</label>
<label>
Preload (reloads player)
<select id=preload>
<option selected>auto</option>
<option>none</option>
<option>metadata</option>
</select>
</div>
<h3>Load a URL</h3>
<label>Url:</label>
<input id=url type=url>
<label>Type: (uses url extension if blank, usually application/x-mpegURL or application/dash+xml)</label>
<input id=type type=text>
<label>Optional Keysystems JSON:</label>
<textarea id=keysystems cols=100 rows=5></textarea>
<button id=load-url type=button>Load</button>
<h3>Load a Source</h3>
<select id=load-source>
<optgroup label="hls">
</optgroup>
<optgroup label="dash">
</optgroup>
<optgroup label="drm">
</optgroup>
<optgroup label="live">
</optgroup>
<optgroup label="low latency live">
</optgroup>
</select>
<h3>Navigation</h3>
<ul>
<li><a href="test/debug.html">Run unit tests in browser.</a></li>
<li><a href="docs/api/">Read generated docs.</a></li>
<li><a href="utils/stats/">Stats</a></li>
</ul>
<script src="scripts/index-demo-page.js"></script>
<script>
window.startDemo(function(player) {
// do something with setup player
});
</script>
</body>
</html>

148
node_modules/@videojs/http-streaming/package.json generated vendored Normal file
View file

@ -0,0 +1,148 @@
{
"_from": "@videojs/http-streaming@2.9.1",
"_id": "@videojs/http-streaming@2.9.1",
"_inBundle": false,
"_integrity": "sha512-QAtlrBBILOflrei1KE0GcSDDWiP888ZOySck6zWlQNYi/pXOm6QXTJHzOMIKiRQOndyJIZRTfLHedeUdUIDNLA==",
"_location": "/@videojs/http-streaming",
"_phantomChildren": {},
"_requested": {
"type": "version",
"registry": true,
"raw": "@videojs/http-streaming@2.9.1",
"name": "@videojs/http-streaming",
"escapedName": "@videojs%2fhttp-streaming",
"scope": "@videojs",
"rawSpec": "2.9.1",
"saveSpec": null,
"fetchSpec": "2.9.1"
},
"_requiredBy": [
"/video.js"
],
"_resolved": "https://registry.npmjs.org/@videojs/http-streaming/-/http-streaming-2.9.1.tgz",
"_shasum": "16b59efe24a832b89b5bd6a6c52f0d80ad7996a2",
"_spec": "@videojs/http-streaming@2.9.1",
"_where": "F:\\Documents\\websites\\BMM\\node_modules\\video.js",
"author": {
"name": "Brightcove, Inc"
},
"browserslist": [
"defaults",
"ie 11"
],
"bugs": {
"url": "https://github.com/videojs/http-streaming/issues"
},
"bundleDependencies": false,
"dependencies": {
"@babel/runtime": "^7.12.5",
"@videojs/vhs-utils": "^3.0.2",
"aes-decrypter": "3.1.2",
"global": "^4.4.0",
"m3u8-parser": "4.7.0",
"mpd-parser": "0.17.0",
"mux.js": "5.11.1",
"video.js": "^6 || ^7"
},
"deprecated": false,
"description": "Play back HLS and DASH with Video.js, even where it's not natively supported",
"devDependencies": {
"@rollup/plugin-replace": "^2.3.4",
"@rollup/plugin-strip": "^2.0.1",
"@videojs/generator-helpers": "~2.0.1",
"d3": "^3.4.8",
"es5-shim": "^4.5.13",
"es6-shim": "^0.35.5",
"jsdoc": "~3.6.6",
"karma": "^5.2.3",
"lodash": "^4.17.4",
"lodash-compat": "^3.10.0",
"nomnoml": "^0.3.0",
"rollup": "^2.36.1",
"rollup-plugin-worker-factory": "0.5.5",
"shelljs": "^0.8.4",
"sinon": "^8.1.1",
"url-toolkit": "^2.2.1",
"videojs-contrib-eme": "^3.8.1",
"videojs-contrib-quality-levels": "^2.0.4",
"videojs-generate-karma-config": "^7.1.0",
"videojs-generate-rollup-config": "~6.1.0",
"videojs-generator-verify": "~3.0.1",
"videojs-http-source-selector": "^1.1.6",
"videojs-standard": "^8.0.4"
},
"engines": {
"node": ">=8",
"npm": ">=5"
},
"files": [
"CONTRIBUTING.md",
"dist/",
"docs/",
"index.html",
"scripts/",
"src/"
],
"generator-videojs-plugin": {
"version": "7.6.3"
},
"homepage": "https://github.com/videojs/http-streaming#readme",
"husky": {
"hooks": {
"pre-commit": "lint-staged"
}
},
"keywords": [
"videojs",
"videojs-plugin"
],
"license": "Apache-2.0",
"lint-staged": {
"*.js": "vjsstandard --fix",
"README.md": "doctoc --notitle"
},
"main": "dist/videojs-http-streaming.cjs.js",
"module": "dist/videojs-http-streaming.es.js",
"name": "@videojs/http-streaming",
"peerDependencies": {
"video.js": "^6 || ^7"
},
"repository": {
"type": "git",
"url": "git+ssh://git@github.com/videojs/http-streaming.git"
},
"scripts": {
"build": "npm-run-all -s clean -p build:*",
"build-prod": "cross-env-shell NO_TEST_BUNDLE=1 'npm run build'",
"build-test": "cross-env-shell TEST_BUNDLE_ONLY=1 'npm run build'",
"build:js": "rollup -c scripts/rollup.config.js",
"clean": "shx rm -rf ./dist ./test/dist && shx mkdir -p ./dist ./test/dist",
"docs": "npm-run-all docs:*",
"docs:api": "jsdoc src -g plugins/markdown -r -d docs/api",
"docs:images": "node ./scripts/create-docs-images.js",
"docs:toc": "doctoc --notitle README.md",
"lint": "vjsstandard",
"netlify": "node scripts/netlify.js",
"posttest": "[ \"$CI_TEST_TYPE\" != 'coverage' ] || shx cat test/dist/coverage/text.txt",
"prenetlify": "npm run build",
"prepublishOnly": "npm-run-all build-prod && vjsverify --verbose",
"server": "karma start scripts/karma.conf.js --singleRun=false --auto-watch",
"start": "npm-run-all -p server watch",
"test": "npm-run-all lint build-test && karma start scripts/karma.conf.js",
"update-changelog": "conventional-changelog -p videojs -i CHANGELOG.md -s",
"version": "is-prerelease || npm run update-changelog && git add CHANGELOG.md",
"watch": "npm-run-all -p watch:*",
"watch:js": "npm run build:js -- -w"
},
"version": "2.9.1",
"vjsstandard": {
"ignore": [
"dist",
"docs",
"deploy",
"test/dist",
"utils",
"src/*.worker.js"
]
}
}

View file

@ -0,0 +1,31 @@
/* eslint-disable no-console */
const nomnoml = require('nomnoml');
const fs = require('fs');
const path = require('path');
const basePath = path.resolve(__dirname, '..');
const docImageDir = path.join(basePath, 'docs/images');
const nomnomlSourceDir = path.join(basePath, 'docs/images/sources');
const buildImages = {
build() {
const files = fs.readdirSync(nomnomlSourceDir);
while (files.length > 0) {
const file = path.resolve(nomnomlSourceDir, files.shift());
// strip the trailing 'txt' (no dot): 'foo.nomnoml.txt' becomes 'foo.nomnoml.'
const basename = path.basename(file, 'txt');
if (/.nomnoml/.test(basename)) {
const fileContents = fs.readFileSync(file, 'utf-8');
const generated = nomnoml.renderSvg(fileContents);
// the retained trailing dot means appending 'svg' yields 'foo.nomnoml.svg'
const newFilePath = path.join(docImageDir, basename) + 'svg';
const outFile = fs.createWriteStream(newFilePath);
console.log(`wrote file ${newFilePath}`);
outFile.write(generated);
}
}
}
};
buildImages.build();

View file

@ -0,0 +1,115 @@
/* global window */
const fs = require('fs');
const path = require('path');
const baseDir = path.join(__dirname, '..');
const manifestsDir = path.join(baseDir, 'test', 'manifests');
const segmentsDir = path.join(baseDir, 'test', 'segments');
const base64ToUint8Array = function(base64) {
const decoded = window.atob(base64);
const uint8Array = new Uint8Array(new ArrayBuffer(decoded.length));
for (let i = 0; i < decoded.length; i++) {
uint8Array[i] = decoded.charCodeAt(i);
}
return uint8Array;
};
const getManifests = () => (fs.readdirSync(manifestsDir) || [])
.filter((f) => ((/\.(m3u8|mpd)/).test(path.extname(f))))
.map((f) => path.resolve(manifestsDir, f));
const getSegments = () => (fs.readdirSync(segmentsDir) || [])
.filter((f) => ((/\.(ts|mp4|key|webm|aac|ac3)/).test(path.extname(f))))
.map((f) => path.resolve(segmentsDir, f));
const buildManifestString = function() {
let manifests = 'export default {\n';
getManifests().forEach((file) => {
// translate this manifest
manifests += ' \'' + path.basename(file, path.extname(file)) + '\': ';
manifests += fs.readFileSync(file, 'utf8')
.split(/\r\n|\n/)
// quote and concatenate
.map((line) => ' \'' + line + '\\n\' +\n')
.join('')
// strip leading spaces and the trailing '+'
.slice(4, -3);
manifests += ',\n';
});
// clean up and close the objects
manifests = manifests.slice(0, -2);
manifests += '\n};\n';
return manifests;
};
const buildSegmentString = function() {
const segmentData = {};
getSegments().forEach((file) => {
// read the file directly as a buffer before converting to base64
const base64Segment = fs.readFileSync(file).toString('base64');
segmentData[path.basename(file, path.extname(file))] = base64Segment;
});
const segmentDataExportStrings = Object.keys(segmentData).reduce((acc, key) => {
// use a function since the segment may be cleared out on usage
acc.push(`export const ${key} = () => {
cache.${key} = cache.${key} || base64ToUint8Array('${segmentData[key]}');
const dest = new Uint8Array(cache.${key}.byteLength);
dest.set(cache.${key});
return dest;
};`);
return acc;
}, []);
const segmentsFile =
'const cache = {};\n' +
`const base64ToUint8Array = ${base64ToUint8Array.toString()};\n` +
segmentDataExportStrings.join('\n');
return segmentsFile;
};
/* we refer to them as .js, so that babel and other plugins can work on them */
const segmentsKey = 'create-test-data!segments.js';
const manifestsKey = 'create-test-data!manifests.js';
module.exports = function() {
return {
name: 'createTestData',
buildStart() {
this.addWatchFile(segmentsDir);
this.addWatchFile(manifestsDir);
[].concat(getSegments())
.concat(getManifests())
.forEach((file) => this.addWatchFile(file));
},
resolveId(importee, importer) {
// if this is not an id we can resolve return
if (importee.indexOf('create-test-data!') !== 0) {
return;
}
const name = importee.split('!')[1];
return (name.indexOf('segments') === 0) ? segmentsKey : manifestsKey;
},
load(id) {
if (id === segmentsKey) {
return buildSegmentString.call(this);
}
if (id === manifestsKey) {
return buildManifestString.call(this);
}
}
};
};
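As a usage sketch, a test consumes the virtual modules this plugin provides; exports are named after the files in test/manifests and test/segments, and the `master` and `muxed` fixture names below are hypothetical:

import manifests from 'create-test-data!manifests.js';
import {muxed} from 'create-test-data!segments.js';

// each manifest is available by file basename
const masterPlaylist = manifests.master;
// each segment export is a function returning a fresh copy of the fixture bytes
const segmentBytes = muxed();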

View file

@ -0,0 +1,518 @@
/* global window document */
/* eslint-disable vars-on-top, no-var, object-shorthand, no-console */
(function(window) {
var representationsEl = document.getElementById('representations');
representationsEl.addEventListener('change', function() {
var selectedIndex = representationsEl.selectedIndex;
if (!selectedIndex || selectedIndex < 1 || !window.vhs) {
return;
}
var selectedOption = representationsEl.options[representationsEl.selectedIndex];
if (!selectedOption) {
return;
}
var id = selectedOption.value;
window.vhs.representations().forEach(function(rep) {
rep.playlist.disabled = rep.id !== id;
});
window.mpc.fastQualityChange_();
});
var hlsOptGroup = document.querySelector('[label="hls"]');
var dashOptGroup = document.querySelector('[label="dash"]');
var drmOptGroup = document.querySelector('[label="drm"]');
var liveOptGroup = document.querySelector('[label="live"]');
var llliveOptGroup = document.querySelector('[label="low latency live"]');
// get the sources list squared away
var xhr = new window.XMLHttpRequest();
xhr.addEventListener('load', function() {
var sources = JSON.parse(xhr.responseText);
sources.forEach(function(source) {
var option = document.createElement('option');
option.innerText = source.name;
option.value = source.uri;
if (source.keySystems) {
option.setAttribute('data-key-systems', JSON.stringify(source.keySystems, null, 2));
}
if (source.mimetype) {
option.setAttribute('data-mimetype', source.mimetype);
}
if (source.features.indexOf('low-latency') !== -1) {
llliveOptGroup.appendChild(option);
} else if (source.features.indexOf('live') !== -1) {
liveOptGroup.appendChild(option);
} else if (source.keySystems) {
drmOptGroup.appendChild(option);
} else if (source.mimetype === 'application/x-mpegurl') {
hlsOptGroup.appendChild(option);
} else if (source.mimetype === 'application/dash+xml') {
dashOptGroup.appendChild(option);
}
});
});
xhr.open('GET', './scripts/sources.json');
xhr.send();
// all relevant elements
var urlButton = document.getElementById('load-url');
var sources = document.getElementById('load-source');
var stateEls = {};
var getInputValue = function(el) {
if (el.type === 'url' || el.type === 'text' || el.nodeName.toLowerCase() === 'textarea') {
return encodeURIComponent(el.value);
} else if (el.type === 'select-one') {
return el.options[el.selectedIndex].value;
} else if (el.type === 'checkbox') {
return el.checked;
}
console.warn('unhandled input type ' + el.type);
return '';
};
var setInputValue = function(el, value) {
if (el.type === 'url' || el.type === 'text' || el.nodeName.toLowerCase() === 'textarea') {
el.value = decodeURIComponent(value);
} else if (el.type === 'select-one') {
for (var i = 0; i < el.options.length; i++) {
if (el.options[i].value === value) {
el.options[i].selected = true;
}
}
} else {
// coerce the `value` into a Boolean.
el.checked = JSON.parse(value);
}
};
var newEvent = function(name) {
var event;
if (typeof window.Event === 'function') {
event = new window.Event(name);
} else {
event = document.createEvent('Event');
event.initEvent(name, true, true);
}
return event;
};
// taken from video.js
var getFileExtension = function(path) {
var splitPathRe;
var pathParts;
if (typeof path === 'string') {
splitPathRe = /^(\/?)([\s\S]*?)((?:\.{1,2}|[^\/]*?)(\.([^\.\/\?]+)))(?:[\/]*|[\?].*)$/i;
pathParts = splitPathRe.exec(path);
if (pathParts) {
return pathParts.pop().toLowerCase();
}
}
return '';
};
var saveState = function() {
var query = '';
if (!window.history.replaceState) {
return;
}
Object.keys(stateEls).forEach(function(elName) {
var symbol = query.length ? '&' : '?';
query += symbol + elName + '=' + getInputValue(stateEls[elName]);
});
window.history.replaceState({}, 'vhs demo', query);
};
window.URLSearchParams = window.URLSearchParams || function(locationSearch) {
this.get = function(name) {
var results = new RegExp('[\?&]' + name + '=([^&#]*)').exec(locationSearch);
return results ? decodeURIComponent(results[1]) : null;
};
};
// eslint-disable-next-line
var loadState = function() {
var params = new window.URLSearchParams(window.location.search);
return Object.keys(stateEls).reduce(function(acc, elName) {
acc[elName] = typeof params.get(elName) !== 'object' ? params.get(elName) : getInputValue(stateEls[elName]);
return acc;
}, {});
};
// eslint-disable-next-line
var reloadScripts = function(urls, cb) {
var el = document.getElementById('reload-scripts');
if (!el) {
el = document.createElement('div');
el.id = 'reload-scripts';
document.body.appendChild(el);
}
while (el.firstChild) {
el.removeChild(el.firstChild);
}
var loaded = [];
var checkDone = function() {
if (loaded.length === urls.length) {
cb();
}
};
urls.forEach(function(url) {
var script = document.createElement('script');
// scripts marked as defer will be loaded asynchronously but will be executed in the order they are in the DOM
script.defer = true;
// dynamically created scripts are async by default unless otherwise specified
// async scripts are loaded asynchronously but also executed as soon as they are loaded
// we want to load them in the order they are added therefore we want to turn off async
script.async = false;
script.src = url;
script.onload = function() {
loaded.push(url);
checkDone();
};
el.appendChild(script);
});
};
var regenerateRepresentations = function() {
while (representationsEl.firstChild) {
representationsEl.removeChild(representationsEl.firstChild);
}
var selectedIndex;
window.vhs.representations().forEach(function(rep, i) {
var option = document.createElement('option');
option.value = rep.id;
option.innerText = JSON.stringify({
id: rep.id,
videoCodec: rep.codecs.video,
audioCodec: rep.codecs.audio,
bandwidth: rep.bandwidth,
height: rep.height,
width: rep.width
});
if (window.mpc.media().id === rep.id) {
selectedIndex = i;
}
representationsEl.appendChild(option);
});
representationsEl.selectedIndex = selectedIndex;
};
[
'debug',
'autoplay',
'muted',
'minified',
'sync-workers',
'liveui',
'llhls',
'url',
'type',
'keysystems',
'buffer-water',
'override-native',
'preload',
'mirror-source'
].forEach(function(name) {
stateEls[name] = document.getElementById(name);
});
window.startDemo = function(cb) {
var state = loadState();
Object.keys(state).forEach(function(elName) {
setInputValue(stateEls[elName], state[elName]);
});
Array.prototype.forEach.call(sources.options, function(s, i) {
if (s.value === state.url) {
sources.selectedIndex = i;
}
});
stateEls.muted.addEventListener('change', function(event) {
saveState();
window.player.muted(event.target.checked);
});
stateEls.autoplay.addEventListener('change', function(event) {
saveState();
window.player.autoplay(event.target.checked);
});
stateEls['mirror-source'].addEventListener('change', function(event) {
saveState();
// reload the player and scripts
stateEls.minified.dispatchEvent(newEvent('change'));
});
stateEls['sync-workers'].addEventListener('change', function(event) {
saveState();
// reload the player and scripts
stateEls.minified.dispatchEvent(newEvent('change'));
});
stateEls.preload.addEventListener('change', function(event) {
saveState();
// reload the player and scripts
stateEls.minified.dispatchEvent(newEvent('change'));
});
stateEls.debug.addEventListener('change', function(event) {
saveState();
window.videojs.log.level(event.target.checked ? 'debug' : 'info');
});
stateEls.llhls.addEventListener('change', function(event) {
saveState();
// reload the player and scripts
stateEls.minified.dispatchEvent(newEvent('change'));
});
stateEls['buffer-water'].addEventListener('change', function(event) {
saveState();
// reload the player and scripts
stateEls.minified.dispatchEvent(newEvent('change'));
});
stateEls['override-native'].addEventListener('change', function(event) {
saveState();
// reload the player and scripts
stateEls.minified.dispatchEvent(newEvent('change'));
});
stateEls.liveui.addEventListener('change', function(event) {
saveState();
stateEls.minified.dispatchEvent(newEvent('change'));
});
stateEls.minified.addEventListener('change', function(event) {
var urls = [
'node_modules/video.js/dist/alt/video.core',
'node_modules/videojs-contrib-eme/dist/videojs-contrib-eme',
'node_modules/videojs-contrib-quality-levels/dist/videojs-contrib-quality-levels',
'node_modules/videojs-http-source-selector/dist/videojs-http-source-selector'
].map(function(url) {
return url + (event.target.checked ? '.min' : '') + '.js';
});
if (stateEls['sync-workers'].checked) {
urls.push('dist/videojs-http-streaming-sync-workers.js');
} else {
urls.push('dist/videojs-http-streaming' + (event.target.checked ? '.min' : '') + '.js');
}
saveState();
if (window.player) {
window.player.dispose();
delete window.player;
}
if (window.videojs) {
delete window.videojs;
}
reloadScripts(urls, function() {
var player;
var fixture = document.getElementById('player-fixture');
var videoEl = document.createElement('video-js');
videoEl.setAttribute('controls', '');
videoEl.setAttribute('preload', stateEls.preload.options[stateEls.preload.selectedIndex].value || 'auto');
videoEl.className = 'vjs-default-skin';
fixture.appendChild(videoEl);
var mirrorSource = getInputValue(stateEls['mirror-source']);
player = window.player = window.videojs(videoEl, {
plugins: {
httpSourceSelector: {
default: 'auto'
}
},
liveui: stateEls.liveui.checked,
enableSourceset: mirrorSource,
html5: {
vhs: {
overrideNative: getInputValue(stateEls['override-native']),
experimentalBufferBasedABR: getInputValue(stateEls['buffer-water']),
experimentalLLHLS: getInputValue(stateEls.llhls)
}
}
});
player.on('sourceset', function() {
var source = player.currentSource();
if (source.keySystems) {
var copy = JSON.parse(JSON.stringify(source.keySystems));
// have to delete pssh as it will often make keySystems too big
// for a uri
Object.keys(copy).forEach(function(key) {
if (copy[key].hasOwnProperty('pssh')) {
delete copy[key].pssh;
}
});
stateEls.keysystems.value = JSON.stringify(copy, null, 2);
}
if (source.src) {
stateEls.url.value = encodeURI(source.src);
}
if (source.type) {
stateEls.type.value = source.type;
}
saveState();
});
player.width(640);
player.height(264);
// configure videojs-contrib-eme
player.eme();
stateEls.debug.dispatchEvent(newEvent('change'));
stateEls.muted.dispatchEvent(newEvent('change'));
stateEls.autoplay.dispatchEvent(newEvent('change'));
// run the load url handler for the initial source
if (stateEls.url.value) {
urlButton.dispatchEvent(newEvent('click'));
} else {
sources.dispatchEvent(newEvent('change'));
}
player.on('loadedmetadata', function() {
if (player.tech_.vhs) {
window.vhs = player.tech_.vhs;
window.mpc = player.tech_.vhs.masterPlaylistController_;
window.mpc.masterPlaylistLoader_.on('mediachange', regenerateRepresentations);
regenerateRepresentations();
} else {
window.vhs = null;
window.mpc = null;
}
});
cb(player);
});
});
var urlButtonClick = function(event) {
var ext;
var type = stateEls.type.value;
if (!type.trim()) {
ext = getFileExtension(stateEls.url.value);
if (ext === 'mpd') {
type = 'application/dash+xml';
} else if (ext === 'm3u8') {
type = 'application/x-mpegURL';
}
}
saveState();
var source = {
src: stateEls.url.value,
type: type
};
if (stateEls.keysystems.value) {
source.keySystems = JSON.parse(stateEls.keysystems.value);
}
sources.selectedIndex = -1;
Array.prototype.forEach.call(sources.options, function(s, i) {
if (s.value === stateEls.url.value) {
sources.selectedIndex = i;
}
});
window.player.src(source);
};
urlButton.addEventListener('click', urlButtonClick);
urlButton.addEventListener('tap', urlButtonClick);
sources.addEventListener('change', function(event) {
var selectedOption = sources.options[sources.selectedIndex];
if (!selectedOption) {
return;
}
var src = selectedOption.value;
stateEls.url.value = src;
stateEls.type.value = selectedOption.getAttribute('data-mimetype');
stateEls.keysystems.value = selectedOption.getAttribute('data-key-systems');
urlButton.dispatchEvent(newEvent('click'));
});
stateEls.url.addEventListener('keyup', function(event) {
if (event.key === 'Enter') {
urlButton.click();
}
});
stateEls.url.addEventListener('input', function(event) {
if (stateEls.type.value.length) {
stateEls.type.value = '';
}
});
stateEls.type.addEventListener('keyup', function(event) {
if (event.key === 'Enter') {
urlButton.click();
}
});
// run the change handler for the first time
stateEls.minified.dispatchEvent(newEvent('change'));
};
}(window));
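For illustration, saveState serializes every registered control into the query string so a demo session survives a reload; a hypothetical resulting URL looks like:

index.html?debug=false&muted=true&autoplay=false&minified=true&llhls=false&url=https%3A%2F%2Fexample.com%2Fmaster.m3u8&type=application%2Fx-mpegurl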

View file

@ -0,0 +1,47 @@
const generate = require('videojs-generate-karma-config');
const CI_TEST_TYPE = process.env.CI_TEST_TYPE || '';
module.exports = function(config) {
// see https://github.com/videojs/videojs-generate-karma-config
// for options
const options = {
coverage: CI_TEST_TYPE === 'coverage',
preferHeadless: false,
browsers(aboutToRun) {
return aboutToRun.filter(function(launcherName) {
return !(/^(Safari|Chromium)/).test(launcherName);
});
},
files(defaults) {
defaults.unshift('node_modules/es5-shim/es5-shim.js');
defaults.unshift('node_modules/es6-shim/es6-shim.js');
defaults.splice(
defaults.indexOf('node_modules/video.js/dist/video.js'),
1,
'node_modules/video.js/dist/alt/video.core.js'
);
return defaults;
},
browserstackLaunchers(defaults) {
delete defaults.bsSafariMojave;
delete defaults.bsSafariElCapitan;
// do not run on browserstack for coverage
if (CI_TEST_TYPE === 'coverage') {
defaults = {};
}
return defaults;
},
serverBrowsers() {
return [];
}
};
config = generate(config, options);
// any other custom stuff not supported by options here!
};

View file

@ -0,0 +1,35 @@
const path = require('path');
const sh = require('shelljs');
const deployDir = 'deploy';
const files = [
'node_modules/video.js/dist/video-js.css',
'node_modules/video.js/dist/alt/video.core.js',
'node_modules/video.js/dist/alt/video.core.min.js',
'node_modules/videojs-contrib-eme/dist/videojs-contrib-eme.js',
'node_modules/videojs-contrib-eme/dist/videojs-contrib-eme.min.js',
'node_modules/videojs-contrib-quality-levels/dist/videojs-contrib-quality-levels.js',
'node_modules/videojs-contrib-quality-levels/dist/videojs-contrib-quality-levels.min.js',
'node_modules/videojs-http-source-selector/dist/videojs-http-source-selector.css',
'node_modules/videojs-http-source-selector/dist/videojs-http-source-selector.js',
'node_modules/videojs-http-source-selector/dist/videojs-http-source-selector.min.js',
'node_modules/d3/d3.min.js',
'logo.svg',
'scripts/sources.json',
'scripts/index-demo-page.js'
];
// cleanup previous deploy
sh.rm('-rf', deployDir);
// make sure the directory exists
sh.mkdir('-p', deployDir);
// create nested directories
files
.map((file) => path.dirname(file))
.forEach((dir) => sh.mkdir('-p', path.join(deployDir, dir)));
// copy files/folders to deploy dir
files
.concat('dist', 'index.html', 'utils')
.forEach((file) => sh.cp('-r', file, path.join(deployDir, file)));

View file

@ -0,0 +1,133 @@
const generate = require('videojs-generate-rollup-config');
const worker = require('rollup-plugin-worker-factory');
const {terser} = require('rollup-plugin-terser');
const createTestData = require('./create-test-data.js');
const replace = require('@rollup/plugin-replace');
const strip = require('@rollup/plugin-strip');
const CI_TEST_TYPE = process.env.CI_TEST_TYPE || '';
let syncWorker;
// see https://github.com/videojs/videojs-generate-rollup-config
// for options
const options = {
input: 'src/videojs-http-streaming.js',
distName: 'videojs-http-streaming',
excludeCoverage(defaults) {
defaults.push(/^rollup-plugin-worker-factory/);
defaults.push(/^create-test-data!/);
return defaults;
},
globals(defaults) {
defaults.browser.xmldom = 'window';
defaults.test.xmldom = 'window';
return defaults;
},
externals(defaults) {
return Object.assign(defaults, {
module: defaults.module.concat([
'aes-decrypter',
'm3u8-parser',
'mpd-parser',
'mux.js',
'@videojs/vhs-utils'
])
});
},
plugins(defaults) {
// add worker and createTestData to the front of plugin lists
defaults.module.unshift('worker');
defaults.browser.unshift('worker');
// use the synchronous `syncWorker` web worker during coverage unit tests;
// regular test runs use the real `worker`
if (CI_TEST_TYPE === 'coverage') {
defaults.test.unshift('syncWorker');
} else {
defaults.test.unshift('worker');
}
defaults.test.unshift('createTestData');
if (CI_TEST_TYPE === 'playback-min') {
defaults.test.push('uglify');
}
// istanbul is only present for regular builds (not watch) and is only
// needed for coverage, so remove it otherwise
if (CI_TEST_TYPE !== 'coverage' && defaults.test.indexOf('istanbul') !== -1) {
defaults.test.splice(defaults.test.indexOf('istanbul'), 1);
}
defaults.module.unshift('replace');
defaults.module.unshift('strip');
defaults.browser.unshift('strip');
return defaults;
},
primedPlugins(defaults) {
defaults = Object.assign(defaults, {
replace: replace({
// single quote replace
"require('@videojs/vhs-utils/es": "require('@videojs/vhs-utils/cjs",
// double quote replace
'require("@videojs/vhs-utils/es': 'require("@videojs/vhs-utils/cjs'
}),
uglify: terser({
output: {comments: 'some'},
compress: {passes: 2}
}),
strip: strip({
functions: ['TEST_ONLY_*']
}),
createTestData: createTestData()
});
defaults.worker = worker({type: 'browser', plugins: [
defaults.resolve,
defaults.json,
defaults.commonjs,
defaults.babel
]});
defaults.syncWorker = syncWorker = worker({type: 'mock', plugins: [
defaults.resolve,
defaults.json,
defaults.commonjs,
defaults.babel
]});
return defaults;
},
babel(defaults) {
const presetEnvSettings = defaults.presets[0][1];
presetEnvSettings.exclude = presetEnvSettings.exclude || [];
presetEnvSettings.exclude.push('@babel/plugin-transform-typeof-symbol');
return defaults;
}
};
if (CI_TEST_TYPE === 'playback' || CI_TEST_TYPE === 'playback-min') {
options.testInput = 'test/playback.test.js';
} else if (CI_TEST_TYPE === 'unit' || CI_TEST_TYPE === 'coverage') {
options.testInput = {include: ['test/**/*.test.js'], exclude: ['test/playback.test.js']};
}
const config = generate(options);
if (config.builds.browser) {
config.builds.syncWorkers = config.makeBuild('browser', {
output: {
name: 'httpStreaming',
format: 'umd',
file: 'dist/videojs-http-streaming-sync-workers.js'
}
});
config.builds.syncWorkers.plugins[0] = syncWorker;
}
// Add additional builds/customization here!
// export the builds to rollup
export default Object.values(config.builds);
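For illustration, the replace mapping in primedPlugins rewrites vhs-utils imports from the ES build to the CommonJS build in the module bundle; roughly:

// before the plugin runs
const {toUint8} = require('@videojs/vhs-utils/es/byte-helpers');
// after string replacement
const {toUint8} = require('@videojs/vhs-utils/cjs/byte-helpers');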

View file

@ -0,0 +1,391 @@
[
{
"name": "Bipbop - Muxed TS with 1 alt Audio, 5 captions",
"uri": "https://d2zihajmogu5jn.cloudfront.net/bipbop-advanced/bipbop_16x9_variant.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "FMP4 and ts both muxed",
"uri": "https://d2zihajmogu5jn.cloudfront.net/ts-fmp4/index.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Advanced Bipbop - ts and captions muxed",
"uri": "https://devstreaming-cdn.apple.com/videos/streaming/examples/img_bipbop_adv_example_ts/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Advanced Bipbop - FMP4 and captions muxed",
"uri": "https://devstreaming-cdn.apple.com/videos/streaming/examples/img_bipbop_adv_example_fmp4/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Advanced Bipbop - FMP4 hevc, demuxed",
"uri": "https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_adv_example_hevc/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Angel One - FMP4 demuxed, many audio/captions",
"uri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-hls/hls.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Parkour - FMP4 demuxed",
"uri": "https://bitdash-a.akamaihd.net/content/MI201109210084_1/m3u8s-fmp4/f08e80da-bf1d-4e3d-8899-f0f6155f6efa.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Song - ts Audio only",
"uri": "https://s3.amazonaws.com/qa.jwplayer.com/~alex/121628/new_master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Coit Tower drone footage - 4 8 second segment",
"uri": "https://d2zihajmogu5jn.cloudfront.net/CoitTower/master_ts_segtimes.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Disney's Oceans trailer - HLSe, ts Encrypted",
"uri": "https://playertest.longtailvideo.com/adaptive/oceans_aes/oceans_aes.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Sintel - ts with audio/subs and a 4k rendtion",
"uri": "https://bitmovin-a.akamaihd.net/content/sintel/hls/playlist.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Boat Ipsum Subs - HLS + subtitles",
"uri": "https://d2zihajmogu5jn.cloudfront.net/hls-webvtt/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Boat Video Only",
"uri": "https://d2zihajmogu5jn.cloudfront.net/video-only/out.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Boat Audio Only",
"uri": "https://d2zihajmogu5jn.cloudfront.net/audio-only/out.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Boat 4K",
"uri": "https://d2zihajmogu5jn.cloudfront.net/4k-hls/out.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Boat Misaligned - 3, 5, 7, second segment playlists",
"uri": "https://d2zihajmogu5jn.cloudfront.net/misaligned-playlists/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "BBB-CMIF: Big Buck Bunny Dark Truths - demuxed, fmp4",
"uri": "https://storage.googleapis.com/shaka-demo-assets/bbb-dark-truths-hls/hls.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Big Buck Bunny - demuxed audio/video, includes 4K, burns in frame, pts, resolution, bitrate values",
"uri": "https://dash.akamaized.net/akamai/bbb_30fps/bbb_30fps.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "Angel One - fmp4, webm, subs (TODO: subs are broken), alternate audio tracks",
"uri": "https://storage.googleapis.com/shaka-demo-assets/angel-one/dash.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "Angel One - Widevine, fmp4, webm, subs, alternate audio tracks",
"uri": "https://storage.googleapis.com/shaka-demo-assets/angel-one-widevine/dash.mpd",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.widevine.alpha": "https://cwip-shaka-proxy.appspot.com/no_auth"
}
},
{
"name": "BBB-CMIF: Big Buck Bunny Dark Truths - demuxed, fmp4",
"uri": "https://storage.googleapis.com/shaka-demo-assets/bbb-dark-truths/dash.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "SIDX demuxed, 2 audio",
"uri": "https://dash.akamaized.net/dash264/TestCases/10a/1/iis_forest_short_poem_multi_lang_480p_single_adapt_aaclc_sidx.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "SIDX bipbop-like",
"uri": "https://download.tsi.telecom-paristech.fr/gpac/DASH_CONFORMANCE/TelecomParisTech/mp4-onDemand/mp4-onDemand-mpd-AV.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "Google self-driving car - SIDX",
"uri": "https://yt-dash-mse-test.commondatastorage.googleapis.com/media/car-20120827-manifest.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "Sintel - single rendition",
"uri": "https://d2zihajmogu5jn.cloudfront.net/sintel_dash/sintel_vod.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "HLS - Live - Axinom live stream, may not always be available",
"uri": "https://akamai-axtest.akamaized.net/routes/lapd-v1-acceptance/www_c4/Manifest.m3u8",
"mimetype": "application/x-mpegurl",
"features": ["live"]
},
{
"name": "DASH - Live - Axinom live stream, may not always be available",
"uri": "https://akamai-axtest.akamaized.net/routes/lapd-v1-acceptance/www_c4/Manifest.mpd",
"mimetype": "application/dash+xml",
"features": ["live"]
},
{
"name": "DASH - Live simulated DASH from DASH IF",
"uri": "https://livesim.dashif.org/livesim/mup_30/testpic_2s/Manifest.mpd",
"mimetype": "application/dash+xml",
"features": ["live"]
},
{
"name": "DASH - Shaka Player Source Simulated Live",
"uri": "https://storage.googleapis.com/shaka-live-assets/player-source.mpd",
"mimetype": "application/dash+xml",
"features": ["live"]
},
{
"name": "Apple's LL-HLS test stream",
"uri": "https://ll-hls-test.apple.com/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": ["live", "low-latency"]
},
{
"name": "Apple's LL-HLS test stream, cmaf, fmp4",
"uri": "https://ll-hls-test.apple.com/cmaf/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": ["live", "low-latency"]
},
{
"name": "Axinom Multi DRM - DASH, 4k, HEVC, Playready, Widevine",
"uri": "https://media.axprod.net/TestVectors/v7-MultiDRM-SingleKey/Manifest.mpd",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.microsoft.playready": {
"url": "https://drm-playready-licensing.axtest.net/AcquireLicense",
"licenseHeaders": {
"X-AxDRM-Message": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2ZXJzaW9uIjoxLCJjb21fa2V5X2lkIjoiYjMzNjRlYjUtNTFmNi00YWUzLThjOTgtMzNjZWQ1ZTMxYzc4IiwibWVzc2FnZSI6eyJ0eXBlIjoiZW50aXRsZW1lbnRfbWVzc2FnZSIsImtleXMiOlt7ImlkIjoiOWViNDA1MGQtZTQ0Yi00ODAyLTkzMmUtMjdkNzUwODNlMjY2IiwiZW5jcnlwdGVkX2tleSI6ImxLM09qSExZVzI0Y3Iya3RSNzRmbnc9PSJ9XX19.4lWwW46k-oWcah8oN18LPj5OLS5ZU-_AQv7fe0JhNjA"
}
},
"com.widevine.alpha": {
"url": "https://drm-widevine-licensing.axtest.net/AcquireLicense",
"licenseHeaders": {
"X-AxDRM-Message": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2ZXJzaW9uIjoxLCJjb21fa2V5X2lkIjoiYjMzNjRlYjUtNTFmNi00YWUzLThjOTgtMzNjZWQ1ZTMxYzc4IiwibWVzc2FnZSI6eyJ0eXBlIjoiZW50aXRsZW1lbnRfbWVzc2FnZSIsImtleXMiOlt7ImlkIjoiOWViNDA1MGQtZTQ0Yi00ODAyLTkzMmUtMjdkNzUwODNlMjY2IiwiZW5jcnlwdGVkX2tleSI6ImxLM09qSExZVzI0Y3Iya3RSNzRmbnc9PSJ9XX19.4lWwW46k-oWcah8oN18LPj5OLS5ZU-_AQv7fe0JhNjA"
}
}
}
},
{
"name": "Axinom Multi DRM, Multi Period - DASH, 4k, HEVC, Playready, Widevine",
"uri": "https://media.axprod.net/TestVectors/v7-MultiDRM-MultiKey-MultiPeriod/Manifest.mpd",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.microsoft.playready": {
"url": "https://drm-playready-licensing.axtest.net/AcquireLicense",
"licenseHeaders": {
"X-AxDRM-Message": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2ZXJzaW9uIjoxLCJjb21fa2V5X2lkIjoiYjMzNjRlYjUtNTFmNi00YWUzLThjOTgtMzNjZWQ1ZTMxYzc4IiwibWVzc2FnZSI6eyJ0eXBlIjoiZW50aXRsZW1lbnRfbWVzc2FnZSIsImtleXMiOlt7ImlkIjoiMDg3Mjc4NmUtZjllNy00NjVmLWEzYTItNGU1YjBlZjhmYTQ1IiwiZW5jcnlwdGVkX2tleSI6IlB3NitlRVlOY3ZqWWJmc2gzWDNmbWc9PSJ9LHsiaWQiOiJjMTRmMDcwOS1mMmI5LTQ0MjctOTE2Yi02MWI1MjU4NjUwNmEiLCJlbmNyeXB0ZWRfa2V5IjoiLzErZk5paDM4bXFSdjR5Y1l6bnQvdz09In0seyJpZCI6IjhiMDI5ZTUxLWQ1NmEtNDRiZC05MTBmLWQ0YjVmZDkwZmJhMiIsImVuY3J5cHRlZF9rZXkiOiJrcTBKdVpFanBGTjhzYVRtdDU2ME9nPT0ifSx7ImlkIjoiMmQ2ZTkzODctNjBjYS00MTQ1LWFlYzItYzQwODM3YjRiMDI2IiwiZW5jcnlwdGVkX2tleSI6IlRjUlFlQld4RW9IT0tIcmFkNFNlVlE9PSJ9LHsiaWQiOiJkZTAyZjA3Zi1hMDk4LTRlZTAtYjU1Ni05MDdjMGQxN2ZiYmMiLCJlbmNyeXB0ZWRfa2V5IjoicG9lbmNTN0dnbWVHRmVvSjZQRUFUUT09In0seyJpZCI6IjkxNGU2OWY0LTBhYjMtNDUzNC05ZTlmLTk4NTM2MTVlMjZmNiIsImVuY3J5cHRlZF9rZXkiOiJlaUkvTXNsbHJRNHdDbFJUL0xObUNBPT0ifSx7ImlkIjoiZGE0NDQ1YzItZGI1ZS00OGVmLWIwOTYtM2VmMzQ3YjE2YzdmIiwiZW5jcnlwdGVkX2tleSI6IjJ3K3pkdnFycERWM3hSMGJKeTR1Z3c9PSJ9LHsiaWQiOiIyOWYwNWU4Zi1hMWFlLTQ2ZTQtODBlOS0yMmRjZDQ0Y2Q3YTEiLCJlbmNyeXB0ZWRfa2V5IjoiL3hsU0hweHdxdTNnby9nbHBtU2dhUT09In0seyJpZCI6IjY5ZmU3MDc3LWRhZGQtNGI1NS05NmNkLWMzZWRiMzk5MTg1MyIsImVuY3J5cHRlZF9rZXkiOiJ6dTZpdXpOMnBzaTBaU3hRaUFUa1JRPT0ifV19fQ.BXr93Et1krYMVs-CUnf7F3ywJWFRtxYdkR7Qn4w3-to"
}
},
"com.widevine.alpha": {
"url": "https://drm-widevine-licensing.axtest.net/AcquireLicense",
"licenseHeaders": {
"X-AxDRM-Message": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2ZXJzaW9uIjoxLCJjb21fa2V5X2lkIjoiYjMzNjRlYjUtNTFmNi00YWUzLThjOTgtMzNjZWQ1ZTMxYzc4IiwibWVzc2FnZSI6eyJ0eXBlIjoiZW50aXRsZW1lbnRfbWVzc2FnZSIsImtleXMiOlt7ImlkIjoiMDg3Mjc4NmUtZjllNy00NjVmLWEzYTItNGU1YjBlZjhmYTQ1IiwiZW5jcnlwdGVkX2tleSI6IlB3NitlRVlOY3ZqWWJmc2gzWDNmbWc9PSJ9LHsiaWQiOiJjMTRmMDcwOS1mMmI5LTQ0MjctOTE2Yi02MWI1MjU4NjUwNmEiLCJlbmNyeXB0ZWRfa2V5IjoiLzErZk5paDM4bXFSdjR5Y1l6bnQvdz09In0seyJpZCI6IjhiMDI5ZTUxLWQ1NmEtNDRiZC05MTBmLWQ0YjVmZDkwZmJhMiIsImVuY3J5cHRlZF9rZXkiOiJrcTBKdVpFanBGTjhzYVRtdDU2ME9nPT0ifSx7ImlkIjoiMmQ2ZTkzODctNjBjYS00MTQ1LWFlYzItYzQwODM3YjRiMDI2IiwiZW5jcnlwdGVkX2tleSI6IlRjUlFlQld4RW9IT0tIcmFkNFNlVlE9PSJ9LHsiaWQiOiJkZTAyZjA3Zi1hMDk4LTRlZTAtYjU1Ni05MDdjMGQxN2ZiYmMiLCJlbmNyeXB0ZWRfa2V5IjoicG9lbmNTN0dnbWVHRmVvSjZQRUFUUT09In0seyJpZCI6IjkxNGU2OWY0LTBhYjMtNDUzNC05ZTlmLTk4NTM2MTVlMjZmNiIsImVuY3J5cHRlZF9rZXkiOiJlaUkvTXNsbHJRNHdDbFJUL0xObUNBPT0ifSx7ImlkIjoiZGE0NDQ1YzItZGI1ZS00OGVmLWIwOTYtM2VmMzQ3YjE2YzdmIiwiZW5jcnlwdGVkX2tleSI6IjJ3K3pkdnFycERWM3hSMGJKeTR1Z3c9PSJ9LHsiaWQiOiIyOWYwNWU4Zi1hMWFlLTQ2ZTQtODBlOS0yMmRjZDQ0Y2Q3YTEiLCJlbmNyeXB0ZWRfa2V5IjoiL3hsU0hweHdxdTNnby9nbHBtU2dhUT09In0seyJpZCI6IjY5ZmU3MDc3LWRhZGQtNGI1NS05NmNkLWMzZWRiMzk5MTg1MyIsImVuY3J5cHRlZF9rZXkiOiJ6dTZpdXpOMnBzaTBaU3hRaUFUa1JRPT0ifV19fQ.BXr93Et1krYMVs-CUnf7F3ywJWFRtxYdkR7Qn4w3-to"
}
}
}
},
{
"name": "Axinom Clear - DASH, 4k, HEVC",
"uri": "https://media.axprod.net/TestVectors/v7-Clear/Manifest.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "Axinom Clear MultiPeriod - DASH, 4k, HEVC",
"uri": "https://media.axprod.net/TestVectors/v7-Clear/Manifest_MultiPeriod.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "DASH-IF simulated live",
"uri": "https://livesim.dashif.org/livesim/testpic_2s/Manifest.mpd",
"mimetype": "application/dash+xml",
"features": ["live"]
},
{
"name": "Tears of Steal - Widevine (Unified Streaming)",
"uri": "https://demo.unified-streaming.com/video/tears-of-steel/tears-of-steel-dash-widevine.ism/.mpd",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.widevine.alpha": "https://widevine-proxy.appspot.com/proxy"
}
},
{
"name": "Tears of Steal - PlayReady (Unified Streaming)",
"uri": "https://demo.unified-streaming.com/video/tears-of-steel/tears-of-steel-dash-playready.ism/.mpd",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.microsoft.playready": "https://test.playready.microsoft.com/service/rightsmanager.asmx"
}
},
{
"name": "Unified Streaming Live DASH",
"uri": "https://live.unified-streaming.com/scte35/scte35.isml/.mpd",
"mimetype": "application/dash+xml",
"features": ["live"]
},
{
"name": "Unified Streaming Live HLS",
"uri": "https://live.unified-streaming.com/scte35/scte35.isml/.m3u8",
"mimetype": "application/x-mpegurl",
"features": ["live"]
},
{
"name": "DOESN'T WORK - Bayerrischer Rundfunk Recorded Loop - DASH, may not always be available",
"uri": "https://irtdashreference-i.akamaihd.net/dash/live/901161/keepixo1/manifestBR2.mpd",
"mimetype": "application/dash+xml",
"features": ["live"]
},
{
"name": "DOESN'T WORK - Bayerrischer Rundfunk Recorded Loop - HLS, may not always be available",
"uri": "https://irtdashreference-i.akamaihd.net/dash/live/901161/keepixo1/playlistBR2.m3u8",
"mimetype": "application/x-mpegurl",
"features": ["live"]
},
{
"name": "Big Buck Bunny - Azure - DASH, Widevine, PlayReady",
"uri": "https://amssamples.streaming.mediaservices.windows.net/622b189f-ec39-43f2-93a2-201ac4e31ce1/BigBuckBunny.ism/manifest(format=mpd-time-csf)",
"mimetype": "application/dash+xml",
"features": [],
"keySystems": {
"com.widevine.alpha": "https://amssamples.keydelivery.mediaservices.windows.net/Widevine/?KID=1ab45440-532c-4399-94dc-5c5ad9584bac",
"com.microsoft.playready": "https://amssamples.keydelivery.mediaservices.windows.net/PlayReady/"
}
},
{
"name": "Big Buck Bunny Audio only, groups have same uri as renditons",
"uri": "https://d2zihajmogu5jn.cloudfront.net/audio-only-dupe-groups/prog_index.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Big Buck Bunny Demuxed av, audio only rendition same as group",
"uri": "https://d2zihajmogu5jn.cloudfront.net/demuxed-ts-with-audio-only-rendition/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "sidx v1 dash",
"uri": "https://d2zihajmogu5jn.cloudfront.net/sidx-v1-dash/Dog.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "fmp4 x264/flac no manifest codecs",
"uri": "https://d2zihajmogu5jn.cloudfront.net/fmp4-flac-no-manifest-codecs/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "fmp4 x264/opus no manifest codecs",
"uri": "https://d2zihajmogu5jn.cloudfront.net/fmp4-opus-no-manifest-codecs/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "fmp4 h264/aac no manifest codecs",
"uri": "https://d2zihajmogu5jn.cloudfront.net/fmp4-muxed-no-playlist-codecs/index.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "ts one valid codec among many invalid",
"uri": "https://d2zihajmogu5jn.cloudfront.net/ts-one-valid-many-invalid-codecs/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Legacy AVC Codec",
"uri": "https://d2zihajmogu5jn.cloudfront.net/legacy-avc-codec/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Pseudo-Live PDT test source",
"uri": "https://d2zihajmogu5jn.cloudfront.net/pdt-test-source/no-endlist.m3u8",
"mimetype": "application/x-mpegurl",
"features": ["live"]
},
{
"name": "PDT test source",
"uri": "https://d2zihajmogu5jn.cloudfront.net/pdt-test-source/endlist.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "audio only dash, two groups",
"uri": "https://d2zihajmogu5jn.cloudfront.net/audio-only-dash/dash.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "video only dash, two renditions",
"uri": "https://d2zihajmogu5jn.cloudfront.net/video-only-dash/dash.mpd",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "encrypted init segment",
"uri": "https://d2zihajmogu5jn.cloudfront.net/encrypted-init-segment/master.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
},
{
"name": "Dash data uri for https://dash.akamaized.net/akamai/bbb_30fps/bbb_30fps.mpd",
"uri": "data:application/dash+xml;charset=utf-8,%3CMPD%20mediaPresentationDuration=%22PT634.566S%22%20minBufferTime=%22PT2.00S%22%20profiles=%22urn:hbbtv:dash:profile:isoff-live:2012,urn:mpeg:dash:profile:isoff-live:2011%22%20type=%22static%22%20xmlns=%22urn:mpeg:dash:schema:mpd:2011%22%20xmlns:xsi=%22http://www.w3.org/2001/XMLSchema-instance%22%20xsi:schemaLocation=%22urn:mpeg:DASH:schema:MPD:2011%20DASH-MPD.xsd%22%3E%20%3CBaseURL%3Ehttps://dash.akamaized.net/akamai/bbb_30fps/%3C/BaseURL%3E%20%3CPeriod%3E%20%20%3CAdaptationSet%20mimeType=%22video/mp4%22%20contentType=%22video%22%20subsegmentAlignment=%22true%22%20subsegmentStartsWithSAP=%221%22%20par=%2216:9%22%3E%20%20%20%3CSegmentTemplate%20duration=%22120%22%20timescale=%2230%22%20media=%22$RepresentationID$/$RepresentationID$_$Number$.m4v%22%20startNumber=%221%22%20initialization=%22$RepresentationID$/$RepresentationID$_0.m4v%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_30fps_1024x576_2500k%22%20codecs=%22avc1.64001f%22%20bandwidth=%223134488%22%20width=%221024%22%20height=%22576%22%20frameRate=%2230%22%20sar=%221:1%22%20scanType=%22progressive%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_30fps_1280x720_4000k%22%20codecs=%22avc1.64001f%22%20bandwidth=%224952892%22%20width=%221280%22%20height=%22720%22%20frameRate=%2230%22%20sar=%221:1%22%20scanType=%22progressive%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_30fps_1920x1080_8000k%22%20codecs=%22avc1.640028%22%20bandwidth=%229914554%22%20width=%221920%22%20height=%221080%22%20frameRate=%2230%22%20sar=%221:1%22%20scanType=%22progressive%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_30fps_320x180_200k%22%20codecs=%22avc1.64000d%22%20bandwidth=%22254320%22%20width=%22320%22%20height=%22180%22%20frameRate=%2230%22%20sar=%221:1%22%20scanType=%22progressive%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_30fps_320x180_400k%22%20codecs=%22avc1.64000d%22%20bandwidth=%22507246%22%20width=%22320%22%20height=%22180%22%20frameRate=%2230%22%20sar=%221:1%22%20scanType=%22progressive%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_30fps_480x270_600k%22%20codecs=%22avc1.640015%22%20bandwidth=%22759798%22%20width=%22480%22%20height=%22270%22%20frameRate=%2230%22%20sar=%221:1%22%20scanType=%22progressive%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_30fps_640x360_1000k%22%20codecs=%22avc1.64001e%22%20bandwidth=%221254758%22%20width=%22640%22%20height=%22360%22%20frameRate=%2230%22%20sar=%221:1%22%20scanType=%22progressive%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_30fps_640x360_800k%22%20codecs=%22avc1.64001e%22%20bandwidth=%221013310%22%20width=%22640%22%20height=%22360%22%20frameRate=%2230%22%20sar=%221:1%22%20scanType=%22progressive%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_30fps_768x432_1500k%22%20codecs=%22avc1.64001e%22%20bandwidth=%221883700%22%20width=%22768%22%20height=%22432%22%20frameRate=%2230%22%20sar=%221:1%22%20scanType=%22progressive%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_30fps_3840x2160_12000k%22%20codecs=%22avc1.640033%22%20bandwidth=%2214931538%22%20width=%223840%22%20height=%222160%22%20frameRate=%2230%22%20sar=%221:1%22%20scanType=%22progressive%22/%3E%20%20%3C/AdaptationSet%3E%20%20%3CAdaptationSet%20mimeType=%22audio/mp4%22%20contentType=%22audio%22%20subsegmentAlignment=%22true%22%20subsegmentStartsWithSAP=%221%22%3E%20%20%20%3CAccessibility%20schemeIdUri=%22urn:tva:metadata:cs:AudioPurposeCS:2007%22%20value=%226%22/%3E%20%20%20%3CRole%20schemeIdUri=%22urn:mpeg:dash:role:2011%22%20value=%22main%22/%3E%20%20%20%3CSegmentTemplate%20duration=%22192512%22%20timescale=%2248000%22%20m
edia=%22$RepresentationID$/$RepresentationID$_$Number$.m4a%22%20startNumber=%221%22%20initialization=%22$RepresentationID$/$RepresentationID$_0.m4a%22/%3E%20%20%20%3CRepresentation%20id=%22bbb_a64k%22%20codecs=%22mp4a.40.5%22%20bandwidth=%2267071%22%20audioSamplingRate=%2248000%22%3E%20%20%20%20%3CAudioChannelConfiguration%20schemeIdUri=%22urn:mpeg:dash:23003:3:audio_channel_configuration:2011%22%20value=%222%22/%3E%20%20%20%3C/Representation%3E%20%20%3C/AdaptationSet%3E%20%3C/Period%3E%3C/MPD%3E",
"mimetype": "application/dash+xml",
"features": []
},
{
"name": "HLS data uri for https://d2zihajmogu5jn.cloudfront.net/bipbop-advanced/bipbop_16x9_variant.m3u8",
"uri": "data:application/x-mpegurl;charset=utf-8,%23EXTM3U%0D%0A%0D%0A%23EXT-X-MEDIA%3ATYPE%3DAUDIO%2CGROUP-ID%3D%22bipbop_audio%22%2CLANGUAGE%3D%22eng%22%2CNAME%3D%22BipBop%20Audio%201%22%2CAUTOSELECT%3DYES%2CDEFAULT%3DYES%0D%0A%23EXT-X-MEDIA%3ATYPE%3DAUDIO%2CGROUP-ID%3D%22bipbop_audio%22%2CLANGUAGE%3D%22eng%22%2CNAME%3D%22BipBop%20Audio%202%22%2CAUTOSELECT%3DNO%2CDEFAULT%3DNO%2CURI%3D%22https%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Falternate_audio_aac_sinewave%2Fprog_index.m3u8%22%0D%0A%0D%0A%0D%0A%23EXT-X-MEDIA%3ATYPE%3DSUBTITLES%2CGROUP-ID%3D%22subs%22%2CNAME%3D%22English%22%2CDEFAULT%3DYES%2CAUTOSELECT%3DYES%2CFORCED%3DNO%2CLANGUAGE%3D%22en%22%2CCHARACTERISTICS%3D%22public.accessibility.transcribes-spoken-dialog%2C%20public.accessibility.describes-music-and-sound%22%2CURI%3D%22https%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fsubtitles%2Feng%2Fprog_index.m3u8%22%0D%0A%23EXT-X-MEDIA%3ATYPE%3DSUBTITLES%2CGROUP-ID%3D%22subs%22%2CNAME%3D%22English%20%28Forced%29%22%2CDEFAULT%3DNO%2CAUTOSELECT%3DNO%2CFORCED%3DYES%2CLANGUAGE%3D%22en%22%2CURI%3D%22https%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fsubtitles%2Feng_forced%2Fprog_index.m3u8%22%0D%0A%23EXT-X-MEDIA%3ATYPE%3DSUBTITLES%2CGROUP-ID%3D%22subs%22%2CNAME%3D%22Fran%C3%83%C2%A7ais%22%2CDEFAULT%3DNO%2CAUTOSELECT%3DYES%2CFORCED%3DNO%2CLANGUAGE%3D%22fr%22%2CCHARACTERISTICS%3D%22public.accessibility.transcribes-spoken-dialog%2C%20public.accessibility.describes-music-and-sound%22%2CURI%3D%22https%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fsubtitles%2Ffra%2Fprog_index.m3u8%22%0D%0A%23EXT-X-MEDIA%3ATYPE%3DSUBTITLES%2CGROUP-ID%3D%22subs%22%2CNAME%3D%22Fran%C3%83%C2%A7ais%20%28Forced%29%22%2CDEFAULT%3DNO%2CAUTOSELECT%3DNO%2CFORCED%3DYES%2CLANGUAGE%3D%22fr%22%2CURI%3D%22https%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fsubtitles%2Ffra_forced%2Fprog_index.m3u8%22%0D%0A%23EXT-X-MEDIA%3ATYPE%3DSUBTITLES%2CGROUP-ID%3D%22subs%22%2CNAME%3D%22Espa%C3%83%C2%B1ol%22%2CDEFAULT%3DNO%2CAUTOSELECT%3DYES%2CFORCED%3DNO%2CLANGUAGE%3D%22es%22%2CCHARACTERISTICS%3D%22public.accessibility.transcribes-spoken-dialog%2C%20public.accessibility.describes-music-and-sound%22%2CURI%3D%22https%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fsubtitles%2Fspa%2Fprog_index.m3u8%22%0D%0A%23EXT-X-MEDIA%3ATYPE%3DSUBTITLES%2CGROUP-ID%3D%22subs%22%2CNAME%3D%22Espa%C3%83%C2%B1ol%20%28Forced%29%22%2CDEFAULT%3DNO%2CAUTOSELECT%3DNO%2CFORCED%3DYES%2CLANGUAGE%3D%22es%22%2CURI%3D%22https%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fsubtitles%2Fspa_forced%2Fprog_index.m3u8%22%0D%0A%23EXT-X-MEDIA%3ATYPE%3DSUBTITLES%2CGROUP-ID%3D%22subs%22%2CNAME%3D%22%C3%A6%C2%97%C2%A5%C3%A6%C2%9C%C2%AC%C3%A8%C2%AA%C2%9E%22%2CDEFAULT%3DNO%2CAUTOSELECT%3DYES%2CFORCED%3DNO%2CLANGUAGE%3D%22ja%22%2CCHARACTERISTICS%3D%22public.accessibility.transcribes-spoken-dialog%2C%20public.accessibility.describes-music-and-sound%22%2CURI%3D%22https%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fsubtitles%2Fjpn%2Fprog_index.m3u8%22%0D%0A%23EXT-X-MEDIA%3ATYPE%3DSUBTITLES%2CGROUP-ID%3D%22subs%22%2CNAME%3D%22%C3%A6%C2%97%C2%A5%C3%A6%C2%9C%C2%AC%C3%A8%C2%AA%C2%9E%20%28Forced%29%22%2CDEFAULT%3DNO%2CAUTOSELECT%3DNO%2CFORCED%3DYES%2CLANGUAGE%3D%22ja%22%2CURI%3D%22https%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fsubtitles%2Fjpn_forced%2Fprog_index.m3u8%22%0D%0A%0D%0A%0D%0A%23EXT-X-STREAM-INF%3ABANDWIDTH%3D263851%2CCODECS%3D%22mp4a.40.2%2C%20avc1.4d400d%22%2CRESOLUTION%3D416x234%2CAUDIO%3D%22bipbop_audio%22%2CSUBTITLE
S%3D%22subs%22%0D%0Ahttps%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fgear1%2Fprog_index.m3u8%0D%0A%0D%0A%23EXT-X-STREAM-INF%3ABANDWIDTH%3D577610%2CCODECS%3D%22mp4a.40.2%2C%20avc1.4d401e%22%2CRESOLUTION%3D640x360%2CAUDIO%3D%22bipbop_audio%22%2CSUBTITLES%3D%22subs%22%0D%0Ahttps%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fgear2%2Fprog_index.m3u8%0D%0A%0D%0A%23EXT-X-STREAM-INF%3ABANDWIDTH%3D915905%2CCODECS%3D%22mp4a.40.2%2C%20avc1.4d401f%22%2CRESOLUTION%3D960x540%2CAUDIO%3D%22bipbop_audio%22%2CSUBTITLES%3D%22subs%22%0D%0Ahttps%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fgear3%2Fprog_index.m3u8%0D%0A%0D%0A%23EXT-X-STREAM-INF%3ABANDWIDTH%3D1030138%2CCODECS%3D%22mp4a.40.2%2C%20avc1.4d401f%22%2CRESOLUTION%3D1280x720%2CAUDIO%3D%22bipbop_audio%22%2CSUBTITLES%3D%22subs%22%0D%0Ahttps%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fgear4%2Fprog_index.m3u8%0D%0A%0D%0A%23EXT-X-STREAM-INF%3ABANDWIDTH%3D1924009%2CCODECS%3D%22mp4a.40.2%2C%20avc1.4d401f%22%2CRESOLUTION%3D1920x1080%2CAUDIO%3D%22bipbop_audio%22%2CSUBTITLES%3D%22subs%22%0D%0Ahttps%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fgear5%2Fprog_index.m3u8%0D%0A%0D%0A%23EXT-X-STREAM-INF%3ABANDWIDTH%3D41457%2CCODECS%3D%22mp4a.40.2%22%2CAUDIO%3D%22bipbop_audio%22%2CSUBTITLES%3D%22subs%22%0D%0Ahttps%3A%2F%2Fd2zihajmogu5jn.cloudfront.net%2Fbipbop-advanced%2Fgear0%2Fprog_index.m3u8",
"mimetype": "application/x-mpegurl",
"features": []
}
]

101
node_modules/@videojs/http-streaming/src/ad-cue-tags.js generated vendored Normal file
View file

@ -0,0 +1,101 @@
/**
* @file ad-cue-tags.js
*/
import window from 'global/window';
/**
* Searches for an ad cue that overlaps with the given mediaTime
*
* @param {Object} track
* the track to find the cue for
*
* @param {number} mediaTime
* the time to find the cue at
*
* @return {Object|null}
* the found cue or null
*/
export const findAdCue = function(track, mediaTime) {
const cues = track.cues;
for (let i = 0; i < cues.length; i++) {
const cue = cues[i];
if (mediaTime >= cue.adStartTime && mediaTime <= cue.adEndTime) {
return cue;
}
}
return null;
};
export const updateAdCues = function(media, track, offset = 0) {
if (!media.segments) {
return;
}
let mediaTime = offset;
let cue;
for (let i = 0; i < media.segments.length; i++) {
const segment = media.segments[i];
if (!cue) {
// Since the cues will span for at least the segment duration, adding a fudge
// factor of half segment duration will prevent duplicate cues from being
// created when timing info is not exact (e.g. cue start time initialized
// at 10.006677, but next call mediaTime is 10.003332 )
cue = findAdCue(track, mediaTime + (segment.duration / 2));
}
if (cue) {
if ('cueIn' in segment) {
// Found a CUE-IN so end the cue
cue.endTime = mediaTime;
cue.adEndTime = mediaTime;
mediaTime += segment.duration;
cue = null;
continue;
}
if (mediaTime < cue.endTime) {
// Already processed this mediaTime for this cue
mediaTime += segment.duration;
continue;
}
// otherwise extend cue until a CUE-IN is found
cue.endTime += segment.duration;
} else {
if ('cueOut' in segment) {
cue = new window.VTTCue(
mediaTime,
mediaTime + segment.duration,
segment.cueOut
);
cue.adStartTime = mediaTime;
// Assumes tag format to be
// #EXT-X-CUE-OUT:30
cue.adEndTime = mediaTime + parseFloat(segment.cueOut);
track.addCue(cue);
}
if ('cueOutCont' in segment) {
// Entered into the middle of an ad cue
// Assumes tag format to be
// #EXT-X-CUE-OUT-CONT:10/30
const [adOffset, adTotal] = segment.cueOutCont.split('/').map(parseFloat);
cue = new window.VTTCue(
mediaTime,
mediaTime + segment.duration,
''
);
cue.adStartTime = mediaTime - adOffset;
cue.adEndTime = cue.adStartTime + adTotal;
track.addCue(cue);
}
}
mediaTime += segment.duration;
}
};
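As a usage sketch of the updateAdCues export above (assuming a browser-like environment where window.VTTCue exists), a stub text track shows how CUE-OUT/CUE-IN tags are folded into a single ad cue:

const track = {
  cues: [],
  addCue(cue) {
    this.cues.push(cue);
  }
};

// three 10 second segments; the ad advertises 30s but a CUE-IN arrives at 20s
updateAdCues({
  segments: [
    {duration: 10, cueOut: '30'},
    {duration: 10},
    {duration: 10, cueIn: ''}
  ]
}, track);

// track.cues[0] now spans 0-20: adEndTime starts at 30 (the CUE-OUT value)
// and is clamped to 20 once the CUE-IN segment is reached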

129
node_modules/@videojs/http-streaming/src/bin-utils.js generated vendored Normal file
View file

@ -0,0 +1,129 @@
/**
* @file bin-utils.js
*/
/**
* convert a TimeRange to text
*
* @param {TimeRange} range the timerange to use for conversion
* @param {number} i the iterator on the range to convert
* @return {string} the range in string format
*/
const textRange = function(range, i) {
return range.start(i) + '-' + range.end(i);
};
/**
* format a number as hex string
*
* @param {number} e The number
* @param {number} i the iterator
* @return {string} the hex formatted number as a string
*/
const formatHexString = function(e, i) {
const value = e.toString(16);
return '00'.substring(0, 2 - value.length) + value + (i % 2 ? ' ' : '');
};
const formatAsciiString = function(e) {
if (e >= 0x20 && e < 0x7e) {
return String.fromCharCode(e);
}
return '.';
};
/**
* Creates an object for sending to a web worker modifying properties that are TypedArrays
* into a new object with separated properties for the buffer, byteOffset, and byteLength.
*
* @param {Object} message
* Object of properties and values to send to the web worker
* @return {Object}
* Modified message with TypedArray values expanded
* @function createTransferableMessage
*/
export const createTransferableMessage = function(message) {
const transferable = {};
Object.keys(message).forEach((key) => {
const value = message[key];
if (ArrayBuffer.isView(value)) {
transferable[key] = {
bytes: value.buffer,
byteOffset: value.byteOffset,
byteLength: value.byteLength
};
} else {
transferable[key] = value;
}
});
return transferable;
};
/**
* Returns a unique string identifier for a media initialization
* segment.
*
* @param {Object} initSegment
* the init segment object.
*
* @return {string} the generated init segment id
*/
export const initSegmentId = function(initSegment) {
const byterange = initSegment.byterange || {
length: Infinity,
offset: 0
};
return [
byterange.length, byterange.offset, initSegment.resolvedUri
].join(',');
};
/**
* Returns a unique string identifier for a media segment key.
*
* @param {Object} key the encryption key
* @return {string} the unique id for the media segment key.
*/
export const segmentKeyId = function(key) {
return key.resolvedUri;
};
/**
* utils to help dump binary data to the console
*
* @param {Array|TypedArray} data
* data to dump to a string
*
* @return {string} the data as a hex string.
*/
export const hexDump = (data) => {
const bytes = Array.prototype.slice.call(data);
const step = 16;
let result = '';
let hex;
let ascii;
for (let j = 0; j < bytes.length / step; j++) {
hex = bytes.slice(j * step, j * step + step).map(formatHexString).join('');
ascii = bytes.slice(j * step, j * step + step).map(formatAsciiString).join('');
result += hex + ' ' + ascii + '\n';
}
return result;
};
export const tagDump = ({ bytes }) => hexDump(bytes);
export const textRanges = (ranges) => {
let result = '';
let i;
for (i = 0; i < ranges.length; i++) {
result += textRange(ranges, i) + ' ';
}
return result;
};
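A quick sketch of two of the exports above (output shown approximately; `worker` is a hypothetical Web Worker handle):

// hexDump renders hex pairs with an ascii gutter
hexDump(new Uint8Array([0x47, 0x40, 0x11, 0x10]));
// -> '4740 1110  G@..\n'

// createTransferableMessage unwraps typed arrays so their ArrayBuffers can be
// listed in a postMessage transfer list
const message = createTransferableMessage({action: 'push', data: new Uint8Array([1, 2])});
worker.postMessage(message, [message.data.bytes]);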

21
node_modules/@videojs/http-streaming/src/config.js generated vendored Normal file
View file

@ -0,0 +1,21 @@
export default {
GOAL_BUFFER_LENGTH: 30,
MAX_GOAL_BUFFER_LENGTH: 60,
BACK_BUFFER_LENGTH: 30,
GOAL_BUFFER_LENGTH_RATE: 1,
// 0.5 MB/s
INITIAL_BANDWIDTH: 4194304,
// A fudge factor to apply to advertised playlist bitrates to account for
// temporary fluctuations in client bandwidth
BANDWIDTH_VARIANCE: 1.2,
// How much of the buffer must be filled before we consider upswitching
BUFFER_LOW_WATER_LINE: 0,
MAX_BUFFER_LOW_WATER_LINE: 30,
// TODO: Remove this when experimentalBufferBasedABR is removed
EXPERIMENTAL_MAX_BUFFER_LOW_WATER_LINE: 16,
BUFFER_LOW_WATER_LINE_RATE: 1,
// If the buffer is greater than the high water line, we won't switch down
BUFFER_HIGH_WATER_LINE: 30
};
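These defaults can be overridden at runtime; a minimal sketch, assuming the videojs.Vhs export described in the VHS README:

import videojs from 'video.js';

// buffer further ahead than the default 30 seconds
videojs.Vhs.GOAL_BUFFER_LENGTH = 60;
videojs.Vhs.MAX_GOAL_BUFFER_LENGTH = 90;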

View file

@ -0,0 +1,855 @@
import videojs from 'video.js';
import {
parse as parseMpd,
addSidxSegmentsToPlaylist,
generateSidxKey,
parseUTCTiming
} from 'mpd-parser';
import {
refreshDelay,
updateMaster as updatePlaylist,
isPlaylistUnchanged
} from './playlist-loader';
import { resolveUrl, resolveManifestRedirect } from './resolve-url';
import parseSidx from 'mux.js/lib/tools/parse-sidx';
import { segmentXhrHeaders } from './xhr';
import window from 'global/window';
import {
forEachMediaGroup,
addPropertiesToMaster
} from './manifest';
import containerRequest from './util/container-request.js';
import {toUint8} from '@videojs/vhs-utils/es/byte-helpers';
import logger from './util/logger';
const { EventTarget, mergeOptions } = videojs;
const dashPlaylistUnchanged = function(a, b) {
if (!isPlaylistUnchanged(a, b)) {
return false;
}
// For DASH, the check above will often report "unchanged" even though the
// playlist actually has changed, because mediaSequence isn't a DASH concept
// and we often just set it to 1, so two playlists with the same number of
// segments look identical. For DASH we therefore also need to verify that
// the underlying segments themselves are the same.
// if sidx changed then the playlists are different.
if (a.sidx && b.sidx && (a.sidx.offset !== b.sidx.offset || a.sidx.length !== b.sidx.length)) {
return false;
} else if ((!a.sidx && b.sidx) || (a.sidx && !b.sidx)) {
return false;
}
// one or the other does not have segments
// there was a change.
if (a.segments && !b.segments || !a.segments && b.segments) {
return false;
}
// neither has segments nothing changed
if (!a.segments && !b.segments) {
return true;
}
// check segments themselves
for (let i = 0; i < a.segments.length; i++) {
const aSegment = a.segments[i];
const bSegment = b.segments[i];
// if uris are different between segments there was a change
if (aSegment.uri !== bSegment.uri) {
return false;
}
// neither segment has a byterange, there will be no byterange change.
if (!aSegment.byterange && !bSegment.byterange) {
continue;
}
const aByterange = aSegment.byterange;
const bByterange = bSegment.byterange;
// if byterange only exists on one of the segments, there was a change.
if ((aByterange && !bByterange) || (!aByterange && bByterange)) {
return false;
}
// if both segments have byterange with different offsets, there was a change.
if (aByterange.offset !== bByterange.offset || aByterange.length !== bByterange.length) {
return false;
}
}
// if everything was the same with segments, this is the same playlist.
return true;
};
/**
* Parses the master XML string and updates playlist URI references.
*
* @param {Object} config
* Object of arguments
* @param {string} config.masterXml
* The mpd XML
* @param {string} config.srcUrl
* The mpd URL
* @param {Date} config.clientOffset
* A time difference between server and client
* @param {Object} config.sidxMapping
* SIDX mappings for moof/mdat URIs and byte ranges
* @return {Object}
* The parsed mpd manifest object
*/
export const parseMasterXml = ({ masterXml, srcUrl, clientOffset, sidxMapping }) => {
const master = parseMpd(masterXml, {
manifestUri: srcUrl,
clientOffset,
sidxMapping
});
addPropertiesToMaster(master, srcUrl);
return master;
};
/**
* Returns a new master manifest that is the result of merging an updated master manifest
* into the original version.
*
* @param {Object} oldMaster
* The old parsed mpd object
* @param {Object} newMaster
* The updated parsed mpd object
* @return {Object}
* A new object representing the original master manifest with the updated media
* playlists merged in
*/
export const updateMaster = (oldMaster, newMaster, sidxMapping) => {
let noChanges = true;
let update = mergeOptions(oldMaster, {
// These are top level properties that can be updated
duration: newMaster.duration,
minimumUpdatePeriod: newMaster.minimumUpdatePeriod
});
// First update the playlists in playlist list
for (let i = 0; i < newMaster.playlists.length; i++) {
const playlist = newMaster.playlists[i];
if (playlist.sidx) {
const sidxKey = generateSidxKey(playlist.sidx);
// add sidx segments to the playlist if we have all the sidx info already
if (sidxMapping && sidxMapping[sidxKey] && sidxMapping[sidxKey].sidx) {
addSidxSegmentsToPlaylist(playlist, sidxMapping[sidxKey].sidx, playlist.sidx.resolvedUri);
}
}
const playlistUpdate = updatePlaylist(update, playlist, dashPlaylistUnchanged);
if (playlistUpdate) {
update = playlistUpdate;
noChanges = false;
}
}
// Then update media group playlists
forEachMediaGroup(newMaster, (properties, type, group, label) => {
if (properties.playlists && properties.playlists.length) {
const id = properties.playlists[0].id;
const playlistUpdate = updatePlaylist(update, properties.playlists[0], dashPlaylistUnchanged);
if (playlistUpdate) {
update = playlistUpdate;
// update the playlist reference within media groups
update.mediaGroups[type][group][label].playlists[0] = update.playlists[id];
noChanges = false;
}
}
});
if (newMaster.minimumUpdatePeriod !== oldMaster.minimumUpdatePeriod) {
noChanges = false;
}
if (noChanges) {
return null;
}
return update;
};
// SIDX should be equivalent if the URI and byteranges of the SIDX match.
// If the SIDXs have maps, the two maps should match,
// both `a` and `b` missing SIDXs is considered matching.
// If `a` or `b` but not both have a map, they aren't matching.
const equivalentSidx = (a, b) => {
const neitherMap = Boolean(!a.map && !b.map);
const equivalentMap = neitherMap || Boolean(a.map && b.map &&
a.map.byterange.offset === b.map.byterange.offset &&
a.map.byterange.length === b.map.byterange.length);
return equivalentMap &&
a.uri === b.uri &&
a.byterange.offset === b.byterange.offset &&
a.byterange.length === b.byterange.length;
};
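// As a quick illustration of the rule above (hypothetical values): two
// map-less sidx references to the same bytes of the same URI are equivalent.
//
//   equivalentSidx(
//     {uri: 'video.mp4', byterange: {offset: 0, length: 600}},
//     {uri: 'video.mp4', byterange: {offset: 0, length: 600}}
//   ); // -> true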
// exported for testing
export const compareSidxEntry = (playlists, oldSidxMapping) => {
const newSidxMapping = {};
for (const id in playlists) {
const playlist = playlists[id];
const currentSidxInfo = playlist.sidx;
if (currentSidxInfo) {
const key = generateSidxKey(currentSidxInfo);
if (!oldSidxMapping[key]) {
break;
}
const savedSidxInfo = oldSidxMapping[key].sidxInfo;
if (equivalentSidx(savedSidxInfo, currentSidxInfo)) {
newSidxMapping[key] = oldSidxMapping[key];
}
}
}
return newSidxMapping;
};
/**
* A function that filters out changed items as they need to be requested separately.
*
* The method is exported for testing
*
* @param {Object} master the parsed mpd XML returned via mpd-parser
* @param {Object} oldSidxMapping the SIDX to compare against
*/
export const filterChangedSidxMappings = (master, oldSidxMapping) => {
const videoSidx = compareSidxEntry(master.playlists, oldSidxMapping);
let mediaGroupSidx = videoSidx;
forEachMediaGroup(master, (properties, mediaType, groupKey, labelKey) => {
if (properties.playlists && properties.playlists.length) {
const playlists = properties.playlists;
mediaGroupSidx = mergeOptions(
mediaGroupSidx,
compareSidxEntry(playlists, oldSidxMapping)
);
}
});
return mediaGroupSidx;
};
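// Illustrative usage (editor's sketch; keys come from generateSidxKey and are
// treated as opaque here): entries whose playlist still references the same
// sidx uri and byterange are carried over; changed entries are dropped so
// their sidx boxes will be re-requested. This mirrors the call in
// refreshXml_ below:
//
//   this.masterPlaylistLoader_.sidxMapping_ = filterChangedSidxMappings(
//     this.masterPlaylistLoader_.master,
//     this.masterPlaylistLoader_.sidxMapping_
//   );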
export default class DashPlaylistLoader extends EventTarget {
// DashPlaylistLoader must accept either a src url or a playlist because subsequent
// playlist loader setups from media groups will expect to be able to pass a playlist
// (since there aren't external URLs to media playlists with DASH)
constructor(srcUrlOrPlaylist, vhs, options = { }, masterPlaylistLoader) {
super();
this.masterPlaylistLoader_ = masterPlaylistLoader || this;
if (!masterPlaylistLoader) {
this.isMaster_ = true;
}
const { withCredentials = false, handleManifestRedirects = false } = options;
this.vhs_ = vhs;
this.withCredentials = withCredentials;
this.handleManifestRedirects = handleManifestRedirects;
if (!srcUrlOrPlaylist) {
throw new Error('A non-empty playlist URL or object is required');
}
// event naming?
this.on('minimumUpdatePeriod', () => {
this.refreshXml_();
});
// live playlist staleness timeout
this.on('mediaupdatetimeout', () => {
this.refreshMedia_(this.media().id);
});
this.state = 'HAVE_NOTHING';
this.loadedPlaylists_ = {};
this.logger_ = logger('DashPlaylistLoader');
// initialize the loader state
// The masterPlaylistLoader will be created with a string
if (this.isMaster_) {
this.masterPlaylistLoader_.srcUrl = srcUrlOrPlaylist;
// TODO: reset sidxMapping between period changes
// once multi-period is refactored
this.masterPlaylistLoader_.sidxMapping_ = {};
} else {
this.childPlaylist_ = srcUrlOrPlaylist;
}
}
requestErrored_(err, request, startingState) {
// disposed
if (!this.request) {
return true;
}
// pending request is cleared
this.request = null;
if (err) {
// use the provided error object or create one
// based on the request/response
this.error = typeof err === 'object' && !(err instanceof Error) ? err : {
status: request.status,
message: 'DASH request error at URL: ' + request.uri,
response: request.response,
// MEDIA_ERR_NETWORK
code: 2
};
if (startingState) {
this.state = startingState;
}
this.trigger('error');
return true;
}
}
/**
* Verify that the container of the sidx segment can be parsed
* and if it can, get and parse that segment.
*/
addSidxSegments_(playlist, startingState, cb) {
const sidxKey = playlist.sidx && generateSidxKey(playlist.sidx);
// playlist lacks sidx or sidx segments were added to this playlist already.
if (!playlist.sidx || !sidxKey || this.masterPlaylistLoader_.sidxMapping_[sidxKey]) {
// keep this function async
this.mediaRequest_ = window.setTimeout(() => cb(false), 0);
return;
}
// resolve the segment URL relative to the playlist
const uri = resolveManifestRedirect(this.handleManifestRedirects, playlist.sidx.resolvedUri);
const fin = (err, request) => {
if (this.requestErrored_(err, request, startingState)) {
return;
}
const sidxMapping = this.masterPlaylistLoader_.sidxMapping_;
let sidx;
try {
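// editor's note: subarray(8) skips the 8-byte mp4 box header (4-byte size
// plus 4-byte 'sidx' type), since the sidx parser expects only the box payload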
sidx = parseSidx(toUint8(request.response).subarray(8));
} catch (e) {
// sidx parsing failed.
this.requestErrored_(e, request, startingState);
return;
}
sidxMapping[sidxKey] = {
sidxInfo: playlist.sidx,
sidx
};
addSidxSegmentsToPlaylist(playlist, sidx, playlist.sidx.resolvedUri);
return cb(true);
};
this.request = containerRequest(uri, this.vhs_.xhr, (err, request, container, bytes) => {
if (err) {
return fin(err, request);
}
if (!container || container !== 'mp4') {
return fin({
status: request.status,
message: `Unsupported ${container || 'unknown'} container type for sidx segment at URL: ${uri}`,
// response is just bytes in this case
// but we really don't want to return that.
response: '',
playlist,
internal: true,
blacklistDuration: Infinity,
// MEDIA_ERR_NETWORK
code: 2
}, request);
}
// if we already downloaded the sidx bytes in the container request, use them
const {offset, length} = playlist.sidx.byterange;
if (bytes.length >= (length + offset)) {
return fin(err, {
response: bytes.subarray(offset, offset + length),
status: request.status,
uri: request.uri
});
}
// otherwise request sidx bytes
this.request = this.vhs_.xhr({
uri,
responseType: 'arraybuffer',
headers: segmentXhrHeaders({byterange: playlist.sidx.byterange})
}, fin);
});
}
dispose() {
this.trigger('dispose');
this.stopRequest();
this.loadedPlaylists_ = {};
window.clearTimeout(this.minimumUpdatePeriodTimeout_);
window.clearTimeout(this.mediaRequest_);
window.clearTimeout(this.mediaUpdateTimeout);
this.mediaUpdateTimeout = null;
this.mediaRequest_ = null;
this.minimumUpdatePeriodTimeout_ = null;
if (this.masterPlaylistLoader_.createMupOnMedia_) {
this.off('loadedmetadata', this.masterPlaylistLoader_.createMupOnMedia_);
this.masterPlaylistLoader_.createMupOnMedia_ = null;
}
this.off();
}
hasPendingRequest() {
return this.request || this.mediaRequest_;
}
stopRequest() {
if (this.request) {
const oldRequest = this.request;
this.request = null;
oldRequest.onreadystatechange = null;
oldRequest.abort();
}
}
media(playlist) {
// getter
if (!playlist) {
return this.media_;
}
// setter
if (this.state === 'HAVE_NOTHING') {
throw new Error('Cannot switch media playlist from ' + this.state);
}
const startingState = this.state;
// find the playlist object if the target playlist has been specified by URI
if (typeof playlist === 'string') {
if (!this.masterPlaylistLoader_.master.playlists[playlist]) {
throw new Error('Unknown playlist URI: ' + playlist);
}
playlist = this.masterPlaylistLoader_.master.playlists[playlist];
}
const mediaChange = !this.media_ || playlist.id !== this.media_.id;
// switch to previously loaded playlists immediately
if (mediaChange &&
this.loadedPlaylists_[playlist.id] &&
this.loadedPlaylists_[playlist.id].endList) {
this.state = 'HAVE_METADATA';
this.media_ = playlist;
// trigger media change if the active media has been updated
if (mediaChange) {
this.trigger('mediachanging');
this.trigger('mediachange');
}
return;
}
// switching to the active playlist is a no-op
if (!mediaChange) {
return;
}
// switching from an already loaded playlist
if (this.media_) {
this.trigger('mediachanging');
}
this.addSidxSegments_(playlist, startingState, (sidxChanged) => {
// everything is ready just continue to haveMetadata
this.haveMetadata({startingState, playlist});
});
}
haveMetadata({startingState, playlist}) {
this.state = 'HAVE_METADATA';
this.loadedPlaylists_[playlist.id] = playlist;
this.mediaRequest_ = null;
// This will trigger loadedplaylist
this.refreshMedia_(playlist.id);
// fire loadedmetadata the first time a media playlist is loaded
// to resolve setup of media groups
if (startingState === 'HAVE_MASTER') {
this.trigger('loadedmetadata');
} else {
// trigger media change if the active media has been updated
this.trigger('mediachange');
}
}
pause() {
if (this.masterPlaylistLoader_.createMupOnMedia_) {
this.off('loadedmetadata', this.masterPlaylistLoader_.createMupOnMedia_);
this.masterPlaylistLoader_.createMupOnMedia_ = null;
}
this.stopRequest();
window.clearTimeout(this.mediaUpdateTimeout);
this.mediaUpdateTimeout = null;
if (this.isMaster_) {
window.clearTimeout(this.masterPlaylistLoader_.minimumUpdatePeriodTimeout_);
this.masterPlaylistLoader_.minimumUpdatePeriodTimeout_ = null;
}
if (this.state === 'HAVE_NOTHING') {
// If we pause the loader before any data has been retrieved, it's as if we never
// started, so reset to an unstarted state.
this.started = false;
}
}
load(isFinalRendition) {
window.clearTimeout(this.mediaUpdateTimeout);
this.mediaUpdateTimeout = null;
const media = this.media();
if (isFinalRendition) {
const delay = media ? (media.targetDuration / 2) * 1000 : 5 * 1000;
this.mediaUpdateTimeout = window.setTimeout(() => this.load(), delay);
return;
}
// because the playlists are internal to the manifest, load should either load the
// main manifest, or do nothing but trigger an event
if (!this.started) {
this.start();
return;
}
if (media && !media.endList) {
// Check to see if this is the master loader and the MUP was cleared (this happens
// when the loader was paused). `media` should be set at this point since one is always
// set during `start()`.
if (this.isMaster_ && !this.minimumUpdatePeriodTimeout_) {
// Trigger minimumUpdatePeriod to refresh the master manifest
this.trigger('minimumUpdatePeriod');
// Since there was no prior minimumUpdatePeriodTimeout it should be recreated
this.updateMinimumUpdatePeriodTimeout_();
}
this.trigger('mediaupdatetimeout');
} else {
this.trigger('loadedplaylist');
}
}
start() {
this.started = true;
// We don't need to request the master manifest again
// Call this asynchronously to match the xhr request behavior below
if (!this.isMaster_) {
this.mediaRequest_ = window.setTimeout(() => this.haveMaster_(), 0);
return;
}
this.requestMaster_((req, masterChanged) => {
this.haveMaster_();
if (!this.hasPendingRequest() && !this.media_) {
this.media(this.masterPlaylistLoader_.master.playlists[0]);
}
});
}
requestMaster_(cb) {
this.request = this.vhs_.xhr({
uri: this.masterPlaylistLoader_.srcUrl,
withCredentials: this.withCredentials
}, (error, req) => {
if (this.requestErrored_(error, req)) {
if (this.state === 'HAVE_NOTHING') {
this.started = false;
}
return;
}
const masterChanged = req.responseText !== this.masterPlaylistLoader_.masterXml_;
this.masterPlaylistLoader_.masterXml_ = req.responseText;
if (req.responseHeaders && req.responseHeaders.date) {
this.masterLoaded_ = Date.parse(req.responseHeaders.date);
} else {
this.masterLoaded_ = Date.now();
}
this.masterPlaylistLoader_.srcUrl = resolveManifestRedirect(this.handleManifestRedirects, this.masterPlaylistLoader_.srcUrl, req);
if (masterChanged) {
this.handleMaster_();
this.syncClientServerClock_(() => {
return cb(req, masterChanged);
});
return;
}
return cb(req, masterChanged);
});
}
/**
* Parses the master xml for UTCTiming node to sync the client clock to the server
* clock. If the UTCTiming node requires a HEAD or GET request, that request is made.
*
* @param {Function} done
* Function to call when clock sync has completed
*/
syncClientServerClock_(done) {
const utcTiming = parseUTCTiming(this.masterPlaylistLoader_.masterXml_);
// No UTCTiming element found in the mpd. Use Date header from mpd request as the
// server clock
if (utcTiming === null) {
this.masterPlaylistLoader_.clientOffset_ = this.masterLoaded_ - Date.now();
return done();
}
if (utcTiming.method === 'DIRECT') {
this.masterPlaylistLoader_.clientOffset_ = utcTiming.value - Date.now();
return done();
}
this.request = this.vhs_.xhr({
uri: resolveUrl(this.masterPlaylistLoader_.srcUrl, utcTiming.value),
method: utcTiming.method,
withCredentials: this.withCredentials
}, (error, req) => {
// disposed
if (!this.request) {
return;
}
if (error) {
// sync request failed, fall back to using date header from mpd
// TODO: log warning
this.masterPlaylistLoader_.clientOffset_ = this.masterLoaded_ - Date.now();
return done();
}
let serverTime;
if (utcTiming.method === 'HEAD') {
if (!req.responseHeaders || !req.responseHeaders.date) {
// expected date header not preset, fall back to using date header from mpd
// TODO: log warning
serverTime = this.masterLoaded_;
} else {
serverTime = Date.parse(req.responseHeaders.date);
}
} else {
serverTime = Date.parse(req.responseText);
}
this.masterPlaylistLoader_.clientOffset_ = serverTime - Date.now();
done();
});
}
haveMaster_() {
this.state = 'HAVE_MASTER';
if (this.isMaster_) {
// We have the master playlist at this point, so
// trigger this to allow MasterPlaylistController
// to make an initial playlist selection
this.trigger('loadedplaylist');
} else if (!this.media_) {
// no media playlist was specifically selected, so select
// the one the child playlist loader was created with
this.media(this.childPlaylist_);
}
}
handleMaster_() {
// clear media request
this.mediaRequest_ = null;
let newMaster = parseMasterXml({
masterXml: this.masterPlaylistLoader_.masterXml_,
srcUrl: this.masterPlaylistLoader_.srcUrl,
clientOffset: this.masterPlaylistLoader_.clientOffset_,
sidxMapping: this.masterPlaylistLoader_.sidxMapping_
});
const oldMaster = this.masterPlaylistLoader_.master;
// if we have an old master to compare the new master against
if (oldMaster) {
newMaster = updateMaster(oldMaster, newMaster, this.masterPlaylistLoader_.sidxMapping_);
}
// only update master if we have a new master
this.masterPlaylistLoader_.master = newMaster ? newMaster : oldMaster;
const location = this.masterPlaylistLoader_.master.locations && this.masterPlaylistLoader_.master.locations[0];
if (location && location !== this.masterPlaylistLoader_.srcUrl) {
this.masterPlaylistLoader_.srcUrl = location;
}
if (!oldMaster || (newMaster && newMaster.minimumUpdatePeriod !== oldMaster.minimumUpdatePeriod)) {
this.updateMinimumUpdatePeriodTimeout_();
}
return Boolean(newMaster);
}
updateMinimumUpdatePeriodTimeout_() {
const mpl = this.masterPlaylistLoader_;
// cancel any pending creation of mup on media
// a new one will be added if needed.
if (mpl.createMupOnMedia_) {
mpl.off('loadedmetadata', mpl.createMupOnMedia_);
mpl.createMupOnMedia_ = null;
}
// clear any pending timeouts
if (mpl.minimumUpdatePeriodTimeout_) {
window.clearTimeout(mpl.minimumUpdatePeriodTimeout_);
mpl.minimumUpdatePeriodTimeout_ = null;
}
let mup = mpl.master && mpl.master.minimumUpdatePeriod;
// If the minimumUpdatePeriod has a value of 0, that indicates that the current
// MPD has no future validity, so a new one will need to be acquired when new
// media segments are to be made available. Thus, we use the target duration
// in this case
if (mup === 0) {
if (mpl.media()) {
mup = mpl.media().targetDuration * 1000;
} else {
mpl.createMupOnMedia_ = mpl.updateMinimumUpdatePeriodTimeout_;
mpl.one('loadedmetadata', mpl.createMupOnMedia_);
}
}
// If minimumUpdatePeriod is invalid or <= zero (which
// can happen when a live video becomes VOD), skip timeout
// creation.
if (typeof mup !== 'number' || mup <= 0) {
if (mup < 0) {
this.logger_(`found invalid minimumUpdatePeriod of ${mup}, not setting a timeout`);
}
return;
}
this.createMUPTimeout_(mup);
}
createMUPTimeout_(mup) {
const mpl = this.masterPlaylistLoader_;
mpl.minimumUpdatePeriodTimeout_ = window.setTimeout(() => {
mpl.minimumUpdatePeriodTimeout_ = null;
mpl.trigger('minimumUpdatePeriod');
mpl.createMUPTimeout_(mup);
}, mup);
}
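// Timing sketch (editor's note): createMUPTimeout_ re-arms itself, so with
// mup = 2000 the loader fires 'minimumUpdatePeriod' (and therefore
// refreshXml_) every 2 seconds until it is paused or disposed; a mup of 0 is
// replaced above with targetDuration * 1000, e.g. a 6 second target
// duration yields a 6000ms refresh cycle.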
/**
* Sends request to refresh the master xml and updates the parsed master manifest
*/
refreshXml_() {
this.requestMaster_((req, masterChanged) => {
if (!masterChanged) {
return;
}
if (this.media_) {
this.media_ = this.masterPlaylistLoader_.master.playlists[this.media_.id];
}
// This will filter out updated sidx info from the mapping
this.masterPlaylistLoader_.sidxMapping_ = filterChangedSidxMappings(
this.masterPlaylistLoader_.master,
this.masterPlaylistLoader_.sidxMapping_
);
this.addSidxSegments_(this.media(), this.state, (sidxChanged) => {
// TODO: do we need to reload the current playlist?
this.refreshMedia_(this.media().id);
});
});
}
/**
* Refreshes the media playlist by re-parsing the master xml and updating playlist
* references. If this is an alternate loader, the updated parsed manifest is retrieved
* from the master loader.
*/
refreshMedia_(mediaID) {
if (!mediaID) {
throw new Error('refreshMedia_ must take a media id');
}
// For the master loader we have to reparse the master xml
// to re-create segments based on current timing values,
// which may change media. We only skip updating the master
// if this is the first time this.media_ is being set,
// as the master was just parsed in that case.
if (this.media_ && this.isMaster_) {
this.handleMaster_();
}
const playlists = this.masterPlaylistLoader_.master.playlists;
const mediaChanged = !this.media_ || this.media_ !== playlists[mediaID];
if (mediaChanged) {
this.media_ = playlists[mediaID];
} else {
this.trigger('playlistunchanged');
}
if (!this.mediaUpdateTimeout) {
const createMediaUpdateTimeout = () => {
if (this.media().endList) {
return;
}
this.mediaUpdateTimeout = window.setTimeout(() => {
this.trigger('mediaupdatetimeout');
createMediaUpdateTimeout();
}, refreshDelay(this.media(), Boolean(mediaChanged)));
};
createMediaUpdateTimeout();
}
this.trigger('loadedplaylist');
}
}

@@ -0,0 +1,41 @@
/* global self */
import { Decrypter } from 'aes-decrypter';
import { createTransferableMessage } from './bin-utils';
/**
* Our web worker interface so that things can talk to aes-decrypter
* that will be running in a web worker. The scope is passed to this by
* webworkify.
*/
self.onmessage = function(event) {
const data = event.data;
const encrypted = new Uint8Array(
data.encrypted.bytes,
data.encrypted.byteOffset,
data.encrypted.byteLength
);
const key = new Uint32Array(
data.key.bytes,
data.key.byteOffset,
data.key.byteLength / 4
);
const iv = new Uint32Array(
data.iv.bytes,
data.iv.byteOffset,
data.iv.byteLength / 4
);
/* eslint-disable no-new, handle-callback-err */
new Decrypter(
encrypted,
key,
iv,
function(err, bytes) {
self.postMessage(createTransferableMessage({
source: data.source,
decrypted: bytes
}), [bytes.buffer]);
}
);
/* eslint-enable */
};
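// Illustrative usage from the main thread (editor's sketch; `worker` is a
// Worker built from this file, and the field shapes mirror what
// createTransferableMessage produces on the other side):
//
//   const encrypted = new Uint8Array(32); // ciphertext, a multiple of 16 bytes
//   const key = new Uint8Array(16); // 128-bit AES key
//   const iv = new Uint8Array(16); // CBC initialization vector
//   worker.postMessage({
//     source: 1,
//     encrypted: { bytes: encrypted.buffer, byteOffset: 0, byteLength: 32 },
//     key: { bytes: key.buffer, byteOffset: 0, byteLength: 16 },
//     iv: { bytes: iv.buffer, byteOffset: 0, byteLength: 16 }
//   }, [encrypted.buffer, key.buffer, iv.buffer]);
//   worker.onmessage = (e) => {
//     // e.data.decrypted is { bytes, byteOffset, byteLength }
//   };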

@@ -0,0 +1,654 @@
/*! @name @videojs/http-streaming @version 2.5.0 @license Apache-2.0 */
var decrypterWorker = (function () {
'use strict';
function _defineProperties(target, props) {
for (var i = 0; i < props.length; i++) {
var descriptor = props[i];
descriptor.enumerable = descriptor.enumerable || false;
descriptor.configurable = true;
if ("value" in descriptor) descriptor.writable = true;
Object.defineProperty(target, descriptor.key, descriptor);
}
}
function _createClass(Constructor, protoProps, staticProps) {
if (protoProps) _defineProperties(Constructor.prototype, protoProps);
if (staticProps) _defineProperties(Constructor, staticProps);
return Constructor;
}
var createClass = _createClass;
function _inheritsLoose(subClass, superClass) {
subClass.prototype = Object.create(superClass.prototype);
subClass.prototype.constructor = subClass;
subClass.__proto__ = superClass;
}
var inheritsLoose = _inheritsLoose;
/**
* @file stream.js
*/
/**
* A lightweight readable stream implementation that handles event dispatching.
*
* @class Stream
*/
var Stream = /*#__PURE__*/function () {
function Stream() {
this.listeners = {};
}
/**
* Add a listener for a specified event type.
*
* @param {string} type the event name
* @param {Function} listener the callback to be invoked when an event of
* the specified type occurs
*/
var _proto = Stream.prototype;
_proto.on = function on(type, listener) {
if (!this.listeners[type]) {
this.listeners[type] = [];
}
this.listeners[type].push(listener);
}
/**
* Remove a listener for a specified event type.
*
* @param {string} type the event name
* @param {Function} listener a function previously registered for this
* type of event through `on`
* @return {boolean} if we could turn it off or not
*/
;
_proto.off = function off(type, listener) {
if (!this.listeners[type]) {
return false;
}
var index = this.listeners[type].indexOf(listener); // TODO: which is better?
// In Video.js we slice listener functions
// on trigger so that it does not mess up the order
// while we loop through.
//
// Here we slice on off so that the loop in trigger
// can continue using its old reference to loop without
// messing up the order.
this.listeners[type] = this.listeners[type].slice(0);
this.listeners[type].splice(index, 1);
return index > -1;
}
/**
* Trigger an event of the specified type on this stream. Any additional
* arguments to this function are passed as parameters to event listeners.
*
* @param {string} type the event name
*/
;
_proto.trigger = function trigger(type) {
var callbacks = this.listeners[type];
if (!callbacks) {
return;
} // Slicing the arguments on every invocation of this method
// can add a significant amount of overhead. Avoid the
// intermediate object creation for the common case of a
// single callback argument
if (arguments.length === 2) {
var length = callbacks.length;
for (var i = 0; i < length; ++i) {
callbacks[i].call(this, arguments[1]);
}
} else {
var args = Array.prototype.slice.call(arguments, 1);
var _length = callbacks.length;
for (var _i = 0; _i < _length; ++_i) {
callbacks[_i].apply(this, args);
}
}
}
/**
* Destroys the stream and cleans up.
*/
;
_proto.dispose = function dispose() {
this.listeners = {};
}
/**
* Forwards all `data` events on this stream to the destination stream. The
* destination stream should provide a method `push` to receive the data
* events as they arrive.
*
* @param {Stream} destination the stream that will receive all `data` events
* @see http://nodejs.org/api/stream.html#stream_readable_pipe_destination_options
*/
;
_proto.pipe = function pipe(destination) {
this.on('data', function (data) {
destination.push(data);
});
};
return Stream;
}();
/*! @name pkcs7 @version 1.0.4 @license Apache-2.0 */
/**
* Returns the subarray of a Uint8Array without PKCS#7 padding.
*
* @param padded {Uint8Array} unencrypted bytes that have been padded
* @return {Uint8Array} the unpadded bytes
* @see http://tools.ietf.org/html/rfc5652
*/
function unpad(padded) {
return padded.subarray(0, padded.byteLength - padded[padded.byteLength - 1]);
}
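// Worked example (editor's note, padding arithmetic only; real AES-CBC
// blocks are 16 bytes): the final byte states how many padding bytes to
// strip, so a payload ending in two 0x02 bytes loses its last two bytes:
//
//   unpad(new Uint8Array([104, 105, 2, 2])); // -> Uint8Array [104, 105]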
/*! @name aes-decrypter @version 3.1.2 @license Apache-2.0 */
/**
* @file aes.js
*
* This file contains an adaptation of the AES decryption algorithm
* from the Stanford Javascript Cryptography Library. That work is
* covered by the following copyright and permissions notice:
*
* Copyright 2009-2010 Emily Stark, Mike Hamburg, Dan Boneh.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are
* met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials provided
* with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
* BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
* OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
* IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* The views and conclusions contained in the software and documentation
* are those of the authors and should not be interpreted as representing
* official policies, either expressed or implied, of the authors.
*/
/**
* Expand the S-box tables.
*
* @private
*/
var precompute = function precompute() {
var tables = [[[], [], [], [], []], [[], [], [], [], []]];
var encTable = tables[0];
var decTable = tables[1];
var sbox = encTable[4];
var sboxInv = decTable[4];
var i;
var x;
var xInv;
var d = [];
var th = [];
var x2;
var x4;
var x8;
var s;
var tEnc;
var tDec; // Compute double and third tables
for (i = 0; i < 256; i++) {
th[(d[i] = i << 1 ^ (i >> 7) * 283) ^ i] = i;
}
for (x = xInv = 0; !sbox[x]; x ^= x2 || 1, xInv = th[xInv] || 1) {
// Compute sbox
s = xInv ^ xInv << 1 ^ xInv << 2 ^ xInv << 3 ^ xInv << 4;
s = s >> 8 ^ s & 255 ^ 99;
sbox[x] = s;
sboxInv[s] = x; // Compute MixColumns
x8 = d[x4 = d[x2 = d[x]]];
tDec = x8 * 0x1010101 ^ x4 * 0x10001 ^ x2 * 0x101 ^ x * 0x1010100;
tEnc = d[s] * 0x101 ^ s * 0x1010100;
for (i = 0; i < 4; i++) {
encTable[i][x] = tEnc = tEnc << 24 ^ tEnc >>> 8;
decTable[i][s] = tDec = tDec << 24 ^ tDec >>> 8;
}
} // Compactify. Considerable speedup on Firefox.
for (i = 0; i < 5; i++) {
encTable[i] = encTable[i].slice(0);
decTable[i] = decTable[i].slice(0);
}
return tables;
};
var aesTables = null;
/**
* Schedule out an AES key for both encryption and decryption. This
* is a low-level class. Use a cipher mode to do bulk encryption.
*
* @class AES
* @param key {Array} The key as an array of 4, 6 or 8 words.
*/
var AES = /*#__PURE__*/function () {
function AES(key) {
/**
* The expanded S-box and inverse S-box tables. These will be computed
* on the client so that we don't have to send them down the wire.
*
* There are two tables, _tables[0] is for encryption and
* _tables[1] is for decryption.
*
* The first 4 sub-tables are the expanded S-box with MixColumns. The
* last (_tables[0..1][4]) is the S-box itself.
*
* @private
*/
// if we have yet to precompute the S-box tables
// do so now
if (!aesTables) {
aesTables = precompute();
} // then make a copy of that object for use
this._tables = [[aesTables[0][0].slice(), aesTables[0][1].slice(), aesTables[0][2].slice(), aesTables[0][3].slice(), aesTables[0][4].slice()], [aesTables[1][0].slice(), aesTables[1][1].slice(), aesTables[1][2].slice(), aesTables[1][3].slice(), aesTables[1][4].slice()]];
var i;
var j;
var tmp;
var sbox = this._tables[0][4];
var decTable = this._tables[1];
var keyLen = key.length;
var rcon = 1;
if (keyLen !== 4 && keyLen !== 6 && keyLen !== 8) {
throw new Error('Invalid aes key size');
}
var encKey = key.slice(0);
var decKey = [];
this._key = [encKey, decKey]; // schedule encryption keys
for (i = keyLen; i < 4 * keyLen + 28; i++) {
tmp = encKey[i - 1]; // apply sbox
if (i % keyLen === 0 || keyLen === 8 && i % keyLen === 4) {
tmp = sbox[tmp >>> 24] << 24 ^ sbox[tmp >> 16 & 255] << 16 ^ sbox[tmp >> 8 & 255] << 8 ^ sbox[tmp & 255]; // shift rows and add rcon
if (i % keyLen === 0) {
tmp = tmp << 8 ^ tmp >>> 24 ^ rcon << 24;
rcon = rcon << 1 ^ (rcon >> 7) * 283;
}
}
encKey[i] = encKey[i - keyLen] ^ tmp;
} // schedule decryption keys
for (j = 0; i; j++, i--) {
tmp = encKey[j & 3 ? i : i - 4];
if (i <= 4 || j < 4) {
decKey[j] = tmp;
} else {
decKey[j] = decTable[0][sbox[tmp >>> 24]] ^ decTable[1][sbox[tmp >> 16 & 255]] ^ decTable[2][sbox[tmp >> 8 & 255]] ^ decTable[3][sbox[tmp & 255]];
}
}
}
/**
* Decrypt 16 bytes, specified as four 32-bit words.
*
* @param {number} encrypted0 the first word to decrypt
* @param {number} encrypted1 the second word to decrypt
* @param {number} encrypted2 the third word to decrypt
* @param {number} encrypted3 the fourth word to decrypt
* @param {Int32Array} out the array to write the decrypted words
* into
* @param {number} offset the offset into the output array to start
* writing results
* @return {Array} The plaintext.
*/
var _proto = AES.prototype;
_proto.decrypt = function decrypt(encrypted0, encrypted1, encrypted2, encrypted3, out, offset) {
var key = this._key[1]; // state variables a,b,c,d are loaded with pre-whitened data
var a = encrypted0 ^ key[0];
var b = encrypted3 ^ key[1];
var c = encrypted2 ^ key[2];
var d = encrypted1 ^ key[3];
var a2;
var b2;
var c2; // key.length === 2 ?
var nInnerRounds = key.length / 4 - 2;
var i;
var kIndex = 4;
var table = this._tables[1]; // load up the tables
var table0 = table[0];
var table1 = table[1];
var table2 = table[2];
var table3 = table[3];
var sbox = table[4]; // Inner rounds. Cribbed from OpenSSL.
for (i = 0; i < nInnerRounds; i++) {
a2 = table0[a >>> 24] ^ table1[b >> 16 & 255] ^ table2[c >> 8 & 255] ^ table3[d & 255] ^ key[kIndex];
b2 = table0[b >>> 24] ^ table1[c >> 16 & 255] ^ table2[d >> 8 & 255] ^ table3[a & 255] ^ key[kIndex + 1];
c2 = table0[c >>> 24] ^ table1[d >> 16 & 255] ^ table2[a >> 8 & 255] ^ table3[b & 255] ^ key[kIndex + 2];
d = table0[d >>> 24] ^ table1[a >> 16 & 255] ^ table2[b >> 8 & 255] ^ table3[c & 255] ^ key[kIndex + 3];
kIndex += 4;
a = a2;
b = b2;
c = c2;
} // Last round.
for (i = 0; i < 4; i++) {
out[(3 & -i) + offset] = sbox[a >>> 24] << 24 ^ sbox[b >> 16 & 255] << 16 ^ sbox[c >> 8 & 255] << 8 ^ sbox[d & 255] ^ key[kIndex++];
a2 = a;
a = b;
b = c;
c = d;
d = a2;
}
};
return AES;
}();
/**
* A wrapper around the Stream class to use setTimeout
* and run stream "jobs" asynchronously
*
* @class AsyncStream
* @extends Stream
*/
var AsyncStream = /*#__PURE__*/function (_Stream) {
inheritsLoose(AsyncStream, _Stream);
function AsyncStream() {
var _this;
_this = _Stream.call(this, Stream) || this;
_this.jobs = [];
_this.delay = 1;
_this.timeout_ = null;
return _this;
}
/**
* process an async job
*
* @private
*/
var _proto = AsyncStream.prototype;
_proto.processJob_ = function processJob_() {
this.jobs.shift()();
if (this.jobs.length) {
this.timeout_ = setTimeout(this.processJob_.bind(this), this.delay);
} else {
this.timeout_ = null;
}
}
/**
* push a job into the stream
*
* @param {Function} job the job to push into the stream
*/
;
_proto.push = function push(job) {
this.jobs.push(job);
if (!this.timeout_) {
this.timeout_ = setTimeout(this.processJob_.bind(this), this.delay);
}
};
return AsyncStream;
}(Stream);
/**
* Convert network-order (big-endian) bytes into their little-endian
* representation.
*/
var ntoh = function ntoh(word) {
return word << 24 | (word & 0xff00) << 8 | (word & 0xff0000) >> 8 | word >>> 24;
};
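// Worked example (editor's note): each 32-bit word has its byte order
// reversed, e.g. 0x11223344 becomes 0x44332211:
//
//   (ntoh(0x11223344) >>> 0).toString(16); // '44332211'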
/**
* Decrypt bytes using AES-128 with CBC and PKCS#7 padding.
*
* @param {Uint8Array} encrypted the encrypted bytes
* @param {Uint32Array} key the bytes of the decryption key
* @param {Uint32Array} initVector the initialization vector (IV) to
* use for the first round of CBC.
* @return {Uint8Array} the decrypted bytes
*
* @see http://en.wikipedia.org/wiki/Advanced_Encryption_Standard
* @see http://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Cipher_Block_Chaining_.28CBC.29
* @see https://tools.ietf.org/html/rfc2315
*/
var decrypt = function decrypt(encrypted, key, initVector) {
// word-level access to the encrypted bytes
var encrypted32 = new Int32Array(encrypted.buffer, encrypted.byteOffset, encrypted.byteLength >> 2);
var decipher = new AES(Array.prototype.slice.call(key)); // byte and word-level access for the decrypted output
var decrypted = new Uint8Array(encrypted.byteLength);
var decrypted32 = new Int32Array(decrypted.buffer); // temporary variables for working with the IV, encrypted, and
// decrypted data
var init0;
var init1;
var init2;
var init3;
var encrypted0;
var encrypted1;
var encrypted2;
var encrypted3; // iteration variable
var wordIx; // pull out the words of the IV to ensure we don't modify the
// passed-in reference and for easier access
init0 = initVector[0];
init1 = initVector[1];
init2 = initVector[2];
init3 = initVector[3]; // decrypt four word sequences, applying cipher-block chaining (CBC)
// to each decrypted block
for (wordIx = 0; wordIx < encrypted32.length; wordIx += 4) {
// convert big-endian (network order) words into little-endian
// (javascript order)
encrypted0 = ntoh(encrypted32[wordIx]);
encrypted1 = ntoh(encrypted32[wordIx + 1]);
encrypted2 = ntoh(encrypted32[wordIx + 2]);
encrypted3 = ntoh(encrypted32[wordIx + 3]); // decrypt the block
decipher.decrypt(encrypted0, encrypted1, encrypted2, encrypted3, decrypted32, wordIx); // XOR with the IV, and restore network byte-order to obtain the
// plaintext
decrypted32[wordIx] = ntoh(decrypted32[wordIx] ^ init0);
decrypted32[wordIx + 1] = ntoh(decrypted32[wordIx + 1] ^ init1);
decrypted32[wordIx + 2] = ntoh(decrypted32[wordIx + 2] ^ init2);
decrypted32[wordIx + 3] = ntoh(decrypted32[wordIx + 3] ^ init3); // setup the IV for the next round
init0 = encrypted0;
init1 = encrypted1;
init2 = encrypted2;
init3 = encrypted3;
}
return decrypted;
};
/**
* The `Decrypter` class that manages decryption of AES
* data through `AsyncStream` objects and the `decrypt`
* function
*
* @param {Uint8Array} encrypted the encrypted bytes
* @param {Uint32Array} key the bytes of the decryption key
* @param {Uint32Array} initVector the initialization vector (IV) to
* use for the first round of CBC
* @param {Function} done the function to run when done
* @class Decrypter
*/
var Decrypter = /*#__PURE__*/function () {
function Decrypter(encrypted, key, initVector, done) {
var step = Decrypter.STEP;
var encrypted32 = new Int32Array(encrypted.buffer);
var decrypted = new Uint8Array(encrypted.byteLength);
var i = 0;
this.asyncStream_ = new AsyncStream(); // split up the decryption job and do the individual chunks asynchronously
this.asyncStream_.push(this.decryptChunk_(encrypted32.subarray(i, i + step), key, initVector, decrypted));
for (i = step; i < encrypted32.length; i += step) {
initVector = new Uint32Array([ntoh(encrypted32[i - 4]), ntoh(encrypted32[i - 3]), ntoh(encrypted32[i - 2]), ntoh(encrypted32[i - 1])]);
this.asyncStream_.push(this.decryptChunk_(encrypted32.subarray(i, i + step), key, initVector, decrypted));
} // invoke the done() callback when everything is finished
this.asyncStream_.push(function () {
// remove pkcs#7 padding from the decrypted bytes
done(null, unpad(decrypted));
});
}
/**
* A getter for step, the maximum number of bytes to process at one time.
*
* @return {number} the value of step (32000)
*/
var _proto = Decrypter.prototype;
/**
* @private
*/
_proto.decryptChunk_ = function decryptChunk_(encrypted, key, initVector, decrypted) {
return function () {
var bytes = decrypt(encrypted, key, initVector);
decrypted.set(bytes, encrypted.byteOffset);
};
};
createClass(Decrypter, null, [{
key: "STEP",
get: function get() {
// 4 * 8000;
return 32000;
}
}]);
return Decrypter;
}();
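// Illustrative usage (editor's sketch; `encryptedBytes` is a Uint8Array whose
// length is a multiple of 16 and whose view starts at byteOffset 0, while
// `keyWords` and `ivWords` are 4-element Uint32Arrays):
//
//   new Decrypter(encryptedBytes, keyWords, ivWords, function(err, bytes) {
//     // err is always null here; bytes is the unpadded plaintext Uint8Array
//   });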
/**
* @file bin-utils.js
*/
/**
* Creates an object for sending to a web worker, modifying properties that are TypedArrays
* into a new object with separate properties for the buffer, byteOffset, and byteLength.
*
* @param {Object} message
* Object of properties and values to send to the web worker
* @return {Object}
* Modified message with TypedArray values expanded
* @function createTransferableMessage
*/
var createTransferableMessage = function createTransferableMessage(message) {
var transferable = {};
Object.keys(message).forEach(function (key) {
var value = message[key];
if (ArrayBuffer.isView(value)) {
transferable[key] = {
bytes: value.buffer,
byteOffset: value.byteOffset,
byteLength: value.byteLength
};
} else {
transferable[key] = value;
}
});
return transferable;
};
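// Illustrative example (editor's sketch): TypedArray values are flattened so
// their underlying ArrayBuffers can be listed as transferables in
// postMessage, while plain values pass through untouched:
//
//   createTransferableMessage({ source: 1, key: new Uint32Array(4) });
//   // -> { source: 1, key: { bytes: ArrayBuffer(16), byteOffset: 0, byteLength: 16 } }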
/* global self */
/**
* Our web worker interface so that things can talk to aes-decrypter
* that will be running in a web worker. The scope is passed to this by
* webworkify.
*
* @param {Object} self
* the scope for the web worker
*/
var DecrypterWorker = function DecrypterWorker(self) {
self.onmessage = function (event) {
var data = event.data;
var encrypted = new Uint8Array(data.encrypted.bytes, data.encrypted.byteOffset, data.encrypted.byteLength);
var key = new Uint32Array(data.key.bytes, data.key.byteOffset, data.key.byteLength / 4);
var iv = new Uint32Array(data.iv.bytes, data.iv.byteOffset, data.iv.byteLength / 4);
/* eslint-disable no-new, handle-callback-err */
new Decrypter(encrypted, key, iv, function (err, bytes) {
self.postMessage(createTransferableMessage({
source: data.source,
decrypted: bytes
}), [bytes.buffer]);
});
/* eslint-enable */
};
};
var decrypterWorker = new DecrypterWorker(self);
return decrypterWorker;
}());

@@ -0,0 +1,2 @@
// https://www.w3.org/TR/WebIDL-1/#quotaexceedederror
export const QUOTA_EXCEEDED_ERR = 22;

322
node_modules/@videojs/http-streaming/src/manifest.js generated vendored Normal file
@@ -0,0 +1,322 @@
import videojs from 'video.js';
import window from 'global/window';
import { Parser as M3u8Parser } from 'm3u8-parser';
import { resolveUrl } from './resolve-url';
import { getLastParts } from './playlist.js';
const { log } = videojs;
export const createPlaylistID = (index, uri) => {
return `${index}-${uri}`;
};
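// Editor's note, e.g.: createPlaylistID(0, 'chunklist.m3u8') === '0-chunklist.m3u8'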
/**
* Parses a given m3u8 playlist
*
* @param {Function} [onwarn]
* a function to call when the parser triggers a warning event.
* @param {Function} [oninfo]
* a function to call when the parser triggers an info event.
* @param {string} manifestString
* The downloaded manifest string
* @param {Object[]} [customTagParsers]
* An array of custom tag parsers for the m3u8-parser instance
* @param {Object[]} [customTagMappers]
* An array of custom tag mappers for the m3u8-parser instance
* @param {boolean} [experimentalLLHLS=false]
* Whether to keep ll-hls features in the manifest after parsing.
* @return {Object}
* The manifest object
*/
export const parseManifest = ({
onwarn,
oninfo,
manifestString,
customTagParsers = [],
customTagMappers = [],
experimentalLLHLS
}) => {
const parser = new M3u8Parser();
if (onwarn) {
parser.on('warn', onwarn);
}
if (oninfo) {
parser.on('info', oninfo);
}
customTagParsers.forEach(customParser => parser.addParser(customParser));
customTagMappers.forEach(mapper => parser.addTagMapper(mapper));
parser.push(manifestString);
parser.end();
const manifest = parser.manifest;
// remove llhls features from the parsed manifest
// if we don't want llhls support.
if (!experimentalLLHLS) {
[
'preloadSegment',
'skip',
'serverControl',
'renditionReports',
'partInf',
'partTargetDuration'
].forEach(function(k) {
if (manifest.hasOwnProperty(k)) {
delete manifest[k];
}
});
if (manifest.segments) {
manifest.segments.forEach(function(segment) {
['parts', 'preloadHints'].forEach(function(k) {
if (segment.hasOwnProperty(k)) {
delete segment[k];
}
});
});
}
}
if (!manifest.targetDuration) {
let targetDuration = 10;
if (manifest.segments && manifest.segments.length) {
targetDuration = manifest
.segments.reduce((acc, s) => Math.max(acc, s.duration), 0);
}
if (onwarn) {
onwarn(`manifest has no targetDuration defaulting to ${targetDuration}`);
}
manifest.targetDuration = targetDuration;
}
const parts = getLastParts(manifest);
if (parts.length && !manifest.partTargetDuration) {
const partTargetDuration = parts.reduce((acc, p) => Math.max(acc, p.duration), 0);
if (onwarn) {
onwarn(`manifest has no partTargetDuration defaulting to ${partTargetDuration}`);
log.error('LL-HLS manifest has parts but lacks required #EXT-X-PART-INF:PART-TARGET value. See https://datatracker.ietf.org/doc/html/draft-pantos-hls-rfc8216bis-09#section-4.4.3.7. Playback is not guaranteed.');
}
manifest.partTargetDuration = partTargetDuration;
}
return manifest;
};
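// Illustrative usage (editor's sketch; a minimal two-segment VOD playlist):
//
//   const manifest = parseManifest({
//     manifestString: [
//       '#EXTM3U',
//       '#EXT-X-TARGETDURATION:10',
//       '#EXTINF:10,',
//       'a.ts',
//       '#EXTINF:10,',
//       'b.ts',
//       '#EXT-X-ENDLIST'
//     ].join('\n'),
//     onwarn: (warning) => console.log(warning)
//   });
//   // manifest.segments.length === 2, manifest.targetDuration === 10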
/**
* Loops through all supported media groups in master and calls the provided
* callback for each group
*
* @param {Object} master
* The parsed master manifest object
* @param {Function} callback
* Callback to call for each media group
*/
export const forEachMediaGroup = (master, callback) => {
if (!master.mediaGroups) {
return;
}
['AUDIO', 'SUBTITLES'].forEach((mediaType) => {
if (!master.mediaGroups[mediaType]) {
return;
}
for (const groupKey in master.mediaGroups[mediaType]) {
for (const labelKey in master.mediaGroups[mediaType][groupKey]) {
const mediaProperties = master.mediaGroups[mediaType][groupKey][labelKey];
callback(mediaProperties, mediaType, groupKey, labelKey);
}
}
});
};
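// Illustrative usage (editor's sketch): collect every alternate audio label
// in a parsed master:
//
//   const audioLabels = [];
//   forEachMediaGroup(master, (properties, mediaType, groupKey, labelKey) => {
//     if (mediaType === 'AUDIO') {
//       audioLabels.push(labelKey);
//     }
//   });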
/**
* Adds properties and attributes to the playlist to keep consistent functionality for
* playlists throughout VHS.
*
* @param {Object} config
* Arguments object
* @param {Object} config.playlist
* The media playlist
* @param {string} [config.uri]
* The uri to the media playlist (if media playlist is not from within a master
* playlist)
* @param {string} id
* ID to use for the playlist
*/
export const setupMediaPlaylist = ({ playlist, uri, id }) => {
playlist.id = id;
playlist.playlistErrors_ = 0;
if (uri) {
// For media playlists, m3u8-parser does not have access to a URI, as HLS media
// playlists do not contain their own source URI, but one is needed for consistency in
// VHS.
playlist.uri = uri;
}
// For HLS master playlists, even though certain attributes MUST be defined, the
// stream may still be played without them.
// For HLS media playlists, m3u8-parser does not attach an attributes object to the
// manifest.
//
// To avoid undefined reference errors through the project, and make the code easier
// to write/read, add an empty attributes object for these cases.
playlist.attributes = playlist.attributes || {};
};
/**
* Adds ID, resolvedUri, and attributes properties to each playlist of the master, where
* necessary. In addition, creates an ID for each playlist and adds ID- and URI-keyed
* references for each playlist to the master's playlists array.
*
* @param {Object} master
* The master playlist
*/
export const setupMediaPlaylists = (master) => {
let i = master.playlists.length;
while (i--) {
const playlist = master.playlists[i];
setupMediaPlaylist({
playlist,
id: createPlaylistID(i, playlist.uri)
});
playlist.resolvedUri = resolveUrl(master.uri, playlist.uri);
master.playlists[playlist.id] = playlist;
// URI reference added for backwards compatibility
master.playlists[playlist.uri] = playlist;
// Although the spec states an #EXT-X-STREAM-INF tag MUST have a BANDWIDTH attribute,
// the stream can be played without it. Although an attributes property may have been
// added to the playlist to prevent undefined references, issue a warning to fix the
// manifest.
if (!playlist.attributes.BANDWIDTH) {
log.warn('Invalid playlist STREAM-INF detected. Missing BANDWIDTH attribute.');
}
}
};
/**
* Adds resolvedUri properties to each media group.
*
* @param {Object} master
* The master playlist
*/
export const resolveMediaGroupUris = (master) => {
forEachMediaGroup(master, (properties) => {
if (properties.uri) {
properties.resolvedUri = resolveUrl(master.uri, properties.uri);
}
});
};
/**
* Creates a master playlist wrapper to insert a sole media playlist into.
*
* @param {Object} media
* Media playlist
* @param {string} uri
* The media URI
*
* @return {Object}
* Master playlist
*/
export const masterForMedia = (media, uri) => {
const id = createPlaylistID(0, uri);
const master = {
mediaGroups: {
'AUDIO': {},
'VIDEO': {},
'CLOSED-CAPTIONS': {},
'SUBTITLES': {}
},
uri: window.location.href,
resolvedUri: window.location.href,
playlists: [{
uri,
id,
resolvedUri: uri,
// m3u8-parser does not attach an attributes property to media playlists so make
// sure that the property is attached to avoid undefined reference errors
attributes: {}
}]
};
// set up ID reference
master.playlists[id] = master.playlists[0];
// URI reference added for backwards compatibility
master.playlists[uri] = master.playlists[0];
return master;
};
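// Illustrative usage (editor's sketch; the URI is hypothetical):
//
//   const master = masterForMedia(media, 'https://example.com/media.m3u8');
//   // master.playlists[0], master.playlists['0-https://example.com/media.m3u8'],
//   // and master.playlists['https://example.com/media.m3u8'] all reference the
//   // same stub playlist entry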
/**
* Does an in-place update of the master manifest to add updated playlist URI references
* as well as other properties needed by VHS that aren't included by the parser.
*
* @param {Object} master
* Master manifest object
* @param {string} uri
* The source URI
*/
export const addPropertiesToMaster = (master, uri) => {
master.uri = uri;
for (let i = 0; i < master.playlists.length; i++) {
if (!master.playlists[i].uri) {
// Set up phony URIs for the playlists since playlists are referenced by their URIs
// throughout VHS, but some formats (e.g., DASH) don't have external URIs
// TODO: consider adding dummy URIs in mpd-parser
const phonyUri = `placeholder-uri-${i}`;
master.playlists[i].uri = phonyUri;
}
}
forEachMediaGroup(master, (properties, mediaType, groupKey, labelKey) => {
const groupId = `placeholder-uri-${mediaType}-${groupKey}-${labelKey}`;
if (!properties.playlists || !properties.playlists.length) {
properties.playlists = [Object.assign({}, properties)];
}
properties.playlists.forEach(function(p, i) {
const id = createPlaylistID(i, groupId);
if (p.uri) {
p.resolvedUri = p.resolvedUri || resolveUrl(master.uri, p.uri);
} else {
// DEPRECATED, this has been added to prevent a breaking change.
// Previously we only ever had a single media group playlist, so
// we mark the first playlist uri without prepending the index, as we used to.
// Ideally we would handle all of the playlists the same way.
p.uri = i === 0 ? groupId : id;
// don't resolve a placeholder uri to an absolute url, just use
// the placeholder again
p.resolvedUri = p.uri;
}
p.id = p.id || id;
// add an empty attributes object, all playlists are
// expected to have this.
p.attributes = p.attributes || {};
// setup ID and URI references (URI for backwards compatibility)
master.playlists[p.id] = p;
master.playlists[p.uri] = p;
});
});
setupMediaPlaylists(master);
resolveMediaGroupUris(master);
};

File diff suppressed because it is too large

@@ -0,0 +1,964 @@
import videojs from 'video.js';
import PlaylistLoader from './playlist-loader';
import DashPlaylistLoader from './dash-playlist-loader';
import noop from './util/noop';
import {isAudioOnly, playlistMatch} from './playlist.js';
import logger from './util/logger';
/**
* Convert the properties of an HLS track into an audioTrackKind.
*
* @private
*/
const audioTrackKind_ = (properties) => {
let kind = properties.default ? 'main' : 'alternative';
if (properties.characteristics &&
properties.characteristics.indexOf('public.accessibility.describes-video') >= 0) {
kind = 'main-desc';
}
return kind;
};
/**
* Pause provided segment loader and playlist loader if active
*
* @param {SegmentLoader} segmentLoader
* SegmentLoader to pause
* @param {Object} mediaType
* Active media type
* @function stopLoaders
*/
export const stopLoaders = (segmentLoader, mediaType) => {
segmentLoader.abort();
segmentLoader.pause();
if (mediaType && mediaType.activePlaylistLoader) {
mediaType.activePlaylistLoader.pause();
mediaType.activePlaylistLoader = null;
}
};
/**
* Start loading provided segment loader and playlist loader
*
* @param {PlaylistLoader} playlistLoader
* PlaylistLoader to start loading
* @param {Object} mediaType
* Active media type
* @function startLoaders
*/
export const startLoaders = (playlistLoader, mediaType) => {
// Segment loader will be started after `loadedmetadata` or `loadedplaylist` from the
// playlist loader
mediaType.activePlaylistLoader = playlistLoader;
playlistLoader.load();
};
/**
* Returns a function to be called when the media group changes. It performs a
* non-destructive (preserve the buffer) resync of the SegmentLoader. This is because a
* change of group is merely a rendition switch of the same content at another encoding,
* rather than a change of content, such as switching audio from English to Spanish.
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Handler for a non-destructive resync of SegmentLoader when the active media
* group changes.
* @function onGroupChanged
*/
export const onGroupChanged = (type, settings) => () => {
const {
segmentLoaders: {
[type]: segmentLoader,
main: mainSegmentLoader
},
mediaTypes: { [type]: mediaType }
} = settings;
const activeTrack = mediaType.activeTrack();
const activeGroup = mediaType.getActiveGroup();
const previousActiveLoader = mediaType.activePlaylistLoader;
const lastGroup = mediaType.lastGroup_;
// the group did not change, do nothing
if (activeGroup && lastGroup && activeGroup.id === lastGroup.id) {
return;
}
mediaType.lastGroup_ = activeGroup;
mediaType.lastTrack_ = activeTrack;
stopLoaders(segmentLoader, mediaType);
if (!activeGroup || activeGroup.isMasterPlaylist) {
// there is no group active or active group is a main playlist and won't change
return;
}
if (!activeGroup.playlistLoader) {
if (previousActiveLoader) {
// The previous group had a playlist loader but the new active group does not
// this means we are switching from demuxed to muxed audio. In this case we want to
// do a destructive reset of the main segment loader and not restart the audio
// loaders.
mainSegmentLoader.resetEverything();
}
return;
}
// Non-destructive resync
segmentLoader.resyncLoader();
startLoaders(activeGroup.playlistLoader, mediaType);
};
export const onGroupChanging = (type, settings) => () => {
const {
segmentLoaders: {
[type]: segmentLoader
},
mediaTypes: { [type]: mediaType }
} = settings;
mediaType.lastGroup_ = null;
segmentLoader.abort();
segmentLoader.pause();
};
/**
* Returns a function to be called when the media track changes. It performs a
* destructive reset of the SegmentLoader to ensure we start loading as close to
* currentTime as possible.
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Handler for a destructive reset of SegmentLoader when the active media
* track changes.
* @function onTrackChanged
*/
export const onTrackChanged = (type, settings) => () => {
const {
masterPlaylistLoader,
segmentLoaders: {
[type]: segmentLoader,
main: mainSegmentLoader
},
mediaTypes: { [type]: mediaType }
} = settings;
const activeTrack = mediaType.activeTrack();
const activeGroup = mediaType.getActiveGroup();
const previousActiveLoader = mediaType.activePlaylistLoader;
const lastTrack = mediaType.lastTrack_;
// track did not change, do nothing
if (lastTrack && activeTrack && lastTrack.id === activeTrack.id) {
return;
}
mediaType.lastGroup_ = activeGroup;
mediaType.lastTrack_ = activeTrack;
stopLoaders(segmentLoader, mediaType);
if (!activeGroup) {
// there is no group active so we do not want to restart loaders
return;
}
if (activeGroup.isMasterPlaylist) {
// track did not change, do nothing
if (!activeTrack || !lastTrack || activeTrack.id === lastTrack.id) {
return;
}
const mpc = settings.vhs.masterPlaylistController_;
const newPlaylist = mpc.selectPlaylist();
// media will not change, do nothing
if (mpc.media() === newPlaylist) {
return;
}
mediaType.logger_(`track change. Switching master audio from ${lastTrack.id} to ${activeTrack.id}`);
masterPlaylistLoader.pause();
mainSegmentLoader.resetEverything();
mpc.fastQualityChange_(newPlaylist);
return;
}
if (type === 'AUDIO') {
if (!activeGroup.playlistLoader) {
// when switching from demuxed audio/video to muxed audio/video (noted by no
// playlist loader for the audio group), we want to do a destructive reset of the
// main segment loader and not restart the audio loaders
mainSegmentLoader.setAudio(true);
// don't have to worry about disabling the audio of the audio segment loader since
// it should be stopped
mainSegmentLoader.resetEverything();
return;
}
// although the segment loader is an audio segment loader, call the setAudio
// function to ensure it is prepared to re-append the init segment (or handle other
// config changes)
segmentLoader.setAudio(true);
mainSegmentLoader.setAudio(false);
}
if (previousActiveLoader === activeGroup.playlistLoader) {
// Nothing has actually changed. This can happen because track change events can fire
// multiple times for a "single" change: one for enabling the new active track, and
// one for disabling the track that was previously active.
startLoaders(activeGroup.playlistLoader, mediaType);
return;
}
if (segmentLoader.track) {
// For WebVTT, set the new text track in the segmentloader
segmentLoader.track(activeTrack);
}
// destructive reset
segmentLoader.resetEverything();
startLoaders(activeGroup.playlistLoader, mediaType);
};
export const onError = {
/**
* Returns a function to be called when a SegmentLoader or PlaylistLoader encounters
* an error.
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Error handler. Logs warning (or error if the playlist is blacklisted) to
* console and switches back to default audio track.
* @function onError.AUDIO
*/
AUDIO: (type, settings) => () => {
const {
segmentLoaders: { [type]: segmentLoader},
mediaTypes: { [type]: mediaType },
blacklistCurrentPlaylist
} = settings;
stopLoaders(segmentLoader, mediaType);
// switch back to default audio track
const activeTrack = mediaType.activeTrack();
const activeGroup = mediaType.activeGroup();
const id = (activeGroup.filter(group => group.default)[0] || activeGroup[0]).id;
const defaultTrack = mediaType.tracks[id];
if (activeTrack === defaultTrack) {
// Default track encountered an error. All we can do now is blacklist the current
// rendition and hope another will switch audio groups
blacklistCurrentPlaylist({
message: 'Problem encountered loading the default audio track.'
});
return;
}
videojs.log.warn('Problem encountered loading the alternate audio track. ' +
'Switching back to default.');
for (const trackId in mediaType.tracks) {
mediaType.tracks[trackId].enabled = mediaType.tracks[trackId] === defaultTrack;
}
mediaType.onTrackChanged();
},
/**
* Returns a function to be called when a SegmentLoader or PlaylistLoader encounters
* an error.
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Error handler. Logs warning to console and disables the active subtitle track
* @function onError.SUBTITLES
*/
SUBTITLES: (type, settings) => () => {
const {
segmentLoaders: { [type]: segmentLoader},
mediaTypes: { [type]: mediaType }
} = settings;
videojs.log.warn('Problem encountered loading the subtitle track. ' +
'Disabling subtitle track.');
stopLoaders(segmentLoader, mediaType);
const track = mediaType.activeTrack();
if (track) {
track.mode = 'disabled';
}
mediaType.onTrackChanged();
}
};
export const setupListeners = {
/**
* Setup event listeners for audio playlist loader
*
* @param {string} type
* MediaGroup type
* @param {PlaylistLoader|null} playlistLoader
* PlaylistLoader to register listeners on
* @param {Object} settings
* Object containing required information for media groups
* @function setupListeners.AUDIO
*/
AUDIO: (type, playlistLoader, settings) => {
if (!playlistLoader) {
// no playlist loader means audio will be muxed with the video
return;
}
const {
tech,
requestOptions,
segmentLoaders: { [type]: segmentLoader }
} = settings;
playlistLoader.on('loadedmetadata', () => {
const media = playlistLoader.media();
segmentLoader.playlist(media, requestOptions);
// if the video is already playing, or if this isn't a live video and preload
// permits, start downloading segments
if (!tech.paused() || (media.endList && tech.preload() !== 'none')) {
segmentLoader.load();
}
});
playlistLoader.on('loadedplaylist', () => {
segmentLoader.playlist(playlistLoader.media(), requestOptions);
// If the player isn't paused, ensure that the segment loader is running
if (!tech.paused()) {
segmentLoader.load();
}
});
playlistLoader.on('error', onError[type](type, settings));
},
/**
* Setup event listeners for subtitle playlist loader
*
* @param {string} type
* MediaGroup type
* @param {PlaylistLoader|null} playlistLoader
* PlaylistLoader to register listeners on
* @param {Object} settings
* Object containing required information for media groups
* @function setupListeners.SUBTITLES
*/
SUBTITLES: (type, playlistLoader, settings) => {
const {
tech,
requestOptions,
segmentLoaders: { [type]: segmentLoader },
mediaTypes: { [type]: mediaType }
} = settings;
playlistLoader.on('loadedmetadata', () => {
const media = playlistLoader.media();
segmentLoader.playlist(media, requestOptions);
segmentLoader.track(mediaType.activeTrack());
// if the video is already playing, or if this isn't a live video and preload
// permits, start downloading segments
if (!tech.paused() || (media.endList && tech.preload() !== 'none')) {
segmentLoader.load();
}
});
playlistLoader.on('loadedplaylist', () => {
segmentLoader.playlist(playlistLoader.media(), requestOptions);
// If the player isn't paused, ensure that the segment loader is running
if (!tech.paused()) {
segmentLoader.load();
}
});
playlistLoader.on('error', onError[type](type, settings));
}
};
export const initialize = {
/**
* Setup PlaylistLoaders and AudioTracks for the audio groups
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @function initialize.AUDIO
*/
'AUDIO': (type, settings) => {
const {
vhs,
sourceType,
segmentLoaders: { [type]: segmentLoader },
requestOptions,
master: {mediaGroups},
mediaTypes: {
[type]: {
groups,
tracks,
logger_
}
},
masterPlaylistLoader
} = settings;
const audioOnlyMaster = isAudioOnly(masterPlaylistLoader.master);
// force a default if we have none
if (!mediaGroups[type] ||
Object.keys(mediaGroups[type]).length === 0) {
mediaGroups[type] = { main: { default: { default: true } } };
}
for (const groupId in mediaGroups[type]) {
if (!groups[groupId]) {
groups[groupId] = [];
}
for (const variantLabel in mediaGroups[type][groupId]) {
let properties = mediaGroups[type][groupId][variantLabel];
let playlistLoader;
if (audioOnlyMaster) {
logger_(`AUDIO group '${groupId}' label '${variantLabel}' is a master playlist`);
properties.isMasterPlaylist = true;
playlistLoader = null;
// if vhs-json was provided as the source, and the media playlist was resolved,
// use the resolved media playlist object
} else if (sourceType === 'vhs-json' && properties.playlists) {
playlistLoader = new PlaylistLoader(
properties.playlists[0],
vhs,
requestOptions
);
} else if (properties.resolvedUri) {
playlistLoader = new PlaylistLoader(
properties.resolvedUri,
vhs,
requestOptions
);
// TODO: dash isn't the only type with properties.playlists.
// Should we even have properties.playlists in this check?
} else if (properties.playlists && sourceType === 'dash') {
playlistLoader = new DashPlaylistLoader(
properties.playlists[0],
vhs,
requestOptions,
masterPlaylistLoader
);
} else {
// no resolvedUri means the audio is muxed with the video when using this
// audio track
playlistLoader = null;
}
properties = videojs.mergeOptions(
{ id: variantLabel, playlistLoader },
properties
);
setupListeners[type](type, properties.playlistLoader, settings);
groups[groupId].push(properties);
if (typeof tracks[variantLabel] === 'undefined') {
const track = new videojs.AudioTrack({
id: variantLabel,
kind: audioTrackKind_(properties),
enabled: false,
language: properties.language,
default: properties.default,
label: variantLabel
});
tracks[variantLabel] = track;
}
}
}
// setup single error event handler for the segment loader
segmentLoader.on('error', onError[type](type, settings));
},
/**
* Setup PlaylistLoaders and TextTracks for the subtitle groups
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @function initialize.SUBTITLES
*/
'SUBTITLES': (type, settings) => {
const {
tech,
vhs,
sourceType,
segmentLoaders: { [type]: segmentLoader },
requestOptions,
master: { mediaGroups },
mediaTypes: {
[type]: {
groups,
tracks
}
},
masterPlaylistLoader
} = settings;
for (const groupId in mediaGroups[type]) {
if (!groups[groupId]) {
groups[groupId] = [];
}
for (const variantLabel in mediaGroups[type][groupId]) {
if (mediaGroups[type][groupId][variantLabel].forced) {
// Subtitle playlists with the forced attribute are not selectable in Safari.
// According to Apple's HLS Authoring Specification:
// If content has forced subtitles and regular subtitles in a given language,
// the regular subtitles track in that language MUST contain both the forced
// subtitles and the regular subtitles for that language.
// Because of this requirement and that Safari does not add forced subtitles,
// forced subtitles are skipped here to maintain consistent experience across
// all platforms
continue;
}
let properties = mediaGroups[type][groupId][variantLabel];
let playlistLoader;
if (sourceType === 'hls') {
playlistLoader =
new PlaylistLoader(properties.resolvedUri, vhs, requestOptions);
} else if (sourceType === 'dash') {
const playlists = properties.playlists.filter((p) => p.excludeUntil !== Infinity);
if (!playlists.length) {
return;
}
playlistLoader = new DashPlaylistLoader(
properties.playlists[0],
vhs,
requestOptions,
masterPlaylistLoader
);
} else if (sourceType === 'vhs-json') {
playlistLoader = new PlaylistLoader(
// if the vhs-json object included the media playlist, use the media playlist
// as provided, otherwise use the resolved URI to load the playlist
properties.playlists ? properties.playlists[0] : properties.resolvedUri,
vhs,
requestOptions
);
}
properties = videojs.mergeOptions({
id: variantLabel,
playlistLoader
}, properties);
setupListeners[type](type, properties.playlistLoader, settings);
groups[groupId].push(properties);
if (typeof tracks[variantLabel] === 'undefined') {
const track = tech.addRemoteTextTrack({
id: variantLabel,
kind: 'subtitles',
default: properties.default && properties.autoselect,
language: properties.language,
label: variantLabel
}, false).track;
tracks[variantLabel] = track;
}
}
}
// setup single error event handler for the segment loader
segmentLoader.on('error', onError[type](type, settings));
},
/**
* Setup TextTracks for the closed-caption groups
*
* @param {String} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @function initialize['CLOSED-CAPTIONS']
*/
'CLOSED-CAPTIONS': (type, settings) => {
const {
tech,
master: { mediaGroups },
mediaTypes: {
[type]: {
groups,
tracks
}
}
} = settings;
for (const groupId in mediaGroups[type]) {
if (!groups[groupId]) {
groups[groupId] = [];
}
for (const variantLabel in mediaGroups[type][groupId]) {
const properties = mediaGroups[type][groupId][variantLabel];
// Look for either 608 (CCn) or 708 (SERVICEn) caption services
if (!/^(?:CC|SERVICE)/.test(properties.instreamId)) {
continue;
}
const captionServices = tech.options_.vhs && tech.options_.vhs.captionServices || {};
let newProps = {
label: variantLabel,
language: properties.language,
instreamId: properties.instreamId,
default: properties.default && properties.autoselect
};
if (captionServices[newProps.instreamId]) {
newProps = videojs.mergeOptions(newProps, captionServices[newProps.instreamId]);
}
if (newProps.default === undefined) {
delete newProps.default;
}
// No PlaylistLoader is required for Closed-Captions because the captions are
// embedded within the video stream
groups[groupId].push(videojs.mergeOptions({ id: variantLabel }, properties));
if (typeof tracks[variantLabel] === 'undefined') {
const track = tech.addRemoteTextTrack({
id: newProps.instreamId,
kind: 'captions',
default: newProps.default,
language: newProps.language,
label: newProps.label
}, false).track;
tracks[variantLabel] = track;
}
}
}
}
};
const groupMatch = (list, media) => {
for (let i = 0; i < list.length; i++) {
if (playlistMatch(media, list[i])) {
return true;
}
if (list[i].playlists && groupMatch(list[i].playlists, media)) {
return true;
}
}
return false;
};
/**
* Returns a function used to get the active group of the provided type
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Function that returns the active media group for the provided type. Takes an
* optional parameter {TextTrack} track. If no track is provided, a list of all
* variants in the group, otherwise the variant corresponding to the provided
* track is returned.
* @function activeGroup
*/
export const activeGroup = (type, settings) => (track) => {
const {
masterPlaylistLoader,
mediaTypes: { [type]: { groups } }
} = settings;
const media = masterPlaylistLoader.media();
if (!media) {
return null;
}
let variants = null;
// set variants to the currently active group for the main media
if (media.attributes[type]) {
variants = groups[media.attributes[type]];
}
const groupKeys = Object.keys(groups);
if (!variants) {
// find the masterPlaylistLoader media
// that is in a media group if we are dealing
// with audio only
if (type === 'AUDIO' && groupKeys.length > 1 && isAudioOnly(settings.master)) {
for (let i = 0; i < groupKeys.length; i++) {
const groupPropertyList = groups[groupKeys[i]];
if (groupMatch(groupPropertyList, media)) {
variants = groupPropertyList;
break;
}
}
// use the main group if it exists
} else if (groups.main) {
variants = groups.main;
// only one group, use that one
} else if (groupKeys.length === 1) {
variants = groups[groupKeys[0]];
}
}
if (typeof track === 'undefined') {
return variants;
}
if (track === null || !variants) {
// An active track was specified so a corresponding group is expected. track === null
// means no track is currently active so there is no corresponding group
return null;
}
return variants.filter((props) => props.id === track.id)[0] || null;
};
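// Illustrative usage (assumed caller context, mirroring the function above):
//   const getAudioGroup = activeGroup('AUDIO', settings);
//   getAudioGroup();          // -> all variant properties in the active group
//   getAudioGroup(someTrack); // -> the variant whose id matches someTrack.id, or null
//   getAudioGroup(null);      // -> null, since no track is currently active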
export const activeTrack = {
/**
* Returns a function used to get the active track of type provided
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Function that returns the active media track for the provided type. Returns
* null if no track is active
* @function activeTrack.AUDIO
*/
AUDIO: (type, settings) => () => {
const { mediaTypes: { [type]: { tracks } } } = settings;
for (const id in tracks) {
if (tracks[id].enabled) {
return tracks[id];
}
}
return null;
},
/**
* Returns a function used to get the active track of type provided
*
* @param {string} type
* MediaGroup type
* @param {Object} settings
* Object containing required information for media groups
* @return {Function}
* Function that returns the active media track for the provided type. Returns
* null if no track is active
* @function activeTrack.SUBTITLES
*/
SUBTITLES: (type, settings) => () => {
const { mediaTypes: { [type]: { tracks } } } = settings;
for (const id in tracks) {
if (tracks[id].mode === 'showing' || tracks[id].mode === 'hidden') {
return tracks[id];
}
}
return null;
}
};
export const getActiveGroup = (type, {mediaTypes}) => () => {
const activeTrack_ = mediaTypes[type].activeTrack();
if (!activeTrack_) {
return null;
}
return mediaTypes[type].activeGroup(activeTrack_);
};
/**
* Setup PlaylistLoaders and Tracks for media groups (Audio, Subtitles,
* Closed-Captions) specified in the master manifest.
*
* @param {Object} settings
* Object containing required information for setting up the media groups
* @param {Tech} settings.tech
* The tech of the player
* @param {Object} settings.requestOptions
* XHR request options used by the segment loaders
* @param {PlaylistLoader} settings.masterPlaylistLoader
* PlaylistLoader for the master source
* @param {VhsHandler} settings.vhs
* VHS SourceHandler
* @param {Object} settings.master
* The parsed master manifest
* @param {Object} settings.mediaTypes
* Object to store the loaders, tracks, and utility methods for each media type
* @param {Function} settings.blacklistCurrentPlaylist
* Blacklists the current rendition and forces a rendition switch.
* @function setupMediaGroups
*/
export const setupMediaGroups = (settings) => {
['AUDIO', 'SUBTITLES', 'CLOSED-CAPTIONS'].forEach((type) => {
initialize[type](type, settings);
});
const {
mediaTypes,
masterPlaylistLoader,
tech,
vhs,
segmentLoaders: {
['AUDIO']: audioSegmentLoader,
main: mainSegmentLoader
}
} = settings;
// setup active group and track getters and change event handlers
['AUDIO', 'SUBTITLES'].forEach((type) => {
mediaTypes[type].activeGroup = activeGroup(type, settings);
mediaTypes[type].activeTrack = activeTrack[type](type, settings);
mediaTypes[type].onGroupChanged = onGroupChanged(type, settings);
mediaTypes[type].onGroupChanging = onGroupChanging(type, settings);
mediaTypes[type].onTrackChanged = onTrackChanged(type, settings);
mediaTypes[type].getActiveGroup = getActiveGroup(type, settings);
});
// DO NOT enable the default subtitle or caption track.
// DO enable the default audio track
const audioGroup = mediaTypes.AUDIO.activeGroup();
if (audioGroup) {
const groupId = (audioGroup.filter(group => group.default)[0] || audioGroup[0]).id;
mediaTypes.AUDIO.tracks[groupId].enabled = true;
mediaTypes.AUDIO.onGroupChanged();
mediaTypes.AUDIO.onTrackChanged();
const activeAudioGroup = mediaTypes.AUDIO.getActiveGroup();
// a similar check for handling setAudio on each loader is run again each time the
// track is changed, but needs to be handled here since the track may not be considered
// changed on the first call to onTrackChanged
if (!activeAudioGroup.playlistLoader) {
// either audio is muxed with video or the stream is audio only
mainSegmentLoader.setAudio(true);
} else {
// audio is demuxed
mainSegmentLoader.setAudio(false);
audioSegmentLoader.setAudio(true);
}
}
masterPlaylistLoader.on('mediachange', () => {
['AUDIO', 'SUBTITLES'].forEach(type => mediaTypes[type].onGroupChanged());
});
masterPlaylistLoader.on('mediachanging', () => {
['AUDIO', 'SUBTITLES'].forEach(type => mediaTypes[type].onGroupChanging());
});
// custom audio track change event handler for usage event
const onAudioTrackChanged = () => {
mediaTypes.AUDIO.onTrackChanged();
tech.trigger({ type: 'usage', name: 'vhs-audio-change' });
tech.trigger({ type: 'usage', name: 'hls-audio-change' });
};
tech.audioTracks().addEventListener('change', onAudioTrackChanged);
tech.remoteTextTracks().addEventListener(
'change',
mediaTypes.SUBTITLES.onTrackChanged
);
vhs.on('dispose', () => {
tech.audioTracks().removeEventListener('change', onAudioTrackChanged);
tech.remoteTextTracks().removeEventListener(
'change',
mediaTypes.SUBTITLES.onTrackChanged
);
});
// clear existing audio tracks and add the ones we just created
tech.clearTracks('audio');
for (const id in mediaTypes.AUDIO.tracks) {
tech.audioTracks().addTrack(mediaTypes.AUDIO.tracks[id]);
}
};
/**
* Creates skeleton object used to store the loaders, tracks, and utility methods for each
* media type
*
* @return {Object}
* Object to store the loaders, tracks, and utility methods for each media type
* @function createMediaTypes
*/
export const createMediaTypes = () => {
const mediaTypes = {};
['AUDIO', 'SUBTITLES', 'CLOSED-CAPTIONS'].forEach((type) => {
mediaTypes[type] = {
groups: {},
tracks: {},
activePlaylistLoader: null,
activeGroup: noop,
activeTrack: noop,
getActiveGroup: noop,
onGroupChanged: noop,
onTrackChanged: noop,
lastTrack_: null,
logger_: logger(`MediaGroups[${type}]`)
};
});
return mediaTypes;
};
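// Illustrative usage (not part of the original source): the skeleton starts
// empty and is filled in later for each media type.
//   const mediaTypes = createMediaTypes();
//   mediaTypes.AUDIO.groups;      // -> {}
//   mediaTypes.AUDIO.activeTrack; // -> noop until setupMediaGroups wires it up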


@ -0,0 +1,641 @@
/**
* @file playback-watcher.js
*
* Playback starts, and now my watch begins. It shall not end until my death. I shall
* take no wait, hold no uncleared timeouts, father no bad seeks. I shall wear no crowns
* and win no glory. I shall live and die at my post. I am the corrector of the underflow.
* I am the watcher of gaps. I am the shield that guards the realms of seekable. I pledge
* my life and honor to the Playback Watch, for this Player and all the Players to come.
*/
import window from 'global/window';
import * as Ranges from './ranges';
import logger from './util/logger';
// Set of events that reset the playback-watcher time check logic and clear the timeout
const timerCancelEvents = [
'seeking',
'seeked',
'pause',
'playing',
'error'
];
/**
* Returns whether or not the current time should be considered close to buffered content,
* taking into consideration whether there's enough buffered content for proper playback.
*
* @param {Object} options
* Options object
* @param {TimeRange} options.buffered
* Current buffer
* @param {number} options.targetDuration
* The active playlist's target duration
* @param {number} options.currentTime
* The current time of the player
* @return {boolean}
* Whether the current time should be considered close to the buffer
*/
export const closeToBufferedContent = ({ buffered, targetDuration, currentTime }) => {
if (!buffered.length) {
return false;
}
// At least two to three segments worth of content should be buffered before there's a
// full enough buffer to consider taking any actions.
if (buffered.end(0) - buffered.start(0) < targetDuration * 2) {
return false;
}
// It's possible that, on seek, a remove hasn't completed and the buffered range is
// somewhere past the current time. In that event, don't consider the buffered content
// close.
if (currentTime > buffered.start(0)) {
return false;
}
// Since target duration generally represents the max (or close to max) duration of a
// segment, if the buffer is within a segment of the current time, the gap probably
// won't be closed, and current time should be considered close to buffered content.
return buffered.start(0) - currentTime < targetDuration;
};
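// Worked example (illustrative values): with buffered = [10, 30],
// targetDuration = 6 and currentTime = 5:
//   buffered.end(0) - buffered.start(0) = 20 >= 12 -> enough buffer
//   currentTime (5) <= buffered.start(0) (10)      -> not past the range
//   buffered.start(0) - currentTime = 5 < 6        -> within one segment
// so the function returns true and the watcher may nudge playback forward.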
/**
* @class PlaybackWatcher
*/
export default class PlaybackWatcher {
/**
* Represents a PlaybackWatcher object.
*
* @class
* @param {Object} options an object that includes the tech and settings
*/
constructor(options) {
this.masterPlaylistController_ = options.masterPlaylistController;
this.tech_ = options.tech;
this.seekable = options.seekable;
this.allowSeeksWithinUnsafeLiveWindow = options.allowSeeksWithinUnsafeLiveWindow;
this.liveRangeSafeTimeDelta = options.liveRangeSafeTimeDelta;
this.media = options.media;
this.consecutiveUpdates = 0;
this.lastRecordedTime = null;
this.timer_ = null;
this.checkCurrentTimeTimeout_ = null;
this.logger_ = logger('PlaybackWatcher');
this.logger_('initialize');
const playHandler = () => this.monitorCurrentTime_();
const canPlayHandler = () => this.monitorCurrentTime_();
const waitingHandler = () => this.techWaiting_();
const cancelTimerHandler = () => this.cancelTimer_();
const fixesBadSeeksHandler = () => this.fixesBadSeeks_();
const mpc = this.masterPlaylistController_;
const loaderTypes = ['main', 'subtitle', 'audio'];
const loaderChecks = {};
loaderTypes.forEach((type) => {
loaderChecks[type] = {
reset: () => this.resetSegmentDownloads_(type),
updateend: () => this.checkSegmentDownloads_(type)
};
mpc[`${type}SegmentLoader_`].on('appendsdone', loaderChecks[type].updateend);
// If a rendition switch happens during a playback stall where the buffer
// isn't changing we want to reset. We cannot assume that the new rendition
// will also be stalled, until after new appends.
mpc[`${type}SegmentLoader_`].on('playlistupdate', loaderChecks[type].reset);
// Playback stalls should not be detected right after seeking.
// This prevents one segment playlists (single vtt or single segment content)
// from being detected as stalling. As the buffer will not change in those cases, since
// the buffer is the entire video duration.
this.tech_.on(['seeked', 'seeking'], loaderChecks[type].reset);
});
this.tech_.on('seekablechanged', fixesBadSeeksHandler);
this.tech_.on('waiting', waitingHandler);
this.tech_.on(timerCancelEvents, cancelTimerHandler);
this.tech_.on('canplay', canPlayHandler);
/*
An edge case exists that results in gaps not being skipped when they exist at the beginning of a stream. This case
is surfaced in one of two ways:
1) The `waiting` event is fired before the player has buffered content, making it impossible
to find or skip the gap. The `waiting` event is followed by a `play` event. On first play
we can check if playback is stalled due to a gap, and skip the gap if necessary.
2) A source with a gap at the beginning of the stream is loaded programmatically while the player
is in a playing state. To catch this case, it's important that our one-time play listener is set up
even if the player is in a playing state
*/
this.tech_.one('play', playHandler);
// Define the dispose function to clean up our events
this.dispose = () => {
this.logger_('dispose');
this.tech_.off('seekablechanged', fixesBadSeeksHandler);
this.tech_.off('waiting', waitingHandler);
this.tech_.off(timerCancelEvents, cancelTimerHandler);
this.tech_.off('canplay', canPlayHandler);
this.tech_.off('play', playHandler);
loaderTypes.forEach((type) => {
mpc[`${type}SegmentLoader_`].off('appendsdone', loaderChecks[type].updateend);
mpc[`${type}SegmentLoader_`].off('playlistupdate', loaderChecks[type].reset);
this.tech_.off(['seeked', 'seeking'], loaderChecks[type].reset);
});
if (this.checkCurrentTimeTimeout_) {
window.clearTimeout(this.checkCurrentTimeTimeout_);
}
this.cancelTimer_();
};
}
/**
* Periodically check current time to see if playback stopped
*
* @private
*/
monitorCurrentTime_() {
this.checkCurrentTime_();
if (this.checkCurrentTimeTimeout_) {
window.clearTimeout(this.checkCurrentTimeTimeout_);
}
// 42 = 24 fps // 250 is what Webkit uses // FF uses 15
this.checkCurrentTimeTimeout_ =
window.setTimeout(this.monitorCurrentTime_.bind(this), 250);
}
/**
* Reset stalled download stats for a specific type of loader
*
* @param {string} type
* The segment loader type to check.
*
* @listens SegmentLoader#playlistupdate
* @listens Tech#seeking
* @listens Tech#seeked
*/
resetSegmentDownloads_(type) {
const loader = this.masterPlaylistController_[`${type}SegmentLoader_`];
if (this[`${type}StalledDownloads_`] > 0) {
this.logger_(`resetting possible stalled download count for ${type} loader`);
}
this[`${type}StalledDownloads_`] = 0;
this[`${type}Buffered_`] = loader.buffered_();
}
/**
* Checks on every segment `appendsdone` to see
* if segment appends are making progress. If they are not,
* and we are still downloading bytes, we blacklist the playlist.
*
* @param {string} type
* The segment loader type to check.
*
* @listens SegmentLoader#appendsdone
*/
checkSegmentDownloads_(type) {
const mpc = this.masterPlaylistController_;
const loader = mpc[`${type}SegmentLoader_`];
const buffered = loader.buffered_();
const isBufferedDifferent = Ranges.isRangeDifferent(this[`${type}Buffered_`], buffered);
this[`${type}Buffered_`] = buffered;
// if another watcher is going to fix the issue or
// the buffered value for this loader changed,
// then appends are working
if (isBufferedDifferent) {
this.resetSegmentDownloads_(type);
return;
}
this[`${type}StalledDownloads_`]++;
this.logger_(`found #${this[`${type}StalledDownloads_`]} ${type} appends that did not increase buffer (possible stalled download)`, {
playlistId: loader.playlist_ && loader.playlist_.id,
buffered: Ranges.timeRangesToArray(buffered)
});
// after 10 possibly stalled appends with no reset, exclude
if (this[`${type}StalledDownloads_`] < 10) {
return;
}
this.logger_(`${type} loader stalled download exclusion`);
this.resetSegmentDownloads_(type);
this.tech_.trigger({type: 'usage', name: `vhs-${type}-download-exclusion`});
if (type === 'subtitle') {
return;
}
// TODO: should we exclude audio tracks rather than main tracks
// when type is audio?
mpc.blacklistCurrentPlaylist({
message: `Excessive ${type} segment downloading detected.`
}, Infinity);
}
/**
* The purpose of this function is to emulate the "waiting" event on
* browsers that do not emit it when they are waiting for more
* data to continue playback
*
* @private
*/
checkCurrentTime_() {
if (this.tech_.seeking() && this.fixesBadSeeks_()) {
this.consecutiveUpdates = 0;
this.lastRecordedTime = this.tech_.currentTime();
return;
}
if (this.tech_.paused() || this.tech_.seeking()) {
return;
}
const currentTime = this.tech_.currentTime();
const buffered = this.tech_.buffered();
if (this.lastRecordedTime === currentTime &&
(!buffered.length ||
currentTime + Ranges.SAFE_TIME_DELTA >= buffered.end(buffered.length - 1))) {
// If current time is at the end of the final buffered region, then any playback
// stall is most likely caused by buffering in a low bandwidth environment. The tech
// should fire a `waiting` event in this scenario, but browser and tech
// inconsistencies mean it may not. Calling `techWaiting_` here allows us to simulate
// responding to a native `waiting` event when the tech fails to emit one.
return this.techWaiting_();
}
if (this.consecutiveUpdates >= 5 &&
currentTime === this.lastRecordedTime) {
this.consecutiveUpdates++;
this.waiting_();
} else if (currentTime === this.lastRecordedTime) {
this.consecutiveUpdates++;
} else {
this.consecutiveUpdates = 0;
this.lastRecordedTime = currentTime;
}
}
/**
* Cancels any pending timers and resets the 'timeupdate' mechanism
* designed to detect that we are stalled
*
* @private
*/
cancelTimer_() {
this.consecutiveUpdates = 0;
if (this.timer_) {
this.logger_('cancelTimer_');
clearTimeout(this.timer_);
}
this.timer_ = null;
}
/**
* Fixes situations where there's a bad seek
*
* @return {boolean} whether an action was taken to fix the seek
* @private
*/
fixesBadSeeks_() {
const seeking = this.tech_.seeking();
if (!seeking) {
return false;
}
const seekable = this.seekable();
const currentTime = this.tech_.currentTime();
const isAfterSeekableRange = this.afterSeekableWindow_(
seekable,
currentTime,
this.media(),
this.allowSeeksWithinUnsafeLiveWindow
);
let seekTo;
if (isAfterSeekableRange) {
const seekableEnd = seekable.end(seekable.length - 1);
// sync to live point (if VOD, our seekable was updated and we're simply adjusting)
seekTo = seekableEnd;
}
if (this.beforeSeekableWindow_(seekable, currentTime)) {
const seekableStart = seekable.start(0);
// sync to the beginning of the live window
// provide a buffer of .1 seconds to handle rounding/imprecise numbers
seekTo = seekableStart +
// if the playlist is too short and the seekable range is an exact time (can
// happen in live with a 3 segment playlist), then don't use a time delta
(seekableStart === seekable.end(0) ? 0 : Ranges.SAFE_TIME_DELTA);
}
if (typeof seekTo !== 'undefined') {
this.logger_(`Trying to seek outside of seekable at time ${currentTime} with ` +
`seekable range ${Ranges.printableRange(seekable)}. Seeking to ` +
`${seekTo}.`);
this.tech_.setCurrentTime(seekTo);
return true;
}
const buffered = this.tech_.buffered();
if (
closeToBufferedContent({
buffered,
targetDuration: this.media().targetDuration,
currentTime
})
) {
seekTo = buffered.start(0) + Ranges.SAFE_TIME_DELTA;
this.logger_(`Buffered region starts (${buffered.start(0)}) ` +
`just beyond seek point (${currentTime}). Seeking to ${seekTo}.`);
this.tech_.setCurrentTime(seekTo);
return true;
}
return false;
}
/**
* Handler for situations when we determine the player is waiting.
*
* @private
*/
waiting_() {
if (this.techWaiting_()) {
return;
}
// All tech waiting checks failed. Use last resort correction
const currentTime = this.tech_.currentTime();
const buffered = this.tech_.buffered();
const currentRange = Ranges.findRange(buffered, currentTime);
// Sometimes the player can stall for unknown reasons within a contiguous buffered
// region with no indication that anything is amiss (seen in Firefox). Seeking to
// currentTime is usually enough to kickstart the player. This checks that the player
// is currently within a buffered region before attempting a corrective seek.
// Chrome does not appear to continue `timeupdate` events after a `waiting` event
// until there is ~ 3 seconds of forward buffer available. PlaybackWatcher should also
// make sure there is ~3 seconds of forward buffer before taking any corrective action
// to avoid triggering an `unknownwaiting` event when the network is slow.
if (currentRange.length && currentTime + 3 <= currentRange.end(0)) {
this.cancelTimer_();
this.tech_.setCurrentTime(currentTime);
this.logger_(`Stopped at ${currentTime} while inside a buffered region ` +
`[${currentRange.start(0)} -> ${currentRange.end(0)}]. Attempting to resume ` +
'playback by seeking to the current time.');
// unknown waiting corrections may be useful for monitoring QoS
this.tech_.trigger({type: 'usage', name: 'vhs-unknown-waiting'});
this.tech_.trigger({type: 'usage', name: 'hls-unknown-waiting'});
return;
}
}
/**
* Handler for situations when the tech fires a `waiting` event
*
* @return {boolean}
* True if an action was taken (or none was needed) to correct the waiting.
* False if no checks passed
* @private
*/
techWaiting_() {
const seekable = this.seekable();
const currentTime = this.tech_.currentTime();
if (this.tech_.seeking() && this.fixesBadSeeks_()) {
// Tech is seeking or bad seek fixed, no action needed
return true;
}
if (this.tech_.seeking() || this.timer_ !== null) {
// Tech is seeking or already waiting on another action, no action needed
return true;
}
if (this.beforeSeekableWindow_(seekable, currentTime)) {
const livePoint = seekable.end(seekable.length - 1);
this.logger_(`Fell out of live window at time ${currentTime}. Seeking to ` +
`live point (seekable end) ${livePoint}`);
this.cancelTimer_();
this.tech_.setCurrentTime(livePoint);
// live window resyncs may be useful for monitoring QoS
this.tech_.trigger({type: 'usage', name: 'vhs-live-resync'});
this.tech_.trigger({type: 'usage', name: 'hls-live-resync'});
return true;
}
const sourceUpdater = this.tech_.vhs.masterPlaylistController_.sourceUpdater_;
const buffered = this.tech_.buffered();
const videoUnderflow = this.videoUnderflow_({
audioBuffered: sourceUpdater.audioBuffered(),
videoBuffered: sourceUpdater.videoBuffered(),
currentTime
});
if (videoUnderflow) {
// Even though the video underflowed and was stuck in a gap, the audio overplayed
// the gap, leading currentTime into a buffered range. Seeking to currentTime
// allows the video to catch up to the audio position without losing any audio
// (only suffering ~3 seconds of frozen video and a pause in audio playback).
this.cancelTimer_();
this.tech_.setCurrentTime(currentTime);
// video underflow may be useful for monitoring QoS
this.tech_.trigger({type: 'usage', name: 'vhs-video-underflow'});
this.tech_.trigger({type: 'usage', name: 'hls-video-underflow'});
return true;
}
const nextRange = Ranges.findNextRange(buffered, currentTime);
// check for gap
if (nextRange.length > 0) {
const difference = nextRange.start(0) - currentTime;
this.logger_(`Stopped at ${currentTime}, setting timer for ${difference}, seeking ` +
`to ${nextRange.start(0)}`);
this.cancelTimer_();
this.timer_ = setTimeout(
this.skipTheGap_.bind(this),
difference * 1000,
currentTime
);
return true;
}
// All checks failed. Returning false to indicate failure to correct waiting
return false;
}
afterSeekableWindow_(seekable, currentTime, playlist, allowSeeksWithinUnsafeLiveWindow = false) {
if (!seekable.length) {
// we can't make a solid case if there's no seekable, default to false
return false;
}
let allowedEnd = seekable.end(seekable.length - 1) + Ranges.SAFE_TIME_DELTA;
const isLive = !playlist.endList;
if (isLive && allowSeeksWithinUnsafeLiveWindow) {
allowedEnd = seekable.end(seekable.length - 1) + (playlist.targetDuration * 3);
}
if (currentTime > allowedEnd) {
return true;
}
return false;
}
beforeSeekableWindow_(seekable, currentTime) {
if (seekable.length &&
// can't fall before 0, and a seekable start of 0 identifies a VOD stream
seekable.start(0) > 0 &&
currentTime < seekable.start(0) - this.liveRangeSafeTimeDelta) {
return true;
}
return false;
}
videoUnderflow_({videoBuffered, audioBuffered, currentTime}) {
// audio only content will not have video underflow :)
if (!videoBuffered) {
return;
}
let gap;
// find a gap in demuxed content.
if (videoBuffered.length && audioBuffered.length) {
// in Chrome audio will continue to play for ~3s when we run out of video
// so we have to check that the video buffer did have some buffer in the
// past.
const lastVideoRange = Ranges.findRange(videoBuffered, currentTime - 3);
const videoRange = Ranges.findRange(videoBuffered, currentTime);
const audioRange = Ranges.findRange(audioBuffered, currentTime);
if (audioRange.length && !videoRange.length && lastVideoRange.length) {
gap = {start: lastVideoRange.end(0), end: audioRange.end(0)};
}
// find a gap in muxed content.
} else {
const nextRange = Ranges.findNextRange(videoBuffered, currentTime);
// Even if there is no available next range, there is still a possibility we are
// stuck in a gap due to video underflow.
if (!nextRange.length) {
gap = this.gapFromVideoUnderflow_(videoBuffered, currentTime);
}
}
if (gap) {
this.logger_(`Encountered a gap in video from ${gap.start} to ${gap.end}. ` +
`Seeking to current time ${currentTime}`);
return true;
}
return false;
}
/**
* Timer callback. If playback still has not proceeded, then we seek
* to the start of the next buffered region.
*
* @private
*/
skipTheGap_(scheduledCurrentTime) {
const buffered = this.tech_.buffered();
const currentTime = this.tech_.currentTime();
const nextRange = Ranges.findNextRange(buffered, currentTime);
this.cancelTimer_();
if (nextRange.length === 0 ||
currentTime !== scheduledCurrentTime) {
return;
}
this.logger_(
'skipTheGap_:',
'currentTime:', currentTime,
'scheduled currentTime:', scheduledCurrentTime,
'nextRange start:', nextRange.start(0)
);
// only seek if we still have not played
this.tech_.setCurrentTime(nextRange.start(0) + Ranges.TIME_FUDGE_FACTOR);
this.tech_.trigger({type: 'usage', name: 'vhs-gap-skip'});
this.tech_.trigger({type: 'usage', name: 'hls-gap-skip'});
}
gapFromVideoUnderflow_(buffered, currentTime) {
// At least in Chrome, if there is a gap in the video buffer, the audio will continue
// playing for ~3 seconds after the video gap starts. This is done to account for
// video buffer underflow/underrun (note that this is not done when there is audio
// buffer underflow/underrun -- in that case the video will stop as soon as it
// encounters the gap, as audio stalls are more noticeable/jarring to a user than
// video stalls). The player's time will reflect the playthrough of audio, so the
// time will appear as if we are in a buffered region, even if we are stuck in a
// "gap."
//
// Example:
// video buffer: 0 => 10.1, 10.2 => 20
// audio buffer: 0 => 20
// overall buffer: 0 => 10.1, 10.2 => 20
// current time: 13
//
// Chrome's video froze at 10 seconds, where the video buffer encountered the gap,
// however, the audio continued playing until it reached ~3 seconds past the gap
// (13 seconds), at which point it stops as well. Since current time is past the
// gap, findNextRange will return no ranges.
//
// To check for this issue, we see if there is a gap that starts somewhere within
// a 3 second range (3 seconds +/- 1 second) back from our current time.
const gaps = Ranges.findGaps(buffered);
for (let i = 0; i < gaps.length; i++) {
const start = gaps.start(i);
const end = gaps.end(i);
// the gap starts no more than 4 seconds back (and more than 2 seconds back)
if (currentTime - start < 4 && currentTime - start > 2) {
return {
start,
end
};
}
}
return null;
}
}
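// Illustrative construction (option names taken from the constructor above;
// the exact values a caller passes are assumptions, not part of this file):
//   const watcher = new PlaybackWatcher({
//     tech,
//     masterPlaylistController,
//     seekable: () => masterPlaylistController.seekable(),
//     media: () => masterPlaylistController.media(),
//     allowSeeksWithinUnsafeLiveWindow: false,
//     liveRangeSafeTimeDelta: Ranges.SAFE_TIME_DELTA
//   });
//   watcher.dispose(); // detaches every listener registered in the constructor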


@ -0,0 +1,862 @@
/**
* @file playlist-loader.js
*
* A state machine that manages the loading, caching, and updating of
* M3U8 playlists.
*
*/
import { resolveUrl, resolveManifestRedirect } from './resolve-url';
import videojs from 'video.js';
import window from 'global/window';
import logger from './util/logger';
import {
parseManifest,
addPropertiesToMaster,
masterForMedia,
setupMediaPlaylist,
forEachMediaGroup
} from './manifest';
import {getKnownPartCount} from './playlist.js';
const { mergeOptions, EventTarget } = videojs;
const addLLHLSQueryDirectives = (uri, media) => {
if (media.endList) {
return uri;
}
const query = [];
if (media.serverControl && media.serverControl.canBlockReload) {
const {preloadSegment} = media;
// next msn is a zero based value, length is not.
let nextMSN = media.mediaSequence + media.segments.length;
// If preload segment has parts then it is likely
// that we are going to request a part of that preload segment.
// the logic below is used to determine that.
if (preloadSegment) {
const parts = preloadSegment.parts || [];
// _HLS_part is a zero based index
const nextPart = getKnownPartCount(media) - 1;
// if nextPart is > -1 and not equal to just the
// length of parts, then we know we had part preload hints
// and we need to add the _HLS_part= query
if (nextPart > -1 && nextPart !== (parts.length - 1)) {
// add existing parts to our preload hints
query.push(`_HLS_part=${nextPart}`);
}
// this if statement makes sure that we request the msn
// of the preload segment if:
// 1. the preload segment had parts (and was not yet a full segment)
// but was added to our segments array
// 2. the preload segment had preload hints for parts that are not in
// the manifest yet.
// in all other cases we want the segment after the preload segment
// which will be given by using media.segments.length because it is 1 based
// rather than 0 based.
if (nextPart > -1 || parts.length) {
nextMSN--;
}
}
// add _HLS_msn= in front of any _HLS_part query
query.unshift(`_HLS_msn=${nextMSN}`);
}
if (media.serverControl && media.serverControl.canSkipUntil) {
// add _HLS_skip= infront of all other queries.
query.unshift('_HLS_skip=' + (media.serverControl.canSkipDateranges ? 'v2' : 'YES'));
}
query.forEach(function(str, i) {
const symbol = i === 0 ? '?' : '&';
uri += `${symbol}${str}`;
});
return uri;
};
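// Worked example (illustrative values): for a live media playlist with
// mediaSequence = 100, five full segments, no preloadSegment, and
// serverControl = { canBlockReload: true, canSkipUntil: 12 }:
//   nextMSN = 100 + 5 = 105
//   query   = ['_HLS_skip=YES', '_HLS_msn=105']
// so 'playlist.m3u8' becomes 'playlist.m3u8?_HLS_skip=YES&_HLS_msn=105'.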
/**
* Returns a new segment object with properties and
* the parts array merged.
*
* @param {Object} a the old segment
* @param {Object} b the new segment
*
* @return {Object} the merged segment
*/
export const updateSegment = (a, b) => {
if (!a) {
return b;
}
const result = mergeOptions(a, b);
// if only the old segment has preload hints
// and the new one does not, remove preload hints.
if (a.preloadHints && !b.preloadHints) {
delete result.preloadHints;
}
// if only the old segment has parts
// then the parts are no longer valid
if (a.parts && !b.parts) {
delete result.parts;
// if both segments have parts
// copy part properties from the old segment
// to the new one.
} else if (a.parts && b.parts) {
for (let i = 0; i < b.parts.length; i++) {
if (a.parts && a.parts[i]) {
result.parts[i] = mergeOptions(a.parts[i], b.parts[i]);
}
}
}
// set skipped to false for segments that have
// had information merged from the old segment.
if (!a.skipped && b.skipped) {
result.skipped = false;
}
// set preload to false for segments that have
// had information added in the new segment.
if (a.preload && !b.preload) {
result.preload = false;
}
return result;
};
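// Illustrative merge (assumed segment shapes): given
//   a = { uri: 's1.ts', duration: 6.006, parts: [{ duration: 2 }] }
//   b = { uri: 's1.ts' }
// the result keeps uri and duration from the merge but drops `parts`,
// since only the old segment had them and they are no longer valid.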
/**
* Returns a new array of segments that is the result of merging
* properties from an older list of segments onto an updated
* list. No properties on the updated playlist will be overwritten.
*
* @param {Array} original the outdated list of segments
* @param {Array} update the updated list of segments
* @param {number=} offset the index of the first update
* segment in the original segment list. For non-live playlists,
* this should always be zero and does not need to be
* specified. For live playlists, it should be the difference
* between the media sequence numbers in the original and updated
* playlists.
* @return {Array} a list of merged segment objects
*/
export const updateSegments = (original, update, offset) => {
const oldSegments = original.slice();
const newSegments = update.slice();
offset = offset || 0;
const result = [];
let currentMap;
for (let newIndex = 0; newIndex < newSegments.length; newIndex++) {
const oldSegment = oldSegments[newIndex + offset];
const newSegment = newSegments[newIndex];
if (oldSegment) {
currentMap = oldSegment.map || currentMap;
result.push(updateSegment(oldSegment, newSegment));
} else {
// carry over map to new segment if it is missing
if (currentMap && !newSegment.map) {
newSegment.map = currentMap;
}
result.push(newSegment);
}
}
return result;
};
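// Worked example (illustrative): merging a live refresh where the old
// playlist had mediaSequence 3 (segments s3, s4, s5) and the update has
// mediaSequence 4 (segments s4, s5, s6) uses offset = 4 - 3 = 1, so the
// update's s4 is merged with oldSegments[0 + 1] (the old s4), and s6,
// which has no old counterpart, inherits the last seen `map` if it lacks one.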
export const resolveSegmentUris = (segment, baseUri) => {
// preloadSegment will not have a uri at all
// as the segment isn't actually in the manifest yet, only parts
if (!segment.resolvedUri && segment.uri) {
segment.resolvedUri = resolveUrl(baseUri, segment.uri);
}
if (segment.key && !segment.key.resolvedUri) {
segment.key.resolvedUri = resolveUrl(baseUri, segment.key.uri);
}
if (segment.map && !segment.map.resolvedUri) {
segment.map.resolvedUri = resolveUrl(baseUri, segment.map.uri);
}
if (segment.map && segment.map.key && !segment.map.key.resolvedUri) {
segment.map.key.resolvedUri = resolveUrl(baseUri, segment.map.key.uri);
}
if (segment.parts && segment.parts.length) {
segment.parts.forEach((p) => {
if (p.resolvedUri) {
return;
}
p.resolvedUri = resolveUrl(baseUri, p.uri);
});
}
if (segment.preloadHints && segment.preloadHints.length) {
segment.preloadHints.forEach((p) => {
if (p.resolvedUri) {
return;
}
p.resolvedUri = resolveUrl(baseUri, p.uri);
});
}
};
const getAllSegments = function(media) {
const segments = media.segments || [];
const preloadSegment = media.preloadSegment;
// a preloadSegment with only preloadHints is not currently
// a usable segment, only include a preloadSegment that has
// parts.
if (preloadSegment && preloadSegment.parts && preloadSegment.parts.length) {
// if preloadHints has a MAP that means that the
// init segment is going to change. We cannot use any of the parts
// from this preload segment.
if (preloadSegment.preloadHints) {
for (let i = 0; i < preloadSegment.preloadHints.length; i++) {
if (preloadSegment.preloadHints[i].type === 'MAP') {
return segments;
}
}
}
// set the duration for our preload segment to target duration.
preloadSegment.duration = media.targetDuration;
preloadSegment.preload = true;
segments.push(preloadSegment);
}
return segments;
};
// consider the playlist unchanged if the playlist object is the same or
// the number of segments is equal, the media sequence number is unchanged,
// and this playlist hasn't become the end of the playlist
export const isPlaylistUnchanged = (a, b) => a === b ||
(a.segments && b.segments && a.segments.length === b.segments.length &&
a.endList === b.endList &&
a.mediaSequence === b.mediaSequence);
/**
* Returns a new master playlist that is the result of merging an
* updated media playlist into the original version. If the
* updated media playlist does not match any of the playlist
* entries in the original master playlist, null is returned.
*
* @param {Object} master a parsed master M3U8 object
* @param {Object} newMedia a parsed media M3U8 object
* @param {Function} [unchangedCheck] an optional function used to decide whether
* the playlist is unchanged (defaults to isPlaylistUnchanged)
* @return {Object} a new object that represents the original
* master playlist with the updated media playlist merged in, or
* null if the merge produced no change.
*/
export const updateMaster = (master, newMedia, unchangedCheck = isPlaylistUnchanged) => {
const result = mergeOptions(master, {});
const oldMedia = result.playlists[newMedia.id];
if (!oldMedia) {
return null;
}
if (unchangedCheck(oldMedia, newMedia)) {
return null;
}
newMedia.segments = getAllSegments(newMedia);
const mergedPlaylist = mergeOptions(oldMedia, newMedia);
// always use the new media's preload segment
if (mergedPlaylist.preloadSegment && !newMedia.preloadSegment) {
delete mergedPlaylist.preloadSegment;
}
// if the update could overlap existing segment information, merge the two segment lists
if (oldMedia.segments) {
if (newMedia.skip) {
newMedia.segments = newMedia.segments || [];
// add back in objects for skipped segments, so that we merge
// old properties into the new segments
for (let i = 0; i < newMedia.skip.skippedSegments; i++) {
newMedia.segments.unshift({skipped: true});
}
}
mergedPlaylist.segments = updateSegments(
oldMedia.segments,
newMedia.segments,
newMedia.mediaSequence - oldMedia.mediaSequence
);
}
// resolve any segment URIs to prevent us from having to do it later
mergedPlaylist.segments.forEach((segment) => {
resolveSegmentUris(segment, mergedPlaylist.resolvedUri);
});
// TODO Right now in the playlists array there are two references to each playlist, one
// that is referenced by index, and one by URI. The index reference may no longer be
// necessary.
for (let i = 0; i < result.playlists.length; i++) {
if (result.playlists[i].id === newMedia.id) {
result.playlists[i] = mergedPlaylist;
}
}
result.playlists[newMedia.id] = mergedPlaylist;
// URI reference added for backwards compatibility
result.playlists[newMedia.uri] = mergedPlaylist;
// update media group playlist references.
forEachMediaGroup(master, (properties, mediaType, groupKey, labelKey) => {
if (!properties.playlists) {
return;
}
for (let i = 0; i < properties.playlists.length; i++) {
if (newMedia.id === properties.playlists[i].id) {
properties.playlists[i] = newMedia;
}
}
});
return result;
};
/**
* Calculates the time to wait before refreshing a live playlist
*
* @param {Object} media
* The current media
* @param {boolean} update
* True if there were any updates from the last refresh, false otherwise
* @return {number}
* The time in ms to wait before refreshing the live playlist
*/
export const refreshDelay = (media, update) => {
const lastSegment = media.segments[media.segments.length - 1];
const lastPart = lastSegment && lastSegment.parts && lastSegment.parts[lastSegment.parts.length - 1];
const lastDuration = lastPart && lastPart.duration || lastSegment && lastSegment.duration;
if (update && lastDuration) {
return lastDuration * 1000;
}
// if the playlist is unchanged since the last reload or last segment duration
// cannot be determined, try again after half the target duration
return (media.partTargetDuration || media.targetDuration || 10) * 500;
};
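// Worked example (illustrative): if the last segment (or its last part)
// lasted 6 seconds and the reload produced an update, wait 6000ms; if the
// playlist was unchanged and targetDuration is 6, retry after 6 * 500 = 3000ms.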
/**
* Load a playlist from a remote location
*
* @class PlaylistLoader
* @extends EventTarget
* @param {string|Object} src url or object of manifest
* @param {boolean} withCredentials the withCredentials xhr option
*/
export default class PlaylistLoader extends EventTarget {
constructor(src, vhs, options = { }) {
super();
if (!src) {
throw new Error('A non-empty playlist URL or object is required');
}
this.logger_ = logger('PlaylistLoader');
const { withCredentials = false, handleManifestRedirects = false } = options;
this.src = src;
this.vhs_ = vhs;
this.withCredentials = withCredentials;
this.handleManifestRedirects = handleManifestRedirects;
const vhsOptions = vhs.options_;
this.customTagParsers = (vhsOptions && vhsOptions.customTagParsers) || [];
this.customTagMappers = (vhsOptions && vhsOptions.customTagMappers) || [];
this.experimentalLLHLS = (vhsOptions && vhsOptions.experimentalLLHLS) || false;
// initialize the loader state
this.state = 'HAVE_NOTHING';
// live playlist staleness timeout
this.handleMediaupdatetimeout_ = this.handleMediaupdatetimeout_.bind(this);
this.on('mediaupdatetimeout', this.handleMediaupdatetimeout_);
}
handleMediaupdatetimeout_() {
if (this.state !== 'HAVE_METADATA') {
// only refresh the media playlist if no other activity is going on
return;
}
const media = this.media();
let uri = resolveUrl(this.master.uri, media.uri);
if (this.experimentalLLHLS) {
uri = addLLHLSQueryDirectives(uri, media);
}
this.state = 'HAVE_CURRENT_METADATA';
this.request = this.vhs_.xhr({
uri,
withCredentials: this.withCredentials
}, (error, req) => {
// disposed
if (!this.request) {
return;
}
if (error) {
return this.playlistRequestError(this.request, this.media(), 'HAVE_METADATA');
}
this.haveMetadata({
playlistString: this.request.responseText,
url: this.media().uri,
id: this.media().id
});
});
}
playlistRequestError(xhr, playlist, startingState) {
const {
uri,
id
} = playlist;
// any in-flight request is now finished
this.request = null;
if (startingState) {
this.state = startingState;
}
this.error = {
playlist: this.master.playlists[id],
status: xhr.status,
message: `HLS playlist request error at URL: ${uri}.`,
responseText: xhr.responseText,
code: (xhr.status >= 500) ? 4 : 2
};
this.trigger('error');
}
parseManifest_({url, manifestString}) {
return parseManifest({
onwarn: ({message}) => this.logger_(`m3u8-parser warn for ${url}: ${message}`),
oninfo: ({message}) => this.logger_(`m3u8-parser info for ${url}: ${message}`),
manifestString,
customTagParsers: this.customTagParsers,
customTagMappers: this.customTagMappers,
experimentalLLHLS: this.experimentalLLHLS
});
}
/**
* Update the playlist loader's state in response to a new or updated playlist.
*
* @param {string} [playlistString]
* Playlist string (if playlistObject is not provided)
* @param {Object} [playlistObject]
* Playlist object (if playlistString is not provided)
* @param {string} url
* URL of playlist
* @param {string} id
* ID to use for playlist
*/
haveMetadata({ playlistString, playlistObject, url, id }) {
// any in-flight request is now finished
this.request = null;
this.state = 'HAVE_METADATA';
const playlist = playlistObject || this.parseManifest_({
url,
manifestString: playlistString
});
playlist.lastRequest = Date.now();
setupMediaPlaylist({
playlist,
uri: url,
id
});
// merge this playlist into the master
const update = updateMaster(this.master, playlist);
this.targetDuration = playlist.partTargetDuration || playlist.targetDuration;
if (update) {
this.master = update;
this.media_ = this.master.playlists[id];
} else {
this.trigger('playlistunchanged');
}
// refresh live playlists after a target duration passes
if (!this.media().endList) {
window.clearTimeout(this.mediaUpdateTimeout);
this.mediaUpdateTimeout = window.setTimeout(() => {
this.trigger('mediaupdatetimeout');
}, refreshDelay(this.media(), !!update));
}
this.trigger('loadedplaylist');
}
/**
* Abort any outstanding work and clean up.
*/
dispose() {
this.trigger('dispose');
this.stopRequest();
window.clearTimeout(this.mediaUpdateTimeout);
window.clearTimeout(this.finalRenditionTimeout);
this.off();
}
stopRequest() {
if (this.request) {
const oldRequest = this.request;
this.request = null;
oldRequest.onreadystatechange = null;
oldRequest.abort();
}
}
/**
* When called without any arguments, returns the currently
* active media playlist. When called with a single argument,
* triggers the playlist loader to asynchronously switch to the
* specified media playlist. Calling this method while the
* loader is in the HAVE_NOTHING state causes an error to be
* thrown but otherwise has no effect.
*
* @param {Object=} playlist the parsed media playlist
* object to switch to
* @param {boolean=} shouldDelay whether we should delay the request by half target duration
*
* @return {Playlist} the current loaded media
*/
media(playlist, shouldDelay) {
// getter
if (!playlist) {
return this.media_;
}
// setter
if (this.state === 'HAVE_NOTHING') {
throw new Error('Cannot switch media playlist from ' + this.state);
}
// find the playlist object if the target playlist has been
// specified by URI
if (typeof playlist === 'string') {
if (!this.master.playlists[playlist]) {
throw new Error('Unknown playlist URI: ' + playlist);
}
playlist = this.master.playlists[playlist];
}
window.clearTimeout(this.finalRenditionTimeout);
if (shouldDelay) {
const delay = ((playlist.partTargetDuration || playlist.targetDuration) / 2) * 1000 || 5 * 1000;
this.finalRenditionTimeout =
window.setTimeout(this.media.bind(this, playlist, false), delay);
return;
}
const startingState = this.state;
const mediaChange = !this.media_ || playlist.id !== this.media_.id;
const masterPlaylistRef = this.master.playlists[playlist.id];
// switch to fully loaded playlists immediately
if (masterPlaylistRef && masterPlaylistRef.endList ||
// handle the case of a playlist object (e.g., if using vhs-json with a resolved
// media playlist or, for the case of demuxed audio, a resolved audio media group)
(playlist.endList && playlist.segments.length)) {
// abort outstanding playlist requests
if (this.request) {
this.request.onreadystatechange = null;
this.request.abort();
this.request = null;
}
this.state = 'HAVE_METADATA';
this.media_ = playlist;
// trigger media change if the active media has been updated
if (mediaChange) {
this.trigger('mediachanging');
if (startingState === 'HAVE_MASTER') {
// The initial playlist was a master manifest, and the first media selected was
// also provided (in the form of a resolved playlist object) as part of the
// source object (rather than just a URL). Therefore, since the media playlist
// doesn't need to be requested, loadedmetadata won't trigger as part of the
// normal flow, and needs an explicit trigger here.
this.trigger('loadedmetadata');
} else {
this.trigger('mediachange');
}
}
return;
}
// switching to the active playlist is a no-op
if (!mediaChange) {
return;
}
this.state = 'SWITCHING_MEDIA';
// there is already an outstanding playlist request
if (this.request) {
if (playlist.resolvedUri === this.request.url) {
// requesting to switch to the same playlist multiple times
// has no effect after the first
return;
}
this.request.onreadystatechange = null;
this.request.abort();
this.request = null;
}
// request the new playlist
if (this.media_) {
this.trigger('mediachanging');
}
this.request = this.vhs_.xhr({
uri: playlist.resolvedUri,
withCredentials: this.withCredentials
}, (error, req) => {
// disposed
if (!this.request) {
return;
}
playlist.lastRequest = Date.now();
playlist.resolvedUri = resolveManifestRedirect(this.handleManifestRedirects, playlist.resolvedUri, req);
if (error) {
return this.playlistRequestError(this.request, playlist, startingState);
}
this.haveMetadata({
playlistString: req.responseText,
url: playlist.uri,
id: playlist.id
});
// fire loadedmetadata the first time a media playlist is loaded
if (startingState === 'HAVE_MASTER') {
this.trigger('loadedmetadata');
} else {
this.trigger('mediachange');
}
});
}
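// Illustrative usage (assumed caller context):
//   loader.media();               // getter: currently active media playlist
//   loader.media(playlist);       // switch to `playlist` as soon as possible
//   loader.media(playlist, true); // delay the request by half a target duration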
/**
* pause loading of the playlist
*/
pause() {
this.stopRequest();
window.clearTimeout(this.mediaUpdateTimeout);
if (this.state === 'HAVE_NOTHING') {
// If we pause the loader before any data has been retrieved, it's as if we never
// started, so reset to an unstarted state.
this.started = false;
}
// Need to restore state now that no activity is happening
if (this.state === 'SWITCHING_MEDIA') {
// if the loader was in the process of switching media, it should either return to
// HAVE_MASTER or HAVE_METADATA depending on if the loader has loaded a media
// playlist yet. This is determined by the existence of loader.media_
if (this.media_) {
this.state = 'HAVE_METADATA';
} else {
this.state = 'HAVE_MASTER';
}
} else if (this.state === 'HAVE_CURRENT_METADATA') {
this.state = 'HAVE_METADATA';
}
}
/**
* start loading of the playlist
*/
load(shouldDelay) {
window.clearTimeout(this.mediaUpdateTimeout);
const media = this.media();
if (shouldDelay) {
const delay = media ? ((media.partTargetDuration || media.targetDuration) / 2) * 1000 : 5 * 1000;
this.mediaUpdateTimeout = window.setTimeout(() => this.load(), delay);
return;
}
if (!this.started) {
this.start();
return;
}
if (media && !media.endList) {
this.trigger('mediaupdatetimeout');
} else {
this.trigger('loadedplaylist');
}
}
/**
* start loading of the playlist
*/
start() {
this.started = true;
if (typeof this.src === 'object') {
// in the case of an entirely constructed manifest object (meaning there's no actual
// manifest on a server), default the uri to the page's href
if (!this.src.uri) {
this.src.uri = window.location.href;
}
// resolvedUri is added on internally after the initial request. Since there's no
// request for pre-resolved manifests, add on resolvedUri here.
this.src.resolvedUri = this.src.uri;
// Since a manifest object was passed in as the source (instead of a URL), the first
// request can be skipped (since the top level of the manifest, at a minimum, is
// already available as a parsed manifest object). However, if the manifest object
// represents a master playlist, some media playlists may need to be resolved before
// the starting segment list is available. Therefore, go directly to setup of the
// initial playlist, and let the normal flow continue from there.
//
// Note that the call to setup is asynchronous, as other sections of VHS may assume
// that the first request is asynchronous.
setTimeout(() => {
this.setupInitialPlaylist(this.src);
}, 0);
return;
}
// request the specified URL
this.request = this.vhs_.xhr({
uri: this.src,
withCredentials: this.withCredentials
}, (error, req) => {
// disposed
if (!this.request) {
return;
}
// clear the loader's request reference
this.request = null;
if (error) {
this.error = {
status: req.status,
message: `HLS playlist request error at URL: ${this.src}.`,
responseText: req.responseText,
// MEDIA_ERR_NETWORK
code: 2
};
if (this.state === 'HAVE_NOTHING') {
this.started = false;
}
return this.trigger('error');
}
this.src = resolveManifestRedirect(this.handleManifestRedirects, this.src, req);
const manifest = this.parseManifest_({
manifestString: req.responseText,
url: this.src
});
this.setupInitialPlaylist(manifest);
});
}
srcUri() {
return typeof this.src === 'string' ? this.src : this.src.uri;
}
/**
* Given a manifest object that's either a master or media playlist, trigger the proper
* events and set the state of the playlist loader.
*
* If the manifest object represents a master playlist, `loadedplaylist` will be
* triggered to allow listeners to select a playlist. If none is selected, the loader
* will default to the first one in the playlists array.
*
* If the manifest object represents a media playlist, `loadedplaylist` will be
* triggered followed by `loadedmetadata`, as the only available playlist is loaded.
*
* In the case of a media playlist, a master playlist object wrapper with one playlist
* will be created so that all logic can handle playlists in the same fashion (as an
* assumed manifest object schema).
*
* @param {Object} manifest
* The parsed manifest object
*/
setupInitialPlaylist(manifest) {
this.state = 'HAVE_MASTER';
if (manifest.playlists) {
this.master = manifest;
addPropertiesToMaster(this.master, this.srcUri());
// If the initial master playlist has playlists with segments already resolved,
// then resolve URIs in advance, as they are usually done after a playlist request,
// which may not happen if the playlist is resolved.
manifest.playlists.forEach((playlist) => {
playlist.segments = getAllSegments(playlist);
playlist.segments.forEach((segment) => {
resolveSegmentUris(segment, playlist.resolvedUri);
});
});
this.trigger('loadedplaylist');
if (!this.request) {
// no media playlist was specifically selected so start
// from the first listed one
this.media(this.master.playlists[0]);
}
return;
}
// In order to support media playlists passed in as vhs-json, the case where the uri
// is not provided as part of the manifest should be considered, and an appropriate
// default used.
const uri = this.srcUri() || window.location.href;
this.master = masterForMedia(manifest, uri);
this.haveMetadata({
playlistObject: manifest,
url: uri,
id: this.master.playlists[0].id
});
this.trigger('loadedmetadata');
}
}


@ -0,0 +1,531 @@
import window from 'global/window';
import Config from './config';
import Playlist from './playlist';
import { codecsForPlaylist } from './util/codecs.js';
import logger from './util/logger';
const logFn = logger('PlaylistSelector');
const representationToString = function(representation) {
if (!representation || !representation.playlist) {
return;
}
const playlist = representation.playlist;
return JSON.stringify({
id: playlist.id,
bandwidth: representation.bandwidth,
width: representation.width,
height: representation.height,
codecs: playlist.attributes && playlist.attributes.CODECS || ''
});
};
// Utilities
/**
* Returns the CSS value for the specified property on an element
* using `getComputedStyle`. Firefox has a long-standing issue where
* getComputedStyle() may return null when running in an iframe with
* `display: none`.
*
* @see https://bugzilla.mozilla.org/show_bug.cgi?id=548397
* @param {HTMLElement} el the HTMLElement to work on
* @param {string} property the property to get the style for
*/
const safeGetComputedStyle = function(el, property) {
if (!el) {
return '';
}
const result = window.getComputedStyle(el);
if (!result) {
return '';
}
return result[property];
};
/**
* Reusable stable sort function; sorts the array in place
*
* @param {Array} array the array of playlists to sort
* @param {Function} sortFn the comparator to sort by
* @function stableSort
*/
const stableSort = function(array, sortFn) {
const newArray = array.slice();
array.sort(function(left, right) {
const cmp = sortFn(left, right);
if (cmp === 0) {
return newArray.indexOf(left) - newArray.indexOf(right);
}
return cmp;
});
};
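// Illustrative sketch (comment only, not part of the module): stableSort
// mutates `array` in place and returns undefined. Items that compare equal
// keep their original relative order. The rep objects are hypothetical:
//
//   const reps = [
//     {id: 'a', bandwidth: 2e6},
//     {id: 'b', bandwidth: 1e6},
//     {id: 'c', bandwidth: 2e6}
//   ];
//   stableSort(reps, (left, right) => left.bandwidth - right.bandwidth);
//   // reps is now b, a, c -- 'a' stays ahead of 'c' because they tied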
/**
* A comparator function to sort two playlist objects by bandwidth.
*
* @param {Object} left a media playlist object
* @param {Object} right a media playlist object
* @return {number} Greater than zero if the bandwidth attribute of
* left is greater than the corresponding attribute of right. Less
* than zero if the bandwidth of right is greater than left and
* exactly zero if the two are equal.
*/
export const comparePlaylistBandwidth = function(left, right) {
let leftBandwidth;
let rightBandwidth;
if (left.attributes.BANDWIDTH) {
leftBandwidth = left.attributes.BANDWIDTH;
}
leftBandwidth = leftBandwidth || window.Number.MAX_VALUE;
if (right.attributes.BANDWIDTH) {
rightBandwidth = right.attributes.BANDWIDTH;
}
rightBandwidth = rightBandwidth || window.Number.MAX_VALUE;
return leftBandwidth - rightBandwidth;
};
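// Usage sketch (hypothetical playlist objects): renditions without a
// BANDWIDTH attribute are treated as Number.MAX_VALUE, so they sort to the
// end of an ascending sort.
//
//   comparePlaylistBandwidth(
//     {attributes: {BANDWIDTH: 4e6}},
//     {attributes: {BANDWIDTH: 2e6}}
//   ); // => 2000000 (positive, so the 4 Mbps rendition sorts after)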
/**
* A comparator function to sort two playlist objects by resolution (width).
*
* @param {Object} left a media playlist object
* @param {Object} right a media playlist object
* @return {number} Greater than zero if the resolution.width attribute of
* left is greater than the corresponding attribute of right. Less
* than zero if the resolution.width of right is greater than left and
* exactly zero if the two are equal.
*/
export const comparePlaylistResolution = function(left, right) {
let leftWidth;
let rightWidth;
if (left.attributes.RESOLUTION &&
left.attributes.RESOLUTION.width) {
leftWidth = left.attributes.RESOLUTION.width;
}
leftWidth = leftWidth || window.Number.MAX_VALUE;
if (right.attributes.RESOLUTION &&
right.attributes.RESOLUTION.width) {
rightWidth = right.attributes.RESOLUTION.width;
}
rightWidth = rightWidth || window.Number.MAX_VALUE;
// NOTE - Fall back to a bandwidth sort in cases where multiple renditions
// have the same media dimensions/resolution
if (leftWidth === rightWidth &&
left.attributes.BANDWIDTH &&
right.attributes.BANDWIDTH) {
return left.attributes.BANDWIDTH - right.attributes.BANDWIDTH;
}
return leftWidth - rightWidth;
};
/**
* Chooses the appropriate media playlist based on bandwidth and player size
*
* @param {Object} master
* Object representation of the master manifest
* @param {number} playerBandwidth
* Current calculated bandwidth of the player
* @param {number} playerWidth
* Current width of the player element (should account for the device pixel ratio)
* @param {number} playerHeight
* Current height of the player element (should account for the device pixel ratio)
* @param {boolean} limitRenditionByPlayerDimensions
* True if the player width and height should be used during the selection, false otherwise
* @param {Object} masterPlaylistController
* the current masterPlaylistController object
* @return {Playlist} the highest bitrate playlist less than the
* currently detected bandwidth, accounting for some amount of
* bandwidth variance
*/
export let simpleSelector = function(
master,
playerBandwidth,
playerWidth,
playerHeight,
limitRenditionByPlayerDimensions,
masterPlaylistController
) {
// If we end up getting called before `master` is available, exit early
if (!master) {
return;
}
const options = {
bandwidth: playerBandwidth,
width: playerWidth,
height: playerHeight,
limitRenditionByPlayerDimensions
};
let playlists = master.playlists;
// if the master is audio only, select between the currently active audio group's playlists.
if (Playlist.isAudioOnly(master)) {
playlists = masterPlaylistController.getAudioTrackPlaylists_();
// add audioOnly to options so that we log audioOnly: true
// at the bottom of this function for debugging.
options.audioOnly = true;
}
// convert the playlists to an intermediary representation to make comparisons easier
let sortedPlaylistReps = playlists.map((playlist) => {
let bandwidth;
const width = playlist.attributes && playlist.attributes.RESOLUTION && playlist.attributes.RESOLUTION.width;
const height = playlist.attributes && playlist.attributes.RESOLUTION && playlist.attributes.RESOLUTION.height;
bandwidth = playlist.attributes && playlist.attributes.BANDWIDTH;
bandwidth = bandwidth || window.Number.MAX_VALUE;
return {
bandwidth,
width,
height,
playlist
};
});
stableSort(sortedPlaylistReps, (left, right) => left.bandwidth - right.bandwidth);
// filter out any playlists that have been excluded due to
// incompatible configurations
sortedPlaylistReps = sortedPlaylistReps.filter((rep) => !Playlist.isIncompatible(rep.playlist));
// filter out any playlists that have been disabled manually through the representations
// api or blacklisted temporarily due to playback errors.
let enabledPlaylistReps = sortedPlaylistReps.filter((rep) => Playlist.isEnabled(rep.playlist));
if (!enabledPlaylistReps.length) {
// if there are no enabled playlists, then they have all been blacklisted or disabled
// by the user through the representations api. In this case, ignore blacklisting and
// fallback to what the user wants by using playlists the user has not disabled.
enabledPlaylistReps = sortedPlaylistReps.filter((rep) => !Playlist.isDisabled(rep.playlist));
}
// filter out any variant that has greater effective bitrate
// than the current estimated bandwidth
const bandwidthPlaylistReps = enabledPlaylistReps.filter((rep) => rep.bandwidth * Config.BANDWIDTH_VARIANCE < playerBandwidth);
let highestRemainingBandwidthRep =
bandwidthPlaylistReps[bandwidthPlaylistReps.length - 1];
// get all of the renditions with the same (highest) bandwidth
// and then take the very first element
const bandwidthBestRep = bandwidthPlaylistReps.filter((rep) => rep.bandwidth === highestRemainingBandwidthRep.bandwidth)[0];
// if we're not going to limit renditions by player size, make an early decision.
if (limitRenditionByPlayerDimensions === false) {
const chosenRep = (
bandwidthBestRep ||
enabledPlaylistReps[0] ||
sortedPlaylistReps[0]
);
if (chosenRep && chosenRep.playlist) {
let type = 'sortedPlaylistReps';
if (bandwidthBestRep) {
type = 'bandwidthBestRep';
}
if (enabledPlaylistReps[0]) {
type = 'enabledPlaylistReps';
}
logFn(`choosing ${representationToString(chosenRep)} using ${type} with options`, options);
return chosenRep.playlist;
}
logFn('could not choose a playlist with options', options);
return null;
}
// filter out playlists without resolution information
const haveResolution = bandwidthPlaylistReps.filter((rep) => rep.width && rep.height);
// sort variants by resolution
stableSort(haveResolution, (left, right) => left.width - right.width);
// if we have the exact resolution as the player use it
const resolutionBestRepList = haveResolution.filter((rep) => rep.width === playerWidth && rep.height === playerHeight);
highestRemainingBandwidthRep = resolutionBestRepList[resolutionBestRepList.length - 1];
// ensure that we pick the highest bandwidth variant that has the exact resolution
const resolutionBestRep = resolutionBestRepList.filter((rep) => rep.bandwidth === highestRemainingBandwidthRep.bandwidth)[0];
let resolutionPlusOneList;
let resolutionPlusOneSmallest;
let resolutionPlusOneRep;
// find the smallest variant that is larger than the player
// if there is no match of exact resolution
if (!resolutionBestRep) {
resolutionPlusOneList = haveResolution.filter((rep) => rep.width > playerWidth || rep.height > playerHeight);
// find all the variants that have the same smallest resolution
resolutionPlusOneSmallest = resolutionPlusOneList.filter((rep) => rep.width === resolutionPlusOneList[0].width &&
rep.height === resolutionPlusOneList[0].height);
// ensure that we also pick the highest bandwidth variant that
// is just-larger-than the video player
highestRemainingBandwidthRep =
resolutionPlusOneSmallest[resolutionPlusOneSmallest.length - 1];
resolutionPlusOneRep = resolutionPlusOneSmallest.filter((rep) => rep.bandwidth === highestRemainingBandwidthRep.bandwidth)[0];
}
// fallback chain of variants
const chosenRep = (
resolutionPlusOneRep ||
resolutionBestRep ||
bandwidthBestRep ||
enabledPlaylistReps[0] ||
sortedPlaylistReps[0]
);
if (chosenRep && chosenRep.playlist) {
let type = 'sortedPlaylistReps';
if (resolutionPlusOneRep) {
type = 'resolutionPlusOneRep';
} else if (resolutionBestRep) {
type = 'resolutionBestRep';
} else if (bandwidthBestRep) {
type = 'bandwidthBestRep';
} else if (enabledPlaylistReps[0]) {
type = 'enabledPlaylistReps';
}
logFn(`choosing ${representationToString(chosenRep)} using ${type} with options`, options);
return chosenRep.playlist;
}
logFn('could not choose a playlist with options', options);
return null;
};
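// Worked sketch of the selection above (hypothetical numbers, assuming the
// default Config.BANDWIDTH_VARIANCE of 1.2): with playerBandwidth = 5e6, a
// rendition is a bandwidth candidate only when bandwidth * 1.2 < 5e6, i.e.
// bandwidth < ~4.17e6. Among candidates, an exact player-size match wins,
// then the smallest just-larger-than-player resolution, then raw bandwidth;
// at each step the highest-bandwidth rep of the chosen size is used.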
export const TEST_ONLY_SIMPLE_SELECTOR = (newSimpleSelector) => {
const oldSimpleSelector = simpleSelector;
simpleSelector = newSimpleSelector;
return function resetSimpleSelector() {
simpleSelector = oldSimpleSelector;
};
};
// Playlist Selectors
/**
* Chooses the appropriate media playlist based on the most recent
* bandwidth estimate and the player size.
*
* Expects to be called within the context of an instance of VhsHandler
*
* @return {Playlist} the highest bitrate playlist less than the
* currently detected bandwidth, accounting for some amount of
* bandwidth variance
*/
export const lastBandwidthSelector = function() {
const pixelRatio = this.useDevicePixelRatio ? window.devicePixelRatio || 1 : 1;
return simpleSelector(
this.playlists.master,
this.systemBandwidth,
parseInt(safeGetComputedStyle(this.tech_.el(), 'width'), 10) * pixelRatio,
parseInt(safeGetComputedStyle(this.tech_.el(), 'height'), 10) * pixelRatio,
this.limitRenditionByPlayerDimensions,
this.masterPlaylistController_
);
};
/**
* Chooses the appropriate media playlist based on an
* exponential-weighted moving average of the bandwidth after
* filtering for player size.
*
* Expects to be called within the context of an instance of VhsHandler
*
* @param {number} decay - a number between 0 and 1. Higher values of
* this parameter will cause previous bandwidth estimates to lose
* significance more quickly.
* @return {Function} a function which can be invoked to create a new
* playlist selector function.
* @see https://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average
*/
export const movingAverageBandwidthSelector = function(decay) {
let average = -1;
let lastSystemBandwidth = -1;
if (decay < 0 || decay > 1) {
throw new Error('Moving average bandwidth decay must be between 0 and 1.');
}
return function() {
const pixelRatio = this.useDevicePixelRatio ? window.devicePixelRatio || 1 : 1;
if (average < 0) {
average = this.systemBandwidth;
lastSystemBandwidth = this.systemBandwidth;
}
// stop the average value from decaying every 250ms
// when the systemBandwidth is constant
// and
// stop the average from being set to a very low value when the
// systemBandwidth becomes 0 in case of chunk cancellation
if (this.systemBandwidth > 0 && this.systemBandwidth !== lastSystemBandwidth) {
average = decay * this.systemBandwidth + (1 - decay) * average;
lastSystemBandwidth = this.systemBandwidth;
}
return simpleSelector(
this.playlists.master,
average,
parseInt(safeGetComputedStyle(this.tech_.el(), 'width'), 10) * pixelRatio,
parseInt(safeGetComputedStyle(this.tech_.el(), 'height'), 10) * pixelRatio,
this.limitRenditionByPlayerDimensions,
this.masterPlaylistController_
);
};
};
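// Worked example of the moving average (hypothetical numbers): with
// decay = 0.5, a prior average of 4e6 and a new systemBandwidth of 2e6:
//
//   average = 0.5 * 2e6 + (1 - 0.5) * 4e6 = 3e6
//
// A repeated sample, or a 0 sample from a cancelled chunk, is skipped by the
// guard above and leaves the average untouched.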
/**
* Chooses the appropriate media playlist based on the potential to rebuffer
*
* @param {Object} settings
* Object of information required to use this selector
* @param {Object} settings.master
* Object representation of the master manifest
* @param {number} settings.currentTime
* The current time of the player
* @param {number} settings.bandwidth
* Current measured bandwidth
* @param {number} settings.duration
* Duration of the media
* @param {number} settings.segmentDuration
* Segment duration to be used in round trip time calculations
* @param {number} settings.timeUntilRebuffer
* Time left in seconds until the player has to rebuffer
* @param {number} settings.currentTimeline
* The current timeline segments are being loaded from
* @param {SyncController} settings.syncController
* SyncController for determining if we have a sync point for a given playlist
* @return {Object|null}
* {Object} return.playlist
* The highest bandwidth playlist with the least amount of rebuffering
* {Number} return.rebufferingImpact
* The amount of time in seconds switching to this playlist will rebuffer. A
* negative value means that switching will cause zero rebuffering.
*/
export const minRebufferMaxBandwidthSelector = function(settings) {
const {
master,
currentTime,
bandwidth,
duration,
segmentDuration,
timeUntilRebuffer,
currentTimeline,
syncController
} = settings;
// filter out any playlists that have been excluded due to
// incompatible configurations
const compatiblePlaylists = master.playlists.filter(playlist => !Playlist.isIncompatible(playlist));
// filter out any playlists that have been disabled manually through the representations
// api or blacklisted temporarily due to playback errors.
let enabledPlaylists = compatiblePlaylists.filter(Playlist.isEnabled);
if (!enabledPlaylists.length) {
// if there are no enabled playlists, then they have all been blacklisted or disabled
// by the user through the representations api. In this case, ignore blacklisting and
// fallback to what the user wants by using playlists the user has not disabled.
enabledPlaylists = compatiblePlaylists.filter(playlist => !Playlist.isDisabled(playlist));
}
const bandwidthPlaylists =
enabledPlaylists.filter(Playlist.hasAttribute.bind(null, 'BANDWIDTH'));
const rebufferingEstimates = bandwidthPlaylists.map((playlist) => {
const syncPoint = syncController.getSyncPoint(
playlist,
duration,
currentTimeline,
currentTime
);
// If there is no sync point for this playlist, switching to it will require a
// sync request first. This will double the request time
const numRequests = syncPoint ? 1 : 2;
const requestTimeEstimate = Playlist.estimateSegmentRequestTime(
segmentDuration,
bandwidth,
playlist
);
const rebufferingImpact = (requestTimeEstimate * numRequests) - timeUntilRebuffer;
return {
playlist,
rebufferingImpact
};
});
const noRebufferingPlaylists = rebufferingEstimates.filter((estimate) => estimate.rebufferingImpact <= 0);
// Sort by bandwidth DESC
stableSort(
noRebufferingPlaylists,
(a, b) => comparePlaylistBandwidth(b.playlist, a.playlist)
);
if (noRebufferingPlaylists.length) {
return noRebufferingPlaylists[0];
}
stableSort(rebufferingEstimates, (a, b) => a.rebufferingImpact - b.rebufferingImpact);
return rebufferingEstimates[0] || null;
};
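// Worked example (hypothetical numbers): with segmentDuration = 10s, a
// playlist BANDWIDTH of 2e6 bits/s and a measured bandwidth of 4e6 bits/s,
// estimateSegmentRequestTime is (10 * 2e6) / 4e6 = 5s. Without a sync point
// numRequests is 2, so rebufferingImpact = 5 * 2 - timeUntilRebuffer:
// +2s of rebuffering with 8s left, -2s (safe) with 12s left.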
/**
* Chooses the appropriate media playlist, which in this case is the lowest bitrate
* one with video. If no renditions with video exist, return the lowest audio rendition.
*
* Expects to be called within the context of an instance of VhsHandler
*
* @return {Object|null}
* {Object} return.playlist
* The lowest bitrate playlist that contains a video codec. If no such rendition
* exists pick the lowest audio rendition.
*/
export const lowestBitrateCompatibleVariantSelector = function() {
// filter out any playlists that have been excluded due to
// incompatible configurations or playback errors
const playlists = this.playlists.master.playlists.filter(Playlist.isEnabled);
// Sort ascending by bitrate
stableSort(
playlists,
(a, b) => comparePlaylistBandwidth(a, b)
);
// Parse and assume that playlists with no video codec have no video
// (this is not necessarily true, although it is generally true).
//
// If an entire manifest has no valid videos everything will get filtered
// out.
const playlistsWithVideo = playlists.filter(playlist => !!codecsForPlaylist(this.playlists.master, playlist).video);
return playlistsWithVideo[0] || null;
};

730
node_modules/@videojs/http-streaming/src/playlist.js generated vendored Normal file
View file

@ -0,0 +1,730 @@
/**
* @file playlist.js
*
* Playlist related utilities.
*/
import videojs from 'video.js';
import window from 'global/window';
import {isAudioCodec} from '@videojs/vhs-utils/es/codecs.js';
import {TIME_FUDGE_FACTOR} from './ranges.js';
const {createTimeRange} = videojs;
/**
* A function to get a combined list of parts and segments with durations
* and indexes.
*
* @param {Playlist} playlist the playlist to get the list for.
*
* @return {Array} The part/segment list.
*/
export const getPartsAndSegments = (playlist) => (playlist.segments || []).reduce((acc, segment, si) => {
if (segment.parts) {
segment.parts.forEach(function(part, pi) {
acc.push({duration: part.duration, segmentIndex: si, partIndex: pi, part, segment});
});
} else {
acc.push({duration: segment.duration, segmentIndex: si, partIndex: null, segment, part: null});
}
return acc;
}, []);
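// Shape sketch (hypothetical playlist): segments with parts flatten to one
// entry per part; segments without parts become a single entry.
//
//   getPartsAndSegments({segments: [
//     {duration: 6},
//     {duration: 6, parts: [{duration: 2}, {duration: 2}, {duration: 2}]}
//   ]});
//   // => [{duration: 6, segmentIndex: 0, partIndex: null, ...},
//   //     {duration: 2, segmentIndex: 1, partIndex: 0, ...},
//   //     {duration: 2, segmentIndex: 1, partIndex: 1, ...},
//   //     {duration: 2, segmentIndex: 1, partIndex: 2, ...}]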
export const getLastParts = (media) => {
const lastSegment = media.segments && media.segments.length && media.segments[media.segments.length - 1];
return lastSegment && lastSegment.parts || [];
};
export const getKnownPartCount = ({preloadSegment}) => {
if (!preloadSegment) {
return;
}
const {parts, preloadHints} = preloadSegment;
let partCount = (preloadHints || [])
.reduce((count, hint) => count + (hint.type === 'PART' ? 1 : 0), 0);
partCount += (parts && parts.length) ? parts.length : 0;
return partCount;
};
/**
* Get the number of seconds to delay from the end of a
* live playlist.
*
* @param {Playlist} master the master playlist
* @param {Playlist} media the media playlist
* @return {number} the hold back in seconds.
*/
export const liveEdgeDelay = (master, media) => {
if (media.endList) {
return 0;
}
// dash suggestedPresentationDelay trumps everything
if (master && master.suggestedPresentationDelay) {
return master.suggestedPresentationDelay;
}
const hasParts = getLastParts(media).length > 0;
// look for "part" delays from ll-hls first
if (hasParts && media.serverControl && media.serverControl.partHoldBack) {
return media.serverControl.partHoldBack;
} else if (hasParts && media.partTargetDuration) {
return media.partTargetDuration * 3;
// finally look for full segment delays
} else if (media.serverControl && media.serverControl.holdBack) {
return media.serverControl.holdBack;
} else if (media.targetDuration) {
return media.targetDuration * 3;
}
return 0;
};
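// Precedence sketch (hypothetical media playlist): DASH
// suggestedPresentationDelay, then LL-HLS partHoldBack, then
// 3 * partTargetDuration, then holdBack, then 3 * targetDuration.
//
//   liveEdgeDelay(null, {
//     segments: [{parts: [{duration: 1}]}],
//     partTargetDuration: 1,
//     targetDuration: 6
//   }); // => 3 (no serverControl, so 3 * partTargetDuration wins)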
/**
* walk backward until we find a duration we can use
* or return a failure
*
* @param {Playlist} playlist the playlist to walk through
* @param {number} endSequence the mediaSequence to stop walking on
*/
const backwardDuration = function(playlist, endSequence) {
let result = 0;
let i = endSequence - playlist.mediaSequence;
// if a start time is available for segment immediately following
// the interval, use it
let segment = playlist.segments[i];
// Walk backward until we find the latest segment with timeline
// information that is earlier than endSequence
if (segment) {
if (typeof segment.start !== 'undefined') {
return { result: segment.start, precise: true };
}
if (typeof segment.end !== 'undefined') {
return {
result: segment.end - segment.duration,
precise: true
};
}
}
while (i--) {
segment = playlist.segments[i];
if (typeof segment.end !== 'undefined') {
return { result: result + segment.end, precise: true };
}
result += segment.duration;
if (typeof segment.start !== 'undefined') {
return { result: result + segment.start, precise: true };
}
}
return { result, precise: false };
};
/**
* walk forward until we find a duration we can use
* or return a failure
*
* @param {Playlist} playlist the playlist to walk through
* @param {number} endSequence the mediaSequence to stop walking on
*/
const forwardDuration = function(playlist, endSequence) {
let result = 0;
let segment;
let i = endSequence - playlist.mediaSequence;
// Walk forward until we find the earliest segment with timeline
// information
for (; i < playlist.segments.length; i++) {
segment = playlist.segments[i];
if (typeof segment.start !== 'undefined') {
return {
result: segment.start - result,
precise: true
};
}
result += segment.duration;
if (typeof segment.end !== 'undefined') {
return {
result: segment.end - result,
precise: true
};
}
}
// indicate we didn't find a useful duration estimate
return { result: -1, precise: false };
};
/**
* Calculate the media duration from the segments associated with a
* playlist. The duration of a subinterval of the available segments
* may be calculated by specifying an end index.
*
* @param {Object} playlist a media playlist object
* @param {number=} endSequence an exclusive upper boundary
* for the playlist. Defaults to playlist length.
* @param {number} expired the amount of time that has dropped
* off the front of the playlist in a live scenario
* @return {number} the duration between the first available segment
* and end index.
*/
const intervalDuration = function(playlist, endSequence, expired) {
if (typeof endSequence === 'undefined') {
endSequence = playlist.mediaSequence + playlist.segments.length;
}
if (endSequence < playlist.mediaSequence) {
return 0;
}
// do a backward walk to estimate the duration
const backward = backwardDuration(playlist, endSequence);
if (backward.precise) {
// if we were able to base our duration estimate on timing
// information provided directly from the Media Source, return
// it
return backward.result;
}
// walk forward to see if a precise duration estimate can be made
// that way
const forward = forwardDuration(playlist, endSequence);
if (forward.precise) {
// we found a segment that has been buffered and so its
// position is known precisely
return forward.result;
}
// return the less-precise, playlist-based duration estimate
return backward.result + expired;
};
/**
* Calculates the duration of a playlist. If a start and end index
* are specified, the duration will be for the subset of the media
* timeline between those two indices. The total duration for live
* playlists is always Infinity.
*
* @param {Object} playlist a media playlist object
* @param {number=} endSequence an exclusive upper
* boundary for the playlist. Defaults to the playlist media
* sequence number plus its length.
* @param {number=} expired the amount of time that has
* dropped off the front of the playlist in a live scenario
* @return {number} the duration between the start index and end
* index.
*/
export const duration = function(playlist, endSequence, expired) {
if (!playlist) {
return 0;
}
if (typeof expired !== 'number') {
expired = 0;
}
// if a slice of the total duration is not requested, use
// playlist-level duration indicators when they're present
if (typeof endSequence === 'undefined') {
// if present, use the duration specified in the playlist
if (playlist.totalDuration) {
return playlist.totalDuration;
}
// duration should be Infinity for live playlists
if (!playlist.endList) {
return window.Infinity;
}
}
// calculate the total duration based on the segment durations
return intervalDuration(
playlist,
endSequence,
expired
);
};
/**
* Calculate the time between two indexes in the current playlist.
* Neither the start index nor the end index needs to be within the
* current playlist; in that case, the targetDuration of the playlist
* is used to approximate the durations of the segments.
*
* @param {Array} options.durationList list to iterate over for durations.
* @param {number} options.defaultDuration duration to use for elements before or after the durationList
* @param {number} options.startIndex partsAndSegments index to start
* @param {number} options.endIndex partsAndSegments index to end.
* @return {number} the number of seconds between startIndex and endIndex
*/
export const sumDurations = function({defaultDuration, durationList, startIndex, endIndex}) {
let durations = 0;
if (startIndex > endIndex) {
[startIndex, endIndex] = [endIndex, startIndex];
}
if (startIndex < 0) {
for (let i = startIndex; i < Math.min(0, endIndex); i++) {
durations += defaultDuration;
}
startIndex = 0;
}
for (let i = startIndex; i < endIndex; i++) {
durations += durationList[i].duration;
}
return durations;
};
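// Worked example (hypothetical inputs): indexes below zero fall back to
// defaultDuration; in-range indexes use the real durations.
//
//   sumDurations({
//     defaultDuration: 6,
//     durationList: [{duration: 4}, {duration: 5}],
//     startIndex: -1,
//     endIndex: 2
//   }); // => 6 + 4 + 5 = 15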
/**
* Calculates the playlist end time
*
* @param {Object} playlist a media playlist object
* @param {number=} expired the amount of time that has
* dropped off the front of the playlist in a live scenario
* @param {boolean=} useSafeLiveEnd a boolean value indicating whether or not the
* playlist end calculation should consider the safe live end
* (truncate the playlist end by three segments). This is normally
* used for calculating the end of the playlist's seekable range.
* This takes into account the value of liveEdgePadding.
* Setting liveEdgePadding to 0 is equivalent to setting this to false.
* @param {number} liveEdgePadding a number indicating how far from the end of the playlist we should be in seconds.
* If this is provided, it is used in the safe live end calculation.
* Setting useSafeLiveEnd=false or liveEdgePadding=0 are equivalent.
* Corresponds to suggestedPresentationDelay in DASH manifests.
* @return {number} the end time of playlist
* @function playlistEnd
*/
export const playlistEnd = function(playlist, expired, useSafeLiveEnd, liveEdgePadding) {
if (!playlist || !playlist.segments) {
return null;
}
if (playlist.endList) {
return duration(playlist);
}
if (expired === null) {
return null;
}
expired = expired || 0;
let lastSegmentTime = intervalDuration(
playlist,
playlist.mediaSequence + playlist.segments.length,
expired
);
if (useSafeLiveEnd) {
liveEdgePadding = typeof liveEdgePadding === 'number' ? liveEdgePadding : liveEdgeDelay(null, playlist);
lastSegmentTime -= liveEdgePadding;
}
// don't return a time less than zero
return Math.max(0, lastSegmentTime);
};
/**
* Calculates the interval of time that is currently seekable in a
* playlist. The returned time ranges are relative to the earliest
* moment in the specified playlist that is still available. A full
* seekable implementation for live streams would need to offset
* these values by the duration of content that has expired from the
* stream.
*
* @param {Object} playlist a media playlist object
* @param {number=} expired the amount of time that has
* dropped off the front of the playlist in a live scenario
* @param {number} liveEdgePadding how far from the end of the playlist we should be in seconds.
* Corresponds to suggestedPresentationDelay in DASH manifests.
* @return {TimeRanges} the periods of time that are valid targets
* for seeking
*/
export const seekable = function(playlist, expired, liveEdgePadding) {
const useSafeLiveEnd = true;
const seekableStart = expired || 0;
const seekableEnd = playlistEnd(playlist, expired, useSafeLiveEnd, liveEdgePadding);
if (seekableEnd === null) {
return createTimeRange();
}
return createTimeRange(seekableStart, seekableEnd);
};
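// Usage sketch (hypothetical values): for a live playlist whose computed end
// is 60s, with 10s expired and liveEdgePadding = 6:
//
//   seekable(playlist, 10, 6); // => TimeRanges of roughly [10, 54]
//
// The exact end comes from playlistEnd above, so it also honors holdBack and
// targetDuration when no explicit padding is given.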
/**
* Determine the index and estimated starting time of the segment that
* contains a specified playback position in a media playlist.
*
* @param {Object} options.playlist the media playlist to query
* @param {number} options.currentTime The number of seconds since the earliest
* possible position to determine the containing segment for
* @param {number} options.startTime the time when the segment/part starts
* @param {number} options.startingSegmentIndex the segment index to start looking at.
* @param {number?} [options.startingPartIndex] the part index to look at within the segment.
*
* @return {Object} an object with partIndex, segmentIndex, and startTime.
*/
export const getMediaInfoForTime = function({
playlist,
currentTime,
startingSegmentIndex,
startingPartIndex,
startTime
}) {
let time = currentTime - startTime;
const partsAndSegments = getPartsAndSegments(playlist);
let startIndex = 0;
for (let i = 0; i < partsAndSegments.length; i++) {
const partAndSegment = partsAndSegments[i];
if (startingSegmentIndex !== partAndSegment.segmentIndex) {
continue;
}
// skip this if part index does not match.
if (typeof startingPartIndex === 'number' && typeof partAndSegment.partIndex === 'number' && startingPartIndex !== partAndSegment.partIndex) {
continue;
}
startIndex = i;
break;
}
if (time < 0) {
// Walk backward from startIndex in the playlist, adding durations
// until we find a segment that contains `time` and return it
if (startIndex > 0) {
for (let i = startIndex - 1; i >= 0; i--) {
const partAndSegment = partsAndSegments[i];
time += partAndSegment.duration;
// TODO: consider not using TIME_FUDGE_FACTOR at all here
if ((time + TIME_FUDGE_FACTOR) > 0) {
return {
partIndex: partAndSegment.partIndex,
segmentIndex: partAndSegment.segmentIndex,
startTime: startTime - sumDurations({
defaultDuration: playlist.targetDuration,
durationList: partsAndSegments,
startIndex,
endIndex: i
})
};
}
}
}
// We were unable to find a good segment within the playlist
// so select the first segment
return {
partIndex: partsAndSegments[0] && partsAndSegments[0].partIndex || null,
segmentIndex: partsAndSegments[0] && partsAndSegments[0].segmentIndex || 0,
startTime: currentTime
};
}
// When startIndex is negative, we first walk forward to the first segment
// adding target durations. If we "run out of time" before getting to
// the first segment, return the first segment
if (startIndex < 0) {
for (let i = startIndex; i < 0; i++) {
time -= playlist.targetDuration;
if (time < 0) {
return {
partIndex: partsAndSegments[0] && partsAndSegments[0].partIndex || null,
segmentIndex: partsAndSegments[0] && partsAndSegments[0].segmentIndex || 0,
startTime: currentTime
};
}
}
startIndex = 0;
}
// Walk forward from startIndex in the playlist, subtracting durations
// until we find a segment that contains `time` and return it
for (let i = startIndex; i < partsAndSegments.length; i++) {
const partAndSegment = partsAndSegments[i];
time -= partAndSegment.duration;
// TODO: consider not using TIME_FUDGE_FACTOR at all here
if ((time - TIME_FUDGE_FACTOR) < 0) {
return {
partIndex: partAndSegment.partIndex,
segmentIndex: partAndSegment.segmentIndex,
startTime: startTime + sumDurations({
defaultDuration: playlist.targetDuration,
durationList: partsAndSegments,
startIndex,
endIndex: i
})
};
}
}
// We are out of possible candidates so load the last one...
return {
segmentIndex: partsAndSegments[partsAndSegments.length - 1].segmentIndex,
partIndex: partsAndSegments[partsAndSegments.length - 1].partIndex,
startTime: currentTime
};
};
/**
* Check whether the playlist is blacklisted or not.
*
* @param {Object} playlist the media playlist object
* @return {boolean} whether the playlist is blacklisted or not
* @function isBlacklisted
*/
export const isBlacklisted = function(playlist) {
return playlist.excludeUntil && playlist.excludeUntil > Date.now();
};
/**
* Check whether the playlist is compatible with current playback configuration or has
* been blacklisted permanently for being incompatible.
*
* @param {Object} playlist the media playlist object
* @return {boolean} whether the playlist is incompatible or not
* @function isIncompatible
*/
export const isIncompatible = function(playlist) {
return playlist.excludeUntil && playlist.excludeUntil === Infinity;
};
/**
* Check whether the playlist is enabled or not.
*
* @param {Object} playlist the media playlist object
* @return {boolean} whether the playlist is enabled or not
* @function isEnabled
*/
export const isEnabled = function(playlist) {
const blacklisted = isBlacklisted(playlist);
return (!playlist.disabled && !blacklisted);
};
/**
* Check whether the playlist has been manually disabled through the representations api.
*
* @param {Object} playlist the media playlist object
* @return {boolean} whether the playlist is disabled manually or not
* @function isDisabled
*/
export const isDisabled = function(playlist) {
return playlist.disabled;
};
/**
* Returns whether the current playlist is an AES encrypted HLS stream
*
* @param {Object} media the media playlist object
* @return {boolean} true if it's an AES encrypted HLS stream
*/
export const isAes = function(media) {
for (let i = 0; i < media.segments.length; i++) {
if (media.segments[i].key) {
return true;
}
}
return false;
};
/**
* Checks if the playlist has a value for the specified attribute
*
* @param {string} attr
* Attribute to check for
* @param {Object} playlist
* The media playlist object
* @return {boolean}
* Whether the playlist contains a value for the attribute or not
* @function hasAttribute
*/
export const hasAttribute = function(attr, playlist) {
return playlist.attributes && playlist.attributes[attr];
};
/**
* Estimates the time required to complete a segment download from the specified playlist
*
* @param {number} segmentDuration
* Duration of requested segment
* @param {number} bandwidth
* Current measured bandwidth of the player
* @param {Object} playlist
* The media playlist object
* @param {number=} bytesReceived
* Number of bytes already received for the request. Defaults to 0
* @return {number|NaN}
* The estimated time to request the segment. NaN if bandwidth information for
* the given playlist is unavailable
* @function estimateSegmentRequestTime
*/
export const estimateSegmentRequestTime = function(
segmentDuration,
bandwidth,
playlist,
bytesReceived = 0
) {
if (!hasAttribute('BANDWIDTH', playlist)) {
return NaN;
}
const size = segmentDuration * playlist.attributes.BANDWIDTH;
return (size - (bytesReceived * 8)) / bandwidth;
};
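// Worked example (hypothetical numbers): a 4s segment from a playlist with
// BANDWIDTH = 8e6 is roughly 4 * 8e6 = 32e6 bits. At a measured bandwidth of
// 16e6 bits/s with 1e6 bytes (8e6 bits) already received:
//
//   (32e6 - 8e6) / 16e6 = 1.5 seconds remaining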
/**
* Returns whether the current playlist is the lowest enabled rendition
*
* @param {Object} master the master playlist object
* @param {Object} media the current media playlist object
* @return {boolean} true if on the lowest enabled rendition
*/
export const isLowestEnabledRendition = (master, media) => {
if (master.playlists.length === 1) {
return true;
}
const currentBandwidth = media.attributes.BANDWIDTH || Number.MAX_VALUE;
return (master.playlists.filter((playlist) => {
if (!isEnabled(playlist)) {
return false;
}
return (playlist.attributes.BANDWIDTH || 0) < currentBandwidth;
}).length === 0);
};
export const playlistMatch = (a, b) => {
// both playlists are null
// or only one playlist is non-null
// no match
if (!a && !b || (!a && b) || (a && !b)) {
return false;
}
// playlist objects are the same, match
if (a === b) {
return true;
}
// first try to use id as it should be the most
// accurate
if (a.id && b.id && a.id === b.id) {
return true;
}
// next try to use resolvedUri as it should be the
// second most accurate.
if (a.resolvedUri && b.resolvedUri && a.resolvedUri === b.resolvedUri) {
return true;
}
// finally try to use uri as it should be accurate
// but might miss a few cases for relative uris
if (a.uri && b.uri && a.uri === b.uri) {
return true;
}
return false;
};
const someAudioVariant = function(master, callback) {
const AUDIO = master && master.mediaGroups && master.mediaGroups.AUDIO || {};
let found = false;
for (const groupName in AUDIO) {
for (const label in AUDIO[groupName]) {
found = callback(AUDIO[groupName][label]);
if (found) {
break;
}
}
if (found) {
break;
}
}
return !!found;
};
export const isAudioOnly = (master) => {
// we are audio only if we have no main playlists but do
// have media group playlists.
if (!master || !master.playlists || !master.playlists.length) {
// without audio variants or playlists this
// is not an audio only master.
const found = someAudioVariant(master, (variant) =>
(variant.playlists && variant.playlists.length) || variant.uri);
return found;
}
// if every playlist has only an audio codec it is audio only
for (let i = 0; i < master.playlists.length; i++) {
const playlist = master.playlists[i];
const CODECS = playlist.attributes && playlist.attributes.CODECS;
// all codecs are audio, this is an audio playlist.
if (CODECS && CODECS.split(',').every((c) => isAudioCodec(c))) {
continue;
}
// if the playlist is in an audio group, it is audio only
const found = someAudioVariant(master, (variant) => playlistMatch(playlist, variant));
if (found) {
continue;
}
// if we make it here this playlist isn't audio and we
// are not audio only
return false;
}
// if we make it past every playlist without returning, then
// this is an audio only playlist.
return true;
};
// exports
export default {
liveEdgeDelay,
duration,
seekable,
getMediaInfoForTime,
isEnabled,
isDisabled,
isBlacklisted,
isIncompatible,
playlistEnd,
isAes,
hasAttribute,
estimateSegmentRequestTime,
isLowestEnabledRendition,
isAudioOnly,
playlistMatch
};

447
node_modules/@videojs/http-streaming/src/ranges.js generated vendored Normal file
View file

@ -0,0 +1,447 @@
/**
* ranges
*
* Utilities for working with TimeRanges.
*
*/
import videojs from 'video.js';
// Fudge factor to account for TimeRanges rounding
export const TIME_FUDGE_FACTOR = 1 / 30;
// Comparisons between time values such as current time and the end of the buffered range
// can be misleading because of precision differences or when the current media has poorly
// aligned audio and video, which can cause values to be slightly off from what you would
// expect. This value is what we consider to be safe to use in such comparisons to account
// for these scenarios.
export const SAFE_TIME_DELTA = TIME_FUDGE_FACTOR * 3;
/**
* Clamps a value to within a range
*
* @param {number} num - the value to clamp
* @param {number} start - the start of the range to clamp within, inclusive
* @param {number} end - the end of the range to clamp within, inclusive
* @return {number}
*/
const clamp = function(num, [start, end]) {
return Math.min(Math.max(start, num), end);
};
const filterRanges = function(timeRanges, predicate) {
const results = [];
let i;
if (timeRanges && timeRanges.length) {
// Search for ranges that match the predicate
for (i = 0; i < timeRanges.length; i++) {
if (predicate(timeRanges.start(i), timeRanges.end(i))) {
results.push([timeRanges.start(i), timeRanges.end(i)]);
}
}
}
return videojs.createTimeRanges(results);
};
/**
* Attempts to find the buffered TimeRange that contains the specified
* time.
*
* @param {TimeRanges} buffered - the TimeRanges object to query
* @param {number} time - the time to filter on.
* @return {TimeRanges} a new TimeRanges object
*/
export const findRange = function(buffered, time) {
return filterRanges(buffered, function(start, end) {
return start - SAFE_TIME_DELTA <= time &&
end + SAFE_TIME_DELTA >= time;
});
};
/**
* Returns the TimeRanges that begin later than the specified time.
*
* @param {TimeRanges} timeRanges - the TimeRanges object to query
* @param {number} time - the time to filter on.
* @return {TimeRanges} a new TimeRanges object.
*/
export const findNextRange = function(timeRanges, time) {
return filterRanges(timeRanges, function(start) {
return start - TIME_FUDGE_FACTOR >= time;
});
};
/**
* Returns gaps within a list of TimeRanges
*
* @param {TimeRanges} buffered - the TimeRanges object
* @return {TimeRanges} a TimeRanges object of gaps
*/
export const findGaps = function(buffered) {
if (buffered.length < 2) {
return videojs.createTimeRanges();
}
const ranges = [];
for (let i = 1; i < buffered.length; i++) {
const start = buffered.end(i - 1);
const end = buffered.start(i);
ranges.push([start, end]);
}
return videojs.createTimeRanges(ranges);
};
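// Shape sketch: for buffered ranges [0, 10] and [12, 20] the single gap is
// [10, 12].
//
//   findGaps(videojs.createTimeRanges([[0, 10], [12, 20]]));
//   // => TimeRanges [[10, 12]]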
/**
* Search for a likely end time for the segment that was just appended
* based on the state of the `buffered` property before and after the
* append. If we find only one such uncommon end-point, return it.
*
* @param {TimeRanges} original - the buffered time ranges before the update
* @param {TimeRanges} update - the buffered time ranges after the update
* @return {number|null} the end time added between `original` and `update`,
* or null if one cannot be unambiguously determined.
*/
export const findSoleUncommonTimeRangesEnd = function(original, update) {
let i;
let start;
let end;
const result = [];
const edges = [];
// In order to qualify as a possible candidate, the end point must:
// 1) Not have already existed in the `original` ranges
// 2) Not result from the shrinking of a range that already existed
// in the `original` ranges
// 3) Not be contained inside of a range that existed in `original`
const overlapsCurrentEnd = function(span) {
return (span[0] <= end && span[1] >= end);
};
if (original) {
// Save all the edges in the `original` TimeRanges object
for (i = 0; i < original.length; i++) {
start = original.start(i);
end = original.end(i);
edges.push([start, end]);
}
}
if (update) {
// Save any end-points in `update` that are not in the `original`
// TimeRanges object
for (i = 0; i < update.length; i++) {
start = update.start(i);
end = update.end(i);
if (edges.some(overlapsCurrentEnd)) {
continue;
}
// at this point it must be a unique non-shrinking end edge
result.push(end);
}
}
// we err on the side of caution and return null if we didn't find
// exactly *one* differing end edge in the search above
if (result.length !== 1) {
return null;
}
return result[0];
};
/**
* Calculate the intersection of two TimeRanges
*
* @param {TimeRanges} bufferA
* @param {TimeRanges} bufferB
* @return {TimeRanges} The intersection of `bufferA` with `bufferB`
*/
export const bufferIntersection = function(bufferA, bufferB) {
let start = null;
let end = null;
let arity = 0;
const extents = [];
const ranges = [];
if (!bufferA || !bufferA.length || !bufferB || !bufferB.length) {
return videojs.createTimeRange();
}
// Handle the case where we have both buffers and create an
// intersection of the two
let count = bufferA.length;
// A) Gather up all start and end times
while (count--) {
extents.push({time: bufferA.start(count), type: 'start'});
extents.push({time: bufferA.end(count), type: 'end'});
}
count = bufferB.length;
while (count--) {
extents.push({time: bufferB.start(count), type: 'start'});
extents.push({time: bufferB.end(count), type: 'end'});
}
// B) Sort them by time
extents.sort(function(a, b) {
return a.time - b.time;
});
// C) Go along one by one incrementing arity for start and decrementing
// arity for ends
for (count = 0; count < extents.length; count++) {
if (extents[count].type === 'start') {
arity++;
// D) If arity is ever incremented to 2 we are entering an
// overlapping range
if (arity === 2) {
start = extents[count].time;
}
} else if (extents[count].type === 'end') {
arity--;
// E) If arity is ever decremented to 1 we are leaving an
// overlapping range
if (arity === 1) {
end = extents[count].time;
}
}
// F) Record overlapping ranges
if (start !== null && end !== null) {
ranges.push([start, end]);
start = null;
end = null;
}
}
return videojs.createTimeRanges(ranges);
};
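// Worked example: overlapping spans survive, disjoint spans drop out.
//
//   bufferIntersection(
//     videojs.createTimeRanges([[0, 5], [9, 12]]),
//     videojs.createTimeRanges([[3, 10]])
//   ); // => TimeRanges [[3, 5], [9, 10]]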
/**
* Calculates the percentage of `segmentRange` that overlaps the
* `buffered` time ranges.
*
* @param {TimeRanges} adjustedRange - the time range that the segment
* covers adjusted according to currentTime
* @param {TimeRanges} referenceRange - the original time range that the
* segment covers
* @param {number} currentTime - time in seconds where the current playback
* is at
* @param {TimeRanges} buffered - the currently buffered time ranges
* @return {number} percent of the segment currently buffered
*/
const calculateBufferedPercent = function(
adjustedRange,
referenceRange,
currentTime,
buffered
) {
const referenceDuration = referenceRange.end(0) - referenceRange.start(0);
const adjustedDuration = adjustedRange.end(0) - adjustedRange.start(0);
const bufferMissingFromAdjusted = referenceDuration - adjustedDuration;
const adjustedIntersection = bufferIntersection(adjustedRange, buffered);
const referenceIntersection = bufferIntersection(referenceRange, buffered);
let adjustedOverlap = 0;
let referenceOverlap = 0;
let count = adjustedIntersection.length;
while (count--) {
adjustedOverlap += adjustedIntersection.end(count) -
adjustedIntersection.start(count);
// If the current overlap segment starts at currentTime, then increase the
// overlap duration so that it actually starts at the beginning of referenceRange
// by including the difference between the two Range's durations
// This is a work around for the way Flash has no buffer before currentTime
// TODO: see if this is still necessary since Flash isn't included
if (adjustedIntersection.start(count) === currentTime) {
adjustedOverlap += bufferMissingFromAdjusted;
}
}
count = referenceIntersection.length;
while (count--) {
referenceOverlap += referenceIntersection.end(count) -
referenceIntersection.start(count);
}
// Use whichever overlap value is larger for the percentage-buffered since
// that value is likely more accurate
return Math.max(adjustedOverlap, referenceOverlap) / referenceDuration * 100;
};
/**
* Return the amount of a range specified by the startOfSegment and segmentDuration
* overlaps the current buffered content.
*
* @param {number} startOfSegment - the time where the segment begins
* @param {number} segmentDuration - the duration of the segment in seconds
* @param {number} currentTime - time in seconds where the current playback
* is at
* @param {TimeRanges} buffered - the state of the buffer
* @return {number} percentage of the segment's time range that is
* already in `buffered`
*/
export const getSegmentBufferedPercent = function(
startOfSegment,
segmentDuration,
currentTime,
buffered
) {
const endOfSegment = startOfSegment + segmentDuration;
// The entire time range of the segment
const originalSegmentRange = videojs.createTimeRanges([[
startOfSegment,
endOfSegment
]]);
// The adjusted segment time range that is setup such that it starts
// no earlier than currentTime
// Flash has no notion of a back-buffer so adjustedSegmentRange adjusts
// for that and the function will still return 100% if only half of a
// segment is actually in the buffer as long as the currentTime is also
// half-way through the segment
const adjustedSegmentRange = videojs.createTimeRanges([[
clamp(startOfSegment, [currentTime, endOfSegment]),
endOfSegment
]]);
// This condition happens when the currentTime is beyond the segment's
// end time
if (adjustedSegmentRange.start(0) === adjustedSegmentRange.end(0)) {
return 0;
}
const percent = calculateBufferedPercent(
adjustedSegmentRange,
originalSegmentRange,
currentTime,
buffered
);
// If the segment is reported as having a zero duration, return 0%
// since it is likely that we will need to fetch the segment
if (isNaN(percent) || percent === Infinity || percent === -Infinity) {
return 0;
}
return percent;
};
/**
* Gets a human readable string for a TimeRange
*
* @param {TimeRange} range
* @return {string} a human readable string
*/
export const printableRange = (range) => {
const strArr = [];
if (!range || !range.length) {
return '';
}
for (let i = 0; i < range.length; i++) {
strArr.push(range.start(i) + ' => ' + range.end(i));
}
return strArr.join(', ');
};
/**
* Calculates the amount of time left in seconds until the player hits the end of the
* buffer and causes a rebuffer
*
* @param {TimeRange} buffered
* The state of the buffer
* @param {number} currentTime
* The current time of the player
* @param {number} playbackRate
* The current playback rate of the player. Defaults to 1.
* @return {number}
* Time until the player has to start rebuffering in seconds.
* @function timeUntilRebuffer
*/
export const timeUntilRebuffer = function(buffered, currentTime, playbackRate = 1) {
const bufferedEnd = buffered.length ? buffered.end(buffered.length - 1) : 0;
return (bufferedEnd - currentTime) / playbackRate;
};
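// Worked example: buffered through 30s with the playhead at 25s, playing at
// 2x speed:
//
//   timeUntilRebuffer(videojs.createTimeRanges([[0, 30]]), 25, 2);
//   // => (30 - 25) / 2 = 2.5 seconds until the buffer runs dry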
/**
* Converts a TimeRanges object into an array representation
*
* @param {TimeRanges} timeRanges
* @return {Array}
*/
export const timeRangesToArray = (timeRanges) => {
const timeRangesList = [];
for (let i = 0; i < timeRanges.length; i++) {
timeRangesList.push({
start: timeRanges.start(i),
end: timeRanges.end(i)
});
}
return timeRangesList;
};
/**
* Determines if two time range objects are different.
*
* @param {TimeRange} a
* the first time range object to check
*
* @param {TimeRange} b
* the second time range object to check
*
* @return {Boolean}
* Whether the time range objects differ
*/
export const isRangeDifferent = function(a, b) {
// same object
if (a === b) {
return false;
}
// one or the other is undefined
if (!a && b || (!b && a)) {
return true;
}
// length is different
if (a.length !== b.length) {
return true;
}
// see if any start/end pair is different
for (let i = 0; i < a.length; i++) {
if (a.start(i) !== b.start(i) || a.end(i) !== b.end(i)) {
return true;
}
}
// if the length and every pair is the same
// this is the same time range
return false;
};
export const lastBufferedEnd = function(a) {
if (!a || !a.length || !a.end) {
return;
}
return a.end(a.length - 1);
};

View file

@ -0,0 +1,127 @@
import videojs from 'video.js';
const defaultOptions = {
errorInterval: 30,
getSource(next) {
const tech = this.tech({ IWillNotUseThisInPlugins: true });
const sourceObj = tech.currentSource_ || this.currentSource();
return next(sourceObj);
}
};
/**
* Main entry point for the plugin
*
* @param {Player} player a reference to a videojs Player instance
* @param {Object} [options] an object with plugin options
* @private
*/
const initPlugin = function(player, options) {
let lastCalled = 0;
let seekTo = 0;
const localOptions = videojs.mergeOptions(defaultOptions, options);
player.ready(() => {
player.trigger({type: 'usage', name: 'vhs-error-reload-initialized'});
player.trigger({type: 'usage', name: 'hls-error-reload-initialized'});
});
/**
* Player modifications to perform that must wait until `loadedmetadata`
* has been triggered
*
* @private
*/
const loadedMetadataHandler = function() {
if (seekTo) {
player.currentTime(seekTo);
}
};
/**
* Set the source on the player element, play, and seek if necessary
*
* @param {Object} sourceObj An object specifying the source url and mime-type to play
* @private
*/
const setSource = function(sourceObj) {
if (sourceObj === null || sourceObj === undefined) {
return;
}
seekTo = (player.duration() !== Infinity && player.currentTime()) || 0;
player.one('loadedmetadata', loadedMetadataHandler);
player.src(sourceObj);
player.trigger({type: 'usage', name: 'vhs-error-reload'});
player.trigger({type: 'usage', name: 'hls-error-reload'});
player.play();
};
/**
* Attempt to get a source from either the built-in getSource function
* or a custom function provided via the options
*
* @private
*/
const errorHandler = function() {
// Do not attempt to reload the source if fewer than 'errorInterval'
// seconds have elapsed since the last source-reload
if (Date.now() - lastCalled < localOptions.errorInterval * 1000) {
player.trigger({type: 'usage', name: 'vhs-error-reload-canceled'});
player.trigger({type: 'usage', name: 'hls-error-reload-canceled'});
return;
}
if (!localOptions.getSource ||
typeof localOptions.getSource !== 'function') {
videojs.log.error('ERROR: reloadSourceOnError - The option getSource must be a function!');
return;
}
lastCalled = Date.now();
return localOptions.getSource.call(player, setSource);
};
/**
* Unbind any event handlers that were bound by the plugin
*
* @private
*/
const cleanupEvents = function() {
player.off('loadedmetadata', loadedMetadataHandler);
player.off('error', errorHandler);
player.off('dispose', cleanupEvents);
};
/**
* Cleanup before re-initializing the plugin
*
* @param {Object} [newOptions] an object with plugin options
* @private
*/
const reinitPlugin = function(newOptions) {
cleanupEvents();
initPlugin(player, newOptions);
};
player.on('error', errorHandler);
player.on('dispose', cleanupEvents);
// Overwrite the plugin function so that we can correctly cleanup before
// initializing the plugin
player.reloadSourceOnError = reinitPlugin;
};
/**
* Reload the source when an error is detected as long as there
* wasn't an error previously within the last 30 seconds
*
* @param {Object} [options] an object with plugin options
*/
const reloadSourceOnError = function(options) {
initPlugin(this, options);
};
export default reloadSourceOnError;
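// Usage sketch (the plugin is registered by this package at setup time;
// `player` is a hypothetical video.js Player instance and the source URL is
// made up):
//
//   player.reloadSourceOnError({
//     // retry at most once every 10 seconds instead of the default 30
//     errorInterval: 10,
//     // optional custom source getter
//     getSource(next) {
//       next({
//         src: 'https://example.com/master.m3u8',
//         type: 'application/x-mpegURL'
//       });
//     }
//   });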

View file

@ -0,0 +1,113 @@
import { isIncompatible, isEnabled, isAudioOnly } from './playlist.js';
import { codecsForPlaylist } from './util/codecs.js';
/**
* Returns a function that acts as the Enable/disable playlist function.
*
* @param {PlaylistLoader} loader - The master playlist loader
* @param {string} playlistID - id of the playlist
* @param {Function} changePlaylistFn - A function to be called after a
* playlist's enabled-state has been changed. Will NOT be called if a
* playlist's enabled-state is unchanged
* @param {boolean=} enable - Value to set the playlist enabled-state to
* or if undefined returns the current enabled-state for the playlist
* @return {Function} Function for setting/getting enabled
*/
const enableFunction = (loader, playlistID, changePlaylistFn) => (enable) => {
const playlist = loader.master.playlists[playlistID];
const incompatible = isIncompatible(playlist);
const currentlyEnabled = isEnabled(playlist);
if (typeof enable === 'undefined') {
return currentlyEnabled;
}
if (enable) {
delete playlist.disabled;
} else {
playlist.disabled = true;
}
if (enable !== currentlyEnabled && !incompatible) {
// Ensure the outside world knows about our changes
changePlaylistFn();
if (enable) {
loader.trigger('renditionenabled');
} else {
loader.trigger('renditiondisabled');
}
}
return enable;
};
/**
* The representation object encapsulates the publicly visible information
* in a media playlist along with a setter/getter-type function (enabled)
* for changing the enabled-state of a particular playlist entry
*
* @class Representation
*/
class Representation {
constructor(vhsHandler, playlist, id) {
const {
masterPlaylistController_: mpc,
options_: { smoothQualityChange }
} = vhsHandler;
// Get a reference to a bound version of the quality change function
const changeType = smoothQualityChange ? 'smooth' : 'fast';
const qualityChangeFunction = mpc[`${changeType}QualityChange_`].bind(mpc);
// some playlist attributes are optional
if (playlist.attributes) {
const resolution = playlist.attributes.RESOLUTION;
this.width = resolution && resolution.width;
this.height = resolution && resolution.height;
this.bandwidth = playlist.attributes.BANDWIDTH;
}
this.codecs = codecsForPlaylist(mpc.master(), playlist);
this.playlist = playlist;
// The id is simply the ordinality of the media playlist
// within the master playlist
this.id = id;
// Partially-apply the enableFunction to create a playlist-
// specific variant
this.enabled = enableFunction(
vhsHandler.playlists,
playlist.id,
qualityChangeFunction
);
}
}
/**
* A mixin function that adds the `representations` api to an instance
* of the VhsHandler class
*
* @param {VhsHandler} vhsHandler - An instance of VhsHandler to add the
* representation API into
*/
const renditionSelectionMixin = function(vhsHandler) {
// Add a single API-specific function to the VhsHandler instance
vhsHandler.representations = () => {
const master = vhsHandler.masterPlaylistController_.master();
const playlists = isAudioOnly(master) ?
vhsHandler.masterPlaylistController_.getAudioTrackPlaylists_() :
master.playlists;
if (!playlists) {
return [];
}
return playlists
.filter((media) => !isIncompatible(media))
.map((e) => new Representation(vhsHandler, e, e.id));
};
};
export default renditionSelectionMixin;
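// Usage sketch (`player` is a hypothetical video.js Player instance whose
// source is handled by VHS; the 720p cutoff is arbitrary):
//
//   const representations =
//     player.tech({IWillNotUseThisInPlugins: true}).vhs.representations();
//   representations.forEach((rep) => {
//     // enabled() with no argument reads the current state;
//     // passing a boolean sets it
//     rep.enabled(rep.height === undefined || rep.height <= 720);
//   });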

View file

@ -0,0 +1,36 @@
/**
* @file resolve-url.js - Handling how URLs are resolved and manipulated
*/
import _resolveUrl from '@videojs/vhs-utils/es/resolve-url.js';
export const resolveUrl = _resolveUrl;
/**
* Checks whether xhr request was redirected and returns correct url depending
* on `handleManifestRedirects` option
*
* @api private
*
* @param {boolean} handleManifestRedirect - whether to return the redirected URL
* @param {string} url - the URL being requested
* @param {XMLHttpRequest} req - the xhr request result
*
* @return {string}
*/
export const resolveManifestRedirect = (handleManifestRedirect, url, req) => {
// To understand how the responseURL below is set and generated:
// - https://fetch.spec.whatwg.org/#concept-response-url
// - https://fetch.spec.whatwg.org/#atomic-http-redirect-handling
if (
handleManifestRedirect &&
req &&
req.responseURL &&
url !== req.responseURL
) {
return req.responseURL;
}
return url;
};
export default resolveUrl;

File diff suppressed because it is too large

View file

@ -0,0 +1,271 @@
import TransmuxWorker from 'worker!./transmuxer-worker.js';
export const handleData_ = (event, transmuxedData, callback) => {
const {
type,
initSegment,
captions,
captionStreams,
metadata,
videoFrameDtsTime,
videoFramePtsTime
} = event.data.segment;
transmuxedData.buffer.push({
captions,
captionStreams,
metadata
});
const boxes = event.data.segment.boxes || {
data: event.data.segment.data
};
const result = {
type,
// cast ArrayBuffer to TypedArray
data: new Uint8Array(
boxes.data,
boxes.data.byteOffset,
boxes.data.byteLength
),
initSegment: new Uint8Array(
initSegment.data,
initSegment.byteOffset,
initSegment.byteLength
)
};
if (typeof videoFrameDtsTime !== 'undefined') {
result.videoFrameDtsTime = videoFrameDtsTime;
}
if (typeof videoFramePtsTime !== 'undefined') {
result.videoFramePtsTime = videoFramePtsTime;
}
callback(result);
};
export const handleDone_ = ({
transmuxedData,
callback
}) => {
// Previously we only returned data on data events,
// not on done events. Clear out the buffer to keep that consistent.
transmuxedData.buffer = [];
// all buffers should have been flushed from the muxer, so start processing anything we
// have received
callback(transmuxedData);
};
export const handleGopInfo_ = (event, transmuxedData) => {
transmuxedData.gopInfo = event.data.gopInfo;
};
export const processTransmux = (options) => {
const {
transmuxer,
bytes,
audioAppendStart,
gopsToAlignWith,
remux,
onData,
onTrackInfo,
onAudioTimingInfo,
onVideoTimingInfo,
onVideoSegmentTimingInfo,
onAudioSegmentTimingInfo,
onId3,
onCaptions,
onDone,
onEndedTimeline,
isEndOfTimeline
} = options;
const transmuxedData = {
buffer: []
};
let waitForEndedTimelineEvent = isEndOfTimeline;
const handleMessage = (event) => {
if (transmuxer.currentTransmux !== options) {
// disposed
return;
}
if (event.data.action === 'data') {
handleData_(event, transmuxedData, onData);
}
if (event.data.action === 'trackinfo') {
onTrackInfo(event.data.trackInfo);
}
if (event.data.action === 'gopInfo') {
handleGopInfo_(event, transmuxedData);
}
if (event.data.action === 'audioTimingInfo') {
onAudioTimingInfo(event.data.audioTimingInfo);
}
if (event.data.action === 'videoTimingInfo') {
onVideoTimingInfo(event.data.videoTimingInfo);
}
if (event.data.action === 'videoSegmentTimingInfo') {
onVideoSegmentTimingInfo(event.data.videoSegmentTimingInfo);
}
if (event.data.action === 'audioSegmentTimingInfo') {
onAudioSegmentTimingInfo(event.data.audioSegmentTimingInfo);
}
if (event.data.action === 'id3Frame') {
onId3([event.data.id3Frame], event.data.id3Frame.dispatchType);
}
if (event.data.action === 'caption') {
onCaptions(event.data.caption);
}
if (event.data.action === 'endedtimeline') {
waitForEndedTimelineEvent = false;
onEndedTimeline();
}
// wait for the transmuxed event since we may have audio and video
if (event.data.type !== 'transmuxed') {
return;
}
// If the "endedtimeline" event has not yet fired, and this segment represents the end
// of a timeline, that means there may still be data events before the segment
// processing can be considerred complete. In that case, the final event should be
// an "endedtimeline" event with the type "transmuxed."
if (waitForEndedTimelineEvent) {
return;
}
transmuxer.onmessage = null;
handleDone_({
transmuxedData,
callback: onDone
});
/* eslint-disable no-use-before-define */
dequeue(transmuxer);
/* eslint-enable */
};
transmuxer.onmessage = handleMessage;
if (audioAppendStart) {
transmuxer.postMessage({
action: 'setAudioAppendStart',
appendStart: audioAppendStart
});
}
// allow empty arrays to be passed to clear out GOPs
if (Array.isArray(gopsToAlignWith)) {
transmuxer.postMessage({
action: 'alignGopsWith',
gopsToAlignWith
});
}
if (typeof remux !== 'undefined') {
transmuxer.postMessage({
action: 'setRemux',
remux
});
}
if (bytes.byteLength) {
const buffer = bytes instanceof ArrayBuffer ? bytes : bytes.buffer;
const byteOffset = bytes instanceof ArrayBuffer ? 0 : bytes.byteOffset;
transmuxer.postMessage(
{
action: 'push',
// Send the typed-array of data as an ArrayBuffer so that
// it can be sent as a "Transferable" and avoid the costly
// memory copy
data: buffer,
// To recreate the original typed-array, we need information
// about what portion of the ArrayBuffer it was a view into
byteOffset,
byteLength: bytes.byteLength
},
[ buffer ]
);
}
if (isEndOfTimeline) {
transmuxer.postMessage({ action: 'endTimeline' });
}
// even if we didn't push any bytes, we have to make sure we flush in case we reached
// the end of the segment
transmuxer.postMessage({ action: 'flush' });
};
export const dequeue = (transmuxer) => {
transmuxer.currentTransmux = null;
if (transmuxer.transmuxQueue.length) {
transmuxer.currentTransmux = transmuxer.transmuxQueue.shift();
if (typeof transmuxer.currentTransmux === 'function') {
transmuxer.currentTransmux();
} else {
processTransmux(transmuxer.currentTransmux);
}
}
};
export const processAction = (transmuxer, action) => {
transmuxer.postMessage({ action });
dequeue(transmuxer);
};
export const enqueueAction = (action, transmuxer) => {
if (!transmuxer.currentTransmux) {
transmuxer.currentTransmux = action;
processAction(transmuxer, action);
return;
}
transmuxer.transmuxQueue.push(processAction.bind(null, transmuxer, action));
};
export const reset = (transmuxer) => {
enqueueAction('reset', transmuxer);
};
export const endTimeline = (transmuxer) => {
enqueueAction('endTimeline', transmuxer);
};
export const transmux = (options) => {
if (!options.transmuxer.currentTransmux) {
options.transmuxer.currentTransmux = options;
processTransmux(options);
return;
}
options.transmuxer.transmuxQueue.push(options);
};
export const createTransmuxer = (options) => {
const transmuxer = new TransmuxWorker();
transmuxer.currentTransmux = null;
transmuxer.transmuxQueue = [];
const term = transmuxer.terminate;
transmuxer.terminate = () => {
transmuxer.currentTransmux = null;
transmuxer.transmuxQueue.length = 0;
return term.call(transmuxer);
};
transmuxer.postMessage({action: 'init', options});
return transmuxer;
};
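// Usage sketch (illustrative only; the callbacks shown are stubs and
// `tsSegmentBytes` is an assumed Uint8Array of MPEG2-TS data):
//
//   const transmuxer = createTransmuxer({ remux: true });
//   transmux({
//     transmuxer,
//     bytes: tsSegmentBytes,
//     audioAppendStart: null,
//     gopsToAlignWith: null,
//     isEndOfTimeline: false,
//     onData: (result) => { /* append result.data to a source buffer */ },
//     onTrackInfo: () => {}, onAudioTimingInfo: () => {},
//     onVideoTimingInfo: () => {}, onVideoSegmentTimingInfo: () => {},
//     onAudioSegmentTimingInfo: () => {}, onId3: () => {},
//     onCaptions: () => {}, onEndedTimeline: () => {},
//     onDone: () => transmuxer.terminate()
//   });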
export default {
reset,
endTimeline,
transmux,
createTransmuxer
};


@@ -0,0 +1,867 @@
/**
* @file source-updater.js
*/
import videojs from 'video.js';
import logger from './util/logger';
import noop from './util/noop';
import { bufferIntersection } from './ranges.js';
import {getMimeForCodec} from '@videojs/vhs-utils/es/codecs.js';
import window from 'global/window';
import toTitleCase from './util/to-title-case.js';
import { QUOTA_EXCEEDED_ERR } from './error-codes';
const bufferTypes = [
'video',
'audio'
];
const updating = (type, sourceUpdater) => {
const sourceBuffer = sourceUpdater[`${type}Buffer`];
return (sourceBuffer && sourceBuffer.updating) || sourceUpdater.queuePending[type];
};
const nextQueueIndexOfType = (type, queue) => {
for (let i = 0; i < queue.length; i++) {
const queueEntry = queue[i];
if (queueEntry.type === 'mediaSource') {
// If the next entry is a media source entry (uses multiple source buffers), block
// processing to allow it to go through first.
return null;
}
if (queueEntry.type === type) {
return i;
}
}
return null;
};
const shiftQueue = (type, sourceUpdater) => {
if (sourceUpdater.queue.length === 0) {
return;
}
let queueIndex = 0;
let queueEntry = sourceUpdater.queue[queueIndex];
if (queueEntry.type === 'mediaSource') {
if (!sourceUpdater.updating() && sourceUpdater.mediaSource.readyState !== 'closed') {
sourceUpdater.queue.shift();
queueEntry.action(sourceUpdater);
if (queueEntry.doneFn) {
queueEntry.doneFn();
}
// Only specific source buffer actions must wait for async updateend events. Media
// Source actions process synchronously. Therefore, both audio and video source
// buffers are now clear to process the next queue entries.
shiftQueue('audio', sourceUpdater);
shiftQueue('video', sourceUpdater);
}
// Media Source actions require both source buffers, so if the media source action
// couldn't process yet (because one or both source buffers are busy), block other
// queue actions until both are available and the media source action can process.
return;
}
if (type === 'mediaSource') {
// If the queue was shifted by a media source action (this happens when pushing a
// media source action onto the queue), then it wasn't from an updateend event from an
// audio or video source buffer, so there's no change from previous state, and no
// processing should be done.
return;
}
// Media source queue entries don't need to consider whether the source updater is
// started (i.e., source buffers are created) as they don't need the source buffers, but
// source buffer queue entries do.
if (
!sourceUpdater.ready() ||
sourceUpdater.mediaSource.readyState === 'closed' ||
updating(type, sourceUpdater)
) {
return;
}
if (queueEntry.type !== type) {
queueIndex = nextQueueIndexOfType(type, sourceUpdater.queue);
if (queueIndex === null) {
// Either there's no queue entry that uses this source buffer type in the queue, or
// there's a media source queue entry before the next entry of this type, in which
// case wait for that action to process first.
return;
}
queueEntry = sourceUpdater.queue[queueIndex];
}
sourceUpdater.queue.splice(queueIndex, 1);
// Keep a record that this source buffer type is in use.
//
// The queue pending operation must be set before the action is performed in the event
// that the action results in a synchronous event that is acted upon. For instance, if
// an exception is thrown that can be handled, it's possible that new actions will be
// appended to an empty queue and immediately executed, but would not have the correct
// pending information if this property was set after the action was performed.
sourceUpdater.queuePending[type] = queueEntry;
queueEntry.action(type, sourceUpdater);
if (!queueEntry.doneFn) {
// synchronous operation, process next entry
sourceUpdater.queuePending[type] = null;
shiftQueue(type, sourceUpdater);
return;
}
};
const cleanupBuffer = (type, sourceUpdater) => {
const buffer = sourceUpdater[`${type}Buffer`];
const titleType = toTitleCase(type);
if (!buffer) {
return;
}
buffer.removeEventListener('updateend', sourceUpdater[`on${titleType}UpdateEnd_`]);
buffer.removeEventListener('error', sourceUpdater[`on${titleType}Error_`]);
sourceUpdater.codecs[type] = null;
sourceUpdater[`${type}Buffer`] = null;
};
const inSourceBuffers = (mediaSource, sourceBuffer) => mediaSource && sourceBuffer &&
Array.prototype.indexOf.call(mediaSource.sourceBuffers, sourceBuffer) !== -1;
const actions = {
appendBuffer: (bytes, segmentInfo, onError) => (type, sourceUpdater) => {
const sourceBuffer = sourceUpdater[`${type}Buffer`];
// can't do anything if the media source / source buffer is null
// or the media source does not contain this source buffer.
if (!inSourceBuffers(sourceUpdater.mediaSource, sourceBuffer)) {
return;
}
sourceUpdater.logger_(`Appending segment ${segmentInfo.mediaIndex}'s ${bytes.length} bytes to ${type}Buffer`);
try {
sourceBuffer.appendBuffer(bytes);
} catch (e) {
sourceUpdater.logger_(`Error with code ${e.code} ` +
(e.code === QUOTA_EXCEEDED_ERR ? '(QUOTA_EXCEEDED_ERR) ' : '') +
`when appending segment ${segmentInfo.mediaIndex} to ${type}Buffer`);
sourceUpdater.queuePending[type] = null;
onError(e);
}
},
remove: (start, end) => (type, sourceUpdater) => {
const sourceBuffer = sourceUpdater[`${type}Buffer`];
// can't do anything if the media source / source buffer is null
// or the media source does not contain this source buffer.
if (!inSourceBuffers(sourceUpdater.mediaSource, sourceBuffer)) {
return;
}
sourceUpdater.logger_(`Removing ${start} to ${end} from ${type}Buffer`);
try {
sourceBuffer.remove(start, end);
} catch (e) {
sourceUpdater.logger_(`Remove ${start} to ${end} from ${type}Buffer failed`);
}
},
timestampOffset: (offset) => (type, sourceUpdater) => {
const sourceBuffer = sourceUpdater[`${type}Buffer`];
// can't do anything if the media source / source buffer is null
// or the media source does not contain this source buffer.
if (!inSourceBuffers(sourceUpdater.mediaSource, sourceBuffer)) {
return;
}
sourceUpdater.logger_(`Setting ${type} timestampOffset to ${offset}`);
sourceBuffer.timestampOffset = offset;
},
callback: (callback) => (type, sourceUpdater) => {
callback();
},
endOfStream: (error) => (sourceUpdater) => {
if (sourceUpdater.mediaSource.readyState !== 'open') {
return;
}
sourceUpdater.logger_(`Calling mediaSource endOfStream(${error || ''})`);
try {
sourceUpdater.mediaSource.endOfStream(error);
} catch (e) {
videojs.log.warn('Failed to call media source endOfStream', e);
}
},
duration: (duration) => (sourceUpdater) => {
sourceUpdater.logger_(`Setting mediaSource duration to ${duration}`);
try {
sourceUpdater.mediaSource.duration = duration;
} catch (e) {
videojs.log.warn('Failed to set media source duration', e);
}
},
abort: () => (type, sourceUpdater) => {
if (sourceUpdater.mediaSource.readyState !== 'open') {
return;
}
const sourceBuffer = sourceUpdater[`${type}Buffer`];
// can't do anything if the media source / source buffer is null
// or the media source does not contain this source buffer.
if (!inSourceBuffers(sourceUpdater.mediaSource, sourceBuffer)) {
return;
}
sourceUpdater.logger_(`calling abort on ${type}Buffer`);
try {
sourceBuffer.abort();
} catch (e) {
videojs.log.warn(`Failed to abort on ${type}Buffer`, e);
}
},
addSourceBuffer: (type, codec) => (sourceUpdater) => {
const titleType = toTitleCase(type);
const mime = getMimeForCodec(codec);
sourceUpdater.logger_(`Adding ${type}Buffer with codec ${codec} to mediaSource`);
const sourceBuffer = sourceUpdater.mediaSource.addSourceBuffer(mime);
sourceBuffer.addEventListener('updateend', sourceUpdater[`on${titleType}UpdateEnd_`]);
sourceBuffer.addEventListener('error', sourceUpdater[`on${titleType}Error_`]);
sourceUpdater.codecs[type] = codec;
sourceUpdater[`${type}Buffer`] = sourceBuffer;
},
removeSourceBuffer: (type) => (sourceUpdater) => {
const sourceBuffer = sourceUpdater[`${type}Buffer`];
cleanupBuffer(type, sourceUpdater);
// can't do anything if the media source / source buffer is null
// or the media source does not contain this source buffer.
if (!inSourceBuffers(sourceUpdater.mediaSource, sourceBuffer)) {
return;
}
sourceUpdater.logger_(`Removing ${type}Buffer with codec ${sourceUpdater.codecs[type]} from mediaSource`);
try {
sourceUpdater.mediaSource.removeSourceBuffer(sourceBuffer);
} catch (e) {
videojs.log.warn(`Failed to removeSourceBuffer ${type}Buffer`, e);
}
},
changeType: (codec) => (type, sourceUpdater) => {
const sourceBuffer = sourceUpdater[`${type}Buffer`];
const mime = getMimeForCodec(codec);
// can't do anything if the media source / source buffer is null
// or the media source does not contain this source buffer.
if (!inSourceBuffers(sourceUpdater.mediaSource, sourceBuffer)) {
return;
}
// do not update codec if we don't need to.
if (sourceUpdater.codecs[type] === codec) {
return;
}
sourceUpdater.logger_(`changing ${type}Buffer codec from ${sourceUpdater.codecs[type]} to ${codec}`);
sourceBuffer.changeType(mime);
sourceUpdater.codecs[type] = codec;
}
};
const pushQueue = ({type, sourceUpdater, action, doneFn, name}) => {
sourceUpdater.queue.push({
type,
action,
doneFn,
name
});
shiftQueue(type, sourceUpdater);
};
const onUpdateend = (type, sourceUpdater) => (e) => {
// Although there should, in theory, be a pending action for any updateend received,
// there are some actions that may trigger updateend events that are not defined in
// the W3C spec. For instance, setting the duration on the media source may trigger
// updateend events on source buffers. This does not appear to be in the spec. As such,
// if we encounter an updateend without a corresponding pending action from our queue
// for that source buffer type, process the next action.
if (sourceUpdater.queuePending[type]) {
const doneFn = sourceUpdater.queuePending[type].doneFn;
sourceUpdater.queuePending[type] = null;
if (doneFn) {
// if there's an error, report it
doneFn(sourceUpdater[`${type}Error_`]);
}
}
shiftQueue(type, sourceUpdater);
};
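// Queueing model, in brief: audio and video operations run on independent
// per-type queues, while 'mediaSource' operations act as barriers that wait
// for both source buffers to be idle and block entries queued behind them.
// For example (illustrative only):
//
//   sourceUpdater.appendBuffer({ type: 'video', bytes }, done); // runs immediately
//   sourceUpdater.setDuration(60);                              // waits for both buffers
//   sourceUpdater.appendBuffer({ type: 'audio', bytes }, done); // waits behind setDuration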
/**
* A queue of callbacks to be serialized and applied when a
* MediaSource and its associated SourceBuffers are not in the
* updating state. It is used by the segment loader to update the
* underlying SourceBuffers when new data is loaded, for instance.
*
* @class SourceUpdater
* @param {MediaSource} mediaSource the MediaSource to create SourceBuffers on
*/
export default class SourceUpdater extends videojs.EventTarget {
constructor(mediaSource) {
super();
this.mediaSource = mediaSource;
this.sourceopenListener_ = () => shiftQueue('mediaSource', this);
this.mediaSource.addEventListener('sourceopen', this.sourceopenListener_);
this.logger_ = logger('SourceUpdater');
// initial timestamp offset is 0
this.audioTimestampOffset_ = 0;
this.videoTimestampOffset_ = 0;
this.queue = [];
this.queuePending = {
audio: null,
video: null
};
this.delayedAudioAppendQueue_ = [];
this.videoAppendQueued_ = false;
this.codecs = {};
this.onVideoUpdateEnd_ = onUpdateend('video', this);
this.onAudioUpdateEnd_ = onUpdateend('audio', this);
this.onVideoError_ = (e) => {
// used for debugging
this.videoError_ = e;
};
this.onAudioError_ = (e) => {
// used for debugging
this.audioError_ = e;
};
this.createdSourceBuffers_ = false;
this.initializedEme_ = false;
this.triggeredReady_ = false;
}
initializedEme() {
this.initializedEme_ = true;
this.triggerReady();
}
hasCreatedSourceBuffers() {
// if false, likely waiting on one of the segment loaders to get enough data to create
// source buffers
return this.createdSourceBuffers_;
}
hasInitializedAnyEme() {
return this.initializedEme_;
}
ready() {
return this.hasCreatedSourceBuffers() && this.hasInitializedAnyEme();
}
createSourceBuffers(codecs) {
if (this.hasCreatedSourceBuffers()) {
// already created them before
return;
}
// the initial addOrChangeSourceBuffers call will always
// add two source buffers.
this.addOrChangeSourceBuffers(codecs);
this.createdSourceBuffers_ = true;
this.trigger('createdsourcebuffers');
this.triggerReady();
}
triggerReady() {
// only allow ready to be triggered once; this prevents the case
// where:
// 1. we trigger createdsourcebuffers
// 2. IE 11 synchronously initializes EME
// 3. the synchronous initialization causes us to trigger ready
// 4. we go back to the ready check in createSourceBuffers and ready is triggered again
if (this.ready() && !this.triggeredReady_) {
this.triggeredReady_ = true;
this.trigger('ready');
}
}
/**
* Add a type of source buffer to the media source.
*
* @param {string} type
* The type of source buffer to add.
*
* @param {string} codec
* The codec to add the source buffer with.
*/
addSourceBuffer(type, codec) {
pushQueue({
type: 'mediaSource',
sourceUpdater: this,
action: actions.addSourceBuffer(type, codec),
name: 'addSourceBuffer'
});
}
/**
* call abort on a source buffer.
*
* @param {string} type
* The type of source buffer to call abort on.
*/
abort(type) {
pushQueue({
type,
sourceUpdater: this,
action: actions.abort(type),
name: 'abort'
});
}
/**
* Call removeSourceBuffer and remove a specific type
* of source buffer on the mediaSource.
*
* @param {string} type
* The type of source buffer to remove.
*/
removeSourceBuffer(type) {
if (!this.canRemoveSourceBuffer()) {
videojs.log.error('removeSourceBuffer is not supported!');
return;
}
pushQueue({
type: 'mediaSource',
sourceUpdater: this,
action: actions.removeSourceBuffer(type),
name: 'removeSourceBuffer'
});
}
/**
* Whether or not the removeSourceBuffer function is supported
* on the mediaSource.
*
* @return {boolean}
* if removeSourceBuffer can be called.
*/
canRemoveSourceBuffer() {
// IE reports that it supports removeSourceBuffer, but often throws
// errors when attempting to use the function. So we report that it
// does not support removeSourceBuffer. As of Firefox 83 removeSourceBuffer
// throws errors, so we report that it does not support this as well.
return !videojs.browser.IE_VERSION && !videojs.browser.IS_FIREFOX && window.MediaSource &&
window.MediaSource.prototype &&
typeof window.MediaSource.prototype.removeSourceBuffer === 'function';
}
/**
* Whether or not the changeType function is supported
* on our SourceBuffers.
*
* @return {boolean}
* if changeType can be called.
*/
static canChangeType() {
return window.SourceBuffer &&
window.SourceBuffer.prototype &&
typeof window.SourceBuffer.prototype.changeType === 'function';
}
/**
* Whether or not the changeType function is supported
* on our SourceBuffers.
*
* @return {boolean}
* if changeType can be called.
*/
canChangeType() {
return this.constructor.canChangeType();
}
/**
* Call the changeType function on a source buffer, given the code and type.
*
* @param {string} type
* The type of source buffer to call changeType on.
*
* @param {string} codec
* The codec string to change type with on the source buffer.
*/
changeType(type, codec) {
if (!this.canChangeType()) {
videojs.log.error('changeType is not supported!');
return;
}
pushQueue({
type,
sourceUpdater: this,
action: actions.changeType(codec),
name: 'changeType'
});
}
/**
* Add source buffers with a codec or, if they are already created,
* call changeType on them with the new codec.
*
* @param {Object} codecs
* Codecs to switch to
*/
addOrChangeSourceBuffers(codecs) {
if (!codecs || typeof codecs !== 'object' || Object.keys(codecs).length === 0) {
throw new Error('Cannot addOrChangeSourceBuffers to undefined codecs');
}
Object.keys(codecs).forEach((type) => {
const codec = codecs[type];
if (!this.hasCreatedSourceBuffers()) {
return this.addSourceBuffer(type, codec);
}
if (this.canChangeType()) {
this.changeType(type, codec);
}
});
}
/**
* Queue an update to append an ArrayBuffer.
*
* @param {Object} options - object containing the segment `type`, `bytes`, and `segmentInfo`
* @param {Function} doneFn - the function to call when the append has finished
* @see http://www.w3.org/TR/media-source/#widl-SourceBuffer-appendBuffer-void-ArrayBuffer-data
*/
appendBuffer(options, doneFn) {
const {segmentInfo, type, bytes} = options;
this.processedAppend_ = true;
if (type === 'audio' && this.videoBuffer && !this.videoAppendQueued_) {
this.delayedAudioAppendQueue_.push([options, doneFn]);
this.logger_(`delayed audio append of ${bytes.length} until video append`);
return;
}
// In the case of certain errors, for instance, QUOTA_EXCEEDED_ERR, updateend will
// not be fired. This means that the queue will be blocked until the next action
// taken by the segment-loader. Provide a mechanism for segment-loader to handle
// these errors by calling the doneFn with the specific error.
const onError = doneFn;
pushQueue({
type,
sourceUpdater: this,
action: actions.appendBuffer(bytes, segmentInfo || {mediaIndex: -1}, onError),
doneFn,
name: 'appendBuffer'
});
if (type === 'video') {
this.videoAppendQueued_ = true;
if (!this.delayedAudioAppendQueue_.length) {
return;
}
const queue = this.delayedAudioAppendQueue_.slice();
this.logger_(`queuing delayed audio ${queue.length} appendBuffers`);
this.delayedAudioAppendQueue_.length = 0;
queue.forEach((que) => {
this.appendBuffer.apply(this, que);
});
}
}
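// Illustrative call (assumes `updater` is a ready SourceUpdater and `bytes`
// is a Uint8Array of transmuxed fMP4 data):
//
//   updater.appendBuffer({ type: 'video', bytes, segmentInfo }, (error) => {
//     if (error && error.code === QUOTA_EXCEEDED_ERR) {
//       // the append was too large; remove buffered data and retry
//     }
//   });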
/**
* Get the audio buffer's buffered timerange.
*
* @return {TimeRange}
* The audio buffer's buffered time range
*/
audioBuffered() {
// no media source/source buffer, or it isn't in the media source's
// sourceBuffers list
if (!inSourceBuffers(this.mediaSource, this.audioBuffer)) {
return videojs.createTimeRange();
}
return this.audioBuffer.buffered ? this.audioBuffer.buffered :
videojs.createTimeRange();
}
/**
* Get the video buffer's buffered timerange.
*
* @return {TimeRange}
* The video buffer's buffered time range
*/
videoBuffered() {
// no media source/source buffer, or it isn't in the media source's
// sourceBuffers list
if (!inSourceBuffers(this.mediaSource, this.videoBuffer)) {
return videojs.createTimeRange();
}
return this.videoBuffer.buffered ? this.videoBuffer.buffered :
videojs.createTimeRange();
}
/**
* Get a combined video/audio buffer's buffered timerange.
*
* @return {TimeRange}
* the combined time range
*/
buffered() {
const video = inSourceBuffers(this.mediaSource, this.videoBuffer) ? this.videoBuffer : null;
const audio = inSourceBuffers(this.mediaSource, this.audioBuffer) ? this.audioBuffer : null;
if (audio && !video) {
return this.audioBuffered();
}
if (video && !audio) {
return this.videoBuffered();
}
return bufferIntersection(this.audioBuffered(), this.videoBuffered());
}
/**
* Add a callback to the queue that will set duration on the mediaSource.
*
* @param {number} duration
* The duration to set
*
* @param {Function} [doneFn]
* function to run after duration has been set.
*/
setDuration(duration, doneFn = noop) {
// In order to set the duration on the media source, it's necessary to wait for all
// source buffers to no longer be updating. "If the updating attribute equals true on
// any SourceBuffer in sourceBuffers, then throw an InvalidStateError exception and
// abort these steps." (source: https://www.w3.org/TR/media-source/#attributes).
pushQueue({
type: 'mediaSource',
sourceUpdater: this,
action: actions.duration(duration),
name: 'duration',
doneFn
});
}
/**
* Add a mediaSource endOfStream call to the queue
*
* @param {Error} [error]
* Call endOfStream with an error
*
* @param {Function} [doneFn]
* A function that should be called when the
* endOfStream call has finished.
*/
endOfStream(error = null, doneFn = noop) {
if (typeof error !== 'string') {
error = undefined;
}
// In order to call endOfStream on the media source, it's necessary to wait for all
// source buffers to no longer be updating. "If the updating attribute equals true on
// any SourceBuffer in sourceBuffers, then throw an InvalidStateError exception and
// abort these steps." (source: https://www.w3.org/TR/media-source/#attributes).
pushQueue({
type: 'mediaSource',
sourceUpdater: this,
action: actions.endOfStream(error),
name: 'endOfStream',
doneFn
});
}
/**
* Queue an update to remove a time range from the buffer.
*
* @param {number} start where to start the removal
* @param {number} end where to end the removal
* @param {Function} [done=noop] optional callback to be executed when the remove
* operation is complete
* @see http://www.w3.org/TR/media-source/#widl-SourceBuffer-remove-void-double-start-unrestricted-double-end
*/
removeAudio(start, end, done = noop) {
if (!this.audioBuffered().length || this.audioBuffered().end(0) === 0) {
done();
return;
}
pushQueue({
type: 'audio',
sourceUpdater: this,
action: actions.remove(start, end),
doneFn: done,
name: 'remove'
});
}
/**
* Queue an update to remove a time range from the buffer.
*
* @param {number} start where to start the removal
* @param {number} end where to end the removal
* @param {Function} [done=noop] optional callback to be executed when the remove
* operation is complete
* @see http://www.w3.org/TR/media-source/#widl-SourceBuffer-remove-void-double-start-unrestricted-double-end
*/
removeVideo(start, end, done = noop) {
if (!this.videoBuffered().length || this.videoBuffered().end(0) === 0) {
done();
return;
}
pushQueue({
type: 'video',
sourceUpdater: this,
action: actions.remove(start, end),
doneFn: done,
name: 'remove'
});
}
/**
* Whether the underlying sourceBuffer is updating or not
*
* @return {boolean} the updating status of the SourceBuffer
*/
updating() {
// the audio/video source buffer is updating
if (updating('audio', this) || updating('video', this)) {
return true;
}
return false;
}
/**
* Set/get the timestampoffset on the audio SourceBuffer
*
* @param {number} [offset] the timestamp offset to queue an update for
* @return {number} the timestamp offset
*/
audioTimestampOffset(offset) {
if (typeof offset !== 'undefined' &&
this.audioBuffer &&
// no point in updating if it's the same
this.audioTimestampOffset_ !== offset) {
pushQueue({
type: 'audio',
sourceUpdater: this,
action: actions.timestampOffset(offset),
name: 'timestampOffset'
});
this.audioTimestampOffset_ = offset;
}
return this.audioTimestampOffset_;
}
/**
* Set/get the timestampoffset on the video SourceBuffer
*
* @param {number} [offset] the timestamp offset to queue an update for
* @return {number} the timestamp offset
*/
videoTimestampOffset(offset) {
if (typeof offset !== 'undefined' &&
this.videoBuffer &&
// no point in updating if it's the same
this.videoTimestampOffset_ !== offset) {
pushQueue({
type: 'video',
sourceUpdater: this,
action: actions.timestampOffset(offset),
name: 'timestampOffset'
});
this.videoTimestampOffset_ = offset;
}
return this.videoTimestampOffset_;
}
/**
* Add a function to the queue that will be called
* when it is its turn to run in the audio queue.
*
* @param {Function} callback
* The callback to queue.
*/
audioQueueCallback(callback) {
if (!this.audioBuffer) {
return;
}
pushQueue({
type: 'audio',
sourceUpdater: this,
action: actions.callback(callback),
name: 'callback'
});
}
/**
* Add a function to the queue that will be called
* when it is its turn to run in the video queue.
*
* @param {Function} callback
* The callback to queue.
*/
videoQueueCallback(callback) {
if (!this.videoBuffer) {
return;
}
pushQueue({
type: 'video',
sourceUpdater: this,
action: actions.callback(callback),
name: 'callback'
});
}
/**
* dispose of the source updater and the underlying sourceBuffer
*/
dispose() {
this.trigger('dispose');
bufferTypes.forEach((type) => {
this.abort(type);
if (this.canRemoveSourceBuffer()) {
this.removeSourceBuffer(type);
} else {
this[`${type}QueueCallback`](() => cleanupBuffer(type, this));
}
});
this.videoAppendQueued_ = false;
this.delayedAudioAppendQueue_.length = 0;
if (this.sourceopenListener_) {
this.mediaSource.removeEventListener('sourceopen', this.sourceopenListener_);
}
this.off();
}
}


@@ -0,0 +1,588 @@
/**
* @file sync-controller.js
*/
import {sumDurations, getPartsAndSegments} from './playlist';
import videojs from 'video.js';
import logger from './util/logger';
export const syncPointStrategies = [
// Stategy "VOD": Handle the VOD-case where the sync-point is *always*
// the equivalence display-time 0 === segment-index 0
{
name: 'VOD',
run: (syncController, playlist, duration, currentTimeline, currentTime) => {
if (duration !== Infinity) {
const syncPoint = {
time: 0,
segmentIndex: 0,
partIndex: null
};
return syncPoint;
}
return null;
}
},
// Stategy "ProgramDateTime": We have a program-date-time tag in this playlist
{
name: 'ProgramDateTime',
run: (syncController, playlist, duration, currentTimeline, currentTime) => {
if (!Object.keys(syncController.timelineToDatetimeMappings).length) {
return null;
}
let syncPoint = null;
let lastDistance = null;
const partsAndSegments = getPartsAndSegments(playlist);
currentTime = currentTime || 0;
for (let i = 0; i < partsAndSegments.length; i++) {
// start from the end and loop backwards for live
// or start from the front and loop forwards for non-live
const index = (playlist.endList || currentTime === 0) ? i : partsAndSegments.length - (i + 1);
const partAndSegment = partsAndSegments[index];
const segment = partAndSegment.segment;
const datetimeMapping =
syncController.timelineToDatetimeMappings[segment.timeline];
if (!datetimeMapping) {
continue;
}
if (segment.dateTimeObject) {
const segmentTime = segment.dateTimeObject.getTime() / 1000;
let start = segmentTime + datetimeMapping;
// take part duration into account.
if (segment.parts && typeof partAndSegment.partIndex === 'number') {
for (let z = 0; z < partAndSegment.partIndex; z++) {
start += segment.parts[z].duration;
}
}
const distance = Math.abs(currentTime - start);
// Once the distance begins to increase, or if distance is 0, we have passed
// currentTime and can stop looking for better candidates
if (lastDistance !== null && (distance === 0 || lastDistance < distance)) {
break;
}
lastDistance = distance;
syncPoint = {
time: start,
segmentIndex: partAndSegment.segmentIndex,
partIndex: partAndSegment.partIndex
};
}
}
return syncPoint;
}
},
// Stategy "Segment": We have a known time mapping for a timeline and a
// segment in the current timeline with timing data
{
name: 'Segment',
run: (syncController, playlist, duration, currentTimeline, currentTime) => {
let syncPoint = null;
let lastDistance = null;
currentTime = currentTime || 0;
const partsAndSegments = getPartsAndSegments(playlist);
for (let i = 0; i < partsAndSegments.length; i++) {
// start from the end and loop backwards for live
// or start from the front and loop forwards for non-live
const index = (playlist.endList || currentTime === 0) ? i : partsAndSegments.length - (i + 1);
const partAndSegment = partsAndSegments[index];
const segment = partAndSegment.segment;
const start = partAndSegment.part && partAndSegment.part.start || segment && segment.start;
if (segment.timeline === currentTimeline && typeof start !== 'undefined') {
const distance = Math.abs(currentTime - start);
// Once the distance begins to increase, we have passed
// currentTime and can stop looking for better candidates
if (lastDistance !== null && lastDistance < distance) {
break;
}
if (!syncPoint || lastDistance === null || lastDistance >= distance) {
lastDistance = distance;
syncPoint = {
time: start,
segmentIndex: partAndSegment.segmentIndex,
partIndex: partAndSegment.partIndex
};
}
}
}
return syncPoint;
}
},
// Stategy "Discontinuity": We have a discontinuity with a known
// display-time
{
name: 'Discontinuity',
run: (syncController, playlist, duration, currentTimeline, currentTime) => {
let syncPoint = null;
currentTime = currentTime || 0;
if (playlist.discontinuityStarts && playlist.discontinuityStarts.length) {
let lastDistance = null;
for (let i = 0; i < playlist.discontinuityStarts.length; i++) {
const segmentIndex = playlist.discontinuityStarts[i];
const discontinuity = playlist.discontinuitySequence + i + 1;
const discontinuitySync = syncController.discontinuities[discontinuity];
if (discontinuitySync) {
const distance = Math.abs(currentTime - discontinuitySync.time);
// Once the distance begins to increase, we have passed
// currentTime and can stop looking for better candidates
if (lastDistance !== null && lastDistance < distance) {
break;
}
if (!syncPoint || lastDistance === null || lastDistance >= distance) {
lastDistance = distance;
syncPoint = {
time: discontinuitySync.time,
segmentIndex,
partIndex: null
};
}
}
}
}
return syncPoint;
}
},
// Stategy "Playlist": We have a playlist with a known mapping of
// segment index to display time
{
name: 'Playlist',
run: (syncController, playlist, duration, currentTimeline, currentTime) => {
if (playlist.syncInfo) {
const syncPoint = {
time: playlist.syncInfo.time,
segmentIndex: playlist.syncInfo.mediaSequence - playlist.mediaSequence,
partIndex: null
};
return syncPoint;
}
return null;
}
}
];
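// Example (illustrative): for a VOD playlist, the "VOD" strategy always yields
// { time: 0, segmentIndex: 0, partIndex: null }, since display-time 0 maps to
// the first segment. For a live playlist (duration === Infinity), one of the
// other strategies must produce the sync-point instead.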
export default class SyncController extends videojs.EventTarget {
constructor(options = {}) {
super();
// ...for syncing across variants
this.timelines = [];
this.discontinuities = [];
this.timelineToDatetimeMappings = {};
this.logger_ = logger('SyncController');
}
/**
* Find a sync-point for the playlist specified
*
* A sync-point is defined as a known mapping from display-time to
* a segment-index in the current playlist.
*
* @param {Playlist} playlist
* The playlist that needs a sync-point
* @param {number} duration
* Duration of the MediaSource (Infinite if playing a live source)
* @param {number} currentTimeline
* The last timeline from which a segment was loaded
* @param {number} currentTime
* Current time within the player to find a sync-point for
* @return {Object}
* A sync-point object
*/
getSyncPoint(playlist, duration, currentTimeline, currentTime) {
const syncPoints = this.runStrategies_(
playlist,
duration,
currentTimeline,
currentTime
);
if (!syncPoints.length) {
// Signal that we need to attempt to get a sync-point manually
// by fetching a segment in the playlist and constructing
// a sync-point from that information
return null;
}
// Now find the sync-point that is closest to the currentTime because
// that should result in the most accurate guess about which segment
// to fetch
return this.selectSyncPoint_(syncPoints, { key: 'time', value: currentTime });
}
/**
* Calculate the amount of time that has expired off the playlist during playback
*
* @param {Playlist} playlist
* Playlist object to calculate expired from
* @param {number} duration
* Duration of the MediaSource (Infinity if playing a live source)
* @return {number|null}
* The amount of time that has expired off the playlist during playback. Null
* if no sync-points for the playlist can be found.
*/
getExpiredTime(playlist, duration) {
if (!playlist || !playlist.segments) {
return null;
}
const syncPoints = this.runStrategies_(
playlist,
duration,
playlist.discontinuitySequence,
0
);
// Without sync-points, there is not enough information to determine the expired time
if (!syncPoints.length) {
return null;
}
const syncPoint = this.selectSyncPoint_(syncPoints, {
key: 'segmentIndex',
value: 0
});
// If the sync-point is beyond the start of the playlist, we want to subtract the
// duration from index 0 to syncPoint.segmentIndex instead of adding.
if (syncPoint.segmentIndex > 0) {
syncPoint.time *= -1;
}
return Math.abs(syncPoint.time + sumDurations({
defaultDuration: playlist.targetDuration,
durationList: playlist.segments,
startIndex: syncPoint.segmentIndex,
endIndex: 0
}));
}
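// Worked example (illustrative): if the sync-point closest to segmentIndex 0
// is { segmentIndex: 2, time: 25 } and segments 0 and 1 last 10 seconds each,
// the time is negated to -25, sumDurations over segments 1..0 contributes 20,
// and the expired time is Math.abs(-25 + 20) = 5 seconds of content that have
// already fallen off the front of the playlist.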
/**
* Runs each sync-point strategy and returns a list of sync-points returned by the
* strategies
*
* @private
* @param {Playlist} playlist
* The playlist that needs a sync-point
* @param {number} duration
* Duration of the MediaSource (Infinity if playing a live source)
* @param {number} currentTimeline
* The last timeline from which a segment was loaded
* @param {number} currentTime
* Current time within the player to find sync-points for
* @return {Array}
* A list of sync-point objects
*/
runStrategies_(playlist, duration, currentTimeline, currentTime) {
const syncPoints = [];
// Try to find a sync-point by utilizing various strategies...
for (let i = 0; i < syncPointStrategies.length; i++) {
const strategy = syncPointStrategies[i];
const syncPoint = strategy.run(
this,
playlist,
duration,
currentTimeline,
currentTime
);
if (syncPoint) {
syncPoint.strategy = strategy.name;
syncPoints.push({
strategy: strategy.name,
syncPoint
});
}
}
return syncPoints;
}
/**
* Selects the sync-point nearest the specified target
*
* @private
* @param {Array} syncPoints
* List of sync-points to select from
* @param {Object} target
* Object specifying the property and value we are targeting
* @param {string} target.key
* Specifies the property to target. Must be either 'time' or 'segmentIndex'
* @param {number} target.value
* The value to target for the specified key.
* @return {Object}
* The sync-point nearest the target
*/
selectSyncPoint_(syncPoints, target) {
let bestSyncPoint = syncPoints[0].syncPoint;
let bestDistance = Math.abs(syncPoints[0].syncPoint[target.key] - target.value);
let bestStrategy = syncPoints[0].strategy;
for (let i = 1; i < syncPoints.length; i++) {
const newDistance = Math.abs(syncPoints[i].syncPoint[target.key] - target.value);
if (newDistance < bestDistance) {
bestDistance = newDistance;
bestSyncPoint = syncPoints[i].syncPoint;
bestStrategy = syncPoints[i].strategy;
}
}
this.logger_(`syncPoint for [${target.key}: ${target.value}] chosen with strategy` +
` [${bestStrategy}]: [time:${bestSyncPoint.time},` +
` segmentIndex:${bestSyncPoint.segmentIndex}` +
(typeof bestSyncPoint.partIndex === 'number' ? `,partIndex:${bestSyncPoint.partIndex}` : '') +
']');
return bestSyncPoint;
}
/**
* Save any meta-data present on the segments when segments leave
* the live window to the playlist to allow for synchronization at the
* playlist level later.
*
* @param {Playlist} oldPlaylist - The previous active playlist
* @param {Playlist} newPlaylist - The updated and most current playlist
*/
saveExpiredSegmentInfo(oldPlaylist, newPlaylist) {
const mediaSequenceDiff = newPlaylist.mediaSequence - oldPlaylist.mediaSequence;
// When a segment expires from the playlist and it has a start time
// save that information as a possible sync-point reference for the future
for (let i = mediaSequenceDiff - 1; i >= 0; i--) {
const lastRemovedSegment = oldPlaylist.segments[i];
if (lastRemovedSegment && typeof lastRemovedSegment.start !== 'undefined') {
newPlaylist.syncInfo = {
mediaSequence: oldPlaylist.mediaSequence + i,
time: lastRemovedSegment.start
};
this.logger_(`playlist refresh sync: [time:${newPlaylist.syncInfo.time},` +
` mediaSequence: ${newPlaylist.syncInfo.mediaSequence}]`);
this.trigger('syncinfoupdate');
break;
}
}
}
/**
* Save the mapping from playlist's ProgramDateTime to display. This should only happen
* before segments start to load.
*
* @param {Playlist} playlist - The currently active playlist
*/
setDateTimeMappingForStart(playlist) {
// It's possible for the playlist to be updated before playback starts, meaning time
// zero is not yet set. If, during these playlist refreshes, a discontinuity is
// crossed, then the old time zero mapping (for the prior timeline) would be retained
// unless the mappings are cleared.
this.timelineToDatetimeMappings = {};
if (playlist.segments &&
playlist.segments.length &&
playlist.segments[0].dateTimeObject) {
const firstSegment = playlist.segments[0];
const playlistTimestamp = firstSegment.dateTimeObject.getTime() / 1000;
this.timelineToDatetimeMappings[firstSegment.timeline] = -playlistTimestamp;
}
}
/**
* Calculates and saves timeline mappings, playlist sync info, and segment timing values
* based on the latest timing information.
*
* @param {Object} options
* Options object
* @param {SegmentInfo} options.segmentInfo
* The current active request information
* @param {boolean} options.shouldSaveTimelineMapping
* If there's a timeline change, determines if the timeline mapping should be
* saved for timeline mapping and program date time mappings.
*/
saveSegmentTimingInfo({ segmentInfo, shouldSaveTimelineMapping }) {
const didCalculateSegmentTimeMapping = this.calculateSegmentTimeMapping_(
segmentInfo,
segmentInfo.timingInfo,
shouldSaveTimelineMapping
);
const segment = segmentInfo.segment;
if (didCalculateSegmentTimeMapping) {
this.saveDiscontinuitySyncInfo_(segmentInfo);
// If the playlist does not have sync information yet, record that information
// now with segment timing information
if (!segmentInfo.playlist.syncInfo) {
segmentInfo.playlist.syncInfo = {
mediaSequence: segmentInfo.playlist.mediaSequence + segmentInfo.mediaIndex,
time: segment.start
};
}
}
const dateTime = segment.dateTimeObject;
if (segment.discontinuity && shouldSaveTimelineMapping && dateTime) {
this.timelineToDatetimeMappings[segment.timeline] = -(dateTime.getTime() / 1000);
}
}
timestampOffsetForTimeline(timeline) {
if (typeof this.timelines[timeline] === 'undefined') {
return null;
}
return this.timelines[timeline].time;
}
mappingForTimeline(timeline) {
if (typeof this.timelines[timeline] === 'undefined') {
return null;
}
return this.timelines[timeline].mapping;
}
/**
* Use the "media time" for a segment to generate a mapping to "display time" and
* save that display time to the segment.
*
* @private
* @param {SegmentInfo} segmentInfo
* The current active request information
* @param {Object} timingInfo
* The start and end time of the current segment in "media time"
* @param {boolean} shouldSaveTimelineMapping
* If there's a timeline change, determines if the timeline mapping should be
* saved in timelines.
* @return {boolean}
* Returns false if segment time mapping could not be calculated
*/
calculateSegmentTimeMapping_(segmentInfo, timingInfo, shouldSaveTimelineMapping) {
// TODO: remove side effects
const segment = segmentInfo.segment;
const part = segmentInfo.part;
let mappingObj = this.timelines[segmentInfo.timeline];
let start;
let end;
if (typeof segmentInfo.timestampOffset === 'number') {
mappingObj = {
time: segmentInfo.startOfSegment,
mapping: segmentInfo.startOfSegment - timingInfo.start
};
if (shouldSaveTimelineMapping) {
this.timelines[segmentInfo.timeline] = mappingObj;
this.trigger('timestampoffset');
this.logger_(`time mapping for timeline ${segmentInfo.timeline}: ` +
`[time: ${mappingObj.time}] [mapping: ${mappingObj.mapping}]`);
}
start = segmentInfo.startOfSegment;
end = timingInfo.end + mappingObj.mapping;
} else if (mappingObj) {
start = timingInfo.start + mappingObj.mapping;
end = timingInfo.end + mappingObj.mapping;
} else {
return false;
}
if (part) {
part.start = start;
part.end = end;
}
// If we don't have a segment start yet or the start value we got
// is less than our current segment.start value, save a new start value.
// We have to do this because parts will have segment timing info saved
// multiple times and we want segment start to be the earliest part start
// value for that segment.
if (!segment.start || start < segment.start) {
segment.start = start;
}
segment.end = end;
return true;
}
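// Worked example (illustrative): if a segment begins at startOfSegment 100 in
// display time and its transmuxed timingInfo.start is 10 in media time, the
// saved mapping is 100 - 10 = 90. A later segment on the same timeline with
// timingInfo { start: 16, end: 22 } then maps to display-time
// start = 16 + 90 = 106 and end = 22 + 90 = 112.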
/**
* Each time we have a discontinuity in the playlist, attempt to calculate the
* display-time location of the start of the discontinuity and save it. We also save
* an accuracy value so that we keep the values with the most accuracy (closest to 0).
*
* @private
* @param {SegmentInfo} segmentInfo - The current active request information
*/
saveDiscontinuitySyncInfo_(segmentInfo) {
const playlist = segmentInfo.playlist;
const segment = segmentInfo.segment;
// If the current segment is a discontinuity then we know exactly where
// the range starts, and its accuracy is 0 (greater accuracy values
// mean more approximation)
if (segment.discontinuity) {
this.discontinuities[segment.timeline] = {
time: segment.start,
accuracy: 0
};
} else if (playlist.discontinuityStarts && playlist.discontinuityStarts.length) {
// Search for future discontinuities that we can provide better timing
// information for and save that information for sync purposes
for (let i = 0; i < playlist.discontinuityStarts.length; i++) {
const segmentIndex = playlist.discontinuityStarts[i];
const discontinuity = playlist.discontinuitySequence + i + 1;
const mediaIndexDiff = segmentIndex - segmentInfo.mediaIndex;
const accuracy = Math.abs(mediaIndexDiff);
if (!this.discontinuities[discontinuity] ||
this.discontinuities[discontinuity].accuracy > accuracy) {
let time;
if (mediaIndexDiff < 0) {
time = segment.start - sumDurations({
defaultDuration: playlist.targetDuration,
durationList: playlist.segments,
startIndex: segmentInfo.mediaIndex,
endIndex: segmentIndex
});
} else {
time = segment.end + sumDurations({
defaultDuration: playlist.targetDuration,
durationList: playlist.segments,
startIndex: segmentInfo.mediaIndex + 1,
endIndex: segmentIndex
});
}
this.discontinuities[discontinuity] = {
time,
accuracy
};
}
}
}
}
dispose() {
this.trigger('dispose');
this.off();
}
}


@@ -0,0 +1,48 @@
import videojs from 'video.js';
/**
* The TimelineChangeController acts as a source for segment loaders to listen for and
* keep track of latest and pending timeline changes. This is useful to ensure proper
* sync, as each loader may need to make a consideration for what timeline the other
* loader is on before making changes which could impact the other loader's media.
*
* @class TimelineChangeController
* @extends videojs.EventTarget
*/
export default class TimelineChangeController extends videojs.EventTarget {
constructor() {
super();
this.pendingTimelineChanges_ = {};
this.lastTimelineChanges_ = {};
}
clearPendingTimelineChange(type) {
this.pendingTimelineChanges_[type] = null;
this.trigger('pendingtimelinechange');
}
pendingTimelineChange({ type, from, to }) {
if (typeof from === 'number' && typeof to === 'number') {
this.pendingTimelineChanges_[type] = { type, from, to };
this.trigger('pendingtimelinechange');
}
return this.pendingTimelineChanges_[type];
}
lastTimelineChange({ type, from, to }) {
if (typeof from === 'number' && typeof to === 'number') {
this.lastTimelineChanges_[type] = { type, from, to };
delete this.pendingTimelineChanges_[type];
this.trigger('timelinechange');
}
return this.lastTimelineChanges_[type];
}
dispose() {
this.trigger('dispose');
this.pendingTimelineChanges_ = {};
this.lastTimelineChanges_ = {};
this.off();
}
}
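// Usage sketch (illustrative only):
//
//   const timelineChanges = new TimelineChangeController();
//   timelineChanges.pendingTimelineChange({ type: 'main', from: 0, to: 1 });
//   // ...once a segment from the new timeline has been appended:
//   timelineChanges.lastTimelineChange({ type: 'main', from: 0, to: 1 });
//   timelineChanges.lastTimelineChange({ type: 'main' });
//   // => { type: 'main', from: 0, to: 1 }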


@@ -0,0 +1,373 @@
/* global self */
/**
* @file transmuxer-worker.js
*/
/**
* videojs-contrib-media-sources
*
* Copyright (c) 2015 Brightcove
* All rights reserved.
*
* Handles communication between the browser-world and the mux.js
* transmuxer running inside of a WebWorker by exposing a simple
* message-based interface to a Transmuxer object.
*/
import {Transmuxer} from 'mux.js/lib/mp4/transmuxer';
import CaptionParser from 'mux.js/lib/mp4/caption-parser';
import mp4probe from 'mux.js/lib/mp4/probe';
import tsInspector from 'mux.js/lib/tools/ts-inspector.js';
import {
ONE_SECOND_IN_TS,
secondsToVideoTs,
videoTsToSeconds
} from 'mux.js/lib/utils/clock';
/**
* Re-emits transmuxer events by converting them into messages to the
* world outside the worker.
*
* @param {Object} self the worker's global scope to post messages through
* @param {Object} transmuxer the transmuxer to wire events on
* @private
*/
const wireTransmuxerEvents = function(self, transmuxer) {
transmuxer.on('data', function(segment) {
// transfer ownership of the underlying ArrayBuffer
// instead of doing a copy to save memory
// ArrayBuffers are transferable but generic TypedArrays are not
// @link https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers#Passing_data_by_transferring_ownership_(transferable_objects)
const initArray = segment.initSegment;
segment.initSegment = {
data: initArray.buffer,
byteOffset: initArray.byteOffset,
byteLength: initArray.byteLength
};
const typedArray = segment.data;
segment.data = typedArray.buffer;
self.postMessage({
action: 'data',
segment,
byteOffset: typedArray.byteOffset,
byteLength: typedArray.byteLength
}, [segment.data]);
});
transmuxer.on('done', function(data) {
self.postMessage({ action: 'done' });
});
transmuxer.on('gopInfo', function(gopInfo) {
self.postMessage({
action: 'gopInfo',
gopInfo
});
});
transmuxer.on('videoSegmentTimingInfo', function(timingInfo) {
const videoSegmentTimingInfo = {
start: {
decode: videoTsToSeconds(timingInfo.start.dts),
presentation: videoTsToSeconds(timingInfo.start.pts)
},
end: {
decode: videoTsToSeconds(timingInfo.end.dts),
presentation: videoTsToSeconds(timingInfo.end.pts)
},
baseMediaDecodeTime: videoTsToSeconds(timingInfo.baseMediaDecodeTime)
};
if (timingInfo.prependedContentDuration) {
videoSegmentTimingInfo.prependedContentDuration = videoTsToSeconds(timingInfo.prependedContentDuration);
}
self.postMessage({
action: 'videoSegmentTimingInfo',
videoSegmentTimingInfo
});
});
transmuxer.on('audioSegmentTimingInfo', function(timingInfo) {
// Note that all times for [audio/video]SegmentTimingInfo events are in video clock
const audioSegmentTimingInfo = {
start: {
decode: videoTsToSeconds(timingInfo.start.dts),
presentation: videoTsToSeconds(timingInfo.start.pts)
},
end: {
decode: videoTsToSeconds(timingInfo.end.dts),
presentation: videoTsToSeconds(timingInfo.end.pts)
},
baseMediaDecodeTime: videoTsToSeconds(timingInfo.baseMediaDecodeTime)
};
if (timingInfo.prependedContentDuration) {
audioSegmentTimingInfo.prependedContentDuration =
videoTsToSeconds(timingInfo.prependedContentDuration);
}
self.postMessage({
action: 'audioSegmentTimingInfo',
audioSegmentTimingInfo
});
});
transmuxer.on('id3Frame', function(id3Frame) {
self.postMessage({
action: 'id3Frame',
id3Frame
});
});
transmuxer.on('caption', function(caption) {
self.postMessage({
action: 'caption',
caption
});
});
transmuxer.on('trackinfo', function(trackInfo) {
self.postMessage({
action: 'trackinfo',
trackInfo
});
});
transmuxer.on('audioTimingInfo', function(audioTimingInfo) {
// convert to video TS since we prioritize video time over audio
self.postMessage({
action: 'audioTimingInfo',
audioTimingInfo: {
start: videoTsToSeconds(audioTimingInfo.start),
end: videoTsToSeconds(audioTimingInfo.end)
}
});
});
transmuxer.on('videoTimingInfo', function(videoTimingInfo) {
self.postMessage({
action: 'videoTimingInfo',
videoTimingInfo: {
start: videoTsToSeconds(videoTimingInfo.start),
end: videoTsToSeconds(videoTimingInfo.end)
}
});
});
};
/**
* All incoming messages route through this hash. If no function exists
* to handle an incoming message, then we ignore the message.
*
* @class MessageHandlers
* @param {Object} self the worker's global scope
* @param {Object} options the options to initialize with
*/
class MessageHandlers {
constructor(self, options) {
this.options = options || {};
this.self = self;
this.init();
}
/**
* initialize our web worker and wire all the events.
*/
init() {
if (this.transmuxer) {
this.transmuxer.dispose();
}
this.transmuxer = new Transmuxer(this.options);
wireTransmuxerEvents(this.self, this.transmuxer);
}
pushMp4Captions(data) {
if (!this.captionParser) {
this.captionParser = new CaptionParser();
this.captionParser.init();
}
const segment = new Uint8Array(data.data, data.byteOffset, data.byteLength);
const parsed = this.captionParser.parse(
segment,
data.trackIds,
data.timescales
);
this.self.postMessage({
action: 'mp4Captions',
captions: parsed && parsed.captions || [],
data: segment.buffer
}, [segment.buffer]);
}
probeMp4StartTime({timescales, data}) {
const startTime = mp4probe.startTime(timescales, data);
this.self.postMessage({
action: 'probeMp4StartTime',
startTime,
data
}, [data.buffer]);
}
probeMp4Tracks({data}) {
const tracks = mp4probe.tracks(data);
this.self.postMessage({
action: 'probeMp4Tracks',
tracks,
data
}, [data.buffer]);
}
/**
* Probe an mpeg2-ts segment to determine the start time of the segment in its
* internal "media time," as well as whether it contains video and/or audio.
*
* @private
* @param {Uint8Array} bytes - segment bytes
* @param {number} baseStartTime
* Relative reference timestamp used when adjusting frame timestamps for rollover.
* This value should be in seconds, as it's converted to a 90kHz clock within the
* function body.
* @return {Object} The start time of the current segment in "media time" as well as
* whether it contains video and/or audio
*/
probeTs({data, baseStartTime}) {
const tsStartTime = (typeof baseStartTime === 'number' && !isNaN(baseStartTime)) ?
(baseStartTime * ONE_SECOND_IN_TS) :
void 0;
const timeInfo = tsInspector.inspect(data, tsStartTime);
let result = null;
if (timeInfo) {
result = {
// each type's time info comes back as an array of 2 times, start and end
hasVideo: timeInfo.video && timeInfo.video.length === 2 || false,
hasAudio: timeInfo.audio && timeInfo.audio.length === 2 || false
};
if (result.hasVideo) {
result.videoStart = timeInfo.video[0].ptsTime;
}
if (result.hasAudio) {
result.audioStart = timeInfo.audio[0].ptsTime;
}
}
this.self.postMessage({
action: 'probeTs',
result,
data
}, [data.buffer]);
}
clearAllMp4Captions() {
if (this.captionParser) {
this.captionParser.clearAllCaptions();
}
}
clearParsedMp4Captions() {
if (this.captionParser) {
this.captionParser.clearParsedCaptions();
}
}
/**
* Adds data (a ts segment) to the start of the transmuxer pipeline for
* processing.
*
* @param {ArrayBuffer} data data to push into the muxer
*/
push(data) {
// Cast array buffer to correct type for transmuxer
const segment = new Uint8Array(data.data, data.byteOffset, data.byteLength);
this.transmuxer.push(segment);
}
/**
* Recreate the transmuxer so that the next segment added via `push`
* starts with a fresh transmuxer.
*/
reset() {
this.transmuxer.reset();
}
/**
* Set the value that will be used as the `baseMediaDecodeTime` time for the
* next segment pushed in. Subsequent segments will have their `baseMediaDecodeTime`
* set relative to the first based on the PTS values.
*
* @param {Object} data used to set the timestamp offset in the muxer
*/
setTimestampOffset(data) {
const timestampOffset = data.timestampOffset || 0;
this.transmuxer.setBaseMediaDecodeTime(Math.round(secondsToVideoTs(timestampOffset)));
}
setAudioAppendStart(data) {
this.transmuxer.setAudioAppendStart(Math.ceil(secondsToVideoTs(data.appendStart)));
}
setRemux(data) {
this.transmuxer.setRemux(data.remux);
}
/**
* Forces the pipeline to finish processing the last segment and emit its
* results.
*
* @param {Object} data event data, not really used
*/
flush(data) {
this.transmuxer.flush();
// transmuxed done action is fired after both audio/video pipelines are flushed
self.postMessage({
action: 'done',
type: 'transmuxed'
});
}
endTimeline() {
this.transmuxer.endTimeline();
// transmuxed endedtimeline action is fired after both audio/video pipelines end their
// timelines
self.postMessage({
action: 'endedtimeline',
type: 'transmuxed'
});
}
alignGopsWith(data) {
this.transmuxer.alignGopsWith(data.gopsToAlignWith.slice());
}
}
/**
* Our web worker interface so that things can talk to mux.js,
* which will be running in a web worker. The scope is passed to this by
* webworkify.
*
* @param {Object} self the scope for the web worker
*/
self.onmessage = function(event) {
if (event.data.action === 'init' && event.data.options) {
this.messageHandlers = new MessageHandlers(self, event.data.options);
return;
}
if (!this.messageHandlers) {
this.messageHandlers = new MessageHandlers(self);
}
if (event.data && event.data.action && event.data.action !== 'init') {
if (this.messageHandlers[event.data.action]) {
this.messageHandlers[event.data.action](event.data);
}
}
};
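// Illustrative message flow from the main thread (`worker` is the
// instantiated web worker; an 'init' message should be sent first):
//
//   worker.postMessage({ action: 'init', options: {} });
//   worker.postMessage({
//     action: 'push',
//     data: arrayBuffer, // an ArrayBuffer of ts bytes
//     byteOffset: 0,
//     byteLength: arrayBuffer.byteLength
//   }, [arrayBuffer]);
//   worker.postMessage({ action: 'flush' });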

File diff suppressed because it is too large

128
node_modules/@videojs/http-streaming/src/util/codecs.js generated vendored Normal file

@@ -0,0 +1,128 @@
/**
* @file - codecs.js - Handles tasks regarding codec strings, such as translating
* them between formats or parsing codec strings into objects that can be examined.
*/
import {
translateLegacyCodec,
parseCodecs,
codecsFromDefault
} from '@videojs/vhs-utils/es/codecs.js';
import logger from './logger.js';
const logFn = logger('CodecUtils');
/**
* Returns a set of codec strings parsed from the playlist, or undefined if
* no codecs were specified in the playlist
*
* @param {Playlist} media the current media playlist
* @return {Object|undefined} an object with the parsed video and/or audio codecs
*/
const getCodecs = function(media) {
// if the codecs were explicitly specified, use them instead of the
// defaults
const mediaAttributes = media.attributes || {};
if (mediaAttributes.CODECS) {
return parseCodecs(mediaAttributes.CODECS);
}
};
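/**
 * Returns whether the master playlist defines an alternate audio group that
 * this media playlist points at via its AUDIO attribute.
 */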
export const isMaat = (master, media) => {
const mediaAttributes = media.attributes || {};
return master && master.mediaGroups && master.mediaGroups.AUDIO &&
mediaAttributes.AUDIO &&
master.mediaGroups.AUDIO[mediaAttributes.AUDIO];
};
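/**
 * Returns whether audio and video are muxed into the same segments. Content
 * is considered muxed when there is no alternate audio group, or when any
 * entry in the group provides neither a URI nor its own playlists.
 */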
export const isMuxed = (master, media) => {
if (!isMaat(master, media)) {
return true;
}
const mediaAttributes = media.attributes || {};
const audioGroup = master.mediaGroups.AUDIO[mediaAttributes.AUDIO];
for (const groupId in audioGroup) {
// If an audio group has a URI (the case for HLS, as HLS will use external playlists),
// or there are listed playlists (the case for DASH, as the manifest will have already
// provided all of the details necessary to generate the audio playlist, as opposed to
// HLS' externally requested playlists), then the content is demuxed.
if (!audioGroup[groupId].uri && !audioGroup[groupId].playlists) {
return true;
}
}
return false;
};
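/**
 * Flattens a parsed codec list (entries shaped like `{mediaType, type, details}`)
 * into an object keyed by media type, e.g. `{video: 'avc1.640028', audio: 'mp4a.40.2'}`.
 * A media type with more than one codec is set to null so that the real codec
 * can later be probed out of the segments.
 */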
export const unwrapCodecList = function(codecList) {
const codecs = {};
codecList.forEach(({mediaType, type, details}) => {
codecs[mediaType] = codecs[mediaType] || [];
codecs[mediaType].push(translateLegacyCodec(`${type}${details}`));
});
Object.keys(codecs).forEach(function(mediaType) {
if (codecs[mediaType].length > 1) {
logFn(`multiple ${mediaType} codecs found as attributes: ${codecs[mediaType].join(', ')}. Setting playlist codecs to null so that we wait for mux.js to probe segments for real codecs.`);
codecs[mediaType] = null;
return;
}
codecs[mediaType] = codecs[mediaType][0];
});
return codecs;
};
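/**
 * Counts how many media types (audio and/or video) have a codec set on the
 * given codec object.
 */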
export const codecCount = function(codecObj) {
let count = 0;
if (codecObj.audio) {
count++;
}
if (codecObj.video) {
count++;
}
return count;
};
/**
* Calculates the codec strings for a working configuration of
* SourceBuffers to play variant streams in a master playlist. If
* there is no possible working configuration, an empty object will be
* returned.
*
* @param master {Object} the m3u8 object for the master playlist
* @param media {Object} the m3u8 object for the variant playlist
* @return {Object} the codec strings.
*
* @private
*/
export const codecsForPlaylist = function(master, media) {
const mediaAttributes = media.attributes || {};
const codecInfo = unwrapCodecList(getCodecs(media) || []);
// HLS with multiple-audio tracks must always get an audio codec.
// Put another way, there is no way to have a video-only multiple-audio HLS!
if (isMaat(master, media) && !codecInfo.audio) {
if (!isMuxed(master, media)) {
// It is possible for codecs to be specified on the audio media group playlist but
// not on the rendition playlist. This is mostly the case for DASH, where audio and
// video are always separate (and separately specified).
const defaultCodecs = unwrapCodecList(codecsFromDefault(master, mediaAttributes.AUDIO) || []);
if (defaultCodecs.audio) {
codecInfo.audio = defaultCodecs.audio;
}
}
}
return codecInfo;
};
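As an illustrative sketch (not from the source), here is roughly what these helpers produce for an HLS rendition with an explicit CODECS attribute and a demuxed alternate audio group; the playlist objects below are simplified assumptions:

const master = {
  mediaGroups: {
    AUDIO: {
      audio: {
        en: {uri: 'audio-en.m3u8'} // a uri means externally requested, demuxed audio
      }
    }
  }
};
const media = {
  attributes: {
    CODECS: 'avc1.640028,mp4a.40.2',
    AUDIO: 'audio'
  }
};

isMaat(master, media); // => truthy: the rendition points at an alternate audio group
isMuxed(master, media); // => false: the group entry has a uri
codecsForPlaylist(master, media);
// => {video: 'avc1.640028', audio: 'mp4a.40.2'}, assuming parseCodecs yields
// the {mediaType, type, details} entries that unwrapCodecList expects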

View file

@ -0,0 +1,86 @@
import {getId3Offset} from '@videojs/vhs-utils/es/id3-helpers';
import {detectContainerForBytes} from '@videojs/vhs-utils/es/containers';
import {stringToBytes, concatTypedArrays} from '@videojs/vhs-utils/es/byte-helpers';
import {callbackWrapper} from '../xhr';
// calls back if the request has reached readyState DONE (4),
// which only happens once the request is complete
const callbackOnCompleted = (request, cb) => {
if (request.readyState === 4) {
return cb();
}
return;
};
const containerRequest = (uri, xhr, cb) => {
let bytes = [];
let id3Offset;
let finished = false;
const endRequestAndCallback = function(err, req, type, _bytes) {
req.abort();
finished = true;
return cb(err, req, type, _bytes);
};
const progressListener = function(error, request) {
if (finished) {
return;
}
if (error) {
return endRequestAndCallback(error, request, '', bytes);
}
// grab the new part of content that was just downloaded
const newPart = request.responseText.substring(
bytes && bytes.byteLength || 0,
request.responseText.length
);
// add that onto bytes
bytes = concatTypedArrays(bytes, stringToBytes(newPart, true));
id3Offset = id3Offset || getId3Offset(bytes);
// we need at least 10 bytes to determine a type
// or we need at least two bytes after an id3Offset
if (bytes.length < 10 || (id3Offset && bytes.length < id3Offset + 2)) {
return callbackOnCompleted(request, () => endRequestAndCallback(error, request, '', bytes));
}
const type = detectContainerForBytes(bytes);
// if this looks like a ts segment but we don't have enough data
// to see the second sync byte, wait until we have enough data
// before declaring it ts
if (type === 'ts' && bytes.length < 188) {
return callbackOnCompleted(request, () => endRequestAndCallback(error, request, '', bytes));
}
// this may be an unsynced ts segment
// wait for 376 bytes (two 188-byte ts packets) before detecting no container
if (!type && bytes.length < 376) {
return callbackOnCompleted(request, () => endRequestAndCallback(error, request, '', bytes));
}
return endRequestAndCallback(null, request, type, bytes);
};
const options = {
uri,
beforeSend(request) {
// this forces the browser to pass the bytes to us unprocessed
request.overrideMimeType('text/plain; charset=x-user-defined');
request.addEventListener('progress', function({total, loaded}) {
return callbackWrapper(request, null, {statusCode: request.status}, progressListener);
});
}
};
const request = xhr(options, function(error, response) {
return callbackWrapper(request, error, response, progressListener);
});
return request;
};
export default containerRequest;
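A hypothetical usage sketch (names assumed, not from the source), sniffing a segment's container before choosing a demux path; `xhrFactory` stands in for the xhr implementation that `callbackWrapper` expects:

containerRequest('https://example.com/seg0.bin', xhrFactory, (error, request, type, bytes) => {
  if (error) {
    return; // hypothetical error handling would go here
  }
  // `type` is a container string such as 'ts' or 'mp4' from
  // detectContainerForBytes, or '' if it could not be determined
  console.log(`detected container: ${type} after ${bytes.length} bytes`);
});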

119
node_modules/@videojs/http-streaming/src/util/gops.js generated vendored Normal file
View file

@ -0,0 +1,119 @@
import { ONE_SECOND_IN_TS } from 'mux.js/lib/utils/clock';
/**
* Returns a list of gops in the buffer that have a pts value of 3 seconds or more in
* front of current time.
*
* @param {Array} buffer
* The current buffer of gop information
* @param {number} currentTime
* The current time
* @param {Double} mapping
* Offset to map display time to stream presentation time
* @return {Array}
* List of gops considered safe to append over
*/
export const gopsSafeToAlignWith = (buffer, currentTime, mapping) => {
if (typeof currentTime === 'undefined' || currentTime === null || !buffer.length) {
return [];
}
// pts value for current time + 3 seconds to give a bit more wiggle room
const currentTimePts = Math.ceil((currentTime - mapping + 3) * ONE_SECOND_IN_TS);
let i;
for (i = 0; i < buffer.length; i++) {
if (buffer[i].pts > currentTimePts) {
break;
}
}
return buffer.slice(i);
};
/**
* Appends gop information (timing and byteLength) received by the transmuxer for the
* gops appended in the last call to appendBuffer
*
* @param {Array} buffer
* The current buffer of gop information
* @param {Array} gops
* List of new gop information
* @param {boolean} replace
 *        If true, replace the buffer with the new gop information. If false, insert
 *        the new gop information into the buffer at the correct time-ordered position.
* @return {Array}
* Updated list of gop information
*/
export const updateGopBuffer = (buffer, gops, replace) => {
if (!gops.length) {
return buffer;
}
if (replace) {
// If we are in safe append mode, then completely overwrite the gop buffer
// with the most recently appended data. This will make sure that when appending
// future segments, we only try to align with gops that are both ahead of current
// time and in the last segment appended.
return gops.slice();
}
const start = gops[0].pts;
let i = 0;
for (i; i < buffer.length; i++) {
if (buffer[i].pts >= start) {
break;
}
}
return buffer.slice(0, i).concat(gops);
};
/**
* Removes gop information in buffer that overlaps with provided start and end
*
* @param {Array} buffer
* The current buffer of gop information
* @param {Double} start
* position to start the remove at
* @param {Double} end
* position to end the remove at
* @param {Double} mapping
* Offset to map display time to stream presentation time
*/
export const removeGopBuffer = (buffer, start, end, mapping) => {
const startPts = Math.ceil((start - mapping) * ONE_SECOND_IN_TS);
const endPts = Math.ceil((end - mapping) * ONE_SECOND_IN_TS);
const updatedBuffer = buffer.slice();
let i = buffer.length;
while (i--) {
if (buffer[i].pts <= endPts) {
break;
}
}
if (i === -1) {
// no removal because end of remove range is before start of buffer
return updatedBuffer;
}
let j = i + 1;
while (j--) {
if (buffer[j].pts <= startPts) {
break;
}
}
// clamp remove range start to 0 index
j = Math.max(j, 0);
updatedBuffer.splice(j, i - j + 1);
return updatedBuffer;
};
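A small worked sketch (not part of the source) of the pts arithmetic above, with mapping = 0 and ONE_SECOND_IN_TS = 90000; real gop entries also carry fields like byteLength, omitted here:

const buffer = [
  {pts: 0},      // gop starting at 0s
  {pts: 180000}, // 2s
  {pts: 360000}, // 4s
  {pts: 540000}  // 6s
];

// currentTime = 0.5s: the cutoff is (0.5 + 3) * 90000 = 315000, so only gops
// starting after 3.5s are considered safe to align with
gopsSafeToAlignWith(buffer, 0.5, 0);
// => [{pts: 360000}, {pts: 540000}]

// removing [3s, 5s): the gop at 2s is removed as well, since it runs until
// the next gop at 4s and therefore overlaps the removed range
removeGopBuffer(buffer, 3, 5, 0);
// => [{pts: 0}, {pts: 540000}]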

Some files were not shown because too many files have changed in this diff