First release

Owen Quinlan 2021-07-02 19:29:34 +10:00
commit fa6c85266e
2339 changed files with 761050 additions and 0 deletions

142
node_modules/@videojs/vhs-utils/CHANGELOG.md generated vendored Normal file

@ -0,0 +1,142 @@
<a name="3.0.2"></a>
## [3.0.2](https://github.com/videojs/vhs-utils/compare/v3.0.1...v3.0.2) (2021-05-20)
### Bug Fixes
* properly handle data URIs ([#27](https://github.com/videojs/vhs-utils/issues/27)) ([9b10245](https://github.com/videojs/vhs-utils/commit/9b10245)), closes [videojs/video.js#7240](https://github.com/videojs/video.js/issues/7240)
<a name="3.0.1"></a>
## [3.0.1](https://github.com/videojs/vhs-utils/compare/v3.0.0...v3.0.1) (2021-04-29)
### Bug Fixes
* binary issues ([e9f5079](https://github.com/videojs/vhs-utils/commit/e9f5079))
### Chores
* update vjsverify ([105c26a](https://github.com/videojs/vhs-utils/commit/105c26a))
### Performance Improvements
* use native URL when available ([#26](https://github.com/videojs/vhs-utils/issues/26)) ([e7eaab9](https://github.com/videojs/vhs-utils/commit/e7eaab9))
<a name="3.0.0"></a>
# [3.0.0](https://github.com/videojs/vhs-utils/compare/v2.3.0...v3.0.0) (2020-12-18)
### Features
* Extend our current container parsing logic and add logic for parsing codecs from files ([#14](https://github.com/videojs/vhs-utils/issues/14)) ([d425956](https://github.com/videojs/vhs-utils/commit/d425956))
* parse any number of codecs rather than just the last audio or the last video codec. ([#23](https://github.com/videojs/vhs-utils/issues/23)) ([33ec9f5](https://github.com/videojs/vhs-utils/commit/33ec9f5))
* use [@videojs](https://github.com/videojs)/babel-config to transpile code to cjs/es for node ([#20](https://github.com/videojs/vhs-utils/issues/20)) ([c6dbd0b](https://github.com/videojs/vhs-utils/commit/c6dbd0b))
### Chores
* switch from travis to github ci ([#24](https://github.com/videojs/vhs-utils/issues/24)) ([cfee30b](https://github.com/videojs/vhs-utils/commit/cfee30b))
### BREAKING CHANGES
* cjs dist files changed from './dist' to './cjs'
* parseCodecs now returns an array of codecs that were parsed, so that we can support any number of codecs instead of just two.
* toUint8 in byte-helpers functions slightly differently
* getId3Offset is exported from id3-helpers rather than containers
We can now parse the container, and many of the codecs within it (where applicable), for mp4, avi, ts, mkv, webm, ogg, wav, aac, ac3 (and ec3, which is contained in ac3 files), mp3, flac, raw h265, and raw h264.
Codec parsing has also been extended to parse codec details in a file for vp09, avc (h264), hevc (h265), av1, and opus.
Finally, we have the following additional features in our codec/container parsing:
* skipping multiple id3 tags at the start of a file for flac, mp3, and aac
* discarding emulation prevention bits (in h264, h265)
* parsing raw h264/h265 to get codec params for ts, avi, and even raw h264/h265 files
<a name="2.3.0"></a>
# [2.3.0](https://github.com/videojs/vhs-utils/compare/v2.2.1...v2.3.0) (2020-12-03)
### Features
* parse unknown and text codecs ([#19](https://github.com/videojs/vhs-utils/issues/19)) ([9c90076](https://github.com/videojs/vhs-utils/commit/9c90076))
### Chores
* Add repository info to package.json ([#22](https://github.com/videojs/vhs-utils/issues/22)) ([a22ae78](https://github.com/videojs/vhs-utils/commit/a22ae78))
<a name="2.2.1"></a>
## [2.2.1](https://github.com/videojs/stream/compare/v2.2.0...v2.2.1) (2020-10-06)
### Bug Fixes
* check for multiple id3 sections in a file (#21) ([759a039](https://github.com/videojs/stream/commit/759a039)), closes [#21](https://github.com/videojs/stream/issues/21)
* parse unknown codecs as audio or video (#15) ([cd2c9bb](https://github.com/videojs/stream/commit/cd2c9bb)), closes [#15](https://github.com/videojs/stream/issues/15)
### Reverts
* "fix: parse unknown codecs as audio or video (#15)" (#18) ([9983be8](https://github.com/videojs/stream/commit/9983be8)), closes [#15](https://github.com/videojs/stream/issues/15) [#18](https://github.com/videojs/stream/issues/18)
<a name="2.2.0"></a>
# [2.2.0](https://github.com/videojs/stream/compare/v2.1.0...v2.2.0) (2020-05-01)
### Features
* Add a function to concat typed arrays into one Uint8Array (#13) ([e733509](https://github.com/videojs/stream/commit/e733509)), closes [#13](https://github.com/videojs/stream/issues/13)
<a name="2.1.0"></a>
# [2.1.0](https://github.com/videojs/stream/compare/v2.0.0...v2.1.0) (2020-04-27)
### Features
* Add functions for byte manipulation and segment container detection (#12) ([325f677](https://github.com/videojs/stream/commit/325f677)), closes [#12](https://github.com/videojs/stream/issues/12)
<a name="2.0.0"></a>
# [2.0.0](https://github.com/videojs/stream/compare/v1.3.0...v2.0.0) (2020-04-07)
### Features
* **codec:** changes to handle muxer/browser/video/audio support separately (#10) ([1f92865](https://github.com/videojs/stream/commit/1f92865)), closes [#10](https://github.com/videojs/stream/issues/10)
### Bug Fixes
* Allow VP9 and AV1 codecs through in VHS ([b32e35b](https://github.com/videojs/stream/commit/b32e35b))
### BREAKING CHANGES
* **codec:** parseCodecs output has been changed. It now returns an object that can have an audio or video property, depending on the codecs found. Those properties are objects that contain `type` and `details`: `type` is the codec name and `details` is codec-specific information, usually with a leading period.
* **codec:** `audioProfileFromDefault` has been renamed to `codecsFromDefault` and now returns all output from `parseCodecs` not just audio or audio profile.
<a name="1.3.0"></a>
# [1.3.0](https://github.com/videojs/vhs-utils/compare/v1.2.1...v1.3.0) (2020-02-05)
### Features
* add forEachMediaGroup in media-groups module (#8) ([a1eacf4](https://github.com/videojs/vhs-utils/commit/a1eacf4)), closes [#8](https://github.com/videojs/vhs-utils/issues/8)
<a name="1.2.1"></a>
## [1.2.1](https://github.com/videojs/vhs-utils/compare/v1.2.0...v1.2.1) (2020-01-15)
### Bug Fixes
* include videojs in VHS JSON media type (#7) ([da072f0](https://github.com/videojs/vhs-utils/commit/da072f0)), closes [#7](https://github.com/videojs/vhs-utils/issues/7)
<a name="1.2.0"></a>
# [1.2.0](https://github.com/videojs/vhs-utils/compare/v1.1.0...v1.2.0) (2019-12-06)
### Features
* add media-types module with simpleTypeFromSourceType function (#4) ([d3ebd3f](https://github.com/videojs/vhs-utils/commit/d3ebd3f)), closes [#4](https://github.com/videojs/vhs-utils/issues/4)
* add VHS codec parsing and translation functions (#5) ([4fe0e22](https://github.com/videojs/vhs-utils/commit/4fe0e22)), closes [#5](https://github.com/videojs/vhs-utils/issues/5)
<a name="1.1.0"></a>
# [1.1.0](https://github.com/videojs/stream/compare/v1.0.0...v1.1.0) (2019-08-30)
### Features
* node support and more stream tests ([315ab8d](https://github.com/videojs/stream/commit/315ab8d))
<a name="1.0.0"></a>
# 1.0.0 (2019-08-21)
### Features
* clones from mpd-parser, m3u8-parser, mux.js, aes-decrypter, and vhs ([5e89042](https://github.com/videojs/stream/commit/5e89042))

30
node_modules/@videojs/vhs-utils/CONTRIBUTING.md generated vendored Normal file

@ -0,0 +1,30 @@
# CONTRIBUTING
We welcome contributions from everyone!
## Getting Started
Make sure you have Node.js 8 or higher and npm installed.
1. Fork this repository and clone your fork
1. Install dependencies: `npm install`
1. Run a development server: `npm start`
### Making Changes
Refer to the [video.js plugin conventions][conventions] for more detail on best practices and tooling for video.js plugin authorship.
When you've made your changes, push your commit(s) to your fork and issue a pull request against the original repository.
### Running Tests
Testing is a crucial part of any software project. For all but the most trivial changes (typos, etc.), test cases are expected. Tests are run in actual browsers using [Karma][karma].
- In all available and supported browsers: `npm test`
- In a specific browser: `npm run test:chrome`, `npm run test:firefox`, etc.
- While the development server is running (`npm start`), navigate to [`http://localhost:9999/test/`][local]
[karma]: http://karma-runner.github.io/
[local]: http://localhost:9999/test/
[conventions]: https://github.com/videojs/generator-videojs-plugin/blob/master/docs/conventions.md

19
node_modules/@videojs/vhs-utils/LICENSE generated vendored Normal file

@ -0,0 +1,19 @@
Copyright (c) brandonocasey <brandonocasey@gmail.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

41
node_modules/@videojs/vhs-utils/README.md generated vendored Normal file

@ -0,0 +1,41 @@
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
- [@videojs/vhs-utils](#videojsvhs-utils)
- [Installation](#installation)
- [Usage](#usage)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
# @videojs/vhs-utils
vhs-utils serves two purposes:
1. It extracts objects and functions shared throughout @videojs/http-streaming code to save on package size. See [the original @videojs/http-streaming PR](https://github.com/videojs/http-streaming/pull/637) for details.
2. It exports generic functions from VHS that may be useful to plugin authors.
## Installation
```sh
npm install --save @videojs/vhs-utils
```
## Usage
All utility functions are published under the `es` and `cjs` directories and can be required/imported like so:
> es import using es dist
```js
import resolveUrl from '@videojs/vhs-utils/es/resolve-url';
```
> cjs import using cjs dist
```js
const resolveUrl = require('@videojs/vhs-utils/cjs/resolve-url');
```
> deprecated cjs import using the dist directory
```js
const resolveUrl = require('@videojs/vhs-utils/dist/resolve-url');
```

323
node_modules/@videojs/vhs-utils/cjs/byte-helpers.js generated vendored Normal file

@ -0,0 +1,323 @@
"use strict";
var _interopRequireDefault = require("@babel/runtime/helpers/interopRequireDefault");
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.reverseBytes = exports.sliceBytes = exports.bytesMatch = exports.concatTypedArrays = exports.stringToBytes = exports.bytesToString = exports.numberToBytes = exports.bytesToNumber = exports.IS_LITTLE_ENDIAN = exports.IS_BIG_ENDIAN = exports.ENDIANNESS = exports.toBinaryString = exports.toHexString = exports.toUint8 = exports.isTypedArray = exports.padStart = exports.countBytes = exports.countBits = void 0;
var _window = _interopRequireDefault(require("global/window"));
// const log2 = Math.log2 ? Math.log2 : (x) => (Math.log(x) / Math.log(2));
var repeat = function repeat(str, len) {
var acc = '';
while (len--) {
acc += str;
}
return acc;
}; // count the number of bits it would take to represent a number
// we used to do this with log2 but BigInt does not support builtin math
// Math.ceil(log2(x));
var countBits = function countBits(x) {
return x.toString(2).length;
}; // count the number of whole bytes it would take to represent a number
exports.countBits = countBits;
var countBytes = function countBytes(x) {
return Math.ceil(countBits(x) / 8);
};
exports.countBytes = countBytes;
var padStart = function padStart(b, len, str) {
if (str === void 0) {
str = ' ';
}
return (repeat(str, len) + b.toString()).slice(-len);
};
exports.padStart = padStart;
var isTypedArray = function isTypedArray(obj) {
return ArrayBuffer.isView(obj);
};
exports.isTypedArray = isTypedArray;
var toUint8 = function toUint8(bytes) {
if (bytes instanceof Uint8Array) {
return bytes;
}
if (!Array.isArray(bytes) && !isTypedArray(bytes) && !(bytes instanceof ArrayBuffer)) {
// any non-number or NaN leads to empty uint8array
// eslint-disable-next-line
if (typeof bytes !== 'number' || typeof bytes === 'number' && bytes !== bytes) {
bytes = 0;
} else {
bytes = [bytes];
}
}
return new Uint8Array(bytes && bytes.buffer || bytes, bytes && bytes.byteOffset || 0, bytes && bytes.byteLength || 0);
};
exports.toUint8 = toUint8;
var toHexString = function toHexString(bytes) {
bytes = toUint8(bytes);
var str = '';
for (var i = 0; i < bytes.length; i++) {
str += padStart(bytes[i].toString(16), 2, '0');
}
return str;
};
exports.toHexString = toHexString;
var toBinaryString = function toBinaryString(bytes) {
bytes = toUint8(bytes);
var str = '';
for (var i = 0; i < bytes.length; i++) {
str += padStart(bytes[i].toString(2), 8, '0');
}
return str;
};
exports.toBinaryString = toBinaryString;
var BigInt = _window.default.BigInt || Number;
var BYTE_TABLE = [BigInt('0x1'), BigInt('0x100'), BigInt('0x10000'), BigInt('0x1000000'), BigInt('0x100000000'), BigInt('0x10000000000'), BigInt('0x1000000000000'), BigInt('0x100000000000000'), BigInt('0x10000000000000000')];
var ENDIANNESS = function () {
var a = new Uint16Array([0xFFCC]);
var b = new Uint8Array(a.buffer, a.byteOffset, a.byteLength);
if (b[0] === 0xFF) {
return 'big';
}
if (b[0] === 0xCC) {
return 'little';
}
return 'unknown';
}();
exports.ENDIANNESS = ENDIANNESS;
var IS_BIG_ENDIAN = ENDIANNESS === 'big';
exports.IS_BIG_ENDIAN = IS_BIG_ENDIAN;
var IS_LITTLE_ENDIAN = ENDIANNESS === 'little';
exports.IS_LITTLE_ENDIAN = IS_LITTLE_ENDIAN;
var bytesToNumber = function bytesToNumber(bytes, _temp) {
var _ref = _temp === void 0 ? {} : _temp,
_ref$signed = _ref.signed,
signed = _ref$signed === void 0 ? false : _ref$signed,
_ref$le = _ref.le,
le = _ref$le === void 0 ? false : _ref$le;
bytes = toUint8(bytes);
var fn = le ? 'reduce' : 'reduceRight';
var obj = bytes[fn] ? bytes[fn] : Array.prototype[fn];
var number = obj.call(bytes, function (total, byte, i) {
var exponent = le ? i : Math.abs(i + 1 - bytes.length);
return total + BigInt(byte) * BYTE_TABLE[exponent];
}, BigInt(0));
if (signed) {
var max = BYTE_TABLE[bytes.length] / BigInt(2) - BigInt(1);
number = BigInt(number);
if (number > max) {
number -= max;
number -= max;
number -= BigInt(2);
}
}
return Number(number);
};
exports.bytesToNumber = bytesToNumber;
var numberToBytes = function numberToBytes(number, _temp2) {
var _ref2 = _temp2 === void 0 ? {} : _temp2,
_ref2$le = _ref2.le,
le = _ref2$le === void 0 ? false : _ref2$le;
// eslint-disable-next-line
if (typeof number !== 'bigint' && typeof number !== 'number' || typeof number === 'number' && number !== number) {
number = 0;
}
number = BigInt(number);
var byteCount = countBytes(number);
var bytes = new Uint8Array(new ArrayBuffer(byteCount));
for (var i = 0; i < byteCount; i++) {
var byteIndex = le ? i : Math.abs(i + 1 - bytes.length);
bytes[byteIndex] = Number(number / BYTE_TABLE[i] & BigInt(0xFF));
if (number < 0) {
bytes[byteIndex] = Math.abs(~bytes[byteIndex]);
bytes[byteIndex] -= i === 0 ? 1 : 2;
}
}
return bytes;
};
exports.numberToBytes = numberToBytes;
var bytesToString = function bytesToString(bytes) {
if (!bytes) {
return '';
} // TODO: should toUint8 handle cases where we only have 8 bytes
// but report more since this is a Uint16+ Array?
bytes = Array.prototype.slice.call(bytes);
var string = String.fromCharCode.apply(null, toUint8(bytes));
try {
return decodeURIComponent(escape(string));
} catch (e) {// if decodeURIComponent/escape fails, we are dealing with partial
// or full non string data. Just return the potentially garbled string.
}
return string;
};
exports.bytesToString = bytesToString;
var stringToBytes = function stringToBytes(string, stringIsBytes) {
if (typeof string !== 'string' && string && typeof string.toString === 'function') {
string = string.toString();
}
if (typeof string !== 'string') {
return new Uint8Array();
} // If the string already is bytes, we don't have to do this
// otherwise we do this so that we split multi length characters
// into individual bytes
if (!stringIsBytes) {
string = unescape(encodeURIComponent(string));
}
var view = new Uint8Array(string.length);
for (var i = 0; i < string.length; i++) {
view[i] = string.charCodeAt(i);
}
return view;
};
exports.stringToBytes = stringToBytes;
var concatTypedArrays = function concatTypedArrays() {
for (var _len = arguments.length, buffers = new Array(_len), _key = 0; _key < _len; _key++) {
buffers[_key] = arguments[_key];
}
buffers = buffers.filter(function (b) {
return b && (b.byteLength || b.length) && typeof b !== 'string';
});
if (buffers.length <= 1) {
// for 0 length we will return empty uint8
// for 1 length we return the first uint8
return toUint8(buffers[0]);
}
var totalLen = buffers.reduce(function (total, buf, i) {
return total + (buf.byteLength || buf.length);
}, 0);
var tempBuffer = new Uint8Array(totalLen);
var offset = 0;
buffers.forEach(function (buf) {
buf = toUint8(buf);
tempBuffer.set(buf, offset);
offset += buf.byteLength;
});
return tempBuffer;
};
/**
* Check if the bytes "b" are contained within bytes "a".
*
* @param {Uint8Array|Array} a
* Bytes to check in
*
* @param {Uint8Array|Array} b
* Bytes to check for
*
* @param {Object} options
* options
*
* @param {number} [options.offset=0]
* offset to use when looking at bytes in a
*
* @param {Array|Uint8Array} [options.mask=[]]
* mask to use on bytes before comparison.
*
* @return {boolean}
* If all bytes in b are inside of a, taking into account
* bit masks.
*/
exports.concatTypedArrays = concatTypedArrays;
var bytesMatch = function bytesMatch(a, b, _temp3) {
var _ref3 = _temp3 === void 0 ? {} : _temp3,
_ref3$offset = _ref3.offset,
offset = _ref3$offset === void 0 ? 0 : _ref3$offset,
_ref3$mask = _ref3.mask,
mask = _ref3$mask === void 0 ? [] : _ref3$mask;
a = toUint8(a);
b = toUint8(b); // ie 11 does not support uint8 every
var fn = b.every ? b.every : Array.prototype.every;
return b.length && a.length - offset >= b.length && // ie 11 doesn't support every on uint8
fn.call(b, function (bByte, i) {
var aByte = mask[i] ? mask[i] & a[offset + i] : a[offset + i];
return bByte === aByte;
});
};
exports.bytesMatch = bytesMatch;
var sliceBytes = function sliceBytes(src, start, end) {
if (Uint8Array.prototype.slice) {
return Uint8Array.prototype.slice.call(src, start, end);
}
return new Uint8Array(Array.prototype.slice.call(src, start, end));
};
exports.sliceBytes = sliceBytes;
var reverseBytes = function reverseBytes(src) {
if (src.reverse) {
return src.reverse();
}
return Array.prototype.reverse.call(src);
};
exports.reverseBytes = reverseBytes;
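
The byte helpers above are small, pure functions, so a short usage sketch helps show how they compose. This is illustrative only and not part of the commit; the `cjs` require path is assumed from the layout shown in the README above.

```js
// Minimal sketch of the byte-helpers API shown above (require path assumed).
const {
  toUint8,
  toHexString,
  bytesToNumber,
  numberToBytes,
  bytesToString,
  bytesMatch,
  concatTypedArrays
} = require('@videojs/vhs-utils/cjs/byte-helpers');

// everything is normalized to a Uint8Array first
toHexString(toUint8([0x66, 0x74, 0x79, 0x70])); // '66747970' ("ftyp")
bytesToString([0x77, 0x65, 0x62, 0x6d]);        // 'webm'

// numbers default to big-endian; pass {le: true} for little-endian
bytesToNumber([0x01, 0x00]);                    // 256
numberToBytes(256);                             // Uint8Array [ 1, 0 ]

// compare a byte signature at an offset, e.g. "D3" inside "ID3\x04"
bytesMatch([0x49, 0x44, 0x33, 0x04], [0x44, 0x33], { offset: 1 }); // true

// concatenate typed arrays into a single Uint8Array
concatTypedArrays(new Uint8Array([1, 2]), new Uint8Array([3]));    // Uint8Array [ 1, 2, 3 ]
```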

112
node_modules/@videojs/vhs-utils/cjs/codec-helpers.js generated vendored Normal file

@ -0,0 +1,112 @@
"use strict";
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.getHvcCodec = exports.getAvcCodec = exports.getAv1Codec = void 0;
var _byteHelpers = require("./byte-helpers.js");
// https://aomediacodec.github.io/av1-isobmff/#av1codecconfigurationbox-syntax
// https://developer.mozilla.org/en-US/docs/Web/Media/Formats/codecs_parameter#AV1
var getAv1Codec = function getAv1Codec(bytes) {
var codec = '';
var profile = bytes[1] >>> 3;
var level = bytes[1] & 0x1F;
var tier = bytes[2] >>> 7;
var highBitDepth = (bytes[2] & 0x40) >> 6;
var twelveBit = (bytes[2] & 0x20) >> 5;
var monochrome = (bytes[2] & 0x10) >> 4;
var chromaSubsamplingX = (bytes[2] & 0x08) >> 3;
var chromaSubsamplingY = (bytes[2] & 0x04) >> 2;
var chromaSamplePosition = bytes[2] & 0x03;
codec += profile + "." + (0, _byteHelpers.padStart)(level, 2, '0');
if (tier === 0) {
codec += 'M';
} else if (tier === 1) {
codec += 'H';
}
var bitDepth;
if (profile === 2 && highBitDepth) {
bitDepth = twelveBit ? 12 : 10;
} else {
bitDepth = highBitDepth ? 10 : 8;
}
codec += "." + (0, _byteHelpers.padStart)(bitDepth, 2, '0'); // TODO: can we parse color range??
codec += "." + monochrome;
codec += "." + chromaSubsamplingX + chromaSubsamplingY + chromaSamplePosition;
return codec;
};
exports.getAv1Codec = getAv1Codec;
var getAvcCodec = function getAvcCodec(bytes) {
var profileId = (0, _byteHelpers.toHexString)(bytes[1]);
var constraintFlags = (0, _byteHelpers.toHexString)(bytes[2] & 0xFC);
var levelId = (0, _byteHelpers.toHexString)(bytes[3]);
return "" + profileId + constraintFlags + levelId;
};
exports.getAvcCodec = getAvcCodec;
var getHvcCodec = function getHvcCodec(bytes) {
var codec = '';
var profileSpace = bytes[1] >> 6;
var profileId = bytes[1] & 0x1F;
var tierFlag = (bytes[1] & 0x20) >> 5;
var profileCompat = bytes.subarray(2, 6);
var constraintIds = bytes.subarray(6, 12);
var levelId = bytes[12];
if (profileSpace === 1) {
codec += 'A';
} else if (profileSpace === 2) {
codec += 'B';
} else if (profileSpace === 3) {
codec += 'C';
}
codec += profileId + "."; // ffmpeg does this in big endian
var profileCompatVal = parseInt((0, _byteHelpers.toBinaryString)(profileCompat).split('').reverse().join(''), 2); // apple does this in little endian...
if (profileCompatVal > 255) {
profileCompatVal = parseInt((0, _byteHelpers.toBinaryString)(profileCompat), 2);
}
codec += profileCompatVal.toString(16) + ".";
if (tierFlag === 0) {
codec += 'L';
} else {
codec += 'H';
}
codec += levelId;
var constraints = '';
for (var i = 0; i < constraintIds.length; i++) {
var v = constraintIds[i];
if (v) {
if (constraints) {
constraints += '.';
}
constraints += v.toString(16);
}
}
if (constraints) {
codec += "." + constraints;
}
return codec;
};
exports.getHvcCodec = getHvcCodec;
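
As a quick illustration of the helpers above: getAvcCodec reads the profile, constraint flags, and level out of the leading bytes of an AVC configuration record. This is a hedged sketch, not part of the commit; the require paths are assumed from the cjs layout and the sample bytes are hypothetical.

```js
// Illustrative only; require paths assumed from the cjs layout.
const { getAvcCodec } = require('@videojs/vhs-utils/cjs/codec-helpers');
const { toUint8 } = require('@videojs/vhs-utils/cjs/byte-helpers');

// First four bytes of a hypothetical AVCDecoderConfigurationRecord:
// configurationVersion, AVCProfileIndication, profile_compatibility, AVCLevelIndication
const avcC = toUint8([0x01, 0x4d, 0x40, 0x1f]);

getAvcCodec(avcC);                         // '4d401f' (Main profile, level 3.1)
const codec = 'avc1.' + getAvcCodec(avcC); // 'avc1.4d401f'
```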

297
node_modules/@videojs/vhs-utils/cjs/codecs.js generated vendored Normal file

@ -0,0 +1,297 @@
"use strict";
var _interopRequireDefault = require("@babel/runtime/helpers/interopRequireDefault");
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.DEFAULT_VIDEO_CODEC = exports.DEFAULT_AUDIO_CODEC = exports.muxerSupportsCodec = exports.browserSupportsCodec = exports.getMimeForCodec = exports.isTextCodec = exports.isAudioCodec = exports.isVideoCodec = exports.codecsFromDefault = exports.parseCodecs = exports.mapLegacyAvcCodecs = exports.translateLegacyCodecs = exports.translateLegacyCodec = void 0;
var _window = _interopRequireDefault(require("global/window"));
var regexs = {
// to determine mime types
mp4: /^(av0?1|avc0?[1234]|vp0?9|flac|opus|mp3|mp4a|mp4v|stpp.ttml.im1t)/,
webm: /^(vp0?[89]|av0?1|opus|vorbis)/,
ogg: /^(vp0?[89]|theora|flac|opus|vorbis)/,
// to determine if a codec is audio or video
video: /^(av0?1|avc0?[1234]|vp0?[89]|hvc1|hev1|theora|mp4v)/,
audio: /^(mp4a|flac|vorbis|opus|ac-[34]|ec-3|alac|mp3|speex|aac)/,
text: /^(stpp.ttml.im1t)/,
// mux.js support regex
muxerVideo: /^(avc0?1)/,
muxerAudio: /^(mp4a)/,
// match nothing as muxer does not support text right now.
// there can never be a character before the start of a string
// so this matches nothing.
muxerText: /a^/
};
var mediaTypes = ['video', 'audio', 'text'];
var upperMediaTypes = ['Video', 'Audio', 'Text'];
/**
* Replace the old apple-style `avc1.<dd>.<dd>` codec string with the standard
* `avc1.<hhhhhh>`
*
* @param {string} codec
* Codec string to translate
* @return {string}
* The translated codec string
*/
var translateLegacyCodec = function translateLegacyCodec(codec) {
if (!codec) {
return codec;
}
return codec.replace(/avc1\.(\d+)\.(\d+)/i, function (orig, profile, avcLevel) {
var profileHex = ('00' + Number(profile).toString(16)).slice(-2);
var avcLevelHex = ('00' + Number(avcLevel).toString(16)).slice(-2);
return 'avc1.' + profileHex + '00' + avcLevelHex;
});
};
/**
* Replace the old apple-style `avc1.<dd>.<dd>` codec strings with the standard
* `avc1.<hhhhhh>`
*
* @param {string[]} codecs
* An array of codec strings to translate
* @return {string[]}
* The translated array of codec strings
*/
exports.translateLegacyCodec = translateLegacyCodec;
var translateLegacyCodecs = function translateLegacyCodecs(codecs) {
return codecs.map(translateLegacyCodec);
};
/**
* Replace codecs in the codec string with the old apple-style `avc1.<dd>.<dd>` to the
* standard `avc1.<hhhhhh>`.
*
* @param {string} codecString
* The codec string
* @return {string}
* The codec string with old apple-style codecs replaced
*
* @private
*/
exports.translateLegacyCodecs = translateLegacyCodecs;
var mapLegacyAvcCodecs = function mapLegacyAvcCodecs(codecString) {
return codecString.replace(/avc1\.(\d+)\.(\d+)/i, function (match) {
return translateLegacyCodecs([match])[0];
});
};
/**
* @typedef {Object} ParsedCodecInfo
* @property {number} codecCount
* Number of codecs parsed
* @property {string} [videoCodec]
* Parsed video codec (if found)
* @property {string} [videoObjectTypeIndicator]
* Video object type indicator (if found)
* @property {string|null} audioProfile
* Audio profile
*/
/**
* Parses a codec string to retrieve the number of codecs specified, the video codec and
* object type indicator, and the audio profile.
*
* @param {string} [codecString]
* The codec string to parse
* @return {ParsedCodecInfo}
* Parsed codec info
*/
exports.mapLegacyAvcCodecs = mapLegacyAvcCodecs;
var parseCodecs = function parseCodecs(codecString) {
if (codecString === void 0) {
codecString = '';
}
var codecs = codecString.split(',');
var result = [];
codecs.forEach(function (codec) {
codec = codec.trim();
var codecType;
mediaTypes.forEach(function (name) {
var match = regexs[name].exec(codec.toLowerCase());
if (!match || match.length <= 1) {
return;
}
codecType = name; // maintain codec case
var type = codec.substring(0, match[1].length);
var details = codec.replace(type, '');
result.push({
type: type,
details: details,
mediaType: name
});
});
if (!codecType) {
result.push({
type: codec,
details: '',
mediaType: 'unknown'
});
}
});
return result;
};
/**
* Returns a ParsedCodecInfo object for the default alternate audio playlist if there is
* a default alternate audio playlist for the provided audio group.
*
* @param {Object} master
* The master playlist
* @param {string} audioGroupId
* ID of the audio group for which to find the default codec info
* @return {ParsedCodecInfo}
* Parsed codec info
*/
exports.parseCodecs = parseCodecs;
var codecsFromDefault = function codecsFromDefault(master, audioGroupId) {
if (!master.mediaGroups.AUDIO || !audioGroupId) {
return null;
}
var audioGroup = master.mediaGroups.AUDIO[audioGroupId];
if (!audioGroup) {
return null;
}
for (var name in audioGroup) {
var audioType = audioGroup[name];
if (audioType.default && audioType.playlists) {
// codec should be the same for all playlists within the audio type
return parseCodecs(audioType.playlists[0].attributes.CODECS);
}
}
return null;
};
exports.codecsFromDefault = codecsFromDefault;
var isVideoCodec = function isVideoCodec(codec) {
if (codec === void 0) {
codec = '';
}
return regexs.video.test(codec.trim().toLowerCase());
};
exports.isVideoCodec = isVideoCodec;
var isAudioCodec = function isAudioCodec(codec) {
if (codec === void 0) {
codec = '';
}
return regexs.audio.test(codec.trim().toLowerCase());
};
exports.isAudioCodec = isAudioCodec;
var isTextCodec = function isTextCodec(codec) {
if (codec === void 0) {
codec = '';
}
return regexs.text.test(codec.trim().toLowerCase());
};
exports.isTextCodec = isTextCodec;
var getMimeForCodec = function getMimeForCodec(codecString) {
if (!codecString || typeof codecString !== 'string') {
return;
}
var codecs = codecString.toLowerCase().split(',').map(function (c) {
return translateLegacyCodec(c.trim());
}); // default to video type
var type = 'video'; // only change to audio type if the only codec we have is
// audio
if (codecs.length === 1 && isAudioCodec(codecs[0])) {
type = 'audio';
} else if (codecs.length === 1 && isTextCodec(codecs[0])) {
// text uses application/<container> for now
type = 'application';
} // default the container to mp4
var container = 'mp4'; // every codec must be able to go into the container
// for that container to be the correct one
if (codecs.every(function (c) {
return regexs.mp4.test(c);
})) {
container = 'mp4';
} else if (codecs.every(function (c) {
return regexs.webm.test(c);
})) {
container = 'webm';
} else if (codecs.every(function (c) {
return regexs.ogg.test(c);
})) {
container = 'ogg';
}
return type + "/" + container + ";codecs=\"" + codecString + "\"";
};
exports.getMimeForCodec = getMimeForCodec;
var browserSupportsCodec = function browserSupportsCodec(codecString) {
if (codecString === void 0) {
codecString = '';
}
return _window.default.MediaSource && _window.default.MediaSource.isTypeSupported && _window.default.MediaSource.isTypeSupported(getMimeForCodec(codecString)) || false;
};
exports.browserSupportsCodec = browserSupportsCodec;
var muxerSupportsCodec = function muxerSupportsCodec(codecString) {
if (codecString === void 0) {
codecString = '';
}
return codecString.toLowerCase().split(',').every(function (codec) {
codec = codec.trim(); // any match is supported.
for (var i = 0; i < upperMediaTypes.length; i++) {
var type = upperMediaTypes[i];
if (regexs["muxer" + type].test(codec)) {
return true;
}
}
return false;
});
};
exports.muxerSupportsCodec = muxerSupportsCodec;
var DEFAULT_AUDIO_CODEC = 'mp4a.40.2';
exports.DEFAULT_AUDIO_CODEC = DEFAULT_AUDIO_CODEC;
var DEFAULT_VIDEO_CODEC = 'avc1.4d400d';
exports.DEFAULT_VIDEO_CODEC = DEFAULT_VIDEO_CODEC;
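
To make the exports above concrete, here is a small usage sketch (illustrative only, not part of the commit; the require path is assumed from the cjs layout shown in the README).

```js
// Illustrative usage of codecs.js; require path assumed from the cjs layout.
const {
  parseCodecs,
  getMimeForCodec,
  muxerSupportsCodec,
  browserSupportsCodec
} = require('@videojs/vhs-utils/cjs/codecs');

// parseCodecs returns one entry per codec, tagged with its media type
parseCodecs('avc1.4d400d,mp4a.40.2');
// [
//   { type: 'avc1', details: '.4d400d', mediaType: 'video' },
//   { type: 'mp4a', details: '.40.2', mediaType: 'audio' }
// ]

// getMimeForCodec picks a type (video/audio/application) and a container
getMimeForCodec('avc1.4d400d,mp4a.40.2'); // 'video/mp4;codecs="avc1.4d400d,mp4a.40.2"'
getMimeForCodec('opus');                  // 'audio/mp4;codecs="opus"'

// support checks
muxerSupportsCodec('avc1.4d400d,mp4a.40.2'); // true (mux.js handles avc1 + mp4a)
browserSupportsCodec('avc1.4d400d');         // boolean, depends on MediaSource support
```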

181
node_modules/@videojs/vhs-utils/cjs/containers.js generated vendored Normal file

@ -0,0 +1,181 @@
"use strict";
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.isLikelyFmp4MediaSegment = exports.detectContainerForBytes = exports.isLikely = void 0;
var _byteHelpers = require("./byte-helpers.js");
var _mp4Helpers = require("./mp4-helpers.js");
var _ebmlHelpers = require("./ebml-helpers.js");
var _id3Helpers = require("./id3-helpers.js");
var _nalHelpers = require("./nal-helpers.js");
var CONSTANTS = {
// "webm" string literal in hex
'webm': (0, _byteHelpers.toUint8)([0x77, 0x65, 0x62, 0x6d]),
// "matroska" string literal in hex
'matroska': (0, _byteHelpers.toUint8)([0x6d, 0x61, 0x74, 0x72, 0x6f, 0x73, 0x6b, 0x61]),
// "fLaC" string literal in hex
'flac': (0, _byteHelpers.toUint8)([0x66, 0x4c, 0x61, 0x43]),
// "OggS" string literal in hex
'ogg': (0, _byteHelpers.toUint8)([0x4f, 0x67, 0x67, 0x53]),
// ac-3 sync byte, also works for ec-3 as that is simply a codec
// of ac-3
'ac3': (0, _byteHelpers.toUint8)([0x0b, 0x77]),
// "RIFF" string literal in hex used for wav and avi
'riff': (0, _byteHelpers.toUint8)([0x52, 0x49, 0x46, 0x46]),
// "AVI" string literal in hex
'avi': (0, _byteHelpers.toUint8)([0x41, 0x56, 0x49]),
// "WAVE" string literal in hex
'wav': (0, _byteHelpers.toUint8)([0x57, 0x41, 0x56, 0x45]),
// "ftyp3g" string literal in hex
'3gp': (0, _byteHelpers.toUint8)([0x66, 0x74, 0x79, 0x70, 0x33, 0x67]),
// "ftyp" string literal in hex
'mp4': (0, _byteHelpers.toUint8)([0x66, 0x74, 0x79, 0x70]),
// "styp" string literal in hex
'fmp4': (0, _byteHelpers.toUint8)([0x73, 0x74, 0x79, 0x70]),
// "ftyp" string literal in hex
'mov': (0, _byteHelpers.toUint8)([0x66, 0x74, 0x79, 0x70, 0x71, 0x74])
};
var _isLikely = {
aac: function aac(bytes) {
var offset = (0, _id3Helpers.getId3Offset)(bytes);
return (0, _byteHelpers.bytesMatch)(bytes, [0xFF, 0x10], {
offset: offset,
mask: [0xFF, 0x16]
});
},
mp3: function mp3(bytes) {
var offset = (0, _id3Helpers.getId3Offset)(bytes);
return (0, _byteHelpers.bytesMatch)(bytes, [0xFF, 0x02], {
offset: offset,
mask: [0xFF, 0x06]
});
},
webm: function webm(bytes) {
var docType = (0, _ebmlHelpers.findEbml)(bytes, [_ebmlHelpers.EBML_TAGS.EBML, _ebmlHelpers.EBML_TAGS.DocType])[0]; // check if DocType EBML tag is webm
return (0, _byteHelpers.bytesMatch)(docType, CONSTANTS.webm);
},
mkv: function mkv(bytes) {
var docType = (0, _ebmlHelpers.findEbml)(bytes, [_ebmlHelpers.EBML_TAGS.EBML, _ebmlHelpers.EBML_TAGS.DocType])[0]; // check if DocType EBML tag is matroska
return (0, _byteHelpers.bytesMatch)(docType, CONSTANTS.matroska);
},
mp4: function mp4(bytes) {
return !_isLikely['3gp'](bytes) && !_isLikely.mov(bytes) && ((0, _byteHelpers.bytesMatch)(bytes, CONSTANTS.mp4, {
offset: 4
}) || (0, _byteHelpers.bytesMatch)(bytes, CONSTANTS.fmp4, {
offset: 4
}));
},
mov: function mov(bytes) {
return (0, _byteHelpers.bytesMatch)(bytes, CONSTANTS.mov, {
offset: 4
});
},
'3gp': function gp(bytes) {
return (0, _byteHelpers.bytesMatch)(bytes, CONSTANTS['3gp'], {
offset: 4
});
},
ac3: function ac3(bytes) {
var offset = (0, _id3Helpers.getId3Offset)(bytes);
return (0, _byteHelpers.bytesMatch)(bytes, CONSTANTS.ac3, {
offset: offset
});
},
ts: function ts(bytes) {
if (bytes.length < 189 && bytes.length >= 1) {
return bytes[0] === 0x47;
}
var i = 0; // check the first 376 bytes for two matching sync bytes
while (i + 188 < bytes.length && i < 188) {
if (bytes[i] === 0x47 && bytes[i + 188] === 0x47) {
return true;
}
i += 1;
}
return false;
},
flac: function flac(bytes) {
var offset = (0, _id3Helpers.getId3Offset)(bytes);
return (0, _byteHelpers.bytesMatch)(bytes, CONSTANTS.flac, {
offset: offset
});
},
ogg: function ogg(bytes) {
return (0, _byteHelpers.bytesMatch)(bytes, CONSTANTS.ogg);
},
avi: function avi(bytes) {
return (0, _byteHelpers.bytesMatch)(bytes, CONSTANTS.riff) && (0, _byteHelpers.bytesMatch)(bytes, CONSTANTS.avi, {
offset: 8
});
},
wav: function wav(bytes) {
return (0, _byteHelpers.bytesMatch)(bytes, CONSTANTS.riff) && (0, _byteHelpers.bytesMatch)(bytes, CONSTANTS.wav, {
offset: 8
});
},
'h264': function h264(bytes) {
// find seq_parameter_set_rbsp
return (0, _nalHelpers.findH264Nal)(bytes, 7, 3).length;
},
'h265': function h265(bytes) {
// find video_parameter_set_rbsp or seq_parameter_set_rbsp
return (0, _nalHelpers.findH265Nal)(bytes, [32, 33], 3).length;
}
}; // get all the isLikely functions
// but make sure 'ts' is above h264 and h265
// but below everything else as it is the least specific
var isLikelyTypes = Object.keys(_isLikely) // remove ts, h264, h265
.filter(function (t) {
return t !== 'ts' && t !== 'h264' && t !== 'h265';
}) // add it back to the bottom
.concat(['ts', 'h264', 'h265']); // make sure we are dealing with uint8 data.
isLikelyTypes.forEach(function (type) {
var isLikelyFn = _isLikely[type];
_isLikely[type] = function (bytes) {
return isLikelyFn((0, _byteHelpers.toUint8)(bytes));
};
}); // export after wrapping
var isLikely = _isLikely; // A useful list of file signatures can be found here
// https://en.wikipedia.org/wiki/List_of_file_signatures
exports.isLikely = isLikely;
var detectContainerForBytes = function detectContainerForBytes(bytes) {
bytes = (0, _byteHelpers.toUint8)(bytes);
for (var i = 0; i < isLikelyTypes.length; i++) {
var type = isLikelyTypes[i];
if (isLikely[type](bytes)) {
return type;
}
}
return '';
}; // fmp4 is not a container
exports.detectContainerForBytes = detectContainerForBytes;
var isLikelyFmp4MediaSegment = function isLikelyFmp4MediaSegment(bytes) {
return (0, _mp4Helpers.findBox)(bytes, ['moof']).length > 0;
};
exports.isLikelyFmp4MediaSegment = isLikelyFmp4MediaSegment;
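
A short sketch of how detectContainerForBytes behaves with the signatures defined above. Illustrative only, not part of the commit; require paths are assumed from the cjs layout and the byte arrays are hand-built samples.

```js
// Illustrative usage of containers.js; require paths assumed from the cjs layout.
const { detectContainerForBytes } = require('@videojs/vhs-utils/cjs/containers');
const { toUint8, stringToBytes } = require('@videojs/vhs-utils/cjs/byte-helpers');

// an mp4 starts with a box size followed by 'ftyp' at offset 4
const mp4Start = toUint8([
  0x00, 0x00, 0x00, 0x18, // box size
  0x66, 0x74, 0x79, 0x70, // 'ftyp'
  0x69, 0x73, 0x6f, 0x6d  // major brand 'isom'
]);
detectContainerForBytes(mp4Start);              // 'mp4'

// an ogg stream starts with the 'OggS' capture pattern
detectContainerForBytes(stringToBytes('OggS')); // 'ogg'

// data that matches no signature returns an empty string
detectContainerForBytes(toUint8([0x00, 0x01])); // ''
```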


@ -0,0 +1,27 @@
"use strict";
var _interopRequireDefault = require("@babel/runtime/helpers/interopRequireDefault");
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.default = decodeB64ToUint8Array;
var _window = _interopRequireDefault(require("global/window"));
var atob = function atob(s) {
return _window.default.atob ? _window.default.atob(s) : Buffer.from(s, 'base64').toString('binary');
};
function decodeB64ToUint8Array(b64Text) {
var decodedString = atob(b64Text);
var array = new Uint8Array(decodedString.length);
for (var i = 0; i < decodedString.length; i++) {
array[i] = decodedString.charCodeAt(i);
}
return array;
}
module.exports = exports.default;
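
A one-line usage sketch for the base64 decoder above. The file header for this module is missing from the listing, so the require path below is an assumption based on the surrounding cjs layout, not something shown in the commit.

```js
// Illustrative only; module path assumed (the file header above is missing).
const decodeB64ToUint8Array = require('@videojs/vhs-utils/cjs/decode-b64-to-uint8-array');

// 'AQID' is base64 for the bytes 0x01 0x02 0x03
decodeB64ToUint8Array('AQID'); // Uint8Array [ 1, 2, 3 ]
```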

518
node_modules/@videojs/vhs-utils/cjs/ebml-helpers.js generated vendored Normal file

@ -0,0 +1,518 @@
"use strict";
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.parseData = exports.parseTracks = exports.decodeBlock = exports.findEbml = exports.EBML_TAGS = void 0;
var _byteHelpers = require("./byte-helpers");
var _codecHelpers = require("./codec-helpers.js");
// relevant specs for this parser:
// https://matroska-org.github.io/libebml/specs.html
// https://www.matroska.org/technical/elements.html
// https://www.webmproject.org/docs/container/
var EBML_TAGS = {
EBML: (0, _byteHelpers.toUint8)([0x1A, 0x45, 0xDF, 0xA3]),
DocType: (0, _byteHelpers.toUint8)([0x42, 0x82]),
Segment: (0, _byteHelpers.toUint8)([0x18, 0x53, 0x80, 0x67]),
SegmentInfo: (0, _byteHelpers.toUint8)([0x15, 0x49, 0xA9, 0x66]),
Tracks: (0, _byteHelpers.toUint8)([0x16, 0x54, 0xAE, 0x6B]),
Track: (0, _byteHelpers.toUint8)([0xAE]),
TrackNumber: (0, _byteHelpers.toUint8)([0xd7]),
DefaultDuration: (0, _byteHelpers.toUint8)([0x23, 0xe3, 0x83]),
TrackEntry: (0, _byteHelpers.toUint8)([0xAE]),
TrackType: (0, _byteHelpers.toUint8)([0x83]),
FlagDefault: (0, _byteHelpers.toUint8)([0x88]),
CodecID: (0, _byteHelpers.toUint8)([0x86]),
CodecPrivate: (0, _byteHelpers.toUint8)([0x63, 0xA2]),
VideoTrack: (0, _byteHelpers.toUint8)([0xe0]),
AudioTrack: (0, _byteHelpers.toUint8)([0xe1]),
// Not used yet, but will be used for live webm/mkv
// see https://www.matroska.org/technical/basics.html#block-structure
// see https://www.matroska.org/technical/basics.html#simpleblock-structure
Cluster: (0, _byteHelpers.toUint8)([0x1F, 0x43, 0xB6, 0x75]),
Timestamp: (0, _byteHelpers.toUint8)([0xE7]),
TimestampScale: (0, _byteHelpers.toUint8)([0x2A, 0xD7, 0xB1]),
BlockGroup: (0, _byteHelpers.toUint8)([0xA0]),
BlockDuration: (0, _byteHelpers.toUint8)([0x9B]),
Block: (0, _byteHelpers.toUint8)([0xA1]),
SimpleBlock: (0, _byteHelpers.toUint8)([0xA3])
};
/**
* This is a simple table to determine the length
* of things in ebml. The length is one based (starts at 1,
* rather than zero) and for every zero bit before a one bit
* we add one to length. We also need this table because in some
* case we have to xor all the length bits from another value.
*/
exports.EBML_TAGS = EBML_TAGS;
var LENGTH_TABLE = [128, 64, 32, 16, 8, 4, 2, 1];
var getLength = function getLength(byte) {
var len = 1;
for (var i = 0; i < LENGTH_TABLE.length; i++) {
if (byte & LENGTH_TABLE[i]) {
break;
}
len++;
}
return len;
}; // length in ebml is stored in the first 4 to 8 bits
// of the first byte. 4 for the id length and 8 for the
// data size length. Length is measured by converting the number to binary
// then 1 + the number of zeros before a 1 is encountered starting
// from the left.
var getvint = function getvint(bytes, offset, removeLength, signed) {
if (removeLength === void 0) {
removeLength = true;
}
if (signed === void 0) {
signed = false;
}
var length = getLength(bytes[offset]);
var valueBytes = bytes.subarray(offset, offset + length); // NOTE that we do **not** subarray here because we need to copy these bytes
// as they will be modified below to remove the dataSizeLen bits and we do not
// want to modify the original data. normally we could just call slice on
// uint8array but ie 11 does not support that...
if (removeLength) {
valueBytes = Array.prototype.slice.call(bytes, offset, offset + length);
valueBytes[0] ^= LENGTH_TABLE[length - 1];
}
return {
length: length,
value: (0, _byteHelpers.bytesToNumber)(valueBytes, {
signed: signed
}),
bytes: valueBytes
};
};
var normalizePath = function normalizePath(path) {
if (typeof path === 'string') {
return path.match(/.{1,2}/g).map(function (p) {
return normalizePath(p);
});
}
if (typeof path === 'number') {
return (0, _byteHelpers.numberToBytes)(path);
}
return path;
};
var normalizePaths = function normalizePaths(paths) {
if (!Array.isArray(paths)) {
return [normalizePath(paths)];
}
return paths.map(function (p) {
return normalizePath(p);
});
};
var getInfinityDataSize = function getInfinityDataSize(id, bytes, offset) {
if (offset >= bytes.length) {
return bytes.length;
}
var innerid = getvint(bytes, offset, false);
if ((0, _byteHelpers.bytesMatch)(id.bytes, innerid.bytes)) {
return offset;
}
var dataHeader = getvint(bytes, offset + innerid.length);
return getInfinityDataSize(id, bytes, offset + dataHeader.length + dataHeader.value + innerid.length);
};
/**
* Notes on the EBML format.
*
* EBML uses "vint" tags. Every vint tag contains
* two parts
*
* 1. The length from the first byte. You get this by
* converting the byte to binary and counting the zeros
* before a 1. Then you add 1 to that. Examples
* 00011111 = length 4 because there are 3 zeros before a 1.
* 00100000 = length 3 because there are 2 zeros before a 1.
* 00000011 = length 7 because there are 6 zeros before a 1.
*
* 2. The bits used for length are removed from the first byte
* Then all the bytes are merged into a value. NOTE: this
* is not the case for id ebml tags, as their id includes
* length bits.
*
*/
var findEbml = function findEbml(bytes, paths) {
paths = normalizePaths(paths);
bytes = (0, _byteHelpers.toUint8)(bytes);
var results = [];
if (!paths.length) {
return results;
}
var i = 0;
while (i < bytes.length) {
var id = getvint(bytes, i, false);
var dataHeader = getvint(bytes, i + id.length);
var dataStart = i + id.length + dataHeader.length; // dataSize is unknown or this is a live stream
if (dataHeader.value === 0x7f) {
dataHeader.value = getInfinityDataSize(id, bytes, dataStart);
if (dataHeader.value !== bytes.length) {
dataHeader.value -= dataStart;
}
}
var dataEnd = dataStart + dataHeader.value > bytes.length ? bytes.length : dataStart + dataHeader.value;
var data = bytes.subarray(dataStart, dataEnd);
if ((0, _byteHelpers.bytesMatch)(paths[0], id.bytes)) {
if (paths.length === 1) {
// this is the end of the paths and we've found the tag we were
// looking for
results.push(data);
} else {
// recursively search for the next tag inside of the data
// of this one
results = results.concat(findEbml(data, paths.slice(1)));
}
}
var totalLength = id.length + dataHeader.length + data.length; // move past this tag entirely, we are not looking for it
i += totalLength;
}
return results;
}; // see https://www.matroska.org/technical/basics.html#block-structure
exports.findEbml = findEbml;
var decodeBlock = function decodeBlock(block, type, timestampScale, clusterTimestamp) {
var duration;
if (type === 'group') {
duration = findEbml(block, [EBML_TAGS.BlockDuration])[0];
if (duration) {
duration = (0, _byteHelpers.bytesToNumber)(duration);
duration = 1 / timestampScale * duration * timestampScale / 1000;
}
block = findEbml(block, [EBML_TAGS.Block])[0];
type = 'block'; // treat data as a block after this point
}
var dv = new DataView(block.buffer, block.byteOffset, block.byteLength);
var trackNumber = getvint(block, 0);
var timestamp = dv.getInt16(trackNumber.length, false);
var flags = block[trackNumber.length + 2];
var data = block.subarray(trackNumber.length + 3); // pts/dts in seconds
var ptsdts = 1 / timestampScale * (clusterTimestamp + timestamp) * timestampScale / 1000; // return the frame
var parsed = {
duration: duration,
trackNumber: trackNumber.value,
keyframe: type === 'simple' && flags >> 7 === 1,
invisible: (flags & 0x08) >> 3 === 1,
lacing: (flags & 0x06) >> 1,
discardable: type === 'simple' && (flags & 0x01) === 1,
frames: [],
pts: ptsdts,
dts: ptsdts,
timestamp: timestamp
};
if (!parsed.lacing) {
parsed.frames.push(data);
return parsed;
}
var numberOfFrames = data[0] + 1;
var frameSizes = [];
var offset = 1; // Fixed
if (parsed.lacing === 2) {
var sizeOfFrame = (data.length - offset) / numberOfFrames;
for (var i = 0; i < numberOfFrames; i++) {
frameSizes.push(sizeOfFrame);
}
} // xiph
if (parsed.lacing === 1) {
for (var _i = 0; _i < numberOfFrames - 1; _i++) {
var size = 0;
do {
size += data[offset];
offset++;
} while (data[offset - 1] === 0xFF);
frameSizes.push(size);
}
} // ebml
if (parsed.lacing === 3) {
// first vint is unsigned
// after that vints are signed and
// based on a compounding size
var _size = 0;
for (var _i2 = 0; _i2 < numberOfFrames - 1; _i2++) {
var vint = _i2 === 0 ? getvint(data, offset) : getvint(data, offset, true, true);
_size += vint.value;
frameSizes.push(_size);
offset += vint.length;
}
}
frameSizes.forEach(function (size) {
parsed.frames.push(data.subarray(offset, offset + size));
offset += size;
});
return parsed;
}; // VP9 Codec Feature Metadata (CodecPrivate)
// https://www.webmproject.org/docs/container/
exports.decodeBlock = decodeBlock;
var parseVp9Private = function parseVp9Private(bytes) {
var i = 0;
var params = {};
while (i < bytes.length) {
var id = bytes[i] & 0x7f;
var len = bytes[i + 1];
var val = void 0;
if (len === 1) {
val = bytes[i + 2];
} else {
val = bytes.subarray(i + 2, i + 2 + len);
}
if (id === 1) {
params.profile = val;
} else if (id === 2) {
params.level = val;
} else if (id === 3) {
params.bitDepth = val;
} else if (id === 4) {
params.chromaSubsampling = val;
} else {
params[id] = val;
}
i += 2 + len;
}
return params;
};
var parseTracks = function parseTracks(bytes) {
bytes = (0, _byteHelpers.toUint8)(bytes);
var decodedTracks = [];
var tracks = findEbml(bytes, [EBML_TAGS.Segment, EBML_TAGS.Tracks, EBML_TAGS.Track]);
if (!tracks.length) {
tracks = findEbml(bytes, [EBML_TAGS.Tracks, EBML_TAGS.Track]);
}
if (!tracks.length) {
tracks = findEbml(bytes, [EBML_TAGS.Track]);
}
if (!tracks.length) {
return decodedTracks;
}
tracks.forEach(function (track) {
var trackType = findEbml(track, EBML_TAGS.TrackType)[0];
if (!trackType || !trackType.length) {
return;
} // 1 is video, 2 is audio, 17 is subtitle
// other values are unimportant in this context
if (trackType[0] === 1) {
trackType = 'video';
} else if (trackType[0] === 2) {
trackType = 'audio';
} else if (trackType[0] === 17) {
trackType = 'subtitle';
} else {
return;
} // todo parse language
var decodedTrack = {
rawCodec: (0, _byteHelpers.bytesToString)(findEbml(track, [EBML_TAGS.CodecID])[0]),
type: trackType,
codecPrivate: findEbml(track, [EBML_TAGS.CodecPrivate])[0],
number: (0, _byteHelpers.bytesToNumber)(findEbml(track, [EBML_TAGS.TrackNumber])[0]),
defaultDuration: (0, _byteHelpers.bytesToNumber)(findEbml(track, [EBML_TAGS.DefaultDuration])[0]),
default: findEbml(track, [EBML_TAGS.FlagDefault])[0],
rawData: track
};
var codec = '';
if (/V_MPEG4\/ISO\/AVC/.test(decodedTrack.rawCodec)) {
codec = "avc1." + (0, _codecHelpers.getAvcCodec)(decodedTrack.codecPrivate);
} else if (/V_MPEGH\/ISO\/HEVC/.test(decodedTrack.rawCodec)) {
codec = "hev1." + (0, _codecHelpers.getHvcCodec)(decodedTrack.codecPrivate);
} else if (/V_MPEG4\/ISO\/ASP/.test(decodedTrack.rawCodec)) {
if (decodedTrack.codecPrivate) {
codec = 'mp4v.20.' + decodedTrack.codecPrivate[4].toString();
} else {
codec = 'mp4v.20.9';
}
} else if (/^V_THEORA/.test(decodedTrack.rawCodec)) {
codec = 'theora';
} else if (/^V_VP8/.test(decodedTrack.rawCodec)) {
codec = 'vp8';
} else if (/^V_VP9/.test(decodedTrack.rawCodec)) {
if (decodedTrack.codecPrivate) {
var _parseVp9Private = parseVp9Private(decodedTrack.codecPrivate),
profile = _parseVp9Private.profile,
level = _parseVp9Private.level,
bitDepth = _parseVp9Private.bitDepth,
chromaSubsampling = _parseVp9Private.chromaSubsampling;
codec = 'vp09.';
codec += (0, _byteHelpers.padStart)(profile, 2, '0') + ".";
codec += (0, _byteHelpers.padStart)(level, 2, '0') + ".";
codec += (0, _byteHelpers.padStart)(bitDepth, 2, '0') + ".";
codec += "" + (0, _byteHelpers.padStart)(chromaSubsampling, 2, '0'); // Video -> Colour -> Ebml name
var matrixCoefficients = findEbml(track, [0xE0, [0x55, 0xB0], [0x55, 0xB1]])[0] || [];
var videoFullRangeFlag = findEbml(track, [0xE0, [0x55, 0xB0], [0x55, 0xB9]])[0] || [];
var transferCharacteristics = findEbml(track, [0xE0, [0x55, 0xB0], [0x55, 0xBA]])[0] || [];
var colourPrimaries = findEbml(track, [0xE0, [0x55, 0xB0], [0x55, 0xBB]])[0] || []; // if we find any optional codec parameter specify them all.
if (matrixCoefficients.length || videoFullRangeFlag.length || transferCharacteristics.length || colourPrimaries.length) {
codec += "." + (0, _byteHelpers.padStart)(colourPrimaries[0], 2, '0');
codec += "." + (0, _byteHelpers.padStart)(transferCharacteristics[0], 2, '0');
codec += "." + (0, _byteHelpers.padStart)(matrixCoefficients[0], 2, '0');
codec += "." + (0, _byteHelpers.padStart)(videoFullRangeFlag[0], 2, '0');
}
} else {
codec = 'vp9';
}
} else if (/^V_AV1/.test(decodedTrack.rawCodec)) {
codec = "av01." + (0, _codecHelpers.getAv1Codec)(decodedTrack.codecPrivate);
} else if (/A_ALAC/.test(decodedTrack.rawCodec)) {
codec = 'alac';
} else if (/A_MPEG\/L2/.test(decodedTrack.rawCodec)) {
codec = 'mp2';
} else if (/A_MPEG\/L3/.test(decodedTrack.rawCodec)) {
codec = 'mp3';
} else if (/^A_AAC/.test(decodedTrack.rawCodec)) {
if (decodedTrack.codecPrivate) {
codec = 'mp4a.40.' + (decodedTrack.codecPrivate[0] >>> 3).toString();
} else {
codec = 'mp4a.40.2';
}
} else if (/^A_AC3/.test(decodedTrack.rawCodec)) {
codec = 'ac-3';
} else if (/^A_PCM/.test(decodedTrack.rawCodec)) {
codec = 'pcm';
} else if (/^A_MS\/ACM/.test(decodedTrack.rawCodec)) {
codec = 'speex';
} else if (/^A_EAC3/.test(decodedTrack.rawCodec)) {
codec = 'ec-3';
} else if (/^A_VORBIS/.test(decodedTrack.rawCodec)) {
codec = 'vorbis';
} else if (/^A_FLAC/.test(decodedTrack.rawCodec)) {
codec = 'flac';
} else if (/^A_OPUS/.test(decodedTrack.rawCodec)) {
codec = 'opus';
}
decodedTrack.codec = codec;
decodedTracks.push(decodedTrack);
});
return decodedTracks.sort(function (a, b) {
return a.number - b.number;
});
};
exports.parseTracks = parseTracks;
var parseData = function parseData(data, tracks) {
var allBlocks = [];
var segment = findEbml(data, [EBML_TAGS.Segment])[0];
var timestampScale = findEbml(segment, [EBML_TAGS.SegmentInfo, EBML_TAGS.TimestampScale])[0]; // in nanoseconds, defaults to 1ms
if (timestampScale && timestampScale.length) {
timestampScale = (0, _byteHelpers.bytesToNumber)(timestampScale);
} else {
timestampScale = 1000000;
}
var clusters = findEbml(segment, [EBML_TAGS.Cluster]);
if (!tracks) {
tracks = parseTracks(segment);
}
clusters.forEach(function (cluster, ci) {
var simpleBlocks = findEbml(cluster, [EBML_TAGS.SimpleBlock]).map(function (b) {
return {
type: 'simple',
data: b
};
});
var blockGroups = findEbml(cluster, [EBML_TAGS.BlockGroup]).map(function (b) {
return {
type: 'group',
data: b
};
});
var timestamp = findEbml(cluster, [EBML_TAGS.Timestamp])[0] || 0;
if (timestamp && timestamp.length) {
timestamp = (0, _byteHelpers.bytesToNumber)(timestamp);
} // get all blocks then sort them into the correct order
var blocks = simpleBlocks.concat(blockGroups).sort(function (a, b) {
return a.data.byteOffset - b.data.byteOffset;
});
blocks.forEach(function (block, bi) {
var decoded = decodeBlock(block.data, block.type, timestampScale, timestamp);
allBlocks.push(decoded);
});
});
return {
tracks: tracks,
blocks: allBlocks
};
};
exports.parseData = parseData;
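
To make the vint/EBML walk above concrete, here is a tiny hand-built EBML header and a findEbml lookup of its DocType. Illustrative only, not part of the commit; require paths are assumed from the cjs layout.

```js
// Illustrative usage of findEbml; require paths assumed from the cjs layout.
const { findEbml, EBML_TAGS } = require('@videojs/vhs-utils/cjs/ebml-helpers');
const { toUint8, bytesToString } = require('@videojs/vhs-utils/cjs/byte-helpers');

// Hand-built EBML master element containing a single DocType child:
//   [1A 45 DF A3]  EBML id (4-byte vint id)
//   [87]           data size vint: length 1, value 7
//   [42 82]        DocType id
//   [84]           data size vint: length 1, value 4
//   'webm'         DocType payload
const header = toUint8([
  0x1A, 0x45, 0xDF, 0xA3, 0x87,
  0x42, 0x82, 0x84, 0x77, 0x65, 0x62, 0x6d
]);

const docType = findEbml(header, [EBML_TAGS.EBML, EBML_TAGS.DocType])[0];
bytesToString(docType); // 'webm'
```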

408
node_modules/@videojs/vhs-utils/cjs/format-parser.js generated vendored Normal file

@ -0,0 +1,408 @@
"use strict";
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.parseFormatForBytes = void 0;
var _byteHelpers = require("./byte-helpers.js");
var _ebmlHelpers = require("./ebml-helpers.js");
var _mp4Helpers = require("./mp4-helpers.js");
var _riffHelpers = require("./riff-helpers.js");
var _oggHelpers = require("./ogg-helpers.js");
var _containers = require("./containers.js");
var _nalHelpers = require("./nal-helpers.js");
var _m2tsHelpers = require("./m2ts-helpers.js");
var _codecHelpers = require("./codec-helpers.js");
var _id3Helpers = require("./id3-helpers.js");
// https://docs.microsoft.com/en-us/windows/win32/medfound/audio-subtype-guids
// https://tools.ietf.org/html/rfc2361
var wFormatTagCodec = function wFormatTagCodec(wFormatTag) {
wFormatTag = (0, _byteHelpers.toUint8)(wFormatTag);
if ((0, _byteHelpers.bytesMatch)(wFormatTag, [0x00, 0x55])) {
return 'mp3';
} else if ((0, _byteHelpers.bytesMatch)(wFormatTag, [0x16, 0x00]) || (0, _byteHelpers.bytesMatch)(wFormatTag, [0x00, 0xFF])) {
return 'aac';
} else if ((0, _byteHelpers.bytesMatch)(wFormatTag, [0x70, 0x4f])) {
return 'opus';
} else if ((0, _byteHelpers.bytesMatch)(wFormatTag, [0x6C, 0x61])) {
return 'alac';
} else if ((0, _byteHelpers.bytesMatch)(wFormatTag, [0xF1, 0xAC])) {
return 'flac';
} else if ((0, _byteHelpers.bytesMatch)(wFormatTag, [0x20, 0x00])) {
return 'ac-3';
} else if ((0, _byteHelpers.bytesMatch)(wFormatTag, [0xFF, 0xFE])) {
return 'ec-3';
} else if ((0, _byteHelpers.bytesMatch)(wFormatTag, [0x00, 0x50])) {
return 'mp2';
} else if ((0, _byteHelpers.bytesMatch)(wFormatTag, [0x56, 0x6f])) {
return 'vorbis';
} else if ((0, _byteHelpers.bytesMatch)(wFormatTag, [0xA1, 0x09])) {
return 'speex';
}
return '';
};
var formatMimetype = function formatMimetype(name, codecs) {
var codecString = ['video', 'audio'].reduce(function (acc, type) {
if (codecs[type]) {
acc += (acc.length ? ',' : '') + codecs[type];
}
return acc;
}, '');
return (codecs.video ? 'video' : 'audio') + "/" + name + (codecString ? ";codecs=\"" + codecString + "\"" : '');
};
var parseCodecFrom = {
mov: function mov(bytes) {
// mov and mp4 both use a nearly identical box structure.
var retval = parseCodecFrom.mp4(bytes);
if (retval.mimetype) {
retval.mimetype = retval.mimetype.replace('mp4', 'quicktime');
}
return retval;
},
mp4: function mp4(bytes) {
bytes = (0, _byteHelpers.toUint8)(bytes);
var codecs = {};
var tracks = (0, _mp4Helpers.parseTracks)(bytes);
for (var i = 0; i < tracks.length; i++) {
var track = tracks[i];
if (track.type === 'audio' && !codecs.audio) {
codecs.audio = track.codec;
}
if (track.type === 'video' && !codecs.video) {
codecs.video = track.codec;
}
}
return {
codecs: codecs,
mimetype: formatMimetype('mp4', codecs)
};
},
'3gp': function gp(bytes) {
return {
codecs: {},
mimetype: 'video/3gpp'
};
},
ogg: function ogg(bytes) {
var pages = (0, _oggHelpers.getPages)(bytes, 0, 4);
var codecs = {};
pages.forEach(function (page) {
if ((0, _byteHelpers.bytesMatch)(page, [0x4F, 0x70, 0x75, 0x73], {
offset: 28
})) {
codecs.audio = 'opus';
} else if ((0, _byteHelpers.bytesMatch)(page, [0x56, 0x50, 0x38, 0x30], {
offset: 29
})) {
codecs.video = 'vp8';
} else if ((0, _byteHelpers.bytesMatch)(page, [0x74, 0x68, 0x65, 0x6F, 0x72, 0x61], {
offset: 29
})) {
codecs.video = 'theora';
} else if ((0, _byteHelpers.bytesMatch)(page, [0x46, 0x4C, 0x41, 0x43], {
offset: 29
})) {
codecs.audio = 'flac';
} else if ((0, _byteHelpers.bytesMatch)(page, [0x53, 0x70, 0x65, 0x65, 0x78], {
offset: 28
})) {
codecs.audio = 'speex';
} else if ((0, _byteHelpers.bytesMatch)(page, [0x76, 0x6F, 0x72, 0x62, 0x69, 0x73], {
offset: 29
})) {
codecs.audio = 'vorbis';
}
});
return {
codecs: codecs,
mimetype: formatMimetype('ogg', codecs)
};
},
wav: function wav(bytes) {
var format = (0, _riffHelpers.findFourCC)(bytes, ['WAVE', 'fmt'])[0];
var wFormatTag = Array.prototype.slice.call(format, 0, 2).reverse();
var mimetype = 'audio/vnd.wave';
var codecs = {
audio: wFormatTagCodec(wFormatTag)
};
var codecString = wFormatTag.reduce(function (acc, v) {
if (v) {
acc += (0, _byteHelpers.toHexString)(v);
}
return acc;
}, '');
if (codecString) {
mimetype += ";codec=" + codecString;
}
if (codecString && !codecs.audio) {
codecs.audio = codecString;
}
return {
codecs: codecs,
mimetype: mimetype
};
},
avi: function avi(bytes) {
var movi = (0, _riffHelpers.findFourCC)(bytes, ['AVI', 'movi'])[0];
var strls = (0, _riffHelpers.findFourCC)(bytes, ['AVI', 'hdrl', 'strl']);
var codecs = {};
strls.forEach(function (strl) {
var strh = (0, _riffHelpers.findFourCC)(strl, ['strh'])[0];
var strf = (0, _riffHelpers.findFourCC)(strl, ['strf'])[0]; // now parse AVIStreamHeader to get codec and type:
// https://docs.microsoft.com/en-us/previous-versions/windows/desktop/api/avifmt/ns-avifmt-avistreamheader
var type = (0, _byteHelpers.bytesToString)(strh.subarray(0, 4));
var codec;
var codecType;
if (type === 'vids') {
// https://docs.microsoft.com/en-us/windows/win32/api/wingdi/ns-wingdi-bitmapinfoheader
var handler = (0, _byteHelpers.bytesToString)(strh.subarray(4, 8));
var compression = (0, _byteHelpers.bytesToString)(strf.subarray(16, 20)); // look for 00dc (compressed video fourcc code) or 00db (uncompressed video fourcc code)
var videoData = (0, _riffHelpers.findFourCC)(movi, ['00dc'])[0] || (0, _riffHelpers.findFourCC)(movi, ['00db'][0]);
if (handler === 'H264' || compression === 'H264') {
if (videoData && videoData.length) {
codec = parseCodecFrom.h264(videoData).codecs.video;
} else {
codec = 'avc1';
}
} else if (handler === 'HEVC' || compression === 'HEVC') {
if (videoData && videoData.length) {
codec = parseCodecFrom.h265(videoData).codecs.video;
} else {
codec = 'hev1';
}
} else if (handler === 'FMP4' || compression === 'FMP4') {
if (movi.length) {
codec = 'mp4v.20.' + movi[12].toString();
} else {
codec = 'mp4v.20';
}
} else if (handler === 'VP80' || compression === 'VP80') {
codec = 'vp8';
} else if (handler === 'VP90' || compression === 'VP90') {
codec = 'vp9';
} else if (handler === 'AV01' || compression === 'AV01') {
codec = 'av01';
} else if (handler === 'theo' || compression === 'theora') {
codec = 'theora';
} else {
if (videoData && videoData.length) {
var result = (0, _containers.detectContainerForBytes)(videoData);
if (result === 'h264') {
codec = parseCodecFrom.h264(movi).codecs.video;
}
if (result === 'h265') {
codec = parseCodecFrom.h265(movi).codecs.video;
}
}
if (!codec) {
codec = handler || compression;
}
}
codecType = 'video';
} else if (type === 'auds') {
codecType = 'audio'; // look for 00wb (audio data fourcc)
// const audioData = findFourCC(movi, ['01wb']);
var wFormatTag = Array.prototype.slice.call(strf, 0, 2).reverse();
codecs.audio = wFormatTagCodec(wFormatTag);
} else {
return;
}
if (codec) {
codecs[codecType] = codec;
}
});
return {
codecs: codecs,
mimetype: formatMimetype('avi', codecs)
};
},
ts: function ts(bytes) {
var result = (0, _m2tsHelpers.parseTs)(bytes, 2);
var codecs = {};
Object.keys(result.streams).forEach(function (esPid) {
var stream = result.streams[esPid];
if (stream.codec === 'avc1' && stream.packets.length) {
stream.codec = parseCodecFrom.h264(stream.packets[0]).codecs.video;
} else if (stream.codec === 'hev1' && stream.packets.length) {
stream.codec = parseCodecFrom.h265(stream.packets[0]).codecs.video;
}
codecs[stream.type] = stream.codec;
});
return {
codecs: codecs,
mimetype: formatMimetype('mp2t', codecs)
};
},
webm: function webm(bytes) {
    // mkv and webm both use ebml to store codec info
var retval = parseCodecFrom.mkv(bytes);
if (retval.mimetype) {
retval.mimetype = retval.mimetype.replace('x-matroska', 'webm');
}
return retval;
},
mkv: function mkv(bytes) {
var codecs = {};
var tracks = (0, _ebmlHelpers.parseTracks)(bytes);
for (var i = 0; i < tracks.length; i++) {
var track = tracks[i];
if (track.type === 'audio' && !codecs.audio) {
codecs.audio = track.codec;
}
if (track.type === 'video' && !codecs.video) {
codecs.video = track.codec;
}
}
return {
codecs: codecs,
mimetype: formatMimetype('x-matroska', codecs)
};
},
aac: function aac(bytes) {
return {
codecs: {
audio: 'aac'
},
mimetype: 'audio/aac'
};
},
ac3: function ac3(bytes) {
// past id3 and syncword
var offset = (0, _id3Helpers.getId3Offset)(bytes) + 2; // default to ac-3
var codec = 'ac-3';
if ((0, _byteHelpers.bytesMatch)(bytes, [0xB8, 0xE0], {
offset: offset
})) {
codec = 'ac-3'; // 0x01, 0x7F
} else if ((0, _byteHelpers.bytesMatch)(bytes, [0x01, 0x7f], {
offset: offset
})) {
codec = 'ec-3';
}
return {
codecs: {
audio: codec
},
mimetype: 'audio/vnd.dolby.dd-raw'
};
},
mp3: function mp3(bytes) {
return {
codecs: {
audio: 'mp3'
},
mimetype: 'audio/mpeg'
};
},
flac: function flac(bytes) {
return {
codecs: {
audio: 'flac'
},
mimetype: 'audio/flac'
};
},
'h264': function h264(bytes) {
// find seq_parameter_set_rbsp to get encoding settings for codec
var nal = (0, _nalHelpers.findH264Nal)(bytes, 7, 3);
var retval = {
codecs: {
video: 'avc1'
},
mimetype: 'video/h264'
};
if (nal.length) {
retval.codecs.video += "." + (0, _codecHelpers.getAvcCodec)(nal);
}
return retval;
},
'h265': function h265(bytes) {
var retval = {
codecs: {
video: 'hev1'
},
mimetype: 'video/h265'
}; // find video_parameter_set_rbsp or seq_parameter_set_rbsp
// to get encoding settings for codec
var nal = (0, _nalHelpers.findH265Nal)(bytes, [32, 33], 3);
if (nal.length) {
var type = nal[0] >> 1 & 0x3F; // profile_tier_level starts at byte 5 for video_parameter_set_rbsp
// byte 2 for seq_parameter_set_rbsp
retval.codecs.video += "." + (0, _codecHelpers.getHvcCodec)(nal.subarray(type === 32 ? 5 : 2));
}
return retval;
}
};
var parseFormatForBytes = function parseFormatForBytes(bytes) {
bytes = (0, _byteHelpers.toUint8)(bytes);
var result = {
codecs: {},
container: (0, _containers.detectContainerForBytes)(bytes),
mimetype: ''
};
var parseCodecFn = parseCodecFrom[result.container];
if (parseCodecFn) {
var parsed = parseCodecFn ? parseCodecFn(bytes) : {};
result.codecs = parsed.codecs || {};
result.mimetype = parsed.mimetype || '';
}
return result;
};
exports.parseFormatForBytes = parseFormatForBytes;
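A usage sketch (not part of the vendored file) for the codec probing above; the require subpath and the sample.mp4 filename are assumptions:

const fs = require('fs');
const { parseFormatForBytes } = require('@videojs/vhs-utils/cjs/codecs');

const { container, codecs, mimetype } = parseFormatForBytes(fs.readFileSync('sample.mp4'));
// e.g. container 'mp4', codecs { video: 'avc1.64001f', audio: 'mp4a.40.2' },
// mimetype 'video/mp4;codecs="avc1.64001f,mp4a.40.2"'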

51
node_modules/@videojs/vhs-utils/cjs/id3-helpers.js generated vendored Normal file
View file

@ -0,0 +1,51 @@
"use strict";
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.getId3Offset = exports.getId3Size = void 0;
var _byteHelpers = require("./byte-helpers.js");
var ID3 = (0, _byteHelpers.toUint8)([0x49, 0x44, 0x33]);
var getId3Size = function getId3Size(bytes, offset) {
if (offset === void 0) {
offset = 0;
}
bytes = (0, _byteHelpers.toUint8)(bytes);
var flags = bytes[offset + 5];
var returnSize = bytes[offset + 6] << 21 | bytes[offset + 7] << 14 | bytes[offset + 8] << 7 | bytes[offset + 9];
var footerPresent = (flags & 16) >> 4;
if (footerPresent) {
return returnSize + 20;
}
return returnSize + 10;
};
exports.getId3Size = getId3Size;
var getId3Offset = function getId3Offset(bytes, offset) {
if (offset === void 0) {
offset = 0;
}
bytes = (0, _byteHelpers.toUint8)(bytes);
if (bytes.length - offset < 10 || !(0, _byteHelpers.bytesMatch)(bytes, ID3, {
offset: offset
})) {
return offset;
}
offset += getId3Size(bytes, offset); // recursive check for id3 tags as some files
// have multiple ID3 tag sections even though
// they should not.
return getId3Offset(bytes, offset);
};
exports.getId3Offset = getId3Offset;
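A small sketch (not part of the vendored file) of how getId3Offset skips an ID3v2 tag; the require subpath is an assumption:

const { getId3Offset } = require('@videojs/vhs-utils/cjs/id3-helpers');

// a 10-byte ID3v2 header that declares a 10-byte tag body (sync-safe size)
const header = new Uint8Array([
  0x49, 0x44, 0x33, // 'ID3'
  0x03, 0x00,       // version
  0x00,             // flags, no footer
  0x00, 0x00, 0x00, 0x0a // tag body size = 10
]);

getId3Offset(header); // 20, i.e. the media data would start after header + body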

36
node_modules/@videojs/vhs-utils/cjs/index.js generated vendored Normal file
View file

@ -0,0 +1,36 @@
"use strict";
var _interopRequireDefault = require("@babel/runtime/helpers/interopRequireDefault");
var _interopRequireWildcard = require("@babel/runtime/helpers/interopRequireWildcard");
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.default = void 0;
var codecs = _interopRequireWildcard(require("./codecs"));
var byteHelpers = _interopRequireWildcard(require("./byte-helpers.js"));
var containers = _interopRequireWildcard(require("./containers.js"));
var _decodeB64ToUint8Array = _interopRequireDefault(require("./decode-b64-to-uint8-array.js"));
var mediaGroups = _interopRequireWildcard(require("./media-groups.js"));
var _resolveUrl = _interopRequireDefault(require("./resolve-url.js"));
var _stream = _interopRequireDefault(require("./stream.js"));
var _default = {
codecs: codecs,
byteHelpers: byteHelpers,
containers: containers,
decodeB64ToUint8Array: _decodeB64ToUint8Array.default,
mediaGroups: mediaGroups,
resolveUrl: _resolveUrl.default,
Stream: _stream.default
};
exports.default = _default;
module.exports = exports.default;
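A consumer-side sketch, assuming the package's main entry resolves to this cjs build:

const vhsUtils = require('@videojs/vhs-utils');
const { codecs, byteHelpers, containers } = vhsUtils;

typeof codecs.parseFormatForBytes;          // 'function'
byteHelpers.toHexString([0xff]);            // 'ff'
typeof containers.detectContainerForBytes;  // 'function'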

116
node_modules/@videojs/vhs-utils/cjs/m2ts-helpers.js generated vendored Normal file
View file

@ -0,0 +1,116 @@
"use strict";
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.parseTs = void 0;
var _byteHelpers = require("./byte-helpers.js");
var SYNC_BYTE = 0x47;
var parseTs = function parseTs(bytes, maxPes) {
if (maxPes === void 0) {
maxPes = Infinity;
}
bytes = (0, _byteHelpers.toUint8)(bytes);
var startIndex = 0;
var endIndex = 188;
var pmt = {};
var pesCount = 0;
while (endIndex < bytes.byteLength && pesCount < maxPes) {
if (bytes[startIndex] !== SYNC_BYTE && bytes[endIndex] !== SYNC_BYTE) {
endIndex += 1;
startIndex += 1;
continue;
}
var packet = bytes.subarray(startIndex, endIndex);
var pid = (packet[1] & 0x1f) << 8 | packet[2];
var hasPusi = !!(packet[1] & 0x40);
var hasAdaptationHeader = (packet[3] & 0x30) >>> 4 > 0x01;
var payloadOffset = 4 + (hasAdaptationHeader ? packet[4] + 1 : 0);
if (hasPusi) {
payloadOffset += packet[payloadOffset] + 1;
}
if (pid === 0 && !pmt.pid) {
pmt.pid = (packet[payloadOffset + 10] & 0x1f) << 8 | packet[payloadOffset + 11];
} else if (pmt.pid && pid === pmt.pid && !pmt.streams) {
      var isNotForward = packet[payloadOffset + 5] & 0x01; // ignore forward pmt declarations
if (!isNotForward) {
continue;
}
pmt.streams = {};
var sectionLength = (packet[payloadOffset + 1] & 0x0f) << 8 | packet[payloadOffset + 2];
var tableEnd = 3 + sectionLength - 4;
var programInfoLength = (packet[payloadOffset + 10] & 0x0f) << 8 | packet[payloadOffset + 11];
var offset = 12 + programInfoLength;
while (offset < tableEnd) {
// add an entry that maps the elementary_pid to the stream_type
var i = payloadOffset + offset;
var type = packet[i];
var esPid = (packet[i + 1] & 0x1F) << 8 | packet[i + 2];
var esLength = (packet[i + 3] & 0x0f) << 8 | packet[i + 4];
var esInfo = packet.subarray(i + 5, i + 5 + esLength);
var stream = pmt.streams[esPid] = {
esInfo: esInfo,
typeNumber: type,
packets: [],
type: '',
codec: ''
};
if (type === 0x06 && (0, _byteHelpers.bytesMatch)(esInfo, [0x4F, 0x70, 0x75, 0x73], {
offset: 2
})) {
stream.type = 'audio';
stream.codec = 'opus';
} else if (type === 0x1B || type === 0x20) {
stream.type = 'video';
stream.codec = 'avc1';
} else if (type === 0x24) {
stream.type = 'video';
stream.codec = 'hev1';
} else if (type === 0x10) {
stream.type = 'video';
stream.codec = 'mp4v.20';
} else if (type === 0x0F) {
stream.type = 'audio';
stream.codec = 'aac';
} else if (type === 0x81) {
stream.type = 'audio';
stream.codec = 'ac-3';
} else if (type === 0x87) {
stream.type = 'audio';
stream.codec = 'ec-3';
} else if (type === 0x03 || type === 0x04) {
stream.type = 'audio';
stream.codec = 'mp3';
}
offset += esLength + 5;
}
} else if (pmt.pid && pmt.streams) {
pmt.streams[pid].packets.push(packet.subarray(payloadOffset));
pesCount++;
}
startIndex += 188;
endIndex += 188;
}
if (!pmt.streams) {
pmt.streams = {};
}
return pmt;
};
exports.parseTs = parseTs;
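A sketch (not part of the vendored file) of inspecting a transport stream segment; the require subpath and segment.ts filename are assumptions:

const fs = require('fs');
const { parseTs } = require('@videojs/vhs-utils/cjs/m2ts-helpers');

const pmt = parseTs(fs.readFileSync('segment.ts'), 2); // two PES packets is enough to identify codecs

Object.keys(pmt.streams).forEach(function (esPid) {
  const { type, codec, packets } = pmt.streams[esPid];
  console.log(esPid, type, codec, packets.length);
});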

30
node_modules/@videojs/vhs-utils/cjs/media-groups.js generated vendored Normal file
View file

@ -0,0 +1,30 @@
"use strict";
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.forEachMediaGroup = void 0;
/**
* Loops through all supported media groups in master and calls the provided
* callback for each group
*
* @param {Object} master
* The parsed master manifest object
* @param {string[]} groups
* The media groups to call the callback for
* @param {Function} callback
* Callback to call for each media group
*/
var forEachMediaGroup = function forEachMediaGroup(master, groups, callback) {
groups.forEach(function (mediaType) {
for (var groupKey in master.mediaGroups[mediaType]) {
for (var labelKey in master.mediaGroups[mediaType][groupKey]) {
var mediaProperties = master.mediaGroups[mediaType][groupKey][labelKey];
callback(mediaProperties, mediaType, groupKey, labelKey);
}
}
});
};
exports.forEachMediaGroup = forEachMediaGroup;
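A sketch of forEachMediaGroup with a pared-down master manifest (the object shape below is an assumption for illustration):

const { forEachMediaGroup } = require('@videojs/vhs-utils/cjs/media-groups');

const master = {
  mediaGroups: {
    AUDIO: { audio: { en: { default: true, language: 'en' } } },
    SUBTITLES: {}
  }
};

forEachMediaGroup(master, ['AUDIO', 'SUBTITLES'], function (properties, mediaType, groupKey, labelKey) {
  console.log(mediaType, groupKey, labelKey, properties.default); // AUDIO audio en true
});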

44
node_modules/@videojs/vhs-utils/cjs/media-types.js generated vendored Normal file
View file

@ -0,0 +1,44 @@
"use strict";
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.simpleTypeFromSourceType = void 0;
var MPEGURL_REGEX = /^(audio|video|application)\/(x-|vnd\.apple\.)?mpegurl/i;
var DASH_REGEX = /^application\/dash\+xml/i;
/**
* Returns a string that describes the type of source based on a video source object's
* media type.
*
* @see {@link https://dev.w3.org/html5/pf-summary/video.html#dom-source-type|Source Type}
*
* @param {string} type
* Video source object media type
* @return {('hls'|'dash'|'vhs-json'|null)}
* VHS source type string
*/
var simpleTypeFromSourceType = function simpleTypeFromSourceType(type) {
if (MPEGURL_REGEX.test(type)) {
return 'hls';
}
if (DASH_REGEX.test(type)) {
return 'dash';
} // Denotes the special case of a manifest object passed to http-streaming instead of a
// source URL.
//
// See https://en.wikipedia.org/wiki/Media_type for details on specifying media types.
//
// In this case, vnd stands for vendor, video.js for the organization, VHS for this
// project, and the +json suffix identifies the structure of the media type.
if (type === 'application/vnd.videojs.vhs+json') {
return 'vhs-json';
}
return null;
};
exports.simpleTypeFromSourceType = simpleTypeFromSourceType;
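A sketch of the mapping simpleTypeFromSourceType performs (the require subpath is an assumption):

const { simpleTypeFromSourceType } = require('@videojs/vhs-utils/cjs/media-types');

simpleTypeFromSourceType('application/x-mpegURL');            // 'hls'
simpleTypeFromSourceType('application/dash+xml');             // 'dash'
simpleTypeFromSourceType('application/vnd.videojs.vhs+json'); // 'vhs-json'
simpleTypeFromSourceType('video/mp4');                        // null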

581
node_modules/@videojs/vhs-utils/cjs/mp4-helpers.js generated vendored Normal file
View file

@ -0,0 +1,581 @@
"use strict";
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.parseMediaInfo = exports.parseTracks = exports.addSampleDescription = exports.buildFrameTable = exports.findNamedBox = exports.findBox = exports.parseDescriptors = void 0;
var _byteHelpers = require("./byte-helpers.js");
var _codecHelpers = require("./codec-helpers.js");
var _opusHelpers = require("./opus-helpers.js");
var normalizePath = function normalizePath(path) {
if (typeof path === 'string') {
return (0, _byteHelpers.stringToBytes)(path);
}
if (typeof path === 'number') {
return path;
}
return path;
};
var normalizePaths = function normalizePaths(paths) {
if (!Array.isArray(paths)) {
return [normalizePath(paths)];
}
return paths.map(function (p) {
return normalizePath(p);
});
};
var DESCRIPTORS;
var parseDescriptors = function parseDescriptors(bytes) {
bytes = (0, _byteHelpers.toUint8)(bytes);
var results = [];
var i = 0;
while (bytes.length > i) {
var tag = bytes[i];
var size = 0;
var headerSize = 0; // tag
headerSize++;
var byte = bytes[headerSize]; // first byte
headerSize++;
while (byte & 0x80) {
size = (byte & 0x7F) << 7;
byte = bytes[headerSize];
headerSize++;
}
size += byte & 0x7F;
for (var z = 0; z < DESCRIPTORS.length; z++) {
var _DESCRIPTORS$z = DESCRIPTORS[z],
id = _DESCRIPTORS$z.id,
parser = _DESCRIPTORS$z.parser;
if (tag === id) {
results.push(parser(bytes.subarray(headerSize, headerSize + size)));
break;
}
}
i += size + headerSize;
}
return results;
};
exports.parseDescriptors = parseDescriptors;
DESCRIPTORS = [{
id: 0x03,
parser: function parser(bytes) {
var desc = {
tag: 0x03,
id: bytes[0] << 8 | bytes[1],
flags: bytes[2],
size: 3,
dependsOnEsId: 0,
ocrEsId: 0,
descriptors: [],
url: ''
}; // depends on es id
if (desc.flags & 0x80) {
desc.dependsOnEsId = bytes[desc.size] << 8 | bytes[desc.size + 1];
desc.size += 2;
} // url
if (desc.flags & 0x40) {
var len = bytes[desc.size];
desc.url = (0, _byteHelpers.bytesToString)(bytes.subarray(desc.size + 1, desc.size + 1 + len));
desc.size += len;
} // ocr es id
if (desc.flags & 0x20) {
desc.ocrEsId = bytes[desc.size] << 8 | bytes[desc.size + 1];
desc.size += 2;
}
desc.descriptors = parseDescriptors(bytes.subarray(desc.size)) || [];
return desc;
}
}, {
id: 0x04,
parser: function parser(bytes) {
// DecoderConfigDescriptor
var desc = {
tag: 0x04,
oti: bytes[0],
streamType: bytes[1],
bufferSize: bytes[2] << 16 | bytes[3] << 8 | bytes[4],
maxBitrate: bytes[5] << 24 | bytes[6] << 16 | bytes[7] << 8 | bytes[8],
avgBitrate: bytes[9] << 24 | bytes[10] << 16 | bytes[11] << 8 | bytes[12],
descriptors: parseDescriptors(bytes.subarray(13))
};
return desc;
}
}, {
id: 0x05,
parser: function parser(bytes) {
// DecoderSpecificInfo
return {
tag: 0x05,
bytes: bytes
};
}
}, {
id: 0x06,
parser: function parser(bytes) {
// SLConfigDescriptor
return {
tag: 0x06,
bytes: bytes
};
}
}];
/**
* find any number of boxes by name given a path to it in an iso bmff
* such as mp4.
*
* @param {TypedArray} bytes
* bytes for the iso bmff to search for boxes in
*
* @param {Uint8Array[]|string[]|string|Uint8Array} paths
* An array of paths or a single path representing the name
* of boxes to search through in bytes. Paths may be
* uint8 (character codes) or strings.
*
* @param {boolean} [complete=false]
* Should we search only for complete boxes on the final path.
* This is very useful when you do not want to get back partial boxes
* in the case of streaming files.
*
* @return {Uint8Array[]}
* An array of the box data for each box found at the end of the path.
*/
var findBox = function findBox(bytes, paths, complete) {
if (complete === void 0) {
complete = false;
}
paths = normalizePaths(paths);
bytes = (0, _byteHelpers.toUint8)(bytes);
var results = [];
if (!paths.length) {
// short-circuit the search for empty paths
return results;
}
var i = 0;
while (i < bytes.length) {
var size = (bytes[i] << 24 | bytes[i + 1] << 16 | bytes[i + 2] << 8 | bytes[i + 3]) >>> 0;
var type = bytes.subarray(i + 4, i + 8); // invalid box format.
if (size === 0) {
break;
}
var end = i + size;
if (end > bytes.length) {
// this box is bigger than the number of bytes we have
// and complete is set, we cannot find any more boxes.
if (complete) {
break;
}
end = bytes.length;
}
var data = bytes.subarray(i + 8, end);
if ((0, _byteHelpers.bytesMatch)(type, paths[0])) {
if (paths.length === 1) {
// this is the end of the path and we've found the box we were
// looking for
results.push(data);
} else {
// recursively search for the next box along the path
results.push.apply(results, findBox(data, paths.slice(1), complete));
}
}
i = end;
} // we've finished searching all of bytes
return results;
};
/**
* Search for a single matching box by name in an iso bmff format like
* mp4. This function is useful for finding codec boxes which
* can be placed arbitrarily in sample descriptions depending
* on the version of the file or file type.
*
* @param {TypedArray} bytes
* bytes for the iso bmff to search for boxes in
*
* @param {string|Uint8Array} name
* The name of the box to find.
*
* @return {Uint8Array}
* a subarray of bytes representing the named box we found.
*/
exports.findBox = findBox;
var findNamedBox = function findNamedBox(bytes, name) {
name = normalizePath(name);
if (!name.length) {
// short-circuit the search for empty paths
return bytes.subarray(bytes.length);
}
var i = 0;
while (i < bytes.length) {
if ((0, _byteHelpers.bytesMatch)(bytes.subarray(i, i + name.length), name)) {
var size = (bytes[i - 4] << 24 | bytes[i - 3] << 16 | bytes[i - 2] << 8 | bytes[i - 1]) >>> 0;
var end = size > 1 ? i + size : bytes.byteLength;
return bytes.subarray(i + 4, end);
}
i++;
} // we've finished searching all of bytes
return bytes.subarray(bytes.length);
};
exports.findNamedBox = findNamedBox;
var parseSamples = function parseSamples(data, entrySize, parseEntry) {
if (entrySize === void 0) {
entrySize = 4;
}
if (parseEntry === void 0) {
parseEntry = function parseEntry(d) {
return (0, _byteHelpers.bytesToNumber)(d);
};
}
var entries = [];
if (!data || !data.length) {
return entries;
}
var entryCount = (0, _byteHelpers.bytesToNumber)(data.subarray(4, 8));
for (var i = 8; entryCount; i += entrySize, entryCount--) {
entries.push(parseEntry(data.subarray(i, i + entrySize)));
}
return entries;
};
var buildFrameTable = function buildFrameTable(stbl, timescale) {
var keySamples = parseSamples(findBox(stbl, ['stss'])[0]);
var chunkOffsets = parseSamples(findBox(stbl, ['stco'])[0]);
var timeToSamples = parseSamples(findBox(stbl, ['stts'])[0], 8, function (entry) {
return {
sampleCount: (0, _byteHelpers.bytesToNumber)(entry.subarray(0, 4)),
sampleDelta: (0, _byteHelpers.bytesToNumber)(entry.subarray(4, 8))
};
});
var samplesToChunks = parseSamples(findBox(stbl, ['stsc'])[0], 12, function (entry) {
return {
firstChunk: (0, _byteHelpers.bytesToNumber)(entry.subarray(0, 4)),
samplesPerChunk: (0, _byteHelpers.bytesToNumber)(entry.subarray(4, 8)),
sampleDescriptionIndex: (0, _byteHelpers.bytesToNumber)(entry.subarray(8, 12))
};
});
var stsz = findBox(stbl, ['stsz'])[0]; // stsz starts with a 4 byte sampleSize which we don't need
var sampleSizes = parseSamples(stsz && stsz.length && stsz.subarray(4) || null);
var frames = [];
for (var chunkIndex = 0; chunkIndex < chunkOffsets.length; chunkIndex++) {
var samplesInChunk = void 0;
for (var i = 0; i < samplesToChunks.length; i++) {
var sampleToChunk = samplesToChunks[i];
var isThisOne = chunkIndex + 1 >= sampleToChunk.firstChunk && (i + 1 >= samplesToChunks.length || chunkIndex + 1 < samplesToChunks[i + 1].firstChunk);
if (isThisOne) {
samplesInChunk = sampleToChunk.samplesPerChunk;
break;
}
}
var chunkOffset = chunkOffsets[chunkIndex];
for (var _i = 0; _i < samplesInChunk; _i++) {
var frameEnd = sampleSizes[frames.length]; // if we don't have key samples every frame is a keyframe
var keyframe = !keySamples.length;
if (keySamples.length && keySamples.indexOf(frames.length + 1) !== -1) {
keyframe = true;
}
var frame = {
keyframe: keyframe,
start: chunkOffset,
end: chunkOffset + frameEnd
};
for (var k = 0; k < timeToSamples.length; k++) {
var _timeToSamples$k = timeToSamples[k],
sampleCount = _timeToSamples$k.sampleCount,
sampleDelta = _timeToSamples$k.sampleDelta;
if (frames.length <= sampleCount) {
        // convert sampleDelta (in timescale units) to milliseconds
var lastTimestamp = frames.length ? frames[frames.length - 1].timestamp : 0;
frame.timestamp = lastTimestamp + sampleDelta / timescale * 1000;
frame.duration = sampleDelta;
break;
}
}
frames.push(frame);
chunkOffset += frameEnd;
}
}
return frames;
};
exports.buildFrameTable = buildFrameTable;
var addSampleDescription = function addSampleDescription(track, bytes) {
var codec = (0, _byteHelpers.bytesToString)(bytes.subarray(0, 4));
if (track.type === 'video') {
track.info = track.info || {};
track.info.width = bytes[28] << 8 | bytes[29];
track.info.height = bytes[30] << 8 | bytes[31];
} else if (track.type === 'audio') {
track.info = track.info || {};
track.info.channels = bytes[20] << 8 | bytes[21];
track.info.bitDepth = bytes[22] << 8 | bytes[23];
track.info.sampleRate = bytes[28] << 8 | bytes[29];
}
if (codec === 'avc1') {
var avcC = findNamedBox(bytes, 'avcC'); // AVCDecoderConfigurationRecord
codec += "." + (0, _codecHelpers.getAvcCodec)(avcC);
track.info.avcC = avcC; // TODO: do we need to parse all this?
/* {
configurationVersion: avcC[0],
profile: avcC[1],
profileCompatibility: avcC[2],
level: avcC[3],
lengthSizeMinusOne: avcC[4] & 0x3
};
let spsNalUnitCount = avcC[5] & 0x1F;
const spsNalUnits = track.info.avc.spsNalUnits = [];
// past spsNalUnitCount
let offset = 6;
while (spsNalUnitCount--) {
const nalLen = avcC[offset] << 8 | avcC[offset + 1];
spsNalUnits.push(avcC.subarray(offset + 2, offset + 2 + nalLen));
offset += nalLen + 2;
}
let ppsNalUnitCount = avcC[offset];
const ppsNalUnits = track.info.avc.ppsNalUnits = [];
// past ppsNalUnitCount
offset += 1;
while (ppsNalUnitCount--) {
const nalLen = avcC[offset] << 8 | avcC[offset + 1];
ppsNalUnits.push(avcC.subarray(offset + 2, offset + 2 + nalLen));
offset += nalLen + 2;
}*/
// HEVCDecoderConfigurationRecord
} else if (codec === 'hvc1' || codec === 'hev1') {
codec += "." + (0, _codecHelpers.getHvcCodec)(findNamedBox(bytes, 'hvcC'));
} else if (codec === 'mp4a' || codec === 'mp4v') {
var esds = findNamedBox(bytes, 'esds');
var esDescriptor = parseDescriptors(esds.subarray(4))[0];
var decoderConfig = esDescriptor && esDescriptor.descriptors.filter(function (_ref) {
var tag = _ref.tag;
return tag === 0x04;
})[0];
if (decoderConfig) {
// most codecs do not have a further '.'
// such as 0xa5 for ac-3 and 0xa6 for e-ac-3
codec += '.' + (0, _byteHelpers.toHexString)(decoderConfig.oti);
if (decoderConfig.oti === 0x40) {
codec += '.' + (decoderConfig.descriptors[0].bytes[0] >> 3).toString();
} else if (decoderConfig.oti === 0x20) {
codec += '.' + decoderConfig.descriptors[0].bytes[4].toString();
} else if (decoderConfig.oti === 0xdd) {
codec = 'vorbis';
}
} else if (track.type === 'audio') {
codec += '.40.2';
} else {
codec += '.20.9';
}
} else if (codec === 'av01') {
// AV1DecoderConfigurationRecord
codec += "." + (0, _codecHelpers.getAv1Codec)(findNamedBox(bytes, 'av1C'));
} else if (codec === 'vp09') {
// VPCodecConfigurationRecord
var vpcC = findNamedBox(bytes, 'vpcC'); // https://www.webmproject.org/vp9/mp4/
var profile = vpcC[0];
var level = vpcC[1];
var bitDepth = vpcC[2] >> 4;
var chromaSubsampling = (vpcC[2] & 0x0F) >> 1;
var videoFullRangeFlag = (vpcC[2] & 0x0F) >> 3;
var colourPrimaries = vpcC[3];
var transferCharacteristics = vpcC[4];
var matrixCoefficients = vpcC[5];
codec += "." + (0, _byteHelpers.padStart)(profile, 2, '0');
codec += "." + (0, _byteHelpers.padStart)(level, 2, '0');
codec += "." + (0, _byteHelpers.padStart)(bitDepth, 2, '0');
codec += "." + (0, _byteHelpers.padStart)(chromaSubsampling, 2, '0');
codec += "." + (0, _byteHelpers.padStart)(colourPrimaries, 2, '0');
codec += "." + (0, _byteHelpers.padStart)(transferCharacteristics, 2, '0');
codec += "." + (0, _byteHelpers.padStart)(matrixCoefficients, 2, '0');
codec += "." + (0, _byteHelpers.padStart)(videoFullRangeFlag, 2, '0');
} else if (codec === 'theo') {
codec = 'theora';
} else if (codec === 'spex') {
codec = 'speex';
} else if (codec === '.mp3') {
codec = 'mp4a.40.34';
} else if (codec === 'msVo') {
codec = 'vorbis';
} else if (codec === 'Opus') {
codec = 'opus';
var dOps = findNamedBox(bytes, 'dOps');
track.info.opus = (0, _opusHelpers.parseOpusHead)(dOps); // TODO: should this go into the webm code??
// Firefox requires a codecDelay for opus playback
// see https://bugzilla.mozilla.org/show_bug.cgi?id=1276238
track.info.codecDelay = 6500000;
} else {
codec = codec.toLowerCase();
}
/* eslint-enable */
// flac, ac-3, ec-3, opus
track.codec = codec;
};
exports.addSampleDescription = addSampleDescription;
var parseTracks = function parseTracks(bytes, frameTable) {
if (frameTable === void 0) {
frameTable = true;
}
bytes = (0, _byteHelpers.toUint8)(bytes);
var traks = findBox(bytes, ['moov', 'trak'], true);
var tracks = [];
traks.forEach(function (trak) {
var track = {
bytes: trak
};
var mdia = findBox(trak, ['mdia'])[0];
var hdlr = findBox(mdia, ['hdlr'])[0];
var trakType = (0, _byteHelpers.bytesToString)(hdlr.subarray(8, 12));
if (trakType === 'soun') {
track.type = 'audio';
} else if (trakType === 'vide') {
track.type = 'video';
} else {
track.type = trakType;
}
var tkhd = findBox(trak, ['tkhd'])[0];
if (tkhd) {
var view = new DataView(tkhd.buffer, tkhd.byteOffset, tkhd.byteLength);
var tkhdVersion = view.getUint8(0);
track.number = tkhdVersion === 0 ? view.getUint32(12) : view.getUint32(20);
}
var mdhd = findBox(mdia, ['mdhd'])[0];
if (mdhd) {
// mdhd is a FullBox, meaning it will have its own version as the first byte
var version = mdhd[0];
var index = version === 0 ? 12 : 20;
track.timescale = (mdhd[index] << 24 | mdhd[index + 1] << 16 | mdhd[index + 2] << 8 | mdhd[index + 3]) >>> 0;
}
var stbl = findBox(mdia, ['minf', 'stbl'])[0];
var stsd = findBox(stbl, ['stsd'])[0];
var descriptionCount = (0, _byteHelpers.bytesToNumber)(stsd.subarray(4, 8));
var offset = 8; // add codec and codec info
while (descriptionCount--) {
var len = (0, _byteHelpers.bytesToNumber)(stsd.subarray(offset, offset + 4));
var sampleDescriptor = stsd.subarray(offset + 4, offset + 4 + len);
addSampleDescription(track, sampleDescriptor);
offset += 4 + len;
}
if (frameTable) {
track.frameTable = buildFrameTable(stbl, track.timescale);
} // codec has no sub parameters
tracks.push(track);
});
return tracks;
};
exports.parseTracks = parseTracks;
var parseMediaInfo = function parseMediaInfo(bytes) {
var mvhd = findBox(bytes, ['moov', 'mvhd'], true)[0];
if (!mvhd || !mvhd.length) {
return;
}
  var info = {};
// mvhd v1 has 8 byte duration and other fields too
if (mvhd[0] === 1) {
info.timestampScale = (0, _byteHelpers.bytesToNumber)(mvhd.subarray(20, 24));
info.duration = (0, _byteHelpers.bytesToNumber)(mvhd.subarray(24, 32));
} else {
info.timestampScale = (0, _byteHelpers.bytesToNumber)(mvhd.subarray(12, 16));
info.duration = (0, _byteHelpers.bytesToNumber)(mvhd.subarray(16, 20));
}
info.bytes = mvhd;
return info;
};
exports.parseMediaInfo = parseMediaInfo;
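A sketch (not part of the vendored file) of reading track and movie info from an mp4; the require subpath and movie.mp4 filename are assumptions:

const fs = require('fs');
const { parseTracks, parseMediaInfo } = require('@videojs/vhs-utils/cjs/mp4-helpers');

const bytes = fs.readFileSync('movie.mp4'); // must contain a complete moov box
const tracks = parseTracks(bytes, false);   // false skips the frame table, codecs only

tracks.forEach(function (track) {
  console.log(track.type, track.codec, track.timescale);
});

const info = parseMediaInfo(bytes);
if (info) {
  console.log(info.duration / info.timestampScale); // duration in seconds
}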

135
node_modules/@videojs/vhs-utils/cjs/nal-helpers.js generated vendored Normal file
View file

@ -0,0 +1,135 @@
"use strict";
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.findH265Nal = exports.findH264Nal = exports.findNal = exports.discardEmulationPreventionBytes = exports.EMULATION_PREVENTION = exports.NAL_TYPE_TWO = exports.NAL_TYPE_ONE = void 0;
var _byteHelpers = require("./byte-helpers.js");
var NAL_TYPE_ONE = (0, _byteHelpers.toUint8)([0x00, 0x00, 0x00, 0x01]);
exports.NAL_TYPE_ONE = NAL_TYPE_ONE;
var NAL_TYPE_TWO = (0, _byteHelpers.toUint8)([0x00, 0x00, 0x01]);
exports.NAL_TYPE_TWO = NAL_TYPE_TWO;
var EMULATION_PREVENTION = (0, _byteHelpers.toUint8)([0x00, 0x00, 0x03]);
/**
* Expunge any "Emulation Prevention" bytes from a "Raw Byte
* Sequence Payload"
*
* @param data {Uint8Array} the bytes of a RBSP from a NAL
* unit
* @return {Uint8Array} the RBSP without any Emulation
* Prevention Bytes
*/
exports.EMULATION_PREVENTION = EMULATION_PREVENTION;
var discardEmulationPreventionBytes = function discardEmulationPreventionBytes(bytes) {
var positions = [];
var i = 1; // Find all `Emulation Prevention Bytes`
while (i < bytes.length - 2) {
if ((0, _byteHelpers.bytesMatch)(bytes.subarray(i, i + 3), EMULATION_PREVENTION)) {
positions.push(i + 2);
i++;
}
i++;
} // If no Emulation Prevention Bytes were found just return the original
// array
if (positions.length === 0) {
return bytes;
} // Create a new array to hold the NAL unit data
var newLength = bytes.length - positions.length;
var newData = new Uint8Array(newLength);
var sourceIndex = 0;
for (i = 0; i < newLength; sourceIndex++, i++) {
if (sourceIndex === positions[0]) {
// Skip this byte
sourceIndex++; // Remove this position index
positions.shift();
}
newData[i] = bytes[sourceIndex];
}
return newData;
};
exports.discardEmulationPreventionBytes = discardEmulationPreventionBytes;
var findNal = function findNal(bytes, dataType, types, nalLimit) {
if (nalLimit === void 0) {
nalLimit = Infinity;
}
bytes = (0, _byteHelpers.toUint8)(bytes);
types = [].concat(types);
var i = 0;
var nalStart;
var nalsFound = 0; // keep searching until:
// we reach the end of bytes
  // we reach the maximum number of nals they want to search
// NOTE: that we disregard nalLimit when we have found the start
// of the nal we want so that we can find the end of the nal we want.
while (i < bytes.length && (nalsFound < nalLimit || nalStart)) {
var nalOffset = void 0;
if ((0, _byteHelpers.bytesMatch)(bytes.subarray(i), NAL_TYPE_ONE)) {
nalOffset = 4;
} else if ((0, _byteHelpers.bytesMatch)(bytes.subarray(i), NAL_TYPE_TWO)) {
nalOffset = 3;
} // we are unsynced,
// find the next nal unit
if (!nalOffset) {
i++;
continue;
}
nalsFound++;
if (nalStart) {
return discardEmulationPreventionBytes(bytes.subarray(nalStart, i));
}
var nalType = void 0;
if (dataType === 'h264') {
nalType = bytes[i + nalOffset] & 0x1f;
} else if (dataType === 'h265') {
nalType = bytes[i + nalOffset] >> 1 & 0x3f;
}
if (types.indexOf(nalType) !== -1) {
nalStart = i + nalOffset;
} // nal header is 1 length for h264, and 2 for h265
i += nalOffset + (dataType === 'h264' ? 1 : 2);
}
return bytes.subarray(0, 0);
};
exports.findNal = findNal;
var findH264Nal = function findH264Nal(bytes, type, nalLimit) {
return findNal(bytes, 'h264', type, nalLimit);
};
exports.findH264Nal = findH264Nal;
var findH265Nal = function findH265Nal(bytes, type, nalLimit) {
return findNal(bytes, 'h265', type, nalLimit);
};
exports.findH265Nal = findH265Nal;
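A small sketch of discardEmulationPreventionBytes on hand-made bytes (the require subpath is an assumption):

const { discardEmulationPreventionBytes } = require('@videojs/vhs-utils/cjs/nal-helpers');

// 0x00 0x00 0x03 is an emulation prevention sequence; the 0x03 is dropped
discardEmulationPreventionBytes(new Uint8Array([0x10, 0x00, 0x00, 0x03, 0x01, 0x20]));
// -> Uint8Array [0x10, 0x00, 0x00, 0x01, 0x20]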

39
node_modules/@videojs/vhs-utils/cjs/ogg-helpers.js generated vendored Normal file
View file

@ -0,0 +1,39 @@
"use strict";
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.getPages = void 0;
var _byteHelpers = require("./byte-helpers");
var SYNC_WORD = (0, _byteHelpers.toUint8)([0x4f, 0x67, 0x67, 0x53]);
var getPages = function getPages(bytes, start, end) {
if (end === void 0) {
end = Infinity;
}
bytes = (0, _byteHelpers.toUint8)(bytes);
var pages = [];
var i = 0;
while (i < bytes.length && pages.length < end) {
// we are unsynced,
// find the next syncword
if (!(0, _byteHelpers.bytesMatch)(bytes, SYNC_WORD, {
offset: i
})) {
i++;
continue;
}
var segmentLength = bytes[i + 27];
pages.push(bytes.subarray(i, i + 28 + segmentLength));
i += pages[pages.length - 1].length;
}
return pages.slice(start, end);
};
exports.getPages = getPages;
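A sketch of pulling the first ogg pages from a file, mirroring parseCodecFrom.ogg above (the require subpath and audio.ogg filename are assumptions):

const fs = require('fs');
const { getPages } = require('@videojs/vhs-utils/cjs/ogg-helpers');

const pages = getPages(fs.readFileSync('audio.ogg'), 0, 4); // first four pages
console.log(pages.length, pages[0] && pages[0].length);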

65
node_modules/@videojs/vhs-utils/cjs/opus-helpers.js generated vendored Normal file
View file

@ -0,0 +1,65 @@
"use strict";
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.setOpusHead = exports.parseOpusHead = exports.OPUS_HEAD = void 0;
var OPUS_HEAD = new Uint8Array([// O, p, u, s
0x4f, 0x70, 0x75, 0x73, // H, e, a, d
0x48, 0x65, 0x61, 0x64]); // https://wiki.xiph.org/OggOpus
// https://vfrmaniac.fushizen.eu/contents/opus_in_isobmff.html
// https://opus-codec.org/docs/opusfile_api-0.7/structOpusHead.html
exports.OPUS_HEAD = OPUS_HEAD;
var parseOpusHead = function parseOpusHead(bytes) {
var view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
var version = view.getUint8(0); // version 0, from mp4, does not use littleEndian.
var littleEndian = version !== 0;
var config = {
version: version,
channels: view.getUint8(1),
preSkip: view.getUint16(2, littleEndian),
sampleRate: view.getUint32(4, littleEndian),
outputGain: view.getUint16(8, littleEndian),
channelMappingFamily: view.getUint8(10)
};
if (config.channelMappingFamily > 0 && bytes.length > 10) {
config.streamCount = view.getUint8(11);
config.twoChannelStreamCount = view.getUint8(12);
config.channelMapping = [];
for (var c = 0; c < config.channels; c++) {
config.channelMapping.push(view.getUint8(13 + c));
}
}
return config;
};
exports.parseOpusHead = parseOpusHead;
var setOpusHead = function setOpusHead(config) {
var size = config.channelMappingFamily <= 0 ? 11 : 12 + config.channels;
var view = new DataView(new ArrayBuffer(size));
var littleEndian = config.version !== 0;
view.setUint8(0, config.version);
view.setUint8(1, config.channels);
view.setUint16(2, config.preSkip, littleEndian);
view.setUint32(4, config.sampleRate, littleEndian);
view.setUint16(8, config.outputGain, littleEndian);
view.setUint8(10, config.channelMappingFamily);
if (config.channelMappingFamily > 0) {
view.setUint8(11, config.streamCount);
    config.channelMapping.forEach(function (cm, i) {
view.setUint8(12 + i, cm);
});
}
return new Uint8Array(view.buffer);
};
exports.setOpusHead = setOpusHead;
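A round-trip sketch for the two helpers above; note they operate on the OpusHead payload without the 'OpusHead' magic bytes (the require subpath is an assumption):

const { parseOpusHead, setOpusHead } = require('@videojs/vhs-utils/cjs/opus-helpers');

const head = setOpusHead({
  version: 1,
  channels: 2,
  preSkip: 312,
  sampleRate: 48000,
  outputGain: 0,
  channelMappingFamily: 0 // family 0, so no channel mapping table
});

parseOpusHead(head);
// { version: 1, channels: 2, preSkip: 312, sampleRate: 48000, outputGain: 0, channelMappingFamily: 0 }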

60
node_modules/@videojs/vhs-utils/cjs/resolve-url.js generated vendored Normal file
View file

@ -0,0 +1,60 @@
"use strict";
var _interopRequireDefault = require("@babel/runtime/helpers/interopRequireDefault");
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.default = void 0;
var _urlToolkit = _interopRequireDefault(require("url-toolkit"));
var _window = _interopRequireDefault(require("global/window"));
var DEFAULT_LOCATION = 'http://example.com';
var resolveUrl = function resolveUrl(baseUrl, relativeUrl) {
// return early if we don't need to resolve
if (/^[a-z]+:/i.test(relativeUrl)) {
return relativeUrl;
} // if baseUrl is a data URI, ignore it and resolve everything relative to window.location
if (/^data:/.test(baseUrl)) {
baseUrl = _window.default.location && _window.default.location.href || '';
} // IE11 supports URL but not the URL constructor
// feature detect the behavior we want
var nativeURL = typeof _window.default.URL === 'function';
var protocolLess = /^\/\//.test(baseUrl); // remove location if window.location isn't available (i.e. we're in node)
// and if baseUrl isn't an absolute url
var removeLocation = !_window.default.location && !/\/\//i.test(baseUrl); // if the base URL is relative then combine with the current location
if (nativeURL) {
baseUrl = new _window.default.URL(baseUrl, _window.default.location || DEFAULT_LOCATION);
} else if (!/\/\//i.test(baseUrl)) {
baseUrl = _urlToolkit.default.buildAbsoluteURL(_window.default.location && _window.default.location.href || '', baseUrl);
}
if (nativeURL) {
var newUrl = new URL(relativeUrl, baseUrl); // if we're a protocol-less url, remove the protocol
// and if we're location-less, remove the location
// otherwise, return the url unmodified
if (removeLocation) {
return newUrl.href.slice(DEFAULT_LOCATION.length);
} else if (protocolLess) {
return newUrl.href.slice(newUrl.protocol.length);
}
return newUrl.href;
}
return _urlToolkit.default.buildAbsoluteURL(baseUrl, relativeUrl);
};
var _default = resolveUrl;
exports.default = _default;
module.exports = exports.default;
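A sketch of resolveUrl in an environment where the native URL constructor is available (the require subpath is an assumption):

const resolveUrl = require('@videojs/vhs-utils/cjs/resolve-url');

resolveUrl('https://example.com/hls/master.m3u8', 'media/playlist.m3u8');
// 'https://example.com/hls/media/playlist.m3u8'

resolveUrl('https://example.com/hls/master.m3u8', 'https://cdn.example.com/seg.ts');
// absolute URLs are returned untouched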

84
node_modules/@videojs/vhs-utils/cjs/riff-helpers.js generated vendored Normal file
View file

@ -0,0 +1,84 @@
"use strict";
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.findFourCC = void 0;
var _byteHelpers = require("./byte-helpers.js");
var CONSTANTS = {
LIST: (0, _byteHelpers.toUint8)([0x4c, 0x49, 0x53, 0x54]),
RIFF: (0, _byteHelpers.toUint8)([0x52, 0x49, 0x46, 0x46]),
WAVE: (0, _byteHelpers.toUint8)([0x57, 0x41, 0x56, 0x45])
};
var normalizePath = function normalizePath(path) {
if (typeof path === 'string') {
return (0, _byteHelpers.stringToBytes)(path);
}
if (typeof path === 'number') {
return path;
}
return path;
};
var normalizePaths = function normalizePaths(paths) {
if (!Array.isArray(paths)) {
return [normalizePath(paths)];
}
return paths.map(function (p) {
return normalizePath(p);
});
};
var findFourCC = function findFourCC(bytes, paths) {
paths = normalizePaths(paths);
bytes = (0, _byteHelpers.toUint8)(bytes);
var results = [];
if (!paths.length) {
// short-circuit the search for empty paths
return results;
}
var i = 0;
while (i < bytes.length) {
var type = bytes.subarray(i, i + 4);
var size = (bytes[i + 7] << 24 | bytes[i + 6] << 16 | bytes[i + 5] << 8 | bytes[i + 4]) >>> 0; // skip LIST/RIFF and get the actual type
if ((0, _byteHelpers.bytesMatch)(type, CONSTANTS.LIST) || (0, _byteHelpers.bytesMatch)(type, CONSTANTS.RIFF) || (0, _byteHelpers.bytesMatch)(type, CONSTANTS.WAVE)) {
type = bytes.subarray(i + 8, i + 12);
i += 4;
size -= 4;
}
var data = bytes.subarray(i + 8, i + 8 + size);
if ((0, _byteHelpers.bytesMatch)(type, paths[0])) {
if (paths.length === 1) {
// this is the end of the path and we've found the box we were
// looking for
results.push(data);
} else {
// recursively search for the next box along the path
var subresults = findFourCC(data, paths.slice(1));
if (subresults.length) {
results = results.concat(subresults);
}
}
}
i += 8 + data.length;
} // we've finished searching all of bytes
return results;
};
exports.findFourCC = findFourCC;
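A sketch of findFourCC against a RIFF/WAVE file, matching what parseCodecFrom.wav does above (the require subpath and audio.wav filename are assumptions):

const fs = require('fs');
const { findFourCC } = require('@videojs/vhs-utils/cjs/riff-helpers');

const fmt = findFourCC(fs.readFileSync('audio.wav'), ['WAVE', 'fmt'])[0];
const wFormatTag = Array.prototype.slice.call(fmt, 0, 2).reverse();
console.log(wFormatTag); // [0, 1] for plain PCM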

129
node_modules/@videojs/vhs-utils/cjs/stream.js generated vendored Normal file
View file

@ -0,0 +1,129 @@
"use strict";
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.default = void 0;
/**
* @file stream.js
*/
/**
* A lightweight readable stream implementation that handles event dispatching.
*
* @class Stream
*/
var Stream = /*#__PURE__*/function () {
function Stream() {
this.listeners = {};
}
/**
* Add a listener for a specified event type.
*
* @param {string} type the event name
* @param {Function} listener the callback to be invoked when an event of
* the specified type occurs
*/
var _proto = Stream.prototype;
_proto.on = function on(type, listener) {
if (!this.listeners[type]) {
this.listeners[type] = [];
}
this.listeners[type].push(listener);
}
/**
* Remove a listener for a specified event type.
*
* @param {string} type the event name
* @param {Function} listener a function previously registered for this
* type of event through `on`
* @return {boolean} if we could turn it off or not
*/
;
_proto.off = function off(type, listener) {
if (!this.listeners[type]) {
return false;
}
var index = this.listeners[type].indexOf(listener); // TODO: which is better?
// In Video.js we slice listener functions
// on trigger so that it does not mess up the order
// while we loop through.
//
// Here we slice on off so that the loop in trigger
  // can continue using its old reference to loop without
// messing up the order.
this.listeners[type] = this.listeners[type].slice(0);
this.listeners[type].splice(index, 1);
return index > -1;
}
/**
* Trigger an event of the specified type on this stream. Any additional
* arguments to this function are passed as parameters to event listeners.
*
* @param {string} type the event name
*/
;
_proto.trigger = function trigger(type) {
var callbacks = this.listeners[type];
if (!callbacks) {
return;
} // Slicing the arguments on every invocation of this method
// can add a significant amount of overhead. Avoid the
// intermediate object creation for the common case of a
// single callback argument
if (arguments.length === 2) {
var length = callbacks.length;
for (var i = 0; i < length; ++i) {
callbacks[i].call(this, arguments[1]);
}
} else {
var args = Array.prototype.slice.call(arguments, 1);
var _length = callbacks.length;
for (var _i = 0; _i < _length; ++_i) {
callbacks[_i].apply(this, args);
}
}
}
/**
* Destroys the stream and cleans up.
*/
;
_proto.dispose = function dispose() {
this.listeners = {};
}
/**
* Forwards all `data` events on this stream to the destination stream. The
* destination stream should provide a method `push` to receive the data
* events as they arrive.
*
* @param {Stream} destination the stream that will receive all `data` events
* @see http://nodejs.org/api/stream.html#stream_readable_pipe_destination_options
*/
;
_proto.pipe = function pipe(destination) {
this.on('data', function (data) {
destination.push(data);
});
};
return Stream;
}();
exports.default = Stream;
module.exports = exports.default;
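A sketch of the Stream event API (the require subpath is an assumption):

const Stream = require('@videojs/vhs-utils/cjs/stream');

const source = new Stream();
const sink = { push: function (data) { console.log('got', data); } };

source.pipe(sink); // forwards every 'data' event to sink.push
source.on('done', function (flag) { console.log('done', flag); });

source.trigger('data', 'hello'); // logs: got hello
source.trigger('done', true);    // logs: done true
source.dispose();                // drops all listeners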

1576
node_modules/@videojs/vhs-utils/dist/vhs-utils.js generated vendored Normal file

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

264
node_modules/@videojs/vhs-utils/es/byte-helpers.js generated vendored Normal file
View file

@ -0,0 +1,264 @@
import window from 'global/window'; // const log2 = Math.log2 ? Math.log2 : (x) => (Math.log(x) / Math.log(2));
var repeat = function repeat(str, len) {
var acc = '';
while (len--) {
acc += str;
}
return acc;
}; // count the number of bits it would take to represent a number
// we used to do this with log2 but BigInt does not support builtin math
// Math.ceil(log2(x));
export var countBits = function countBits(x) {
return x.toString(2).length;
}; // count the number of whole bytes it would take to represent a number
export var countBytes = function countBytes(x) {
return Math.ceil(countBits(x) / 8);
};
export var padStart = function padStart(b, len, str) {
if (str === void 0) {
str = ' ';
}
return (repeat(str, len) + b.toString()).slice(-len);
};
export var isTypedArray = function isTypedArray(obj) {
return ArrayBuffer.isView(obj);
};
export var toUint8 = function toUint8(bytes) {
if (bytes instanceof Uint8Array) {
return bytes;
}
if (!Array.isArray(bytes) && !isTypedArray(bytes) && !(bytes instanceof ArrayBuffer)) {
// any non-number or NaN leads to empty uint8array
// eslint-disable-next-line
if (typeof bytes !== 'number' || typeof bytes === 'number' && bytes !== bytes) {
bytes = 0;
} else {
bytes = [bytes];
}
}
return new Uint8Array(bytes && bytes.buffer || bytes, bytes && bytes.byteOffset || 0, bytes && bytes.byteLength || 0);
};
export var toHexString = function toHexString(bytes) {
bytes = toUint8(bytes);
var str = '';
for (var i = 0; i < bytes.length; i++) {
str += padStart(bytes[i].toString(16), 2, '0');
}
return str;
};
export var toBinaryString = function toBinaryString(bytes) {
bytes = toUint8(bytes);
var str = '';
for (var i = 0; i < bytes.length; i++) {
str += padStart(bytes[i].toString(2), 8, '0');
}
return str;
};
var BigInt = window.BigInt || Number;
var BYTE_TABLE = [BigInt('0x1'), BigInt('0x100'), BigInt('0x10000'), BigInt('0x1000000'), BigInt('0x100000000'), BigInt('0x10000000000'), BigInt('0x1000000000000'), BigInt('0x100000000000000'), BigInt('0x10000000000000000')];
export var ENDIANNESS = function () {
var a = new Uint16Array([0xFFCC]);
var b = new Uint8Array(a.buffer, a.byteOffset, a.byteLength);
if (b[0] === 0xFF) {
return 'big';
}
if (b[0] === 0xCC) {
return 'little';
}
return 'unknown';
}();
export var IS_BIG_ENDIAN = ENDIANNESS === 'big';
export var IS_LITTLE_ENDIAN = ENDIANNESS === 'little';
export var bytesToNumber = function bytesToNumber(bytes, _temp) {
var _ref = _temp === void 0 ? {} : _temp,
_ref$signed = _ref.signed,
signed = _ref$signed === void 0 ? false : _ref$signed,
_ref$le = _ref.le,
le = _ref$le === void 0 ? false : _ref$le;
bytes = toUint8(bytes);
var fn = le ? 'reduce' : 'reduceRight';
var obj = bytes[fn] ? bytes[fn] : Array.prototype[fn];
var number = obj.call(bytes, function (total, byte, i) {
var exponent = le ? i : Math.abs(i + 1 - bytes.length);
return total + BigInt(byte) * BYTE_TABLE[exponent];
}, BigInt(0));
if (signed) {
var max = BYTE_TABLE[bytes.length] / BigInt(2) - BigInt(1);
number = BigInt(number);
if (number > max) {
number -= max;
number -= max;
number -= BigInt(2);
}
}
return Number(number);
};
export var numberToBytes = function numberToBytes(number, _temp2) {
var _ref2 = _temp2 === void 0 ? {} : _temp2,
_ref2$le = _ref2.le,
le = _ref2$le === void 0 ? false : _ref2$le;
// eslint-disable-next-line
if (typeof number !== 'bigint' && typeof number !== 'number' || typeof number === 'number' && number !== number) {
number = 0;
}
number = BigInt(number);
var byteCount = countBytes(number);
var bytes = new Uint8Array(new ArrayBuffer(byteCount));
for (var i = 0; i < byteCount; i++) {
var byteIndex = le ? i : Math.abs(i + 1 - bytes.length);
bytes[byteIndex] = Number(number / BYTE_TABLE[i] & BigInt(0xFF));
if (number < 0) {
bytes[byteIndex] = Math.abs(~bytes[byteIndex]);
bytes[byteIndex] -= i === 0 ? 1 : 2;
}
}
return bytes;
};
export var bytesToString = function bytesToString(bytes) {
if (!bytes) {
return '';
} // TODO: should toUint8 handle cases where we only have 8 bytes
// but report more since this is a Uint16+ Array?
bytes = Array.prototype.slice.call(bytes);
var string = String.fromCharCode.apply(null, toUint8(bytes));
try {
return decodeURIComponent(escape(string));
} catch (e) {// if decodeURIComponent/escape fails, we are dealing with partial
// or full non string data. Just return the potentially garbled string.
}
return string;
};
export var stringToBytes = function stringToBytes(string, stringIsBytes) {
if (typeof string !== 'string' && string && typeof string.toString === 'function') {
string = string.toString();
}
if (typeof string !== 'string') {
return new Uint8Array();
} // If the string already is bytes, we don't have to do this
// otherwise we do this so that we split multi length characters
// into individual bytes
if (!stringIsBytes) {
string = unescape(encodeURIComponent(string));
}
var view = new Uint8Array(string.length);
for (var i = 0; i < string.length; i++) {
view[i] = string.charCodeAt(i);
}
return view;
};
export var concatTypedArrays = function concatTypedArrays() {
for (var _len = arguments.length, buffers = new Array(_len), _key = 0; _key < _len; _key++) {
buffers[_key] = arguments[_key];
}
buffers = buffers.filter(function (b) {
return b && (b.byteLength || b.length) && typeof b !== 'string';
});
if (buffers.length <= 1) {
// for 0 length we will return empty uint8
// for 1 length we return the first uint8
return toUint8(buffers[0]);
}
var totalLen = buffers.reduce(function (total, buf, i) {
return total + (buf.byteLength || buf.length);
}, 0);
var tempBuffer = new Uint8Array(totalLen);
var offset = 0;
buffers.forEach(function (buf) {
buf = toUint8(buf);
tempBuffer.set(buf, offset);
offset += buf.byteLength;
});
return tempBuffer;
};
/**
* Check if the bytes "b" are contained within bytes "a".
*
* @param {Uint8Array|Array} a
* Bytes to check in
*
* @param {Uint8Array|Array} b
* Bytes to check for
*
* @param {Object} options
* options
*
* @param {Array|Uint8Array} [offset=0]
* offset to use when looking at bytes in a
*
* @param {Array|Uint8Array} [mask=[]]
* mask to use on bytes before comparison.
*
* @return {boolean}
* If all bytes in b are inside of a, taking into account
* bit masks.
*/
export var bytesMatch = function bytesMatch(a, b, _temp3) {
var _ref3 = _temp3 === void 0 ? {} : _temp3,
_ref3$offset = _ref3.offset,
offset = _ref3$offset === void 0 ? 0 : _ref3$offset,
_ref3$mask = _ref3.mask,
mask = _ref3$mask === void 0 ? [] : _ref3$mask;
a = toUint8(a);
  b = toUint8(b); // ie 11 does not support every on uint8
var fn = b.every ? b.every : Array.prototype.every;
  return b.length && a.length - offset >= b.length && // ie 11 doesn't support every on uint8
fn.call(b, function (bByte, i) {
var aByte = mask[i] ? mask[i] & a[offset + i] : a[offset + i];
return bByte === aByte;
});
};
export var sliceBytes = function sliceBytes(src, start, end) {
if (Uint8Array.prototype.slice) {
return Uint8Array.prototype.slice.call(src, start, end);
}
return new Uint8Array(Array.prototype.slice.call(src, start, end));
};
export var reverseBytes = function reverseBytes(src) {
if (src.reverse) {
return src.reverse();
}
return Array.prototype.reverse.call(src);
};
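A sketch of the byte helpers above (the import specifier is an assumption):

import { bytesToNumber, numberToBytes, toHexString, bytesMatch } from '@videojs/vhs-utils/es/byte-helpers.js';

bytesToNumber([0x01, 0x00]);               // 256, big endian by default
bytesToNumber([0x01, 0x00], { le: true }); // 1
toHexString(numberToBytes(256));           // '0100'
bytesMatch([0x00, 0x66, 0x74, 0x79, 0x70], [0x66, 0x74, 0x79, 0x70], { offset: 1 }); // true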

96
node_modules/@videojs/vhs-utils/es/codec-helpers.js generated vendored Normal file
View file

@ -0,0 +1,96 @@
import { padStart, toHexString, toBinaryString } from './byte-helpers.js'; // https://aomediacodec.github.io/av1-isobmff/#av1codecconfigurationbox-syntax
// https://developer.mozilla.org/en-US/docs/Web/Media/Formats/codecs_parameter#AV1
export var getAv1Codec = function getAv1Codec(bytes) {
var codec = '';
var profile = bytes[1] >>> 3;
var level = bytes[1] & 0x1F;
var tier = bytes[2] >>> 7;
var highBitDepth = (bytes[2] & 0x40) >> 6;
var twelveBit = (bytes[2] & 0x20) >> 5;
var monochrome = (bytes[2] & 0x10) >> 4;
var chromaSubsamplingX = (bytes[2] & 0x08) >> 3;
var chromaSubsamplingY = (bytes[2] & 0x04) >> 2;
var chromaSamplePosition = bytes[2] & 0x03;
codec += profile + "." + padStart(level, 2, '0');
if (tier === 0) {
codec += 'M';
} else if (tier === 1) {
codec += 'H';
}
var bitDepth;
if (profile === 2 && highBitDepth) {
bitDepth = twelveBit ? 12 : 10;
} else {
bitDepth = highBitDepth ? 10 : 8;
}
codec += "." + padStart(bitDepth, 2, '0'); // TODO: can we parse color range??
codec += "." + monochrome;
codec += "." + chromaSubsamplingX + chromaSubsamplingY + chromaSamplePosition;
return codec;
};
export var getAvcCodec = function getAvcCodec(bytes) {
var profileId = toHexString(bytes[1]);
var constraintFlags = toHexString(bytes[2] & 0xFC);
var levelId = toHexString(bytes[3]);
return "" + profileId + constraintFlags + levelId;
};
export var getHvcCodec = function getHvcCodec(bytes) {
var codec = '';
var profileSpace = bytes[1] >> 6;
var profileId = bytes[1] & 0x1F;
var tierFlag = (bytes[1] & 0x20) >> 5;
var profileCompat = bytes.subarray(2, 6);
var constraintIds = bytes.subarray(6, 12);
var levelId = bytes[12];
if (profileSpace === 1) {
codec += 'A';
} else if (profileSpace === 2) {
codec += 'B';
} else if (profileSpace === 3) {
codec += 'C';
}
codec += profileId + "."; // ffmpeg does this in big endian
var profileCompatVal = parseInt(toBinaryString(profileCompat).split('').reverse().join(''), 2); // apple does this in little endian...
if (profileCompatVal > 255) {
profileCompatVal = parseInt(toBinaryString(profileCompat), 2);
}
codec += profileCompatVal.toString(16) + ".";
if (tierFlag === 0) {
codec += 'L';
} else {
codec += 'H';
}
codec += levelId;
var constraints = '';
for (var i = 0; i < constraintIds.length; i++) {
var v = constraintIds[i];
if (v) {
if (constraints) {
constraints += '.';
}
constraints += v.toString(16);
}
}
if (constraints) {
codec += "." + constraints;
}
return codec;
};
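A sketch of getAvcCodec on the first bytes of an AVCDecoderConfigurationRecord (the import specifier and byte values are assumptions):

import { getAvcCodec } from '@videojs/vhs-utils/es/codec-helpers.js';

// configurationVersion, profile_idc, constraint flags, level_idc
getAvcCodec([0x01, 0x64, 0x00, 0x1f]); // '64001f', i.e. an 'avc1.64001f' track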

253
node_modules/@videojs/vhs-utils/es/codecs.js generated vendored Normal file
View file

@ -0,0 +1,253 @@
import window from 'global/window';
var regexs = {
// to determine mime types
mp4: /^(av0?1|avc0?[1234]|vp0?9|flac|opus|mp3|mp4a|mp4v|stpp.ttml.im1t)/,
webm: /^(vp0?[89]|av0?1|opus|vorbis)/,
ogg: /^(vp0?[89]|theora|flac|opus|vorbis)/,
// to determine if a codec is audio or video
video: /^(av0?1|avc0?[1234]|vp0?[89]|hvc1|hev1|theora|mp4v)/,
audio: /^(mp4a|flac|vorbis|opus|ac-[34]|ec-3|alac|mp3|speex|aac)/,
text: /^(stpp.ttml.im1t)/,
// mux.js support regex
muxerVideo: /^(avc0?1)/,
muxerAudio: /^(mp4a)/,
// match nothing as muxer does not support text right now.
  // there can never be a character before the start of a string
// so this matches nothing.
muxerText: /a^/
};
var mediaTypes = ['video', 'audio', 'text'];
var upperMediaTypes = ['Video', 'Audio', 'Text'];
/**
* Replace the old apple-style `avc1.<dd>.<dd>` codec string with the standard
* `avc1.<hhhhhh>`
*
* @param {string} codec
* Codec string to translate
* @return {string}
* The translated codec string
*/
export var translateLegacyCodec = function translateLegacyCodec(codec) {
if (!codec) {
return codec;
}
return codec.replace(/avc1\.(\d+)\.(\d+)/i, function (orig, profile, avcLevel) {
var profileHex = ('00' + Number(profile).toString(16)).slice(-2);
var avcLevelHex = ('00' + Number(avcLevel).toString(16)).slice(-2);
return 'avc1.' + profileHex + '00' + avcLevelHex;
});
};
/**
* Replace the old apple-style `avc1.<dd>.<dd>` codec strings with the standard
* `avc1.<hhhhhh>`
*
* @param {string[]} codecs
* An array of codec strings to translate
* @return {string[]}
* The translated array of codec strings
*/
export var translateLegacyCodecs = function translateLegacyCodecs(codecs) {
return codecs.map(translateLegacyCodec);
};
/**
* Replace codecs in the codec string with the old apple-style `avc1.<dd>.<dd>` to the
* standard `avc1.<hhhhhh>`.
*
* @param {string} codecString
* The codec string
* @return {string}
* The codec string with old apple-style codecs replaced
*
* @private
*/
export var mapLegacyAvcCodecs = function mapLegacyAvcCodecs(codecString) {
return codecString.replace(/avc1\.(\d+)\.(\d+)/i, function (match) {
return translateLegacyCodecs([match])[0];
});
};
/**
* @typedef {Object} ParsedCodecInfo
* @property {number} codecCount
* Number of codecs parsed
* @property {string} [videoCodec]
* Parsed video codec (if found)
* @property {string} [videoObjectTypeIndicator]
* Video object type indicator (if found)
* @property {string|null} audioProfile
* Audio profile
*/
/**
* Parses a codec string to retrieve the number of codecs specified, the video codec and
* object type indicator, and the audio profile.
*
* @param {string} [codecString]
* The codec string to parse
* @return {ParsedCodecInfo}
* Parsed codec info
*/
export var parseCodecs = function parseCodecs(codecString) {
if (codecString === void 0) {
codecString = '';
}
var codecs = codecString.split(',');
var result = [];
codecs.forEach(function (codec) {
codec = codec.trim();
var codecType;
mediaTypes.forEach(function (name) {
var match = regexs[name].exec(codec.toLowerCase());
if (!match || match.length <= 1) {
return;
}
codecType = name; // maintain codec case
var type = codec.substring(0, match[1].length);
var details = codec.replace(type, '');
result.push({
type: type,
details: details,
mediaType: name
});
});
if (!codecType) {
result.push({
type: codec,
details: '',
mediaType: 'unknown'
});
}
});
return result;
};
/**
* Returns a ParsedCodecInfo object for the default alternate audio playlist if there is
* a default alternate audio playlist for the provided audio group.
*
* @param {Object} master
* The master playlist
* @param {string} audioGroupId
* ID of the audio group for which to find the default codec info
* @return {ParsedCodecInfo}
* Parsed codec info
*/
export var codecsFromDefault = function codecsFromDefault(master, audioGroupId) {
if (!master.mediaGroups.AUDIO || !audioGroupId) {
return null;
}
var audioGroup = master.mediaGroups.AUDIO[audioGroupId];
if (!audioGroup) {
return null;
}
for (var name in audioGroup) {
var audioType = audioGroup[name];
if (audioType.default && audioType.playlists) {
// codec should be the same for all playlists within the audio type
return parseCodecs(audioType.playlists[0].attributes.CODECS);
}
}
return null;
};
export var isVideoCodec = function isVideoCodec(codec) {
if (codec === void 0) {
codec = '';
}
return regexs.video.test(codec.trim().toLowerCase());
};
export var isAudioCodec = function isAudioCodec(codec) {
if (codec === void 0) {
codec = '';
}
return regexs.audio.test(codec.trim().toLowerCase());
};
export var isTextCodec = function isTextCodec(codec) {
if (codec === void 0) {
codec = '';
}
return regexs.text.test(codec.trim().toLowerCase());
};
export var getMimeForCodec = function getMimeForCodec(codecString) {
if (!codecString || typeof codecString !== 'string') {
return;
}
var codecs = codecString.toLowerCase().split(',').map(function (c) {
return translateLegacyCodec(c.trim());
}); // default to video type
var type = 'video'; // only change to audio type if the only codec we have is
// audio
if (codecs.length === 1 && isAudioCodec(codecs[0])) {
type = 'audio';
} else if (codecs.length === 1 && isTextCodec(codecs[0])) {
// text uses application/<container> for now
type = 'application';
} // default the container to mp4
var container = 'mp4'; // every codec must be able to go into the container
// for that container to be the correct one
if (codecs.every(function (c) {
return regexs.mp4.test(c);
})) {
container = 'mp4';
} else if (codecs.every(function (c) {
return regexs.webm.test(c);
})) {
container = 'webm';
} else if (codecs.every(function (c) {
return regexs.ogg.test(c);
})) {
container = 'ogg';
}
return type + "/" + container + ";codecs=\"" + codecString + "\"";
};
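// Usage sketch (illustrative): the MIME type is built from the detected media type and
// container, with the original codec string embedded verbatim:
//
//   getMimeForCodec('avc1.4d400d, mp4a.40.2'); // => 'video/mp4;codecs="avc1.4d400d, mp4a.40.2"'
//   getMimeForCodec('mp4a.40.2');              // => 'audio/mp4;codecs="mp4a.40.2"'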
export var browserSupportsCodec = function browserSupportsCodec(codecString) {
if (codecString === void 0) {
codecString = '';
}
return window.MediaSource && window.MediaSource.isTypeSupported && window.MediaSource.isTypeSupported(getMimeForCodec(codecString)) || false;
};
export var muxerSupportsCodec = function muxerSupportsCodec(codecString) {
if (codecString === void 0) {
codecString = '';
}
return codecString.toLowerCase().split(',').every(function (codec) {
codec = codec.trim(); // any match is supported.
for (var i = 0; i < upperMediaTypes.length; i++) {
var type = upperMediaTypes[i];
if (regexs["muxer" + type].test(codec)) {
return true;
}
}
return false;
});
};
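// Illustrative checks (results depend on the codec regexes above and, for the browser
// check, on window.MediaSource support at runtime):
//
//   muxerSupportsCodec('avc1.4d400d, mp4a.40.2');   // typically true (mux.js can transmux these)
//   browserSupportsCodec('avc1.4d400d, mp4a.40.2'); // true only when MSE reports support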
export var DEFAULT_AUDIO_CODEC = 'mp4a.40.2';
export var DEFAULT_VIDEO_CODEC = 'avc1.4d400d';

162
node_modules/@videojs/vhs-utils/es/containers.js generated vendored Normal file
View file

@ -0,0 +1,162 @@
import { toUint8, bytesMatch } from './byte-helpers.js';
import { findBox } from './mp4-helpers.js';
import { findEbml, EBML_TAGS } from './ebml-helpers.js';
import { getId3Offset } from './id3-helpers.js';
import { findH264Nal, findH265Nal } from './nal-helpers.js';
var CONSTANTS = {
// "webm" string literal in hex
'webm': toUint8([0x77, 0x65, 0x62, 0x6d]),
// "matroska" string literal in hex
'matroska': toUint8([0x6d, 0x61, 0x74, 0x72, 0x6f, 0x73, 0x6b, 0x61]),
// "fLaC" string literal in hex
'flac': toUint8([0x66, 0x4c, 0x61, 0x43]),
// "OggS" string literal in hex
'ogg': toUint8([0x4f, 0x67, 0x67, 0x53]),
// ac-3 sync byte, also works for ec-3 as that is simply a codec
// of ac-3
'ac3': toUint8([0x0b, 0x77]),
// "RIFF" string literal in hex used for wav and avi
'riff': toUint8([0x52, 0x49, 0x46, 0x46]),
// "AVI" string literal in hex
'avi': toUint8([0x41, 0x56, 0x49]),
// "WAVE" string literal in hex
'wav': toUint8([0x57, 0x41, 0x56, 0x45]),
// "ftyp3g" string literal in hex
'3gp': toUint8([0x66, 0x74, 0x79, 0x70, 0x33, 0x67]),
// "ftyp" string literal in hex
'mp4': toUint8([0x66, 0x74, 0x79, 0x70]),
// "styp" string literal in hex
'fmp4': toUint8([0x73, 0x74, 0x79, 0x70]),
// "ftyp" string literal in hex
'mov': toUint8([0x66, 0x74, 0x79, 0x70, 0x71, 0x74])
};
var _isLikely = {
aac: function aac(bytes) {
var offset = getId3Offset(bytes);
return bytesMatch(bytes, [0xFF, 0x10], {
offset: offset,
mask: [0xFF, 0x16]
});
},
mp3: function mp3(bytes) {
var offset = getId3Offset(bytes);
return bytesMatch(bytes, [0xFF, 0x02], {
offset: offset,
mask: [0xFF, 0x06]
});
},
webm: function webm(bytes) {
var docType = findEbml(bytes, [EBML_TAGS.EBML, EBML_TAGS.DocType])[0]; // check if DocType EBML tag is webm
return bytesMatch(docType, CONSTANTS.webm);
},
mkv: function mkv(bytes) {
var docType = findEbml(bytes, [EBML_TAGS.EBML, EBML_TAGS.DocType])[0]; // check if DocType EBML tag is matroska
return bytesMatch(docType, CONSTANTS.matroska);
},
mp4: function mp4(bytes) {
return !_isLikely['3gp'](bytes) && !_isLikely.mov(bytes) && (bytesMatch(bytes, CONSTANTS.mp4, {
offset: 4
}) || bytesMatch(bytes, CONSTANTS.fmp4, {
offset: 4
}));
},
mov: function mov(bytes) {
return bytesMatch(bytes, CONSTANTS.mov, {
offset: 4
});
},
'3gp': function gp(bytes) {
return bytesMatch(bytes, CONSTANTS['3gp'], {
offset: 4
});
},
ac3: function ac3(bytes) {
var offset = getId3Offset(bytes);
return bytesMatch(bytes, CONSTANTS.ac3, {
offset: offset
});
},
ts: function ts(bytes) {
if (bytes.length < 189 && bytes.length >= 1) {
return bytes[0] === 0x47;
}
var i = 0; // check the first 376 bytes for two matching sync bytes
while (i + 188 < bytes.length && i < 188) {
if (bytes[i] === 0x47 && bytes[i + 188] === 0x47) {
return true;
}
i += 1;
}
return false;
},
flac: function flac(bytes) {
var offset = getId3Offset(bytes);
return bytesMatch(bytes, CONSTANTS.flac, {
offset: offset
});
},
ogg: function ogg(bytes) {
return bytesMatch(bytes, CONSTANTS.ogg);
},
avi: function avi(bytes) {
return bytesMatch(bytes, CONSTANTS.riff) && bytesMatch(bytes, CONSTANTS.avi, {
offset: 8
});
},
wav: function wav(bytes) {
return bytesMatch(bytes, CONSTANTS.riff) && bytesMatch(bytes, CONSTANTS.wav, {
offset: 8
});
},
'h264': function h264(bytes) {
// find seq_parameter_set_rbsp
return findH264Nal(bytes, 7, 3).length;
},
'h265': function h265(bytes) {
// find video_parameter_set_rbsp or seq_parameter_set_rbsp
return findH265Nal(bytes, [32, 33], 3).length;
}
}; // get all the isLikely functions
// but make sure 'ts' is above h264 and h265
// but below everything else as it is the least specific
var isLikelyTypes = Object.keys(_isLikely) // remove ts, h264, h265
.filter(function (t) {
return t !== 'ts' && t !== 'h264' && t !== 'h265';
}) // add it back to the bottom
.concat(['ts', 'h264', 'h265']); // make sure we are dealing with uint8 data.
isLikelyTypes.forEach(function (type) {
var isLikelyFn = _isLikely[type];
_isLikely[type] = function (bytes) {
return isLikelyFn(toUint8(bytes));
};
}); // export after wrapping
export var isLikely = _isLikely; // A useful list of file signatures can be found here
// https://en.wikipedia.org/wiki/List_of_file_signatures
export var detectContainerForBytes = function detectContainerForBytes(bytes) {
bytes = toUint8(bytes);
for (var i = 0; i < isLikelyTypes.length; i++) {
var type = isLikelyTypes[i];
if (isLikely[type](bytes)) {
return type;
}
}
return '';
}; // fmp4 is not a container
export var isLikelyFmp4MediaSegment = function isLikelyFmp4MediaSegment(bytes) {
return findBox(bytes, ['moof']).length > 0;
};
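// Usage sketch (illustrative): a handful of leading bytes is usually enough to identify
// the container; for example an Ogg page starts with the "OggS" capture pattern.
//
//   detectContainerForBytes(new Uint8Array([0x4f, 0x67, 0x67, 0x53])); // => 'ogg'
//   isLikelyFmp4MediaSegment(segmentBytes); // true when the bytes contain a 'moof' box
//   // (segmentBytes above is a placeholder for real segment data)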

16
node_modules/@videojs/vhs-utils/es/decode-b64-to-uint8-array.js generated vendored Normal file
View file

@ -0,0 +1,16 @@
import window from 'global/window';
var atob = function atob(s) {
return window.atob ? window.atob(s) : Buffer.from(s, 'base64').toString('binary');
};
export default function decodeB64ToUint8Array(b64Text) {
var decodedString = atob(b64Text);
var array = new Uint8Array(decodedString.length);
for (var i = 0; i < decodedString.length; i++) {
array[i] = decodedString.charCodeAt(i);
}
return array;
}
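// Usage sketch (illustrative): 'AQID' is the base64 encoding of the bytes 0x01 0x02 0x03.
//
//   decodeB64ToUint8Array('AQID'); // => Uint8Array [1, 2, 3]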

497
node_modules/@videojs/vhs-utils/es/ebml-helpers.js generated vendored Normal file
View file

@ -0,0 +1,497 @@
import { toUint8, bytesToNumber, bytesMatch, bytesToString, numberToBytes, padStart } from './byte-helpers';
import { getAvcCodec, getHvcCodec, getAv1Codec } from './codec-helpers.js'; // relevant specs for this parser:
// https://matroska-org.github.io/libebml/specs.html
// https://www.matroska.org/technical/elements.html
// https://www.webmproject.org/docs/container/
export var EBML_TAGS = {
EBML: toUint8([0x1A, 0x45, 0xDF, 0xA3]),
DocType: toUint8([0x42, 0x82]),
Segment: toUint8([0x18, 0x53, 0x80, 0x67]),
SegmentInfo: toUint8([0x15, 0x49, 0xA9, 0x66]),
Tracks: toUint8([0x16, 0x54, 0xAE, 0x6B]),
Track: toUint8([0xAE]),
TrackNumber: toUint8([0xd7]),
DefaultDuration: toUint8([0x23, 0xe3, 0x83]),
TrackEntry: toUint8([0xAE]),
TrackType: toUint8([0x83]),
FlagDefault: toUint8([0x88]),
CodecID: toUint8([0x86]),
CodecPrivate: toUint8([0x63, 0xA2]),
VideoTrack: toUint8([0xe0]),
AudioTrack: toUint8([0xe1]),
// Not used yet, but will be used for live webm/mkv
// see https://www.matroska.org/technical/basics.html#block-structure
// see https://www.matroska.org/technical/basics.html#simpleblock-structure
Cluster: toUint8([0x1F, 0x43, 0xB6, 0x75]),
Timestamp: toUint8([0xE7]),
TimestampScale: toUint8([0x2A, 0xD7, 0xB1]),
BlockGroup: toUint8([0xA0]),
BlockDuration: toUint8([0x9B]),
Block: toUint8([0xA1]),
SimpleBlock: toUint8([0xA3])
};
/**
* This is a simple table to determine the length
* of things in ebml. The length is one based (starts at 1,
* rather than zero) and for every zero bit before a one bit
* we add one to length. We also need this table because in some
 * cases we have to xor all the length bits from another value.
*/
var LENGTH_TABLE = [128, 64, 32, 16, 8, 4, 2, 1];
var getLength = function getLength(byte) {
var len = 1;
for (var i = 0; i < LENGTH_TABLE.length; i++) {
if (byte & LENGTH_TABLE[i]) {
break;
}
len++;
}
return len;
}; // length in ebml is stored in the first 4 to 8 bits
// of the first byte. 4 for the id length and 8 for the
// data size length. Length is measured by converting the number to binary
// then 1 + the number of zeros before a 1 is encountered starting
// from the left.
var getvint = function getvint(bytes, offset, removeLength, signed) {
if (removeLength === void 0) {
removeLength = true;
}
if (signed === void 0) {
signed = false;
}
var length = getLength(bytes[offset]);
var valueBytes = bytes.subarray(offset, offset + length); // NOTE that we do **not** subarray here because we need to copy these bytes
// as they will be modified below to remove the dataSizeLen bits and we do not
// want to modify the original data. normally we could just call slice on
// uint8array but ie 11 does not support that...
if (removeLength) {
valueBytes = Array.prototype.slice.call(bytes, offset, offset + length);
valueBytes[0] ^= LENGTH_TABLE[length - 1];
}
return {
length: length,
value: bytesToNumber(valueBytes, {
signed: signed
}),
bytes: valueBytes
};
};
var normalizePath = function normalizePath(path) {
if (typeof path === 'string') {
return path.match(/.{1,2}/g).map(function (p) {
return normalizePath(p);
});
}
if (typeof path === 'number') {
return numberToBytes(path);
}
return path;
};
var normalizePaths = function normalizePaths(paths) {
if (!Array.isArray(paths)) {
return [normalizePath(paths)];
}
return paths.map(function (p) {
return normalizePath(p);
});
};
var getInfinityDataSize = function getInfinityDataSize(id, bytes, offset) {
if (offset >= bytes.length) {
return bytes.length;
}
var innerid = getvint(bytes, offset, false);
if (bytesMatch(id.bytes, innerid.bytes)) {
return offset;
}
var dataHeader = getvint(bytes, offset + innerid.length);
return getInfinityDataSize(id, bytes, offset + dataHeader.length + dataHeader.value + innerid.length);
};
/**
 * Notes on the EBML format.
 *
 * EBML uses "vint" tags. Every vint tag contains
* two parts
*
* 1. The length from the first byte. You get this by
* converting the byte to binary and counting the zeros
* before a 1. Then you add 1 to that. Examples
* 00011111 = length 4 because there are 3 zeros before a 1.
* 00100000 = length 3 because there are 2 zeros before a 1.
* 00000011 = length 7 because there are 6 zeros before a 1.
*
* 2. The bits used for length are removed from the first byte
* Then all the bytes are merged into a value. NOTE: this
* is not the case for id ebml tags as there id includes
* length bits.
*
*/
export var findEbml = function findEbml(bytes, paths) {
paths = normalizePaths(paths);
bytes = toUint8(bytes);
var results = [];
if (!paths.length) {
return results;
}
var i = 0;
while (i < bytes.length) {
var id = getvint(bytes, i, false);
var dataHeader = getvint(bytes, i + id.length);
var dataStart = i + id.length + dataHeader.length; // dataSize is unknown or this is a live stream
if (dataHeader.value === 0x7f) {
dataHeader.value = getInfinityDataSize(id, bytes, dataStart);
if (dataHeader.value !== bytes.length) {
dataHeader.value -= dataStart;
}
}
var dataEnd = dataStart + dataHeader.value > bytes.length ? bytes.length : dataStart + dataHeader.value;
var data = bytes.subarray(dataStart, dataEnd);
if (bytesMatch(paths[0], id.bytes)) {
if (paths.length === 1) {
// this is the end of the paths and we've found the tag we were
// looking for
results.push(data);
} else {
// recursively search for the next tag inside of the data
// of this one
results = results.concat(findEbml(data, paths.slice(1)));
}
}
var totalLength = id.length + dataHeader.length + data.length; // move past this tag entirely, we are not looking for it
i += totalLength;
}
return results;
}; // see https://www.matroska.org/technical/basics.html#block-structure
export var decodeBlock = function decodeBlock(block, type, timestampScale, clusterTimestamp) {
var duration;
if (type === 'group') {
duration = findEbml(block, [EBML_TAGS.BlockDuration])[0];
if (duration) {
duration = bytesToNumber(duration);
duration = 1 / timestampScale * duration * timestampScale / 1000;
}
block = findEbml(block, [EBML_TAGS.Block])[0];
type = 'block'; // treat data as a block after this point
}
var dv = new DataView(block.buffer, block.byteOffset, block.byteLength);
var trackNumber = getvint(block, 0);
var timestamp = dv.getInt16(trackNumber.length, false);
var flags = block[trackNumber.length + 2];
var data = block.subarray(trackNumber.length + 3); // pts/dts in seconds
var ptsdts = 1 / timestampScale * (clusterTimestamp + timestamp) * timestampScale / 1000; // return the frame
var parsed = {
duration: duration,
trackNumber: trackNumber.value,
keyframe: type === 'simple' && flags >> 7 === 1,
invisible: (flags & 0x08) >> 3 === 1,
lacing: (flags & 0x06) >> 1,
discardable: type === 'simple' && (flags & 0x01) === 1,
frames: [],
pts: ptsdts,
dts: ptsdts,
timestamp: timestamp
};
if (!parsed.lacing) {
parsed.frames.push(data);
return parsed;
}
var numberOfFrames = data[0] + 1;
var frameSizes = [];
var offset = 1; // Fixed
if (parsed.lacing === 2) {
var sizeOfFrame = (data.length - offset) / numberOfFrames;
for (var i = 0; i < numberOfFrames; i++) {
frameSizes.push(sizeOfFrame);
}
} // xiph
if (parsed.lacing === 1) {
for (var _i = 0; _i < numberOfFrames - 1; _i++) {
var size = 0;
do {
size += data[offset];
offset++;
} while (data[offset - 1] === 0xFF);
frameSizes.push(size);
}
} // ebml
if (parsed.lacing === 3) {
// first vint is unsigned
// after that vints are signed and
// based on a compounding size
var _size = 0;
for (var _i2 = 0; _i2 < numberOfFrames - 1; _i2++) {
var vint = _i2 === 0 ? getvint(data, offset) : getvint(data, offset, true, true);
_size += vint.value;
frameSizes.push(_size);
offset += vint.length;
}
}
frameSizes.forEach(function (size) {
parsed.frames.push(data.subarray(offset, offset + size));
offset += size;
});
return parsed;
}; // VP9 Codec Feature Metadata (CodecPrivate)
// https://www.webmproject.org/docs/container/
var parseVp9Private = function parseVp9Private(bytes) {
var i = 0;
var params = {};
while (i < bytes.length) {
var id = bytes[i] & 0x7f;
var len = bytes[i + 1];
var val = void 0;
if (len === 1) {
val = bytes[i + 2];
} else {
val = bytes.subarray(i + 2, i + 2 + len);
}
if (id === 1) {
params.profile = val;
} else if (id === 2) {
params.level = val;
} else if (id === 3) {
params.bitDepth = val;
} else if (id === 4) {
params.chromaSubsampling = val;
} else {
params[id] = val;
}
i += 2 + len;
}
return params;
};
export var parseTracks = function parseTracks(bytes) {
bytes = toUint8(bytes);
var decodedTracks = [];
var tracks = findEbml(bytes, [EBML_TAGS.Segment, EBML_TAGS.Tracks, EBML_TAGS.Track]);
if (!tracks.length) {
tracks = findEbml(bytes, [EBML_TAGS.Tracks, EBML_TAGS.Track]);
}
if (!tracks.length) {
tracks = findEbml(bytes, [EBML_TAGS.Track]);
}
if (!tracks.length) {
return decodedTracks;
}
tracks.forEach(function (track) {
var trackType = findEbml(track, EBML_TAGS.TrackType)[0];
if (!trackType || !trackType.length) {
return;
} // 1 is video, 2 is audio, 17 is subtitle
// other values are unimportant in this context
if (trackType[0] === 1) {
trackType = 'video';
} else if (trackType[0] === 2) {
trackType = 'audio';
} else if (trackType[0] === 17) {
trackType = 'subtitle';
} else {
return;
} // todo parse language
var decodedTrack = {
rawCodec: bytesToString(findEbml(track, [EBML_TAGS.CodecID])[0]),
type: trackType,
codecPrivate: findEbml(track, [EBML_TAGS.CodecPrivate])[0],
number: bytesToNumber(findEbml(track, [EBML_TAGS.TrackNumber])[0]),
defaultDuration: bytesToNumber(findEbml(track, [EBML_TAGS.DefaultDuration])[0]),
default: findEbml(track, [EBML_TAGS.FlagDefault])[0],
rawData: track
};
var codec = '';
if (/V_MPEG4\/ISO\/AVC/.test(decodedTrack.rawCodec)) {
codec = "avc1." + getAvcCodec(decodedTrack.codecPrivate);
} else if (/V_MPEGH\/ISO\/HEVC/.test(decodedTrack.rawCodec)) {
codec = "hev1." + getHvcCodec(decodedTrack.codecPrivate);
} else if (/V_MPEG4\/ISO\/ASP/.test(decodedTrack.rawCodec)) {
if (decodedTrack.codecPrivate) {
codec = 'mp4v.20.' + decodedTrack.codecPrivate[4].toString();
} else {
codec = 'mp4v.20.9';
}
} else if (/^V_THEORA/.test(decodedTrack.rawCodec)) {
codec = 'theora';
} else if (/^V_VP8/.test(decodedTrack.rawCodec)) {
codec = 'vp8';
} else if (/^V_VP9/.test(decodedTrack.rawCodec)) {
if (decodedTrack.codecPrivate) {
var _parseVp9Private = parseVp9Private(decodedTrack.codecPrivate),
profile = _parseVp9Private.profile,
level = _parseVp9Private.level,
bitDepth = _parseVp9Private.bitDepth,
chromaSubsampling = _parseVp9Private.chromaSubsampling;
codec = 'vp09.';
codec += padStart(profile, 2, '0') + ".";
codec += padStart(level, 2, '0') + ".";
codec += padStart(bitDepth, 2, '0') + ".";
codec += "" + padStart(chromaSubsampling, 2, '0'); // Video -> Colour -> Ebml name
var matrixCoefficients = findEbml(track, [0xE0, [0x55, 0xB0], [0x55, 0xB1]])[0] || [];
var videoFullRangeFlag = findEbml(track, [0xE0, [0x55, 0xB0], [0x55, 0xB9]])[0] || [];
var transferCharacteristics = findEbml(track, [0xE0, [0x55, 0xB0], [0x55, 0xBA]])[0] || [];
var colourPrimaries = findEbml(track, [0xE0, [0x55, 0xB0], [0x55, 0xBB]])[0] || []; // if we find any optional codec parameter specify them all.
if (matrixCoefficients.length || videoFullRangeFlag.length || transferCharacteristics.length || colourPrimaries.length) {
codec += "." + padStart(colourPrimaries[0], 2, '0');
codec += "." + padStart(transferCharacteristics[0], 2, '0');
codec += "." + padStart(matrixCoefficients[0], 2, '0');
codec += "." + padStart(videoFullRangeFlag[0], 2, '0');
}
} else {
codec = 'vp9';
}
} else if (/^V_AV1/.test(decodedTrack.rawCodec)) {
codec = "av01." + getAv1Codec(decodedTrack.codecPrivate);
} else if (/A_ALAC/.test(decodedTrack.rawCodec)) {
codec = 'alac';
} else if (/A_MPEG\/L2/.test(decodedTrack.rawCodec)) {
codec = 'mp2';
} else if (/A_MPEG\/L3/.test(decodedTrack.rawCodec)) {
codec = 'mp3';
} else if (/^A_AAC/.test(decodedTrack.rawCodec)) {
if (decodedTrack.codecPrivate) {
codec = 'mp4a.40.' + (decodedTrack.codecPrivate[0] >>> 3).toString();
} else {
codec = 'mp4a.40.2';
}
} else if (/^A_AC3/.test(decodedTrack.rawCodec)) {
codec = 'ac-3';
} else if (/^A_PCM/.test(decodedTrack.rawCodec)) {
codec = 'pcm';
} else if (/^A_MS\/ACM/.test(decodedTrack.rawCodec)) {
codec = 'speex';
} else if (/^A_EAC3/.test(decodedTrack.rawCodec)) {
codec = 'ec-3';
} else if (/^A_VORBIS/.test(decodedTrack.rawCodec)) {
codec = 'vorbis';
} else if (/^A_FLAC/.test(decodedTrack.rawCodec)) {
codec = 'flac';
} else if (/^A_OPUS/.test(decodedTrack.rawCodec)) {
codec = 'opus';
}
decodedTrack.codec = codec;
decodedTracks.push(decodedTrack);
});
return decodedTracks.sort(function (a, b) {
return a.number - b.number;
});
};
export var parseData = function parseData(data, tracks) {
var allBlocks = [];
var segment = findEbml(data, [EBML_TAGS.Segment])[0];
var timestampScale = findEbml(segment, [EBML_TAGS.SegmentInfo, EBML_TAGS.TimestampScale])[0]; // in nanoseconds, defaults to 1ms
if (timestampScale && timestampScale.length) {
timestampScale = bytesToNumber(timestampScale);
} else {
timestampScale = 1000000;
}
var clusters = findEbml(segment, [EBML_TAGS.Cluster]);
if (!tracks) {
tracks = parseTracks(segment);
}
clusters.forEach(function (cluster, ci) {
var simpleBlocks = findEbml(cluster, [EBML_TAGS.SimpleBlock]).map(function (b) {
return {
type: 'simple',
data: b
};
});
var blockGroups = findEbml(cluster, [EBML_TAGS.BlockGroup]).map(function (b) {
return {
type: 'group',
data: b
};
});
var timestamp = findEbml(cluster, [EBML_TAGS.Timestamp])[0] || 0;
if (timestamp && timestamp.length) {
timestamp = bytesToNumber(timestamp);
} // get all blocks then sort them into the correct order
var blocks = simpleBlocks.concat(blockGroups).sort(function (a, b) {
return a.data.byteOffset - b.data.byteOffset;
});
blocks.forEach(function (block, bi) {
var decoded = decodeBlock(block.data, block.type, timestampScale, timestamp);
allBlocks.push(decoded);
});
});
return {
tracks: tracks,
blocks: allBlocks
};
};
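// Usage sketch (illustrative; webmBytes is a placeholder for real webm/mkv bytes):
//
//   // containers.js distinguishes webm from mkv by reading the DocType this way:
//   var docType = findEbml(webmBytes, [EBML_TAGS.EBML, EBML_TAGS.DocType])[0];
//   // bytesToString(docType) === 'webm' for webm, 'matroska' for mkv
//
//   // parseTracks returns one entry per track with a normalized codec string, e.g.
//   // [{ number: 1, type: 'video', codec: 'vp09.00.10.08', ... },
//   //  { number: 2, type: 'audio', codec: 'opus', ... }]
//   parseTracks(webmBytes);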

388
node_modules/@videojs/vhs-utils/es/format-parser.js generated vendored Normal file
View file

@ -0,0 +1,388 @@
import { bytesToString, toUint8, toHexString, bytesMatch } from './byte-helpers.js';
import { parseTracks as parseEbmlTracks } from './ebml-helpers.js';
import { parseTracks as parseMp4Tracks } from './mp4-helpers.js';
import { findFourCC } from './riff-helpers.js';
import { getPages } from './ogg-helpers.js';
import { detectContainerForBytes } from './containers.js';
import { findH264Nal, findH265Nal } from './nal-helpers.js';
import { parseTs } from './m2ts-helpers.js';
import { getAvcCodec, getHvcCodec } from './codec-helpers.js';
import { getId3Offset } from './id3-helpers.js'; // https://docs.microsoft.com/en-us/windows/win32/medfound/audio-subtype-guids
// https://tools.ietf.org/html/rfc2361
var wFormatTagCodec = function wFormatTagCodec(wFormatTag) {
wFormatTag = toUint8(wFormatTag);
if (bytesMatch(wFormatTag, [0x00, 0x55])) {
return 'mp3';
} else if (bytesMatch(wFormatTag, [0x16, 0x00]) || bytesMatch(wFormatTag, [0x00, 0xFF])) {
return 'aac';
} else if (bytesMatch(wFormatTag, [0x70, 0x4f])) {
return 'opus';
} else if (bytesMatch(wFormatTag, [0x6C, 0x61])) {
return 'alac';
} else if (bytesMatch(wFormatTag, [0xF1, 0xAC])) {
return 'flac';
} else if (bytesMatch(wFormatTag, [0x20, 0x00])) {
return 'ac-3';
} else if (bytesMatch(wFormatTag, [0xFF, 0xFE])) {
return 'ec-3';
} else if (bytesMatch(wFormatTag, [0x00, 0x50])) {
return 'mp2';
} else if (bytesMatch(wFormatTag, [0x56, 0x6f])) {
return 'vorbis';
} else if (bytesMatch(wFormatTag, [0xA1, 0x09])) {
return 'speex';
}
return '';
};
var formatMimetype = function formatMimetype(name, codecs) {
var codecString = ['video', 'audio'].reduce(function (acc, type) {
if (codecs[type]) {
acc += (acc.length ? ',' : '') + codecs[type];
}
return acc;
}, '');
return (codecs.video ? 'video' : 'audio') + "/" + name + (codecString ? ";codecs=\"" + codecString + "\"" : '');
};
var parseCodecFrom = {
mov: function mov(bytes) {
// mov and mp4 both use a nearly identical box structure.
var retval = parseCodecFrom.mp4(bytes);
if (retval.mimetype) {
retval.mimetype = retval.mimetype.replace('mp4', 'quicktime');
}
return retval;
},
mp4: function mp4(bytes) {
bytes = toUint8(bytes);
var codecs = {};
var tracks = parseMp4Tracks(bytes);
for (var i = 0; i < tracks.length; i++) {
var track = tracks[i];
if (track.type === 'audio' && !codecs.audio) {
codecs.audio = track.codec;
}
if (track.type === 'video' && !codecs.video) {
codecs.video = track.codec;
}
}
return {
codecs: codecs,
mimetype: formatMimetype('mp4', codecs)
};
},
'3gp': function gp(bytes) {
return {
codecs: {},
mimetype: 'video/3gpp'
};
},
ogg: function ogg(bytes) {
var pages = getPages(bytes, 0, 4);
var codecs = {};
pages.forEach(function (page) {
if (bytesMatch(page, [0x4F, 0x70, 0x75, 0x73], {
offset: 28
})) {
codecs.audio = 'opus';
} else if (bytesMatch(page, [0x56, 0x50, 0x38, 0x30], {
offset: 29
})) {
codecs.video = 'vp8';
} else if (bytesMatch(page, [0x74, 0x68, 0x65, 0x6F, 0x72, 0x61], {
offset: 29
})) {
codecs.video = 'theora';
} else if (bytesMatch(page, [0x46, 0x4C, 0x41, 0x43], {
offset: 29
})) {
codecs.audio = 'flac';
} else if (bytesMatch(page, [0x53, 0x70, 0x65, 0x65, 0x78], {
offset: 28
})) {
codecs.audio = 'speex';
} else if (bytesMatch(page, [0x76, 0x6F, 0x72, 0x62, 0x69, 0x73], {
offset: 29
})) {
codecs.audio = 'vorbis';
}
});
return {
codecs: codecs,
mimetype: formatMimetype('ogg', codecs)
};
},
wav: function wav(bytes) {
var format = findFourCC(bytes, ['WAVE', 'fmt'])[0];
var wFormatTag = Array.prototype.slice.call(format, 0, 2).reverse();
var mimetype = 'audio/vnd.wave';
var codecs = {
audio: wFormatTagCodec(wFormatTag)
};
var codecString = wFormatTag.reduce(function (acc, v) {
if (v) {
acc += toHexString(v);
}
return acc;
}, '');
if (codecString) {
mimetype += ";codec=" + codecString;
}
if (codecString && !codecs.audio) {
codecs.audio = codecString;
}
return {
codecs: codecs,
mimetype: mimetype
};
},
avi: function avi(bytes) {
var movi = findFourCC(bytes, ['AVI', 'movi'])[0];
var strls = findFourCC(bytes, ['AVI', 'hdrl', 'strl']);
var codecs = {};
strls.forEach(function (strl) {
var strh = findFourCC(strl, ['strh'])[0];
var strf = findFourCC(strl, ['strf'])[0]; // now parse AVIStreamHeader to get codec and type:
// https://docs.microsoft.com/en-us/previous-versions/windows/desktop/api/avifmt/ns-avifmt-avistreamheader
var type = bytesToString(strh.subarray(0, 4));
var codec;
var codecType;
if (type === 'vids') {
// https://docs.microsoft.com/en-us/windows/win32/api/wingdi/ns-wingdi-bitmapinfoheader
var handler = bytesToString(strh.subarray(4, 8));
var compression = bytesToString(strf.subarray(16, 20)); // look for 00dc (compressed video fourcc code) or 00db (uncompressed video fourcc code)
var videoData = findFourCC(movi, ['00dc'])[0] || findFourCC(movi, ['00db'])[0];
if (handler === 'H264' || compression === 'H264') {
if (videoData && videoData.length) {
codec = parseCodecFrom.h264(videoData).codecs.video;
} else {
codec = 'avc1';
}
} else if (handler === 'HEVC' || compression === 'HEVC') {
if (videoData && videoData.length) {
codec = parseCodecFrom.h265(videoData).codecs.video;
} else {
codec = 'hev1';
}
} else if (handler === 'FMP4' || compression === 'FMP4') {
if (movi.length) {
codec = 'mp4v.20.' + movi[12].toString();
} else {
codec = 'mp4v.20';
}
} else if (handler === 'VP80' || compression === 'VP80') {
codec = 'vp8';
} else if (handler === 'VP90' || compression === 'VP90') {
codec = 'vp9';
} else if (handler === 'AV01' || compression === 'AV01') {
codec = 'av01';
} else if (handler === 'theo' || compression === 'theora') {
codec = 'theora';
} else {
if (videoData && videoData.length) {
var result = detectContainerForBytes(videoData);
if (result === 'h264') {
codec = parseCodecFrom.h264(movi).codecs.video;
}
if (result === 'h265') {
codec = parseCodecFrom.h265(movi).codecs.video;
}
}
if (!codec) {
codec = handler || compression;
}
}
codecType = 'video';
} else if (type === 'auds') {
codecType = 'audio'; // look for 00wb (audio data fourcc)
// const audioData = findFourCC(movi, ['01wb']);
var wFormatTag = Array.prototype.slice.call(strf, 0, 2).reverse();
codecs.audio = wFormatTagCodec(wFormatTag);
} else {
return;
}
if (codec) {
codecs[codecType] = codec;
}
});
return {
codecs: codecs,
mimetype: formatMimetype('avi', codecs)
};
},
ts: function ts(bytes) {
var result = parseTs(bytes, 2);
var codecs = {};
Object.keys(result.streams).forEach(function (esPid) {
var stream = result.streams[esPid];
if (stream.codec === 'avc1' && stream.packets.length) {
stream.codec = parseCodecFrom.h264(stream.packets[0]).codecs.video;
} else if (stream.codec === 'hev1' && stream.packets.length) {
stream.codec = parseCodecFrom.h265(stream.packets[0]).codecs.video;
}
codecs[stream.type] = stream.codec;
});
return {
codecs: codecs,
mimetype: formatMimetype('mp2t', codecs)
};
},
webm: function webm(bytes) {
// mkv and webm both use ebml to store codec info
var retval = parseCodecFrom.mkv(bytes);
if (retval.mimetype) {
retval.mimetype = retval.mimetype.replace('x-matroska', 'webm');
}
return retval;
},
mkv: function mkv(bytes) {
var codecs = {};
var tracks = parseEbmlTracks(bytes);
for (var i = 0; i < tracks.length; i++) {
var track = tracks[i];
if (track.type === 'audio' && !codecs.audio) {
codecs.audio = track.codec;
}
if (track.type === 'video' && !codecs.video) {
codecs.video = track.codec;
}
}
return {
codecs: codecs,
mimetype: formatMimetype('x-matroska', codecs)
};
},
aac: function aac(bytes) {
return {
codecs: {
audio: 'aac'
},
mimetype: 'audio/aac'
};
},
ac3: function ac3(bytes) {
// past id3 and syncword
var offset = getId3Offset(bytes) + 2; // default to ac-3
var codec = 'ac-3';
if (bytesMatch(bytes, [0xB8, 0xE0], {
offset: offset
})) {
codec = 'ac-3'; // 0x01, 0x7F
} else if (bytesMatch(bytes, [0x01, 0x7f], {
offset: offset
})) {
codec = 'ec-3';
}
return {
codecs: {
audio: codec
},
mimetype: 'audio/vnd.dolby.dd-raw'
};
},
mp3: function mp3(bytes) {
return {
codecs: {
audio: 'mp3'
},
mimetype: 'audio/mpeg'
};
},
flac: function flac(bytes) {
return {
codecs: {
audio: 'flac'
},
mimetype: 'audio/flac'
};
},
'h264': function h264(bytes) {
// find seq_parameter_set_rbsp to get encoding settings for codec
var nal = findH264Nal(bytes, 7, 3);
var retval = {
codecs: {
video: 'avc1'
},
mimetype: 'video/h264'
};
if (nal.length) {
retval.codecs.video += "." + getAvcCodec(nal);
}
return retval;
},
'h265': function h265(bytes) {
var retval = {
codecs: {
video: 'hev1'
},
mimetype: 'video/h265'
}; // find video_parameter_set_rbsp or seq_parameter_set_rbsp
// to get encoding settings for codec
var nal = findH265Nal(bytes, [32, 33], 3);
if (nal.length) {
var type = nal[0] >> 1 & 0x3F; // profile_tier_level starts at byte 5 for video_parameter_set_rbsp
// byte 2 for seq_parameter_set_rbsp
retval.codecs.video += "." + getHvcCodec(nal.subarray(type === 32 ? 5 : 2));
}
return retval;
}
};
export var parseFormatForBytes = function parseFormatForBytes(bytes) {
bytes = toUint8(bytes);
var result = {
codecs: {},
container: detectContainerForBytes(bytes),
mimetype: ''
};
var parseCodecFn = parseCodecFrom[result.container];
if (parseCodecFn) {
var parsed = parseCodecFn ? parseCodecFn(bytes) : {};
result.codecs = parsed.codecs || {};
result.mimetype = parsed.mimetype || '';
}
return result;
};
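// Usage sketch (illustrative; fileBytes is a placeholder and the values shown are one
// possible result):
//
//   parseFormatForBytes(fileBytes);
//   // => {
//   //   container: 'mp4',
//   //   codecs: { video: 'avc1.4d400d', audio: 'mp4a.40.2' },
//   //   mimetype: 'video/mp4;codecs="avc1.4d400d,mp4a.40.2"'
//   // }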

37
node_modules/@videojs/vhs-utils/es/id3-helpers.js generated vendored Normal file
View file

@ -0,0 +1,37 @@
import { toUint8, bytesMatch } from './byte-helpers.js';
var ID3 = toUint8([0x49, 0x44, 0x33]);
export var getId3Size = function getId3Size(bytes, offset) {
if (offset === void 0) {
offset = 0;
}
bytes = toUint8(bytes);
var flags = bytes[offset + 5];
var returnSize = bytes[offset + 6] << 21 | bytes[offset + 7] << 14 | bytes[offset + 8] << 7 | bytes[offset + 9];
var footerPresent = (flags & 16) >> 4;
if (footerPresent) {
return returnSize + 20;
}
return returnSize + 10;
};
export var getId3Offset = function getId3Offset(bytes, offset) {
if (offset === void 0) {
offset = 0;
}
bytes = toUint8(bytes);
if (bytes.length - offset < 10 || !bytesMatch(bytes, ID3, {
offset: offset
})) {
return offset;
}
offset += getId3Size(bytes, offset); // recursive check for id3 tags as some files
// have multiple ID3 tag sections even though
// they should not.
return getId3Offset(bytes, offset);
};
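// Worked example (illustrative): the ID3v2 size field excludes the 10-byte header, and
// getId3Size adds that header back in.
//
//   // bytes starting with a 10-byte ID3 header that declares a 10-byte tag body:
//   //   getId3Offset(bytes) === 20 (header + body)
//   // bytes with no leading ID3 tag:
//   //   getId3Offset(bytes) === 0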

16
node_modules/@videojs/vhs-utils/es/index.js generated vendored Normal file
View file

@ -0,0 +1,16 @@
import * as codecs from './codecs';
import * as byteHelpers from './byte-helpers.js';
import * as containers from './containers.js';
import decodeB64ToUint8Array from './decode-b64-to-uint8-array.js';
import * as mediaGroups from './media-groups.js';
import resolveUrl from './resolve-url.js';
import Stream from './stream.js';
export default {
codecs: codecs,
byteHelpers: byteHelpers,
containers: containers,
decodeB64ToUint8Array: decodeB64ToUint8Array,
mediaGroups: mediaGroups,
resolveUrl: resolveUrl,
Stream: Stream
};

105
node_modules/@videojs/vhs-utils/es/m2ts-helpers.js generated vendored Normal file
View file

@ -0,0 +1,105 @@
import { bytesMatch, toUint8 } from './byte-helpers.js';
var SYNC_BYTE = 0x47;
export var parseTs = function parseTs(bytes, maxPes) {
if (maxPes === void 0) {
maxPes = Infinity;
}
bytes = toUint8(bytes);
var startIndex = 0;
var endIndex = 188;
var pmt = {};
var pesCount = 0;
while (endIndex < bytes.byteLength && pesCount < maxPes) {
if (bytes[startIndex] !== SYNC_BYTE && bytes[endIndex] !== SYNC_BYTE) {
endIndex += 1;
startIndex += 1;
continue;
}
var packet = bytes.subarray(startIndex, endIndex);
var pid = (packet[1] & 0x1f) << 8 | packet[2];
var hasPusi = !!(packet[1] & 0x40);
var hasAdaptationHeader = (packet[3] & 0x30) >>> 4 > 0x01;
var payloadOffset = 4 + (hasAdaptationHeader ? packet[4] + 1 : 0);
if (hasPusi) {
payloadOffset += packet[payloadOffset] + 1;
}
if (pid === 0 && !pmt.pid) {
pmt.pid = (packet[payloadOffset + 10] & 0x1f) << 8 | packet[payloadOffset + 11];
} else if (pmt.pid && pid === pmt.pid && !pmt.streams) {
var isNotForward = packet[payloadOffset + 5] & 0x01; // ignore forward pmt declarations
if (!isNotForward) {
continue;
}
pmt.streams = {};
var sectionLength = (packet[payloadOffset + 1] & 0x0f) << 8 | packet[payloadOffset + 2];
var tableEnd = 3 + sectionLength - 4;
var programInfoLength = (packet[payloadOffset + 10] & 0x0f) << 8 | packet[payloadOffset + 11];
var offset = 12 + programInfoLength;
while (offset < tableEnd) {
// add an entry that maps the elementary_pid to the stream_type
var i = payloadOffset + offset;
var type = packet[i];
var esPid = (packet[i + 1] & 0x1F) << 8 | packet[i + 2];
var esLength = (packet[i + 3] & 0x0f) << 8 | packet[i + 4];
var esInfo = packet.subarray(i + 5, i + 5 + esLength);
var stream = pmt.streams[esPid] = {
esInfo: esInfo,
typeNumber: type,
packets: [],
type: '',
codec: ''
};
if (type === 0x06 && bytesMatch(esInfo, [0x4F, 0x70, 0x75, 0x73], {
offset: 2
})) {
stream.type = 'audio';
stream.codec = 'opus';
} else if (type === 0x1B || type === 0x20) {
stream.type = 'video';
stream.codec = 'avc1';
} else if (type === 0x24) {
stream.type = 'video';
stream.codec = 'hev1';
} else if (type === 0x10) {
stream.type = 'video';
stream.codec = 'mp4v.20';
} else if (type === 0x0F) {
stream.type = 'audio';
stream.codec = 'aac';
} else if (type === 0x81) {
stream.type = 'audio';
stream.codec = 'ac-3';
} else if (type === 0x87) {
stream.type = 'audio';
stream.codec = 'ec-3';
} else if (type === 0x03 || type === 0x04) {
stream.type = 'audio';
stream.codec = 'mp3';
}
offset += esLength + 5;
}
} else if (pmt.pid && pmt.streams) {
pmt.streams[pid].packets.push(packet.subarray(payloadOffset));
pesCount++;
}
startIndex += 188;
endIndex += 188;
}
if (!pmt.streams) {
pmt.streams = {};
}
return pmt;
};
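// Usage sketch (illustrative; tsBytes is a placeholder for real MPEG2-TS bytes and the
// PID values shown are hypothetical):
//
//   var pmt = parseTs(tsBytes, 2);
//   // pmt.streams maps elementary stream PIDs to codec info, e.g.
//   // { '256': { type: 'video', codec: 'avc1', packets: [...] },
//   //   '257': { type: 'audio', codec: 'aac', packets: [...] } }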

21
node_modules/@videojs/vhs-utils/es/media-groups.js generated vendored Normal file
View file

@ -0,0 +1,21 @@
/**
* Loops through all supported media groups in master and calls the provided
* callback for each group
*
* @param {Object} master
* The parsed master manifest object
* @param {string[]} groups
* The media groups to call the callback for
* @param {Function} callback
* Callback to call for each media group
*/
export var forEachMediaGroup = function forEachMediaGroup(master, groups, callback) {
groups.forEach(function (mediaType) {
for (var groupKey in master.mediaGroups[mediaType]) {
for (var labelKey in master.mediaGroups[mediaType][groupKey]) {
var mediaProperties = master.mediaGroups[mediaType][groupKey][labelKey];
callback(mediaProperties, mediaType, groupKey, labelKey);
}
}
});
};
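// Usage sketch (illustrative; the master object is assumed to be a parsed manifest):
//
//   forEachMediaGroup(master, ['AUDIO', 'SUBTITLES'], function (properties, type, group, label) {
//     // invoked once for every label in every group of each listed media type
//   });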

36
node_modules/@videojs/vhs-utils/es/media-types.js generated vendored Normal file
View file

@ -0,0 +1,36 @@
var MPEGURL_REGEX = /^(audio|video|application)\/(x-|vnd\.apple\.)?mpegurl/i;
var DASH_REGEX = /^application\/dash\+xml/i;
/**
* Returns a string that describes the type of source based on a video source object's
* media type.
*
* @see {@link https://dev.w3.org/html5/pf-summary/video.html#dom-source-type|Source Type}
*
* @param {string} type
* Video source object media type
* @return {('hls'|'dash'|'vhs-json'|null)}
* VHS source type string
*/
export var simpleTypeFromSourceType = function simpleTypeFromSourceType(type) {
if (MPEGURL_REGEX.test(type)) {
return 'hls';
}
if (DASH_REGEX.test(type)) {
return 'dash';
} // Denotes the special case of a manifest object passed to http-streaming instead of a
// source URL.
//
// See https://en.wikipedia.org/wiki/Media_type for details on specifying media types.
//
// In this case, vnd stands for vendor, video.js for the organization, VHS for this
// project, and the +json suffix identifies the structure of the media type.
if (type === 'application/vnd.videojs.vhs+json') {
return 'vhs-json';
}
return null;
};
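// Usage sketch (illustrative):
//
//   simpleTypeFromSourceType('application/x-mpegURL');            // => 'hls'
//   simpleTypeFromSourceType('application/dash+xml');             // => 'dash'
//   simpleTypeFromSourceType('application/vnd.videojs.vhs+json'); // => 'vhs-json'
//   simpleTypeFromSourceType('video/mp4');                        // => null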

553
node_modules/@videojs/vhs-utils/es/mp4-helpers.js generated vendored Normal file
View file

@ -0,0 +1,553 @@
import { stringToBytes, toUint8, bytesMatch, bytesToString, toHexString, padStart, bytesToNumber } from './byte-helpers.js';
import { getAvcCodec, getHvcCodec, getAv1Codec } from './codec-helpers.js';
import { parseOpusHead } from './opus-helpers.js';
var normalizePath = function normalizePath(path) {
if (typeof path === 'string') {
return stringToBytes(path);
}
if (typeof path === 'number') {
return path;
}
return path;
};
var normalizePaths = function normalizePaths(paths) {
if (!Array.isArray(paths)) {
return [normalizePath(paths)];
}
return paths.map(function (p) {
return normalizePath(p);
});
};
var DESCRIPTORS;
export var parseDescriptors = function parseDescriptors(bytes) {
bytes = toUint8(bytes);
var results = [];
var i = 0;
while (bytes.length > i) {
var tag = bytes[i];
var size = 0;
var headerSize = 0; // tag
headerSize++;
var byte = bytes[headerSize]; // first byte
headerSize++;
while (byte & 0x80) {
size = (byte & 0x7F) << 7;
byte = bytes[headerSize];
headerSize++;
}
size += byte & 0x7F;
for (var z = 0; z < DESCRIPTORS.length; z++) {
var _DESCRIPTORS$z = DESCRIPTORS[z],
id = _DESCRIPTORS$z.id,
parser = _DESCRIPTORS$z.parser;
if (tag === id) {
results.push(parser(bytes.subarray(headerSize, headerSize + size)));
break;
}
}
i += size + headerSize;
}
return results;
};
DESCRIPTORS = [{
id: 0x03,
parser: function parser(bytes) {
var desc = {
tag: 0x03,
id: bytes[0] << 8 | bytes[1],
flags: bytes[2],
size: 3,
dependsOnEsId: 0,
ocrEsId: 0,
descriptors: [],
url: ''
}; // depends on es id
if (desc.flags & 0x80) {
desc.dependsOnEsId = bytes[desc.size] << 8 | bytes[desc.size + 1];
desc.size += 2;
} // url
if (desc.flags & 0x40) {
var len = bytes[desc.size];
desc.url = bytesToString(bytes.subarray(desc.size + 1, desc.size + 1 + len));
desc.size += len;
} // ocr es id
if (desc.flags & 0x20) {
desc.ocrEsId = bytes[desc.size] << 8 | bytes[desc.size + 1];
desc.size += 2;
}
desc.descriptors = parseDescriptors(bytes.subarray(desc.size)) || [];
return desc;
}
}, {
id: 0x04,
parser: function parser(bytes) {
// DecoderConfigDescriptor
var desc = {
tag: 0x04,
oti: bytes[0],
streamType: bytes[1],
bufferSize: bytes[2] << 16 | bytes[3] << 8 | bytes[4],
maxBitrate: bytes[5] << 24 | bytes[6] << 16 | bytes[7] << 8 | bytes[8],
avgBitrate: bytes[9] << 24 | bytes[10] << 16 | bytes[11] << 8 | bytes[12],
descriptors: parseDescriptors(bytes.subarray(13))
};
return desc;
}
}, {
id: 0x05,
parser: function parser(bytes) {
// DecoderSpecificInfo
return {
tag: 0x05,
bytes: bytes
};
}
}, {
id: 0x06,
parser: function parser(bytes) {
// SLConfigDescriptor
return {
tag: 0x06,
bytes: bytes
};
}
}];
/**
* find any number of boxes by name given a path to it in an iso bmff
* such as mp4.
*
* @param {TypedArray} bytes
* bytes for the iso bmff to search for boxes in
*
* @param {Uint8Array[]|string[]|string|Uint8Array} name
* An array of paths or a single path representing the name
* of boxes to search through in bytes. Paths may be
* uint8 (character codes) or strings.
*
* @param {boolean} [complete=false]
* Should we search only for complete boxes on the final path.
* This is very useful when you do not want to get back partial boxes
* in the case of streaming files.
*
* @return {Uint8Array[]}
* An array of the end paths that we found.
*/
export var findBox = function findBox(bytes, paths, complete) {
if (complete === void 0) {
complete = false;
}
paths = normalizePaths(paths);
bytes = toUint8(bytes);
var results = [];
if (!paths.length) {
// short-circuit the search for empty paths
return results;
}
var i = 0;
while (i < bytes.length) {
var size = (bytes[i] << 24 | bytes[i + 1] << 16 | bytes[i + 2] << 8 | bytes[i + 3]) >>> 0;
var type = bytes.subarray(i + 4, i + 8); // invalid box format.
if (size === 0) {
break;
}
var end = i + size;
if (end > bytes.length) {
// this box is bigger than the number of bytes we have
// and complete is set, we cannot find any more boxes.
if (complete) {
break;
}
end = bytes.length;
}
var data = bytes.subarray(i + 8, end);
if (bytesMatch(type, paths[0])) {
if (paths.length === 1) {
// this is the end of the path and we've found the box we were
// looking for
results.push(data);
} else {
// recursively search for the next box along the path
results.push.apply(results, findBox(data, paths.slice(1), complete));
}
}
i = end;
} // we've finished searching all of bytes
return results;
};
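// Usage sketch (illustrative; mp4Bytes is a placeholder for real ISO-BMFF bytes):
//
//   var traks = findBox(mp4Bytes, ['moov', 'trak'], true);
//   // each entry is the payload of one complete trak box (the bytes after its
//   // size/type header), ready to be searched further with findBox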
/**
* Search for a single matching box by name in an iso bmff format like
* mp4. This function is useful for finding codec boxes which
* can be placed arbitrarily in sample descriptions depending
* on the version of the file or file type.
*
* @param {TypedArray} bytes
* bytes for the iso bmff to search for boxes in
*
* @param {string|Uint8Array} name
* The name of the box to find.
*
* @return {Uint8Array[]}
* a subarray of bytes representing the name boxed we found.
*/
export var findNamedBox = function findNamedBox(bytes, name) {
name = normalizePath(name);
if (!name.length) {
// short-circuit the search for empty paths
return bytes.subarray(bytes.length);
}
var i = 0;
while (i < bytes.length) {
if (bytesMatch(bytes.subarray(i, i + name.length), name)) {
var size = (bytes[i - 4] << 24 | bytes[i - 3] << 16 | bytes[i - 2] << 8 | bytes[i - 1]) >>> 0;
var end = size > 1 ? i + size : bytes.byteLength;
return bytes.subarray(i + 4, end);
}
i++;
} // we've finished searching all of bytes
return bytes.subarray(bytes.length);
};
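// Usage sketch (illustrative; sampleDescription is a placeholder for stsd entry bytes):
//
//   var avcC = findNamedBox(sampleDescription, 'avcC');
//   // returns the bytes that follow the 'avcC' type, or an empty subarray when the
//   // named box is not present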
var parseSamples = function parseSamples(data, entrySize, parseEntry) {
if (entrySize === void 0) {
entrySize = 4;
}
if (parseEntry === void 0) {
parseEntry = function parseEntry(d) {
return bytesToNumber(d);
};
}
var entries = [];
if (!data || !data.length) {
return entries;
}
var entryCount = bytesToNumber(data.subarray(4, 8));
for (var i = 8; entryCount; i += entrySize, entryCount--) {
entries.push(parseEntry(data.subarray(i, i + entrySize)));
}
return entries;
};
export var buildFrameTable = function buildFrameTable(stbl, timescale) {
var keySamples = parseSamples(findBox(stbl, ['stss'])[0]);
var chunkOffsets = parseSamples(findBox(stbl, ['stco'])[0]);
var timeToSamples = parseSamples(findBox(stbl, ['stts'])[0], 8, function (entry) {
return {
sampleCount: bytesToNumber(entry.subarray(0, 4)),
sampleDelta: bytesToNumber(entry.subarray(4, 8))
};
});
var samplesToChunks = parseSamples(findBox(stbl, ['stsc'])[0], 12, function (entry) {
return {
firstChunk: bytesToNumber(entry.subarray(0, 4)),
samplesPerChunk: bytesToNumber(entry.subarray(4, 8)),
sampleDescriptionIndex: bytesToNumber(entry.subarray(8, 12))
};
});
var stsz = findBox(stbl, ['stsz'])[0]; // stsz starts with a 4 byte sampleSize which we don't need
var sampleSizes = parseSamples(stsz && stsz.length && stsz.subarray(4) || null);
var frames = [];
for (var chunkIndex = 0; chunkIndex < chunkOffsets.length; chunkIndex++) {
var samplesInChunk = void 0;
for (var i = 0; i < samplesToChunks.length; i++) {
var sampleToChunk = samplesToChunks[i];
var isThisOne = chunkIndex + 1 >= sampleToChunk.firstChunk && (i + 1 >= samplesToChunks.length || chunkIndex + 1 < samplesToChunks[i + 1].firstChunk);
if (isThisOne) {
samplesInChunk = sampleToChunk.samplesPerChunk;
break;
}
}
var chunkOffset = chunkOffsets[chunkIndex];
for (var _i = 0; _i < samplesInChunk; _i++) {
var frameEnd = sampleSizes[frames.length]; // if we don't have key samples every frame is a keyframe
var keyframe = !keySamples.length;
if (keySamples.length && keySamples.indexOf(frames.length + 1) !== -1) {
keyframe = true;
}
var frame = {
keyframe: keyframe,
start: chunkOffset,
end: chunkOffset + frameEnd
};
for (var k = 0; k < timeToSamples.length; k++) {
var _timeToSamples$k = timeToSamples[k],
sampleCount = _timeToSamples$k.sampleCount,
sampleDelta = _timeToSamples$k.sampleDelta;
if (frames.length <= sampleCount) {
// ms to ns
var lastTimestamp = frames.length ? frames[frames.length - 1].timestamp : 0;
frame.timestamp = lastTimestamp + sampleDelta / timescale * 1000;
frame.duration = sampleDelta;
break;
}
}
frames.push(frame);
chunkOffset += frameEnd;
}
}
return frames;
};
export var addSampleDescription = function addSampleDescription(track, bytes) {
var codec = bytesToString(bytes.subarray(0, 4));
if (track.type === 'video') {
track.info = track.info || {};
track.info.width = bytes[28] << 8 | bytes[29];
track.info.height = bytes[30] << 8 | bytes[31];
} else if (track.type === 'audio') {
track.info = track.info || {};
track.info.channels = bytes[20] << 8 | bytes[21];
track.info.bitDepth = bytes[22] << 8 | bytes[23];
track.info.sampleRate = bytes[28] << 8 | bytes[29];
}
if (codec === 'avc1') {
var avcC = findNamedBox(bytes, 'avcC'); // AVCDecoderConfigurationRecord
codec += "." + getAvcCodec(avcC);
track.info.avcC = avcC; // TODO: do we need to parse all this?
/* {
configurationVersion: avcC[0],
profile: avcC[1],
profileCompatibility: avcC[2],
level: avcC[3],
lengthSizeMinusOne: avcC[4] & 0x3
};
let spsNalUnitCount = avcC[5] & 0x1F;
const spsNalUnits = track.info.avc.spsNalUnits = [];
// past spsNalUnitCount
let offset = 6;
while (spsNalUnitCount--) {
const nalLen = avcC[offset] << 8 | avcC[offset + 1];
spsNalUnits.push(avcC.subarray(offset + 2, offset + 2 + nalLen));
offset += nalLen + 2;
}
let ppsNalUnitCount = avcC[offset];
const ppsNalUnits = track.info.avc.ppsNalUnits = [];
// past ppsNalUnitCount
offset += 1;
while (ppsNalUnitCount--) {
const nalLen = avcC[offset] << 8 | avcC[offset + 1];
ppsNalUnits.push(avcC.subarray(offset + 2, offset + 2 + nalLen));
offset += nalLen + 2;
}*/
// HEVCDecoderConfigurationRecord
} else if (codec === 'hvc1' || codec === 'hev1') {
codec += "." + getHvcCodec(findNamedBox(bytes, 'hvcC'));
} else if (codec === 'mp4a' || codec === 'mp4v') {
var esds = findNamedBox(bytes, 'esds');
var esDescriptor = parseDescriptors(esds.subarray(4))[0];
var decoderConfig = esDescriptor && esDescriptor.descriptors.filter(function (_ref) {
var tag = _ref.tag;
return tag === 0x04;
})[0];
if (decoderConfig) {
// most codecs do not have a further '.'
// such as 0xa5 for ac-3 and 0xa6 for e-ac-3
codec += '.' + toHexString(decoderConfig.oti);
if (decoderConfig.oti === 0x40) {
codec += '.' + (decoderConfig.descriptors[0].bytes[0] >> 3).toString();
} else if (decoderConfig.oti === 0x20) {
codec += '.' + decoderConfig.descriptors[0].bytes[4].toString();
} else if (decoderConfig.oti === 0xdd) {
codec = 'vorbis';
}
} else if (track.type === 'audio') {
codec += '.40.2';
} else {
codec += '.20.9';
}
} else if (codec === 'av01') {
// AV1DecoderConfigurationRecord
codec += "." + getAv1Codec(findNamedBox(bytes, 'av1C'));
} else if (codec === 'vp09') {
// VPCodecConfigurationRecord
var vpcC = findNamedBox(bytes, 'vpcC'); // https://www.webmproject.org/vp9/mp4/
var profile = vpcC[0];
var level = vpcC[1];
var bitDepth = vpcC[2] >> 4;
var chromaSubsampling = (vpcC[2] & 0x0F) >> 1;
var videoFullRangeFlag = (vpcC[2] & 0x0F) >> 3;
var colourPrimaries = vpcC[3];
var transferCharacteristics = vpcC[4];
var matrixCoefficients = vpcC[5];
codec += "." + padStart(profile, 2, '0');
codec += "." + padStart(level, 2, '0');
codec += "." + padStart(bitDepth, 2, '0');
codec += "." + padStart(chromaSubsampling, 2, '0');
codec += "." + padStart(colourPrimaries, 2, '0');
codec += "." + padStart(transferCharacteristics, 2, '0');
codec += "." + padStart(matrixCoefficients, 2, '0');
codec += "." + padStart(videoFullRangeFlag, 2, '0');
} else if (codec === 'theo') {
codec = 'theora';
} else if (codec === 'spex') {
codec = 'speex';
} else if (codec === '.mp3') {
codec = 'mp4a.40.34';
} else if (codec === 'msVo') {
codec = 'vorbis';
} else if (codec === 'Opus') {
codec = 'opus';
var dOps = findNamedBox(bytes, 'dOps');
track.info.opus = parseOpusHead(dOps); // TODO: should this go into the webm code??
// Firefox requires a codecDelay for opus playback
// see https://bugzilla.mozilla.org/show_bug.cgi?id=1276238
track.info.codecDelay = 6500000;
} else {
codec = codec.toLowerCase();
}
/* eslint-enable */
// flac, ac-3, ec-3, opus
track.codec = codec;
};
export var parseTracks = function parseTracks(bytes, frameTable) {
if (frameTable === void 0) {
frameTable = true;
}
bytes = toUint8(bytes);
var traks = findBox(bytes, ['moov', 'trak'], true);
var tracks = [];
traks.forEach(function (trak) {
var track = {
bytes: trak
};
var mdia = findBox(trak, ['mdia'])[0];
var hdlr = findBox(mdia, ['hdlr'])[0];
var trakType = bytesToString(hdlr.subarray(8, 12));
if (trakType === 'soun') {
track.type = 'audio';
} else if (trakType === 'vide') {
track.type = 'video';
} else {
track.type = trakType;
}
var tkhd = findBox(trak, ['tkhd'])[0];
if (tkhd) {
var view = new DataView(tkhd.buffer, tkhd.byteOffset, tkhd.byteLength);
var tkhdVersion = view.getUint8(0);
track.number = tkhdVersion === 0 ? view.getUint32(12) : view.getUint32(20);
}
var mdhd = findBox(mdia, ['mdhd'])[0];
if (mdhd) {
// mdhd is a FullBox, meaning it will have its own version as the first byte
var version = mdhd[0];
var index = version === 0 ? 12 : 20;
track.timescale = (mdhd[index] << 24 | mdhd[index + 1] << 16 | mdhd[index + 2] << 8 | mdhd[index + 3]) >>> 0;
}
var stbl = findBox(mdia, ['minf', 'stbl'])[0];
var stsd = findBox(stbl, ['stsd'])[0];
var descriptionCount = bytesToNumber(stsd.subarray(4, 8));
var offset = 8; // add codec and codec info
while (descriptionCount--) {
var len = bytesToNumber(stsd.subarray(offset, offset + 4));
var sampleDescriptor = stsd.subarray(offset + 4, offset + 4 + len);
addSampleDescription(track, sampleDescriptor);
offset += 4 + len;
}
if (frameTable) {
track.frameTable = buildFrameTable(stbl, track.timescale);
} // codec has no sub parameters
tracks.push(track);
});
return tracks;
};
export var parseMediaInfo = function parseMediaInfo(bytes) {
var mvhd = findBox(bytes, ['moov', 'mvhd'], true)[0];
if (!mvhd || !mvhd.length) {
return;
}
var info = {}; // ms to ns
// mvhd v1 has 8 byte duration and other fields too
if (mvhd[0] === 1) {
info.timestampScale = bytesToNumber(mvhd.subarray(20, 24));
info.duration = bytesToNumber(mvhd.subarray(24, 32));
} else {
info.timestampScale = bytesToNumber(mvhd.subarray(12, 16));
info.duration = bytesToNumber(mvhd.subarray(16, 20));
}
info.bytes = mvhd;
return info;
};

112
node_modules/@videojs/vhs-utils/es/nal-helpers.js generated vendored Normal file
View file

@ -0,0 +1,112 @@
import { bytesMatch, toUint8 } from './byte-helpers.js';
export var NAL_TYPE_ONE = toUint8([0x00, 0x00, 0x00, 0x01]);
export var NAL_TYPE_TWO = toUint8([0x00, 0x00, 0x01]);
export var EMULATION_PREVENTION = toUint8([0x00, 0x00, 0x03]);
/**
* Expunge any "Emulation Prevention" bytes from a "Raw Byte
* Sequence Payload"
*
* @param data {Uint8Array} the bytes of a RBSP from a NAL
* unit
* @return {Uint8Array} the RBSP without any Emulation
* Prevention Bytes
*/
export var discardEmulationPreventionBytes = function discardEmulationPreventionBytes(bytes) {
var positions = [];
var i = 1; // Find all `Emulation Prevention Bytes`
while (i < bytes.length - 2) {
if (bytesMatch(bytes.subarray(i, i + 3), EMULATION_PREVENTION)) {
positions.push(i + 2);
i++;
}
i++;
} // If no Emulation Prevention Bytes were found just return the original
// array
if (positions.length === 0) {
return bytes;
} // Create a new array to hold the NAL unit data
var newLength = bytes.length - positions.length;
var newData = new Uint8Array(newLength);
var sourceIndex = 0;
for (i = 0; i < newLength; sourceIndex++, i++) {
if (sourceIndex === positions[0]) {
// Skip this byte
sourceIndex++; // Remove this position index
positions.shift();
}
newData[i] = bytes[sourceIndex];
}
return newData;
};
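// Worked example (illustrative): the 0x03 in the byte sequence 0x00 0x00 0x03 is an
// emulation prevention byte and is dropped from the output:
//
//   discardEmulationPreventionBytes(toUint8([0x01, 0x00, 0x00, 0x03, 0x02]));
//   // => Uint8Array [0x01, 0x00, 0x00, 0x02]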
export var findNal = function findNal(bytes, dataType, types, nalLimit) {
if (nalLimit === void 0) {
nalLimit = Infinity;
}
bytes = toUint8(bytes);
types = [].concat(types);
var i = 0;
var nalStart;
var nalsFound = 0; // keep searching until:
// we reach the end of bytes
// we reach the maximum number of nals they want to search
// NOTE: that we disregard nalLimit when we have found the start
// of the nal we want so that we can find the end of the nal we want.
while (i < bytes.length && (nalsFound < nalLimit || nalStart)) {
var nalOffset = void 0;
if (bytesMatch(bytes.subarray(i), NAL_TYPE_ONE)) {
nalOffset = 4;
} else if (bytesMatch(bytes.subarray(i), NAL_TYPE_TWO)) {
nalOffset = 3;
} // we are unsynced,
// find the next nal unit
if (!nalOffset) {
i++;
continue;
}
nalsFound++;
if (nalStart) {
return discardEmulationPreventionBytes(bytes.subarray(nalStart, i));
}
var nalType = void 0;
if (dataType === 'h264') {
nalType = bytes[i + nalOffset] & 0x1f;
} else if (dataType === 'h265') {
nalType = bytes[i + nalOffset] >> 1 & 0x3f;
}
if (types.indexOf(nalType) !== -1) {
nalStart = i + nalOffset;
} // nal header is 1 length for h264, and 2 for h265
i += nalOffset + (dataType === 'h264' ? 1 : 2);
}
return bytes.subarray(0, 0);
};
export var findH264Nal = function findH264Nal(bytes, type, nalLimit) {
return findNal(bytes, 'h264', type, nalLimit);
};
export var findH265Nal = function findH265Nal(bytes, type, nalLimit) {
return findNal(bytes, 'h265', type, nalLimit);
};

28
node_modules/@videojs/vhs-utils/es/ogg-helpers.js generated vendored Normal file
View file

@ -0,0 +1,28 @@
import { bytesMatch, toUint8 } from './byte-helpers';
var SYNC_WORD = toUint8([0x4f, 0x67, 0x67, 0x53]);
export var getPages = function getPages(bytes, start, end) {
if (end === void 0) {
end = Infinity;
}
bytes = toUint8(bytes);
var pages = [];
var i = 0;
while (i < bytes.length && pages.length < end) {
// we are unsynced,
// find the next syncword
if (!bytesMatch(bytes, SYNC_WORD, {
offset: i
})) {
i++;
continue;
}
var segmentLength = bytes[i + 27];
pages.push(bytes.subarray(i, i + 28 + segmentLength));
i += pages[pages.length - 1].length;
}
return pages.slice(start, end);
};

52
node_modules/@videojs/vhs-utils/es/opus-helpers.js generated vendored Normal file
View file

@ -0,0 +1,52 @@
export var OPUS_HEAD = new Uint8Array([// O, p, u, s
0x4f, 0x70, 0x75, 0x73, // H, e, a, d
0x48, 0x65, 0x61, 0x64]); // https://wiki.xiph.org/OggOpus
// https://vfrmaniac.fushizen.eu/contents/opus_in_isobmff.html
// https://opus-codec.org/docs/opusfile_api-0.7/structOpusHead.html
export var parseOpusHead = function parseOpusHead(bytes) {
var view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
var version = view.getUint8(0); // version 0, from mp4, does not use littleEndian.
var littleEndian = version !== 0;
var config = {
version: version,
channels: view.getUint8(1),
preSkip: view.getUint16(2, littleEndian),
sampleRate: view.getUint32(4, littleEndian),
outputGain: view.getUint16(8, littleEndian),
channelMappingFamily: view.getUint8(10)
};
if (config.channelMappingFamily > 0 && bytes.length > 10) {
config.streamCount = view.getUint8(11);
config.twoChannelStreamCount = view.getUint8(12);
config.channelMapping = [];
for (var c = 0; c < config.channels; c++) {
config.channelMapping.push(view.getUint8(13 + c));
}
}
return config;
};
export var setOpusHead = function setOpusHead(config) {
var size = config.channelMappingFamily <= 0 ? 11 : 12 + config.channels;
var view = new DataView(new ArrayBuffer(size));
var littleEndian = config.version !== 0;
view.setUint8(0, config.version);
view.setUint8(1, config.channels);
view.setUint16(2, config.preSkip, littleEndian);
view.setUint32(4, config.sampleRate, littleEndian);
view.setUint16(8, config.outputGain, littleEndian);
view.setUint8(10, config.channelMappingFamily);
if (config.channelMappingFamily > 0) {
view.setUint8(11, config.streamCount);
config.channelMapping.forEach(function (cm, i) {
view.setUint8(12 + i, cm);
});
}
return new Uint8Array(view.buffer);
};

47
node_modules/@videojs/vhs-utils/es/resolve-url.js generated vendored Normal file
View file

@ -0,0 +1,47 @@
import URLToolkit from 'url-toolkit';
import window from 'global/window';
var DEFAULT_LOCATION = 'http://example.com';
var resolveUrl = function resolveUrl(baseUrl, relativeUrl) {
// return early if we don't need to resolve
if (/^[a-z]+:/i.test(relativeUrl)) {
return relativeUrl;
} // if baseUrl is a data URI, ignore it and resolve everything relative to window.location
if (/^data:/.test(baseUrl)) {
baseUrl = window.location && window.location.href || '';
} // IE11 supports URL but not the URL constructor
// feature detect the behavior we want
var nativeURL = typeof window.URL === 'function';
var protocolLess = /^\/\//.test(baseUrl); // remove location if window.location isn't available (i.e. we're in node)
// and if baseUrl isn't an absolute url
var removeLocation = !window.location && !/\/\//i.test(baseUrl); // if the base URL is relative then combine with the current location
if (nativeURL) {
baseUrl = new window.URL(baseUrl, window.location || DEFAULT_LOCATION);
} else if (!/\/\//i.test(baseUrl)) {
baseUrl = URLToolkit.buildAbsoluteURL(window.location && window.location.href || '', baseUrl);
}
if (nativeURL) {
var newUrl = new URL(relativeUrl, baseUrl); // if we're a protocol-less url, remove the protocol
// and if we're location-less, remove the location
// otherwise, return the url unmodified
if (removeLocation) {
return newUrl.href.slice(DEFAULT_LOCATION.length);
} else if (protocolLess) {
return newUrl.href.slice(newUrl.protocol.length);
}
return newUrl.href;
}
return URLToolkit.buildAbsoluteURL(baseUrl, relativeUrl);
};
export default resolveUrl;
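A usage sketch for the resolver above; the manifest and segment URLs are made up for illustration.

import resolveUrl from '@videojs/vhs-utils/es/resolve-url.js';

// relative URLs resolve against the base
resolveUrl('http://example.com/hls/main.m3u8', 'segment-1.ts');
// -> 'http://example.com/hls/segment-1.ts'

// already-absolute URLs are returned untouched
resolveUrl('http://example.com/hls/main.m3u8', 'https://cdn.example.com/seg.ts');
// -> 'https://cdn.example.com/seg.ts'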

74
node_modules/@videojs/vhs-utils/es/riff-helpers.js generated vendored Normal file
View file

@ -0,0 +1,74 @@
import { toUint8, stringToBytes, bytesMatch } from './byte-helpers.js';
var CONSTANTS = {
LIST: toUint8([0x4c, 0x49, 0x53, 0x54]),
RIFF: toUint8([0x52, 0x49, 0x46, 0x46]),
WAVE: toUint8([0x57, 0x41, 0x56, 0x45])
};
var normalizePath = function normalizePath(path) {
if (typeof path === 'string') {
return stringToBytes(path);
}
if (typeof path === 'number') {
return path;
}
return path;
};
var normalizePaths = function normalizePaths(paths) {
if (!Array.isArray(paths)) {
return [normalizePath(paths)];
}
return paths.map(function (p) {
return normalizePath(p);
});
};
export var findFourCC = function findFourCC(bytes, paths) {
paths = normalizePaths(paths);
bytes = toUint8(bytes);
var results = [];
if (!paths.length) {
// short-circuit the search for empty paths
return results;
}
var i = 0;
while (i < bytes.length) {
var type = bytes.subarray(i, i + 4);
var size = (bytes[i + 7] << 24 | bytes[i + 6] << 16 | bytes[i + 5] << 8 | bytes[i + 4]) >>> 0; // skip LIST/RIFF and get the actual type
if (bytesMatch(type, CONSTANTS.LIST) || bytesMatch(type, CONSTANTS.RIFF) || bytesMatch(type, CONSTANTS.WAVE)) {
type = bytes.subarray(i + 8, i + 12);
i += 4;
size -= 4;
}
var data = bytes.subarray(i + 8, i + 8 + size);
if (bytesMatch(type, paths[0])) {
if (paths.length === 1) {
// this is the end of the path and we've found the box we were
// looking for
results.push(data);
} else {
// recursively search for the next box along the path
var subresults = findFourCC(data, paths.slice(1));
if (subresults.length) {
results = results.concat(subresults);
}
}
}
i += 8 + data.length;
} // we've finished searching all of bytes
return results;
};
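A sketch of how the path search above walks a RIFF structure; the buffer is a hand-assembled, truncated WAV header used only for illustration.

import { findFourCC } from '@videojs/vhs-utils/es/riff-helpers.js';

// 'RIFF' <size 28 LE> 'WAVE' | 'fmt ' <size 16 LE> <16 bytes of PCM format data>
const wav = new Uint8Array([
  0x52, 0x49, 0x46, 0x46, 0x1c, 0x00, 0x00, 0x00, 0x57, 0x41, 0x56, 0x45,
  0x66, 0x6d, 0x74, 0x20, 0x10, 0x00, 0x00, 0x00,
  0x01, 0x00, 0x02, 0x00, 0x44, 0xac, 0x00, 0x00,
  0x10, 0xb1, 0x02, 0x00, 0x04, 0x00, 0x10, 0x00
]);

const [fmt] = findFourCC(wav, ['WAVE', 'fmt']);
// fmt is the 16-byte format chunk; its first two bytes are the wFormatTag
// that format-parser.js below turns into a codec name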

121
node_modules/@videojs/vhs-utils/es/stream.js generated vendored Normal file
View file

@ -0,0 +1,121 @@
/**
* @file stream.js
*/
/**
 * A lightweight readable stream implementation that handles event dispatching.
*
* @class Stream
*/
var Stream = /*#__PURE__*/function () {
function Stream() {
this.listeners = {};
}
/**
* Add a listener for a specified event type.
*
* @param {string} type the event name
* @param {Function} listener the callback to be invoked when an event of
* the specified type occurs
*/
var _proto = Stream.prototype;
_proto.on = function on(type, listener) {
if (!this.listeners[type]) {
this.listeners[type] = [];
}
this.listeners[type].push(listener);
}
/**
* Remove a listener for a specified event type.
*
* @param {string} type the event name
* @param {Function} listener a function previously registered for this
* type of event through `on`
* @return {boolean} if we could turn it off or not
*/
;
_proto.off = function off(type, listener) {
if (!this.listeners[type]) {
return false;
}
var index = this.listeners[type].indexOf(listener); // TODO: which is better?
// In Video.js we slice listener functions
// on trigger so that it does not mess up the order
// while we loop through.
//
// Here we slice on off so that the loop in trigger
// can continue using its old reference to loop without
// messing up the order.
this.listeners[type] = this.listeners[type].slice(0);
this.listeners[type].splice(index, 1);
return index > -1;
}
/**
* Trigger an event of the specified type on this stream. Any additional
* arguments to this function are passed as parameters to event listeners.
*
* @param {string} type the event name
*/
;
_proto.trigger = function trigger(type) {
var callbacks = this.listeners[type];
if (!callbacks) {
return;
} // Slicing the arguments on every invocation of this method
// can add a significant amount of overhead. Avoid the
// intermediate object creation for the common case of a
// single callback argument
if (arguments.length === 2) {
var length = callbacks.length;
for (var i = 0; i < length; ++i) {
callbacks[i].call(this, arguments[1]);
}
} else {
var args = Array.prototype.slice.call(arguments, 1);
var _length = callbacks.length;
for (var _i = 0; _i < _length; ++_i) {
callbacks[_i].apply(this, args);
}
}
}
/**
* Destroys the stream and cleans up.
*/
;
_proto.dispose = function dispose() {
this.listeners = {};
}
/**
* Forwards all `data` events on this stream to the destination stream. The
* destination stream should provide a method `push` to receive the data
* events as they arrive.
*
* @param {Stream} destination the stream that will receive all `data` events
* @see http://nodejs.org/api/stream.html#stream_readable_pipe_destination_options
*/
;
_proto.pipe = function pipe(destination) {
this.on('data', function (data) {
destination.push(data);
});
};
return Stream;
}();
export { Stream as default };
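A small usage sketch of the dispatcher above.

import Stream from '@videojs/vhs-utils/es/stream.js';

const source = new Stream();
const sink = new Stream();

// pipe only requires a push method on the destination
sink.push = (chunk) => console.log('received', chunk);
source.pipe(sink);

source.on('done', () => console.log('finished'));

source.trigger('data', new Uint8Array([0x00, 0x01])); // "received Uint8Array(2) [0, 1]"
source.trigger('done');                               // "finished"
source.dispose();                                     // drops every listener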

18
node_modules/@videojs/vhs-utils/index.html generated vendored Normal file
View file

@ -0,0 +1,18 @@
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title>@videojs/vhs-utils Demo</title>
</head>
<body>
<h1>Test things with window.vhsUtils.* in the console</h1>
<ul>
<li><a href="/test/debug.html">Run unit tests in browser.</a></li>
</ul>
<script src="dist/vhs-utils.js"></script>
<script>
console.log('Test things with window.vhsUtils.*');
</script>
</body>
</html>

135
node_modules/@videojs/vhs-utils/package.json generated vendored Normal file
View file

@ -0,0 +1,135 @@
{
"_from": "@videojs/vhs-utils@^3.0.2",
"_id": "@videojs/vhs-utils@3.0.2",
"_inBundle": false,
"_integrity": "sha512-r8Yas1/tNGsGRNoIaDJuiWiQgM0P2yaEnobgzw5JcBiEqxnS8EXoUm4QtKH7nJtnppZ1yqBx1agBZCvBMKXA2w==",
"_location": "/@videojs/vhs-utils",
"_phantomChildren": {},
"_requested": {
"type": "range",
"registry": true,
"raw": "@videojs/vhs-utils@^3.0.2",
"name": "@videojs/vhs-utils",
"escapedName": "@videojs%2fvhs-utils",
"scope": "@videojs",
"rawSpec": "^3.0.2",
"saveSpec": null,
"fetchSpec": "^3.0.2"
},
"_requiredBy": [
"/@videojs/http-streaming",
"/aes-decrypter",
"/m3u8-parser",
"/mpd-parser",
"/video.js"
],
"_resolved": "https://registry.npmjs.org/@videojs/vhs-utils/-/vhs-utils-3.0.2.tgz",
"_shasum": "0203418ecaaff29bc33c69b6ad707787347b7614",
"_spec": "@videojs/vhs-utils@^3.0.2",
"_where": "F:\\Documents\\websites\\BMM\\node_modules\\video.js",
"author": {
"name": "brandonocasey",
"email": "brandonocasey@gmail.com"
},
"browser": "./dist/vhs-utils.js",
"browserslist": [
"defaults",
"ie 11"
],
"bugs": {
"url": "https://github.com/videojs/vhs-utils/issues"
},
"bundleDependencies": false,
"dependencies": {
"@babel/runtime": "^7.12.5",
"global": "^4.4.0",
"url-toolkit": "^2.2.1"
},
"deprecated": false,
"description": "Objects and functions shared throughtout @videojs/http-streaming code",
"devDependencies": {
"@babel/cli": "^7.12.8",
"@videojs/babel-config": "^0.2.0",
"@videojs/generator-helpers": "~2.0.1",
"karma": "^5.2.3",
"rollup": "^2.28.2",
"videojs-generate-karma-config": "~7.0.0",
"videojs-generate-rollup-config": "~6.0.0",
"videojs-generator-verify": "~3.0.3",
"videojs-standard": "^8.0.4"
},
"engines": {
"node": ">=8",
"npm": ">=5"
},
"files": [
"CONTRIBUTING.md",
"es/",
"cjs/",
"dist/",
"docs/",
"index.html",
"scripts/",
"src/",
"test/"
],
"generator-videojs-plugin": {
"version": "7.7.1"
},
"homepage": "https://github.com/videojs/vhs-utils#readme",
"husky": {
"hooks": {
"pre-commit": "lint-staged"
}
},
"keywords": [
"videojs",
"videojs-plugin"
],
"license": "MIT",
"lint-staged": {
"*.js": "vjsstandard --fix",
"README.md": "doctoc --notitle"
},
"main": "./cjs/index.js",
"module": "./es/index.js",
"name": "@videojs/vhs-utils",
"repository": {
"type": "git",
"url": "git+ssh://git@github.com/videojs/vhs-utils.git"
},
"scripts": {
"build": "npm-run-all -s clean -p build:*",
"build-prod": "cross-env-shell NO_TEST_BUNDLE=1 'npm run build'",
"build-test": "cross-env-shell TEST_BUNDLE_ONLY=1 'npm run build'",
"build:cjs": "babel-config-cjs -d ./cjs ./src",
"build:es": "babel-config-es -d ./es ./src",
"build:js": "rollup -c scripts/rollup.config.js",
"clean": "shx rm -rf ./dist ./test/dist ./cjs ./es && shx mkdir -p ./test/dist ./cjs ./es",
"lint": "vjsstandard",
"posttest": "shx cat test/dist/coverage/text.txt",
"prepublishOnly": "npm-run-all build-prod && vjsverify --verbose",
"preversion": "npm test",
"server": "karma start scripts/karma.conf.js --singleRun=false --auto-watch",
"start": "npm-run-all -p server watch",
"test": "npm-run-all lint build-test && npm-run-all test:*",
"test:browser": "karma start scripts/karma.conf.js",
"test:node": "qunit test/dist/bundle.js",
"update-changelog": "conventional-changelog -p videojs -i CHANGELOG.md -s",
"version": "is-prerelease || npm run update-changelog && git add CHANGELOG.md",
"watch": "npm-run-all -p watch:*",
"watch:cjs": "npm run build:cjs -- -w",
"watch:es": "npm run build:es -- -w",
"watch:js": "npm run build:js -- -w"
},
"version": "3.0.2",
"vjsstandard": {
"ignore": [
"dist",
"cjs",
"es",
"docs",
"test/dist"
]
}
}

96
node_modules/@videojs/vhs-utils/scripts/create-test-data.js generated vendored Normal file
View file

@ -0,0 +1,96 @@
const fs = require('fs');
const path = require('path');
const baseDir = path.join(__dirname, '..');
const formatDir = path.join(baseDir, 'test', 'fixtures', 'formats');
const parsingDir = path.join(baseDir, 'test', 'fixtures', 'parsing');
const getFiles = (dir) => (fs.readdirSync(dir) || []).reduce((acc, d) => {
d = path.resolve(dir, d);
const stat = fs.statSync(d);
if (!stat.isDirectory()) {
return acc;
}
const subfiles = fs.readdirSync(d).map((f) => path.resolve(d, f));
return acc.concat(subfiles);
}, []);
const buildDataString = function(files, id) {
const data = {};
files.forEach((file) => {
// read the file directly as a buffer before converting to base64
const base64 = fs.readFileSync(file).toString('base64');
data[path.basename(file)] = base64;
});
const dataExportStrings = Object.keys(data).reduce((acc, key) => {
// use a function since the segment may be cleared out on usage
acc.push(`${id}Files['${key}'] = () => {
cache['${key}'] = cache['${key}'] || base64ToUint8Array('${data[key]}');
const dest = new Uint8Array(cache['${key}'].byteLength);
dest.set(cache['${key}']);
return dest;
};`);
return acc;
}, []);
const file =
'/* istanbul ignore file */\n' +
'\n' +
`import base64ToUint8Array from "${path.resolve(baseDir, 'src/decode-b64-to-uint8-array.js')}";\n` +
'const cache = {};\n' +
`const ${id}Files = {};\n` +
dataExportStrings.join('\n') +
`export default ${id}Files`;
return file;
};
/* we refer to them as .js, so that babel and other plugins can work on them */
const formatsKey = 'create-test-data!formats.js';
const parsingKey = 'create-test-data!parsing.js';
module.exports = function() {
return {
name: 'createTestData',
buildStart() {
this.addWatchFile(formatDir);
this.addWatchFile(parsingDir);
getFiles(formatDir).forEach((file) => this.addWatchFile(file));
getFiles(parsingDir).forEach((file) => this.addWatchFile(file));
},
resolveId(importee, importer) {
// if this is not an id we can resolve return
if (importee.indexOf('create-test-data!') !== 0) {
return;
}
const name = importee.split('!')[1];
if (name.indexOf('formats') !== -1) {
return formatsKey;
}
if (name.indexOf('parsing') !== -1) {
return parsingKey;
}
return null;
},
load(id) {
if (id === formatsKey) {
return buildDataString.call(this, getFiles(formatDir), 'format');
}
if (id === parsingKey) {
return buildDataString.call(this, getFiles(parsingDir), 'parsing');
}
}
};
};

16
node_modules/@videojs/vhs-utils/scripts/karma.conf.js generated vendored Normal file
View file

@ -0,0 +1,16 @@
const generate = require('videojs-generate-karma-config');
module.exports = function(config) {
// see https://github.com/videojs/videojs-generate-karma-config
// for options
const options = {
serverBrowsers() {
return [];
}
};
config = generate(config, options);
// any other custom stuff not supported by options here!
};

28
node_modules/@videojs/vhs-utils/scripts/rollup.config.js generated vendored Normal file
View file

@ -0,0 +1,28 @@
const createTestData = require('./create-test-data.js');
const generate = require('videojs-generate-rollup-config');
// see https://github.com/videojs/videojs-generate-rollup-config
// for options
const options = {
input: 'src/index.js',
exportName: 'vhsUtils',
distName: 'vhs-utils',
primedPlugins(defaults) {
return Object.assign(defaults, {
createTestData: createTestData()
});
},
plugins(defaults) {
defaults.test.splice(0, 0, 'createTestData');
return defaults;
}
};
const config = generate(options);
if (config.builds.module) {
delete config.builds.module;
}
// Add additional builds/customization here!
// export the builds to rollup
export default Object.values(config.builds);

272
node_modules/@videojs/vhs-utils/src/byte-helpers.js generated vendored Normal file
View file

@ -0,0 +1,272 @@
import window from 'global/window';
// const log2 = Math.log2 ? Math.log2 : (x) => (Math.log(x) / Math.log(2));
const repeat = function(str, len) {
let acc = '';
while (len--) {
acc += str;
}
return acc;
};
// count the number of bits it would take to represent a number
// we used to do this with log2 but BigInt does not support builtin math
// Math.ceil(log2(x));
export const countBits = (x) => x.toString(2).length;
// count the number of whole bytes it would take to represent a number
export const countBytes = (x) => Math.ceil(countBits(x) / 8);
export const padStart = (b, len, str = ' ') => (repeat(str, len) + b.toString()).slice(-len);
export const isTypedArray = (obj) => ArrayBuffer.isView(obj);
export const toUint8 = function(bytes) {
if (bytes instanceof Uint8Array) {
return bytes;
}
if (!Array.isArray(bytes) && !isTypedArray(bytes) && !(bytes instanceof ArrayBuffer)) {
// any non-number or NaN leads to empty uint8array
// eslint-disable-next-line
if (typeof bytes !== 'number' || (typeof bytes === 'number' && bytes !== bytes)) {
bytes = 0;
} else {
bytes = [bytes];
}
}
return new Uint8Array(
bytes && bytes.buffer || bytes,
bytes && bytes.byteOffset || 0,
bytes && bytes.byteLength || 0
);
};
export const toHexString = function(bytes) {
bytes = toUint8(bytes);
let str = '';
for (let i = 0; i < bytes.length; i++) {
str += padStart(bytes[i].toString(16), 2, '0');
}
return str;
};
export const toBinaryString = function(bytes) {
bytes = toUint8(bytes);
let str = '';
for (let i = 0; i < bytes.length; i++) {
str += padStart(bytes[i].toString(2), 8, '0');
}
return str;
};
const BigInt = window.BigInt || Number;
const BYTE_TABLE = [
BigInt('0x1'),
BigInt('0x100'),
BigInt('0x10000'),
BigInt('0x1000000'),
BigInt('0x100000000'),
BigInt('0x10000000000'),
BigInt('0x1000000000000'),
BigInt('0x100000000000000'),
BigInt('0x10000000000000000')
];
export const ENDIANNESS = (function() {
const a = new Uint16Array([0xFFCC]);
const b = new Uint8Array(a.buffer, a.byteOffset, a.byteLength);
if (b[0] === 0xFF) {
return 'big';
}
if (b[0] === 0xCC) {
return 'little';
}
return 'unknown';
})();
export const IS_BIG_ENDIAN = ENDIANNESS === 'big';
export const IS_LITTLE_ENDIAN = ENDIANNESS === 'little';
export const bytesToNumber = function(bytes, {signed = false, le = false} = {}) {
bytes = toUint8(bytes);
const fn = le ? 'reduce' : 'reduceRight';
const obj = bytes[fn] ? bytes[fn] : Array.prototype[fn];
let number = obj.call(bytes, function(total, byte, i) {
const exponent = le ? i : Math.abs(i + 1 - bytes.length);
return total + (BigInt(byte) * BYTE_TABLE[exponent]);
}, BigInt(0));
if (signed) {
const max = BYTE_TABLE[bytes.length] / BigInt(2) - BigInt(1);
number = BigInt(number);
if (number > max) {
number -= max;
number -= max;
number -= BigInt(2);
}
}
return Number(number);
};
export const numberToBytes = function(number, {le = false} = {}) {
// eslint-disable-next-line
if ((typeof number !== 'bigint' && typeof number !== 'number') || (typeof number === 'number' && number !== number)) {
number = 0;
}
number = BigInt(number);
const byteCount = countBytes(number);
const bytes = new Uint8Array(new ArrayBuffer(byteCount));
for (let i = 0; i < byteCount; i++) {
const byteIndex = le ? i : Math.abs(i + 1 - bytes.length);
bytes[byteIndex] = Number((number / BYTE_TABLE[i]) & BigInt(0xFF));
if (number < 0) {
bytes[byteIndex] = Math.abs(~bytes[byteIndex]);
bytes[byteIndex] -= i === 0 ? 1 : 2;
}
}
return bytes;
};
export const bytesToString = (bytes) => {
if (!bytes) {
return '';
}
// TODO: should toUint8 handle cases where we only have 8 bytes
// but report more since this is a Uint16+ Array?
bytes = Array.prototype.slice.call(bytes);
const string = String.fromCharCode.apply(null, toUint8(bytes));
try {
return decodeURIComponent(escape(string));
} catch (e) {
// if decodeURIComponent/escape fails, we are dealing with partial
// or full non string data. Just return the potentially garbled string.
}
return string;
};
export const stringToBytes = (string, stringIsBytes) => {
if (typeof string !== 'string' && string && typeof string.toString === 'function') {
string = string.toString();
}
if (typeof string !== 'string') {
return new Uint8Array();
}
// If the string already is bytes, we don't have to do this
// otherwise we do this so that we split multi length characters
// into individual bytes
if (!stringIsBytes) {
string = unescape(encodeURIComponent(string));
}
const view = new Uint8Array(string.length);
for (let i = 0; i < string.length; i++) {
view[i] = string.charCodeAt(i);
}
return view;
};
export const concatTypedArrays = (...buffers) => {
buffers = buffers.filter((b) => b && (b.byteLength || b.length) && typeof b !== 'string');
if (buffers.length <= 1) {
// for 0 length we will return empty uint8
// for 1 length we return the first uint8
return toUint8(buffers[0]);
}
const totalLen = buffers.reduce((total, buf, i) => total + (buf.byteLength || buf.length), 0);
const tempBuffer = new Uint8Array(totalLen);
let offset = 0;
buffers.forEach(function(buf) {
buf = toUint8(buf);
tempBuffer.set(buf, offset);
offset += buf.byteLength;
});
return tempBuffer;
};
/**
* Check if the bytes "b" are contained within bytes "a".
*
* @param {Uint8Array|Array} a
* Bytes to check in
*
* @param {Uint8Array|Array} b
* Bytes to check for
*
* @param {Object} options
* options
*
 * @param {number} [offset=0]
* offset to use when looking at bytes in a
*
* @param {Array|Uint8Array} [mask=[]]
* mask to use on bytes before comparison.
*
* @return {boolean}
* If all bytes in b are inside of a, taking into account
* bit masks.
*/
export const bytesMatch = (a, b, {offset = 0, mask = []} = {}) => {
a = toUint8(a);
b = toUint8(b);
// ie 11 does not support uint8 every
const fn = b.every ? b.every : Array.prototype.every;
return b.length &&
a.length - offset >= b.length &&
// ie 11 doesn't support every on Uint8Array
fn.call(b, (bByte, i) => {
const aByte = (mask[i] ? (mask[i] & a[offset + i]) : a[offset + i]);
return bByte === aByte;
});
};
export const sliceBytes = function(src, start, end) {
if (Uint8Array.prototype.slice) {
return Uint8Array.prototype.slice.call(src, start, end);
}
return new Uint8Array(Array.prototype.slice.call(src, start, end));
};
export const reverseBytes = function(src) {
if (src.reverse) {
return src.reverse();
}
return Array.prototype.reverse.call(src);
};
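A few usage sketches for the helpers above; the values are arbitrary and the es/ import path assumes the transpiled build listed in package.json.

import {
  bytesToNumber,
  numberToBytes,
  toHexString,
  bytesMatch
} from '@videojs/vhs-utils/es/byte-helpers.js';

bytesToNumber([0x01, 0x02]);               // 258 (big endian by default)
bytesToNumber([0x01, 0x02], { le: true }); // 513
numberToBytes(258);                        // Uint8Array [0x01, 0x02]
toHexString(numberToBytes(258));           // '0102'

// check for 'ftyp' at byte offset 4, the same way containers.js does below
const start = new Uint8Array([0x00, 0x00, 0x00, 0x18, 0x66, 0x74, 0x79, 0x70]);
bytesMatch(start, [0x66, 0x74, 0x79, 0x70], { offset: 4 }); // true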

106
node_modules/@videojs/vhs-utils/src/codec-helpers.js generated vendored Normal file
View file

@ -0,0 +1,106 @@
import {padStart, toHexString, toBinaryString} from './byte-helpers.js';
// https://aomediacodec.github.io/av1-isobmff/#av1codecconfigurationbox-syntax
// https://developer.mozilla.org/en-US/docs/Web/Media/Formats/codecs_parameter#AV1
export const getAv1Codec = function(bytes) {
let codec = '';
const profile = bytes[1] >>> 3;
const level = bytes[1] & 0x1F;
const tier = bytes[2] >>> 7;
const highBitDepth = (bytes[2] & 0x40) >> 6;
const twelveBit = (bytes[2] & 0x20) >> 5;
const monochrome = (bytes[2] & 0x10) >> 4;
const chromaSubsamplingX = (bytes[2] & 0x08) >> 3;
const chromaSubsamplingY = (bytes[2] & 0x04) >> 2;
const chromaSamplePosition = bytes[2] & 0x03;
codec += `${profile}.${padStart(level, 2, '0')}`;
if (tier === 0) {
codec += 'M';
} else if (tier === 1) {
codec += 'H';
}
let bitDepth;
if (profile === 2 && highBitDepth) {
bitDepth = twelveBit ? 12 : 10;
} else {
bitDepth = highBitDepth ? 10 : 8;
}
codec += `.${padStart(bitDepth, 2, '0')}`;
// TODO: can we parse color range??
codec += `.${monochrome}`;
codec += `.${chromaSubsamplingX}${chromaSubsamplingY}${chromaSamplePosition}`;
return codec;
};
export const getAvcCodec = function(bytes) {
const profileId = toHexString(bytes[1]);
const constraintFlags = toHexString(bytes[2] & 0xFC);
const levelId = toHexString(bytes[3]);
return `${profileId}${constraintFlags}${levelId}`;
};
export const getHvcCodec = function(bytes) {
let codec = '';
const profileSpace = bytes[1] >> 6;
const profileId = bytes[1] & 0x1F;
const tierFlag = (bytes[1] & 0x20) >> 5;
const profileCompat = bytes.subarray(2, 6);
const constraintIds = bytes.subarray(6, 12);
const levelId = bytes[12];
if (profileSpace === 1) {
codec += 'A';
} else if (profileSpace === 2) {
codec += 'B';
} else if (profileSpace === 3) {
codec += 'C';
}
codec += `${profileId}.`;
// ffmpeg does this in big endian
let profileCompatVal = parseInt(toBinaryString(profileCompat).split('').reverse().join(''), 2);
// apple does this in little endian...
if (profileCompatVal > 255) {
profileCompatVal = parseInt(toBinaryString(profileCompat), 2);
}
codec += `${profileCompatVal.toString(16)}.`;
if (tierFlag === 0) {
codec += 'L';
} else {
codec += 'H';
}
codec += levelId;
let constraints = '';
for (let i = 0; i < constraintIds.length; i++) {
const v = constraintIds[i];
if (v) {
if (constraints) {
constraints += '.';
}
constraints += v.toString(16);
}
}
if (constraints) {
codec += `.${constraints}`;
}
return codec;
};
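For example, the first four bytes of an avcC configuration record map directly onto the RFC 6381 string; the bytes below describe a made-up High profile, level 4.0 stream.

import { getAvcCodec } from '@videojs/vhs-utils/es/codec-helpers.js';

// configurationVersion, AVCProfileIndication, constraint flags, AVCLevelIndication
getAvcCodec(new Uint8Array([0x01, 0x64, 0x00, 0x28])); // '640028', i.e. 'avc1.640028'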

225
node_modules/@videojs/vhs-utils/src/codecs.js generated vendored Normal file
View file

@ -0,0 +1,225 @@
import window from 'global/window';
const regexs = {
// to determine mime types
mp4: /^(av0?1|avc0?[1234]|vp0?9|flac|opus|mp3|mp4a|mp4v|stpp.ttml.im1t)/,
webm: /^(vp0?[89]|av0?1|opus|vorbis)/,
ogg: /^(vp0?[89]|theora|flac|opus|vorbis)/,
// to determine if a codec is audio or video
video: /^(av0?1|avc0?[1234]|vp0?[89]|hvc1|hev1|theora|mp4v)/,
audio: /^(mp4a|flac|vorbis|opus|ac-[34]|ec-3|alac|mp3|speex|aac)/,
text: /^(stpp.ttml.im1t)/,
// mux.js support regex
muxerVideo: /^(avc0?1)/,
muxerAudio: /^(mp4a)/,
// match nothing as muxer does not support text right now.
// there can never be a character before the start of a string
// so this matches nothing.
muxerText: /a^/
};
const mediaTypes = ['video', 'audio', 'text'];
const upperMediaTypes = ['Video', 'Audio', 'Text'];
/**
* Replace the old apple-style `avc1.<dd>.<dd>` codec string with the standard
* `avc1.<hhhhhh>`
*
* @param {string} codec
* Codec string to translate
* @return {string}
* The translated codec string
*/
export const translateLegacyCodec = function(codec) {
if (!codec) {
return codec;
}
return codec.replace(/avc1\.(\d+)\.(\d+)/i, function(orig, profile, avcLevel) {
const profileHex = ('00' + Number(profile).toString(16)).slice(-2);
const avcLevelHex = ('00' + Number(avcLevel).toString(16)).slice(-2);
return 'avc1.' + profileHex + '00' + avcLevelHex;
});
};
/**
* Replace the old apple-style `avc1.<dd>.<dd>` codec strings with the standard
* `avc1.<hhhhhh>`
*
* @param {string[]} codecs
* An array of codec strings to translate
* @return {string[]}
* The translated array of codec strings
*/
export const translateLegacyCodecs = function(codecs) {
return codecs.map(translateLegacyCodec);
};
/**
* Replace codecs in the codec string with the old apple-style `avc1.<dd>.<dd>` to the
* standard `avc1.<hhhhhh>`.
*
* @param {string} codecString
* The codec string
* @return {string}
* The codec string with old apple-style codecs replaced
*
* @private
*/
export const mapLegacyAvcCodecs = function(codecString) {
return codecString.replace(/avc1\.(\d+)\.(\d+)/i, (match) => {
return translateLegacyCodecs([match])[0];
});
};
/**
* @typedef {Object} ParsedCodecInfo
* @property {number} codecCount
* Number of codecs parsed
* @property {string} [videoCodec]
* Parsed video codec (if found)
* @property {string} [videoObjectTypeIndicator]
* Video object type indicator (if found)
* @property {string|null} audioProfile
* Audio profile
*/
/**
* Parses a codec string to retrieve the number of codecs specified, the video codec and
* object type indicator, and the audio profile.
*
* @param {string} [codecString]
* The codec string to parse
* @return {ParsedCodecInfo}
* Parsed codec info
*/
export const parseCodecs = function(codecString = '') {
const codecs = codecString.split(',');
const result = [];
codecs.forEach(function(codec) {
codec = codec.trim();
let codecType;
mediaTypes.forEach(function(name) {
const match = regexs[name].exec(codec.toLowerCase());
if (!match || match.length <= 1) {
return;
}
codecType = name;
// maintain codec case
const type = codec.substring(0, match[1].length);
const details = codec.replace(type, '');
result.push({type, details, mediaType: name});
});
if (!codecType) {
result.push({type: codec, details: '', mediaType: 'unknown'});
}
});
return result;
};
/**
* Returns a ParsedCodecInfo object for the default alternate audio playlist if there is
* a default alternate audio playlist for the provided audio group.
*
* @param {Object} master
* The master playlist
* @param {string} audioGroupId
* ID of the audio group for which to find the default codec info
* @return {ParsedCodecInfo}
* Parsed codec info
*/
export const codecsFromDefault = (master, audioGroupId) => {
if (!master.mediaGroups.AUDIO || !audioGroupId) {
return null;
}
const audioGroup = master.mediaGroups.AUDIO[audioGroupId];
if (!audioGroup) {
return null;
}
for (const name in audioGroup) {
const audioType = audioGroup[name];
if (audioType.default && audioType.playlists) {
// codec should be the same for all playlists within the audio type
return parseCodecs(audioType.playlists[0].attributes.CODECS);
}
}
return null;
};
export const isVideoCodec = (codec = '') => regexs.video.test(codec.trim().toLowerCase());
export const isAudioCodec = (codec = '') => regexs.audio.test(codec.trim().toLowerCase());
export const isTextCodec = (codec = '') => regexs.text.test(codec.trim().toLowerCase());
export const getMimeForCodec = (codecString) => {
if (!codecString || typeof codecString !== 'string') {
return;
}
const codecs = codecString
.toLowerCase()
.split(',')
.map((c) => translateLegacyCodec(c.trim()));
// default to video type
let type = 'video';
// only change to audio type if the only codec we have is
// audio
if (codecs.length === 1 && isAudioCodec(codecs[0])) {
type = 'audio';
} else if (codecs.length === 1 && isTextCodec(codecs[0])) {
// text uses application/<container> for now
type = 'application';
}
// default the container to mp4
let container = 'mp4';
// every codec must be able to go into the container
// for that container to be the correct one
if (codecs.every((c) => regexs.mp4.test(c))) {
container = 'mp4';
} else if (codecs.every((c) => regexs.webm.test(c))) {
container = 'webm';
} else if (codecs.every((c) => regexs.ogg.test(c))) {
container = 'ogg';
}
return `${type}/${container};codecs="${codecString}"`;
};
export const browserSupportsCodec = (codecString = '') => window.MediaSource &&
window.MediaSource.isTypeSupported &&
window.MediaSource.isTypeSupported(getMimeForCodec(codecString)) || false;
export const muxerSupportsCodec = (codecString = '') => codecString.toLowerCase().split(',').every((codec) => {
codec = codec.trim();
// any match is supported.
for (let i = 0; i < upperMediaTypes.length; i++) {
const type = upperMediaTypes[i];
if (regexs[`muxer${type}`].test(codec)) {
return true;
}
}
return false;
});
export const DEFAULT_AUDIO_CODEC = 'mp4a.40.2';
export const DEFAULT_VIDEO_CODEC = 'avc1.4d400d';
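Usage sketches for the parsing and MIME helpers above; the codec strings are illustrative.

import { parseCodecs, getMimeForCodec, muxerSupportsCodec } from '@videojs/vhs-utils/es/codecs.js';

parseCodecs('avc1.640028, mp4a.40.2');
// [
//   { type: 'avc1', details: '.640028', mediaType: 'video' },
//   { type: 'mp4a', details: '.40.2', mediaType: 'audio' }
// ]

getMimeForCodec('avc1.640028,mp4a.40.2'); // 'video/mp4;codecs="avc1.640028,mp4a.40.2"'
getMimeForCodec('opus');                  // 'audio/mp4;codecs="opus"'

muxerSupportsCodec('avc1.640028,mp4a.40.2'); // true
muxerSupportsCodec('opus');                  // false, the muxer regexes only cover avc1/mp4a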

172
node_modules/@videojs/vhs-utils/src/containers.js generated vendored Normal file
View file

@ -0,0 +1,172 @@
import {toUint8, bytesMatch} from './byte-helpers.js';
import {findBox} from './mp4-helpers.js';
import {findEbml, EBML_TAGS} from './ebml-helpers.js';
import {getId3Offset} from './id3-helpers.js';
import {findH264Nal, findH265Nal} from './nal-helpers.js';
const CONSTANTS = {
// "webm" string literal in hex
'webm': toUint8([0x77, 0x65, 0x62, 0x6d]),
// "matroska" string literal in hex
'matroska': toUint8([0x6d, 0x61, 0x74, 0x72, 0x6f, 0x73, 0x6b, 0x61]),
// "fLaC" string literal in hex
'flac': toUint8([0x66, 0x4c, 0x61, 0x43]),
// "OggS" string literal in hex
'ogg': toUint8([0x4f, 0x67, 0x67, 0x53]),
// ac-3 sync byte, also works for ec-3 as that is simply a codec
// of ac-3
'ac3': toUint8([0x0b, 0x77]),
// "RIFF" string literal in hex used for wav and avi
'riff': toUint8([0x52, 0x49, 0x46, 0x46]),
// "AVI" string literal in hex
'avi': toUint8([0x41, 0x56, 0x49]),
// "WAVE" string literal in hex
'wav': toUint8([0x57, 0x41, 0x56, 0x45]),
// "ftyp3g" string literal in hex
'3gp': toUint8([0x66, 0x74, 0x79, 0x70, 0x33, 0x67]),
// "ftyp" string literal in hex
'mp4': toUint8([0x66, 0x74, 0x79, 0x70]),
// "styp" string literal in hex
'fmp4': toUint8([0x73, 0x74, 0x79, 0x70]),
// "ftyp" string literal in hex
'mov': toUint8([0x66, 0x74, 0x79, 0x70, 0x71, 0x74])
};
const _isLikely = {
aac(bytes) {
const offset = getId3Offset(bytes);
return bytesMatch(bytes, [0xFF, 0x10], {offset, mask: [0xFF, 0x16]});
},
mp3(bytes) {
const offset = getId3Offset(bytes);
return bytesMatch(bytes, [0xFF, 0x02], {offset, mask: [0xFF, 0x06]});
},
webm(bytes) {
const docType = findEbml(bytes, [EBML_TAGS.EBML, EBML_TAGS.DocType])[0];
// check if DocType EBML tag is webm
return bytesMatch(docType, CONSTANTS.webm);
},
mkv(bytes) {
const docType = findEbml(bytes, [EBML_TAGS.EBML, EBML_TAGS.DocType])[0];
// check if DocType EBML tag is matroska
return bytesMatch(docType, CONSTANTS.matroska);
},
mp4(bytes) {
return !_isLikely['3gp'](bytes) && !_isLikely.mov(bytes) &&
(bytesMatch(bytes, CONSTANTS.mp4, {offset: 4}) ||
bytesMatch(bytes, CONSTANTS.fmp4, {offset: 4}));
},
mov(bytes) {
return bytesMatch(bytes, CONSTANTS.mov, {offset: 4});
},
'3gp'(bytes) {
return bytesMatch(bytes, CONSTANTS['3gp'], {offset: 4});
},
ac3(bytes) {
const offset = getId3Offset(bytes);
return bytesMatch(bytes, CONSTANTS.ac3, {offset});
},
ts(bytes) {
if (bytes.length < 189 && bytes.length >= 1) {
return bytes[0] === 0x47;
}
let i = 0;
// check the first 376 bytes for two matching sync bytes
while (i + 188 < bytes.length && i < 188) {
if (bytes[i] === 0x47 && bytes[i + 188] === 0x47) {
return true;
}
i += 1;
}
return false;
},
flac(bytes) {
const offset = getId3Offset(bytes);
return bytesMatch(bytes, CONSTANTS.flac, {offset});
},
ogg(bytes) {
return bytesMatch(bytes, CONSTANTS.ogg);
},
avi(bytes) {
return bytesMatch(bytes, CONSTANTS.riff) &&
bytesMatch(bytes, CONSTANTS.avi, {offset: 8});
},
wav(bytes) {
return bytesMatch(bytes, CONSTANTS.riff) &&
bytesMatch(bytes, CONSTANTS.wav, {offset: 8});
},
'h264'(bytes) {
// find seq_parameter_set_rbsp
return findH264Nal(bytes, 7, 3).length;
},
'h265'(bytes) {
// find video_parameter_set_rbsp or seq_parameter_set_rbsp
return findH265Nal(bytes, [32, 33], 3).length;
}
};
// get all the isLikely functions
// but make sure 'ts' is above h264 and h265
// but below everything else as it is the least specific
const isLikelyTypes = Object.keys(_isLikely)
// remove ts, h264, h265
.filter((t) => t !== 'ts' && t !== 'h264' && t !== 'h265')
// add it back to the bottom
.concat(['ts', 'h264', 'h265']);
// make sure we are dealing with uint8 data.
isLikelyTypes.forEach(function(type) {
const isLikelyFn = _isLikely[type];
_isLikely[type] = (bytes) => isLikelyFn(toUint8(bytes));
});
// export after wrapping
export const isLikely = _isLikely;
// A useful list of file signatures can be found here
// https://en.wikipedia.org/wiki/List_of_file_signatures
export const detectContainerForBytes = (bytes) => {
bytes = toUint8(bytes);
for (let i = 0; i < isLikelyTypes.length; i++) {
const type = isLikelyTypes[i];
if (isLikely[type](bytes)) {
return type;
}
}
return '';
};
// fmp4 is not a container
export const isLikelyFmp4MediaSegment = (bytes) => {
return findBox(bytes, ['moof']).length > 0;
};
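A usage sketch; the buffer is the start of a hypothetical mp4 with an 'ftyp' box at offset 4.

import { detectContainerForBytes, isLikelyFmp4MediaSegment } from '@videojs/vhs-utils/es/containers.js';

const header = new Uint8Array([
  0x00, 0x00, 0x00, 0x18, // box size
  0x66, 0x74, 0x79, 0x70, // 'ftyp'
  0x69, 0x73, 0x6f, 0x6d  // major brand 'isom'
]);

detectContainerForBytes(header);  // 'mp4'
isLikelyFmp4MediaSegment(header); // false, a media segment would start with a 'moof' box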

13
node_modules/@videojs/vhs-utils/src/decode-b64-to-uint8-array.js generated vendored Normal file
View file

@ -0,0 +1,13 @@
import window from 'global/window';
const atob = (s) => window.atob ? window.atob(s) : Buffer.from(s, 'base64').toString('binary');
export default function decodeB64ToUint8Array(b64Text) {
const decodedString = atob(b64Text);
const array = new Uint8Array(decodedString.length);
for (let i = 0; i < decodedString.length; i++) {
array[i] = decodedString.charCodeAt(i);
}
return array;
}
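A quick usage sketch, assuming the es/ build path:

import decodeB64ToUint8Array from '@videojs/vhs-utils/es/decode-b64-to-uint8-array.js';

decodeB64ToUint8Array('AQID'); // Uint8Array [1, 2, 3]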

503
node_modules/@videojs/vhs-utils/src/ebml-helpers.js generated vendored Normal file
View file

@ -0,0 +1,503 @@
import {
toUint8,
bytesToNumber,
bytesMatch,
bytesToString,
numberToBytes,
padStart
} from './byte-helpers';
import {getAvcCodec, getHvcCodec, getAv1Codec} from './codec-helpers.js';
// relevant specs for this parser:
// https://matroska-org.github.io/libebml/specs.html
// https://www.matroska.org/technical/elements.html
// https://www.webmproject.org/docs/container/
export const EBML_TAGS = {
EBML: toUint8([0x1A, 0x45, 0xDF, 0xA3]),
DocType: toUint8([0x42, 0x82]),
Segment: toUint8([0x18, 0x53, 0x80, 0x67]),
SegmentInfo: toUint8([0x15, 0x49, 0xA9, 0x66]),
Tracks: toUint8([0x16, 0x54, 0xAE, 0x6B]),
Track: toUint8([0xAE]),
TrackNumber: toUint8([0xd7]),
DefaultDuration: toUint8([0x23, 0xe3, 0x83]),
TrackEntry: toUint8([0xAE]),
TrackType: toUint8([0x83]),
FlagDefault: toUint8([0x88]),
CodecID: toUint8([0x86]),
CodecPrivate: toUint8([0x63, 0xA2]),
VideoTrack: toUint8([0xe0]),
AudioTrack: toUint8([0xe1]),
// Not used yet, but will be used for live webm/mkv
// see https://www.matroska.org/technical/basics.html#block-structure
// see https://www.matroska.org/technical/basics.html#simpleblock-structure
Cluster: toUint8([0x1F, 0x43, 0xB6, 0x75]),
Timestamp: toUint8([0xE7]),
TimestampScale: toUint8([0x2A, 0xD7, 0xB1]),
BlockGroup: toUint8([0xA0]),
BlockDuration: toUint8([0x9B]),
Block: toUint8([0xA1]),
SimpleBlock: toUint8([0xA3])
};
/**
* This is a simple table to determine the length
* of things in ebml. The length is one based (starts at 1,
* rather than zero) and for every zero bit before a one bit
* we add one to length. We also need this table because in some
 * cases we have to xor all the length bits from another value.
*/
const LENGTH_TABLE = [
0b10000000,
0b01000000,
0b00100000,
0b00010000,
0b00001000,
0b00000100,
0b00000010,
0b00000001
];
const getLength = function(byte) {
let len = 1;
for (let i = 0; i < LENGTH_TABLE.length; i++) {
if (byte & LENGTH_TABLE[i]) {
break;
}
len++;
}
return len;
};
// length in ebml is stored in the first 4 to 8 bits
// of the first byte. 4 for the id length and 8 for the
// data size length. Length is measured by converting the number to binary
// then 1 + the number of zeros before a 1 is encountered starting
// from the left.
const getvint = function(bytes, offset, removeLength = true, signed = false) {
const length = getLength(bytes[offset]);
let valueBytes = bytes.subarray(offset, offset + length);
// NOTE that we do **not** subarray here because we need to copy these bytes
// as they will be modified below to remove the dataSizeLen bits and we do not
// want to modify the original data. normally we could just call slice on
// uint8array but ie 11 does not support that...
if (removeLength) {
valueBytes = Array.prototype.slice.call(bytes, offset, offset + length);
valueBytes[0] ^= LENGTH_TABLE[length - 1];
}
return {
length,
value: bytesToNumber(valueBytes, {signed}),
bytes: valueBytes
};
};
const normalizePath = function(path) {
if (typeof path === 'string') {
return path.match(/.{1,2}/g).map((p) => normalizePath(p));
}
if (typeof path === 'number') {
return numberToBytes(path);
}
return path;
};
const normalizePaths = function(paths) {
if (!Array.isArray(paths)) {
return [normalizePath(paths)];
}
return paths.map((p) => normalizePath(p));
};
const getInfinityDataSize = (id, bytes, offset) => {
if (offset >= bytes.length) {
return bytes.length;
}
const innerid = getvint(bytes, offset, false);
if (bytesMatch(id.bytes, innerid.bytes)) {
return offset;
}
const dataHeader = getvint(bytes, offset + innerid.length);
return getInfinityDataSize(id, bytes, offset + dataHeader.length + dataHeader.value + innerid.length);
};
/**
 * Notes on the EBML format.
 *
 * EBML uses "vint" tags. Every vint tag contains
* two parts
*
* 1. The length from the first byte. You get this by
* converting the byte to binary and counting the zeros
* before a 1. Then you add 1 to that. Examples
* 00011111 = length 4 because there are 3 zeros before a 1.
* 00100000 = length 3 because there are 2 zeros before a 1.
* 00000011 = length 7 because there are 6 zeros before a 1.
*
* 2. The bits used for length are removed from the first byte
* Then all the bytes are merged into a value. NOTE: this
* is not the case for id ebml tags as there id includes
* length bits.
*
*/
export const findEbml = function(bytes, paths) {
paths = normalizePaths(paths);
bytes = toUint8(bytes);
let results = [];
if (!paths.length) {
return results;
}
let i = 0;
while (i < bytes.length) {
const id = getvint(bytes, i, false);
const dataHeader = getvint(bytes, i + id.length);
const dataStart = i + id.length + dataHeader.length;
// dataSize is unknown or this is a live stream
if (dataHeader.value === 0x7f) {
dataHeader.value = getInfinityDataSize(id, bytes, dataStart);
if (dataHeader.value !== bytes.length) {
dataHeader.value -= dataStart;
}
}
const dataEnd = (dataStart + dataHeader.value) > bytes.length ? bytes.length : (dataStart + dataHeader.value);
const data = bytes.subarray(dataStart, dataEnd);
if (bytesMatch(paths[0], id.bytes)) {
if (paths.length === 1) {
// this is the end of the paths and we've found the tag we were
// looking for
results.push(data);
} else {
// recursively search for the next tag inside of the data
// of this one
results = results.concat(findEbml(data, paths.slice(1)));
}
}
const totalLength = id.length + dataHeader.length + data.length;
// move past this tag entirely, we are not looking for it
i += totalLength;
}
return results;
};
// see https://www.matroska.org/technical/basics.html#block-structure
export const decodeBlock = function(block, type, timestampScale, clusterTimestamp) {
let duration;
if (type === 'group') {
duration = findEbml(block, [EBML_TAGS.BlockDuration])[0];
if (duration) {
duration = bytesToNumber(duration);
duration = (((1 / timestampScale) * (duration)) * timestampScale) / 1000;
}
block = findEbml(block, [EBML_TAGS.Block])[0];
type = 'block';
// treat data as a block after this point
}
const dv = new DataView(block.buffer, block.byteOffset, block.byteLength);
const trackNumber = getvint(block, 0);
const timestamp = dv.getInt16(trackNumber.length, false);
const flags = block[trackNumber.length + 2];
const data = block.subarray(trackNumber.length + 3);
// pts/dts in seconds
const ptsdts = (((1 / timestampScale) * (clusterTimestamp + timestamp)) * timestampScale) / 1000;
// return the frame
const parsed = {
duration,
trackNumber: trackNumber.value,
keyframe: type === 'simple' && (flags >> 7) === 1,
invisible: ((flags & 0x08) >> 3) === 1,
lacing: ((flags & 0x06) >> 1),
discardable: type === 'simple' && (flags & 0x01) === 1,
frames: [],
pts: ptsdts,
dts: ptsdts,
timestamp
};
if (!parsed.lacing) {
parsed.frames.push(data);
return parsed;
}
const numberOfFrames = data[0] + 1;
const frameSizes = [];
let offset = 1;
// Fixed
if (parsed.lacing === 2) {
const sizeOfFrame = (data.length - offset) / numberOfFrames;
for (let i = 0; i < numberOfFrames; i++) {
frameSizes.push(sizeOfFrame);
}
}
// xiph
if (parsed.lacing === 1) {
for (let i = 0; i < numberOfFrames - 1; i++) {
let size = 0;
do {
size += data[offset];
offset++;
} while (data[offset - 1] === 0xFF);
frameSizes.push(size);
}
}
// ebml
if (parsed.lacing === 3) {
// first vint is unsigned
// after that vints are signed and
// based on a compounding size
let size = 0;
for (let i = 0; i < numberOfFrames - 1; i++) {
const vint = i === 0 ? getvint(data, offset) : getvint(data, offset, true, true);
size += vint.value;
frameSizes.push(size);
offset += vint.length;
}
}
frameSizes.forEach(function(size) {
parsed.frames.push(data.subarray(offset, offset + size));
offset += size;
});
return parsed;
};
// VP9 Codec Feature Metadata (CodecPrivate)
// https://www.webmproject.org/docs/container/
const parseVp9Private = (bytes) => {
let i = 0;
const params = {};
while (i < bytes.length) {
const id = bytes[i] & 0x7f;
const len = bytes[i + 1];
let val;
if (len === 1) {
val = bytes[i + 2];
} else {
val = bytes.subarray(i + 2, i + 2 + len);
}
if (id === 1) {
params.profile = val;
} else if (id === 2) {
params.level = val;
} else if (id === 3) {
params.bitDepth = val;
} else if (id === 4) {
params.chromaSubsampling = val;
} else {
params[id] = val;
}
i += 2 + len;
}
return params;
};
export const parseTracks = function(bytes) {
bytes = toUint8(bytes);
const decodedTracks = [];
let tracks = findEbml(bytes, [EBML_TAGS.Segment, EBML_TAGS.Tracks, EBML_TAGS.Track]);
if (!tracks.length) {
tracks = findEbml(bytes, [EBML_TAGS.Tracks, EBML_TAGS.Track]);
}
if (!tracks.length) {
tracks = findEbml(bytes, [EBML_TAGS.Track]);
}
if (!tracks.length) {
return decodedTracks;
}
tracks.forEach(function(track) {
let trackType = findEbml(track, EBML_TAGS.TrackType)[0];
if (!trackType || !trackType.length) {
return;
}
// 1 is video, 2 is audio, 17 is subtitle
// other values are unimportant in this context
if (trackType[0] === 1) {
trackType = 'video';
} else if (trackType[0] === 2) {
trackType = 'audio';
} else if (trackType[0] === 17) {
trackType = 'subtitle';
} else {
return;
}
// todo parse language
const decodedTrack = {
rawCodec: bytesToString(findEbml(track, [EBML_TAGS.CodecID])[0]),
type: trackType,
codecPrivate: findEbml(track, [EBML_TAGS.CodecPrivate])[0],
number: bytesToNumber(findEbml(track, [EBML_TAGS.TrackNumber])[0]),
defaultDuration: bytesToNumber(findEbml(track, [EBML_TAGS.DefaultDuration])[0]),
default: findEbml(track, [EBML_TAGS.FlagDefault])[0],
rawData: track
};
let codec = '';
if ((/V_MPEG4\/ISO\/AVC/).test(decodedTrack.rawCodec)) {
codec = `avc1.${getAvcCodec(decodedTrack.codecPrivate)}`;
} else if ((/V_MPEGH\/ISO\/HEVC/).test(decodedTrack.rawCodec)) {
codec = `hev1.${getHvcCodec(decodedTrack.codecPrivate)}`;
} else if ((/V_MPEG4\/ISO\/ASP/).test(decodedTrack.rawCodec)) {
if (decodedTrack.codecPrivate) {
codec = 'mp4v.20.' + decodedTrack.codecPrivate[4].toString();
} else {
codec = 'mp4v.20.9';
}
} else if ((/^V_THEORA/).test(decodedTrack.rawCodec)) {
codec = 'theora';
} else if ((/^V_VP8/).test(decodedTrack.rawCodec)) {
codec = 'vp8';
} else if ((/^V_VP9/).test(decodedTrack.rawCodec)) {
if (decodedTrack.codecPrivate) {
const {profile, level, bitDepth, chromaSubsampling} = parseVp9Private(decodedTrack.codecPrivate);
codec = 'vp09.';
codec += `${padStart(profile, 2, '0')}.`;
codec += `${padStart(level, 2, '0')}.`;
codec += `${padStart(bitDepth, 2, '0')}.`;
codec += `${padStart(chromaSubsampling, 2, '0')}`;
// Video -> Colour -> Ebml name
const matrixCoefficients = findEbml(track, [0xE0, [0x55, 0xB0], [0x55, 0xB1]])[0] || [];
const videoFullRangeFlag = findEbml(track, [0xE0, [0x55, 0xB0], [0x55, 0xB9]])[0] || [];
const transferCharacteristics = findEbml(track, [0xE0, [0x55, 0xB0], [0x55, 0xBA]])[0] || [];
const colourPrimaries = findEbml(track, [0xE0, [0x55, 0xB0], [0x55, 0xBB]])[0] || [];
// if we find any optional codec parameter specify them all.
if (matrixCoefficients.length ||
videoFullRangeFlag.length ||
transferCharacteristics.length ||
colourPrimaries.length) {
codec += `.${padStart(colourPrimaries[0], 2, '0')}`;
codec += `.${padStart(transferCharacteristics[0], 2, '0')}`;
codec += `.${padStart(matrixCoefficients[0], 2, '0')}`;
codec += `.${padStart(videoFullRangeFlag[0], 2, '0')}`;
}
} else {
codec = 'vp9';
}
} else if ((/^V_AV1/).test(decodedTrack.rawCodec)) {
codec = `av01.${getAv1Codec(decodedTrack.codecPrivate)}`;
} else if ((/A_ALAC/).test(decodedTrack.rawCodec)) {
codec = 'alac';
} else if ((/A_MPEG\/L2/).test(decodedTrack.rawCodec)) {
codec = 'mp2';
} else if ((/A_MPEG\/L3/).test(decodedTrack.rawCodec)) {
codec = 'mp3';
} else if ((/^A_AAC/).test(decodedTrack.rawCodec)) {
if (decodedTrack.codecPrivate) {
codec = 'mp4a.40.' + (decodedTrack.codecPrivate[0] >>> 3).toString();
} else {
codec = 'mp4a.40.2';
}
} else if ((/^A_AC3/).test(decodedTrack.rawCodec)) {
codec = 'ac-3';
} else if ((/^A_PCM/).test(decodedTrack.rawCodec)) {
codec = 'pcm';
} else if ((/^A_MS\/ACM/).test(decodedTrack.rawCodec)) {
codec = 'speex';
} else if ((/^A_EAC3/).test(decodedTrack.rawCodec)) {
codec = 'ec-3';
} else if ((/^A_VORBIS/).test(decodedTrack.rawCodec)) {
codec = 'vorbis';
} else if ((/^A_FLAC/).test(decodedTrack.rawCodec)) {
codec = 'flac';
} else if ((/^A_OPUS/).test(decodedTrack.rawCodec)) {
codec = 'opus';
}
decodedTrack.codec = codec;
decodedTracks.push(decodedTrack);
});
return decodedTracks.sort((a, b) => a.number - b.number);
};
export const parseData = function(data, tracks) {
const allBlocks = [];
const segment = findEbml(data, [EBML_TAGS.Segment])[0];
let timestampScale = findEbml(segment, [EBML_TAGS.SegmentInfo, EBML_TAGS.TimestampScale])[0];
// in nanoseconds, defaults to 1ms
if (timestampScale && timestampScale.length) {
timestampScale = bytesToNumber(timestampScale);
} else {
timestampScale = 1000000;
}
const clusters = findEbml(segment, [EBML_TAGS.Cluster]);
if (!tracks) {
tracks = parseTracks(segment);
}
clusters.forEach(function(cluster, ci) {
const simpleBlocks = findEbml(cluster, [EBML_TAGS.SimpleBlock]).map((b) => ({type: 'simple', data: b}));
const blockGroups = findEbml(cluster, [EBML_TAGS.BlockGroup]).map((b) => ({type: 'group', data: b}));
let timestamp = findEbml(cluster, [EBML_TAGS.Timestamp])[0] || 0;
if (timestamp && timestamp.length) {
timestamp = bytesToNumber(timestamp);
}
// get all blocks then sort them into the correct order
const blocks = simpleBlocks
.concat(blockGroups)
.sort((a, b) => a.data.byteOffset - b.data.byteOffset);
blocks.forEach(function(block, bi) {
const decoded = decodeBlock(block.data, block.type, timestampScale, timestamp);
allBlocks.push(decoded);
});
});
return {tracks, blocks: allBlocks};
};
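A hedged usage sketch for the parsers above; the webm file path is made up, and the exact track fields depend on the file.

import fs from 'fs';
import { parseTracks, parseData } from '@videojs/vhs-utils/es/ebml-helpers.js';

const bytes = fs.readFileSync('./sample.webm'); // hypothetical file

const tracks = parseTracks(bytes);
// e.g. [{ number: 1, type: 'video', codec: 'vp09.00.10.08', ... },
//       { number: 2, type: 'audio', codec: 'opus', ... }]

const { blocks } = parseData(bytes, tracks);
// each block carries trackNumber, pts/dts in seconds, a keyframe flag and frames[]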

338
node_modules/@videojs/vhs-utils/src/format-parser.js generated vendored Normal file
View file

@ -0,0 +1,338 @@
import {bytesToString, toUint8, toHexString, bytesMatch} from './byte-helpers.js';
import {parseTracks as parseEbmlTracks} from './ebml-helpers.js';
import {parseTracks as parseMp4Tracks} from './mp4-helpers.js';
import {findFourCC} from './riff-helpers.js';
import {getPages} from './ogg-helpers.js';
import {detectContainerForBytes} from './containers.js';
import {findH264Nal, findH265Nal} from './nal-helpers.js';
import {parseTs} from './m2ts-helpers.js';
import {getAvcCodec, getHvcCodec} from './codec-helpers.js';
import {getId3Offset} from './id3-helpers.js';
// https://docs.microsoft.com/en-us/windows/win32/medfound/audio-subtype-guids
// https://tools.ietf.org/html/rfc2361
const wFormatTagCodec = function(wFormatTag) {
wFormatTag = toUint8(wFormatTag);
if (bytesMatch(wFormatTag, [0x00, 0x55])) {
return 'mp3';
} else if (bytesMatch(wFormatTag, [0x16, 0x00]) || bytesMatch(wFormatTag, [0x00, 0xFF])) {
return 'aac';
} else if (bytesMatch(wFormatTag, [0x70, 0x4f])) {
return 'opus';
} else if (bytesMatch(wFormatTag, [0x6C, 0x61])) {
return 'alac';
} else if (bytesMatch(wFormatTag, [0xF1, 0xAC])) {
return 'flac';
} else if (bytesMatch(wFormatTag, [0x20, 0x00])) {
return 'ac-3';
} else if (bytesMatch(wFormatTag, [0xFF, 0xFE])) {
return 'ec-3';
} else if (bytesMatch(wFormatTag, [0x00, 0x50])) {
return 'mp2';
} else if (bytesMatch(wFormatTag, [0x56, 0x6f])) {
return 'vorbis';
} else if (bytesMatch(wFormatTag, [0xA1, 0x09])) {
return 'speex';
}
return '';
};
const formatMimetype = (name, codecs) => {
const codecString = ['video', 'audio'].reduce((acc, type) => {
if (codecs[type]) {
acc += (acc.length ? ',' : '') + codecs[type];
}
return acc;
}, '');
return `${(codecs.video ? 'video' : 'audio')}/${name}${codecString ? `;codecs="${codecString}"` : ''}`;
};
const parseCodecFrom = {
mov(bytes) {
// mov and mp4 both use a nearly identical box structure.
const retval = parseCodecFrom.mp4(bytes);
if (retval.mimetype) {
retval.mimetype = retval.mimetype.replace('mp4', 'quicktime');
}
return retval;
},
mp4(bytes) {
bytes = toUint8(bytes);
const codecs = {};
const tracks = parseMp4Tracks(bytes);
for (let i = 0; i < tracks.length; i++) {
const track = tracks[i];
if (track.type === 'audio' && !codecs.audio) {
codecs.audio = track.codec;
}
if (track.type === 'video' && !codecs.video) {
codecs.video = track.codec;
}
}
return {codecs, mimetype: formatMimetype('mp4', codecs)};
},
'3gp'(bytes) {
return {codecs: {}, mimetype: 'video/3gpp'};
},
ogg(bytes) {
const pages = getPages(bytes, 0, 4);
const codecs = {};
pages.forEach(function(page) {
if (bytesMatch(page, [0x4F, 0x70, 0x75, 0x73], {offset: 28})) {
codecs.audio = 'opus';
} else if (bytesMatch(page, [0x56, 0x50, 0x38, 0x30], {offset: 29})) {
codecs.video = 'vp8';
} else if (bytesMatch(page, [0x74, 0x68, 0x65, 0x6F, 0x72, 0x61], {offset: 29})) {
codecs.video = 'theora';
} else if (bytesMatch(page, [0x46, 0x4C, 0x41, 0x43], {offset: 29})) {
codecs.audio = 'flac';
} else if (bytesMatch(page, [0x53, 0x70, 0x65, 0x65, 0x78], {offset: 28})) {
codecs.audio = 'speex';
} else if (bytesMatch(page, [0x76, 0x6F, 0x72, 0x62, 0x69, 0x73], {offset: 29})) {
codecs.audio = 'vorbis';
}
});
return {codecs, mimetype: formatMimetype('ogg', codecs)};
},
wav(bytes) {
const format = findFourCC(bytes, ['WAVE', 'fmt'])[0];
const wFormatTag = Array.prototype.slice.call(format, 0, 2).reverse();
let mimetype = 'audio/vnd.wave';
const codecs = {
audio: wFormatTagCodec(wFormatTag)
};
const codecString = wFormatTag.reduce(function(acc, v) {
if (v) {
acc += toHexString(v);
}
return acc;
}, '');
if (codecString) {
mimetype += `;codec=${codecString}`;
}
if (codecString && !codecs.audio) {
codecs.audio = codecString;
}
return {codecs, mimetype};
},
avi(bytes) {
const movi = findFourCC(bytes, ['AVI', 'movi'])[0];
const strls = findFourCC(bytes, ['AVI', 'hdrl', 'strl']);
const codecs = {};
strls.forEach(function(strl) {
const strh = findFourCC(strl, ['strh'])[0];
const strf = findFourCC(strl, ['strf'])[0];
// now parse AVIStreamHeader to get codec and type:
// https://docs.microsoft.com/en-us/previous-versions/windows/desktop/api/avifmt/ns-avifmt-avistreamheader
const type = bytesToString(strh.subarray(0, 4));
let codec;
let codecType;
if (type === 'vids') {
// https://docs.microsoft.com/en-us/windows/win32/api/wingdi/ns-wingdi-bitmapinfoheader
const handler = bytesToString(strh.subarray(4, 8));
const compression = bytesToString(strf.subarray(16, 20));
// look for 00dc (compressed video fourcc code) or 00db (uncompressed video fourcc code)
const videoData = findFourCC(movi, ['00dc'])[0] || findFourCC(movi, ['00db'])[0];
if (handler === 'H264' || compression === 'H264') {
if (videoData && videoData.length) {
codec = parseCodecFrom.h264(videoData).codecs.video;
} else {
codec = 'avc1';
}
} else if (handler === 'HEVC' || compression === 'HEVC') {
if (videoData && videoData.length) {
codec = parseCodecFrom.h265(videoData).codecs.video;
} else {
codec = 'hev1';
}
} else if (handler === 'FMP4' || compression === 'FMP4') {
if (movi.length) {
codec = 'mp4v.20.' + movi[12].toString();
} else {
codec = 'mp4v.20';
}
} else if (handler === 'VP80' || compression === 'VP80') {
codec = 'vp8';
} else if (handler === 'VP90' || compression === 'VP90') {
codec = 'vp9';
} else if (handler === 'AV01' || compression === 'AV01') {
codec = 'av01';
} else if (handler === 'theo' || compression === 'theora') {
codec = 'theora';
} else {
if (videoData && videoData.length) {
const result = detectContainerForBytes(videoData);
if (result === 'h264') {
codec = parseCodecFrom.h264(movi).codecs.video;
}
if (result === 'h265') {
codec = parseCodecFrom.h265(movi).codecs.video;
}
}
if (!codec) {
codec = handler || compression;
}
}
codecType = 'video';
} else if (type === 'auds') {
codecType = 'audio';
// look for 00wb (audio data fourcc)
// const audioData = findFourCC(movi, ['01wb']);
const wFormatTag = Array.prototype.slice.call(strf, 0, 2).reverse();
codecs.audio = wFormatTagCodec(wFormatTag);
} else {
return;
}
if (codec) {
codecs[codecType] = codec;
}
});
return {codecs, mimetype: formatMimetype('avi', codecs)};
},
ts(bytes) {
const result = parseTs(bytes, 2);
const codecs = {};
Object.keys(result.streams).forEach(function(esPid) {
const stream = result.streams[esPid];
if (stream.codec === 'avc1' && stream.packets.length) {
stream.codec = parseCodecFrom.h264(stream.packets[0]).codecs.video;
} else if (stream.codec === 'hev1' && stream.packets.length) {
stream.codec = parseCodecFrom.h265(stream.packets[0]).codecs.video;
}
codecs[stream.type] = stream.codec;
});
return {codecs, mimetype: formatMimetype('mp2t', codecs)};
},
webm(bytes) {
// mkv and webm both use ebml to store code info
const retval = parseCodecFrom.mkv(bytes);
if (retval.mimetype) {
retval.mimetype = retval.mimetype.replace('x-matroska', 'webm');
}
return retval;
},
mkv(bytes) {
const codecs = {};
const tracks = parseEbmlTracks(bytes);
for (let i = 0; i < tracks.length; i++) {
const track = tracks[i];
if (track.type === 'audio' && !codecs.audio) {
codecs.audio = track.codec;
}
if (track.type === 'video' && !codecs.video) {
codecs.video = track.codec;
}
}
return {codecs, mimetype: formatMimetype('x-matroska', codecs)};
},
aac(bytes) {
return {codecs: {audio: 'aac'}, mimetype: 'audio/aac'};
},
ac3(bytes) {
// past id3 and syncword
const offset = getId3Offset(bytes) + 2;
// default to ac-3
let codec = 'ac-3';
if (bytesMatch(bytes, [0xB8, 0xE0], {offset})) {
codec = 'ac-3';
// 0x01, 0x7F
} else if (bytesMatch(bytes, [0x01, 0x7f], {offset})) {
codec = 'ec-3';
}
return {codecs: {audio: codec}, mimetype: 'audio/vnd.dolby.dd-raw'};
},
mp3(bytes) {
return {codecs: {audio: 'mp3'}, mimetype: 'audio/mpeg'};
},
flac(bytes) {
return {codecs: {audio: 'flac'}, mimetype: 'audio/flac'};
},
'h264'(bytes) {
// find seq_parameter_set_rbsp to get encoding settings for codec
const nal = findH264Nal(bytes, 7, 3);
const retval = {codecs: {video: 'avc1'}, mimetype: 'video/h264'};
if (nal.length) {
retval.codecs.video += `.${getAvcCodec(nal)}`;
}
return retval;
},
'h265'(bytes) {
const retval = {codecs: {video: 'hev1'}, mimetype: 'video/h265'};
// find video_parameter_set_rbsp or seq_parameter_set_rbsp
// to get encoding settings for codec
const nal = findH265Nal(bytes, [32, 33], 3);
if (nal.length) {
const type = (nal[0] >> 1) & 0x3F;
// profile_tier_level starts at byte 5 for video_parameter_set_rbsp
// byte 2 for seq_parameter_set_rbsp
retval.codecs.video += `.${getHvcCodec(nal.subarray(type === 32 ? 5 : 2))}`;
}
return retval;
}
};
export const parseFormatForBytes = (bytes) => {
bytes = toUint8(bytes);
const result = {
codecs: {},
container: detectContainerForBytes(bytes),
mimetype: ''
};
const parseCodecFn = parseCodecFrom[result.container];
if (parseCodecFn) {
const parsed = parseCodecFn ? parseCodecFn(bytes) : {};
result.codecs = parsed.codecs || {};
result.mimetype = parsed.mimetype || '';
}
return result;
};
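A hedged sketch tying the container detection and per-container codec parsers together; the file path and the reported codecs are illustrative.

import fs from 'fs';
import { parseFormatForBytes } from '@videojs/vhs-utils/es/format-parser.js';

const { container, codecs, mimetype } = parseFormatForBytes(fs.readFileSync('./sample.mp4'));
// e.g. container === 'mp4'
//      codecs    === { video: 'avc1.640028', audio: 'mp4a.40.2' }
//      mimetype  === 'video/mp4;codecs="avc1.640028,mp4a.40.2"'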

35
node_modules/@videojs/vhs-utils/src/id3-helpers.js generated vendored Normal file
View file

@ -0,0 +1,35 @@
import {toUint8, bytesMatch} from './byte-helpers.js';
const ID3 = toUint8([0x49, 0x44, 0x33]);
export const getId3Size = function(bytes, offset = 0) {
bytes = toUint8(bytes);
const flags = bytes[offset + 5];
const returnSize = (bytes[offset + 6] << 21) |
(bytes[offset + 7] << 14) |
(bytes[offset + 8] << 7) |
(bytes[offset + 9]);
const footerPresent = (flags & 16) >> 4;
if (footerPresent) {
return returnSize + 20;
}
return returnSize + 10;
};
export const getId3Offset = function(bytes, offset = 0) {
bytes = toUint8(bytes);
if ((bytes.length - offset) < 10 || !bytesMatch(bytes, ID3, {offset})) {
return offset;
}
offset += getId3Size(bytes, offset);
// recursive check for id3 tags as some files
// have multiple ID3 tag sections even though
// they should not.
return getId3Offset(bytes, offset);
};
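For illustration, an ID3v2 header that declares a 10-byte tag body pushes the media offset to byte 20 (10 header bytes plus the 10-byte body).

import { getId3Size, getId3Offset } from '@videojs/vhs-utils/es/id3-helpers.js';

const id3 = new Uint8Array([
  0x49, 0x44, 0x33,      // 'ID3'
  0x04, 0x00,            // version
  0x00,                  // flags, no footer
  0x00, 0x00, 0x00, 0x0a // syncsafe size: 10-byte body
]);
const media = new Uint8Array([0xff, 0xf1, 0x50, 0x80]); // e.g. the start of an ADTS AAC frame
const bytes = new Uint8Array([...id3, ...new Uint8Array(10), ...media]);

getId3Size(bytes);   // 20 (header + body)
getId3Offset(bytes); // 20, where the actual media begins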

17
node_modules/@videojs/vhs-utils/src/index.js generated vendored Normal file
View file

@ -0,0 +1,17 @@
import * as codecs from './codecs';
import * as byteHelpers from './byte-helpers.js';
import * as containers from './containers.js';
import decodeB64ToUint8Array from './decode-b64-to-uint8-array.js';
import * as mediaGroups from './media-groups.js';
import resolveUrl from './resolve-url.js';
import Stream from './stream.js';
export default {
codecs,
byteHelpers,
containers,
decodeB64ToUint8Array,
mediaGroups,
resolveUrl,
Stream
};

101
node_modules/@videojs/vhs-utils/src/m2ts-helpers.js generated vendored Normal file
View file

@ -0,0 +1,101 @@
import {bytesMatch, toUint8} from './byte-helpers.js';
const SYNC_BYTE = 0x47;
export const parseTs = function(bytes, maxPes = Infinity) {
bytes = toUint8(bytes);
let startIndex = 0;
let endIndex = 188;
const pmt = {};
let pesCount = 0;
while (endIndex < bytes.byteLength && pesCount < maxPes) {
if (bytes[startIndex] !== SYNC_BYTE && bytes[endIndex] !== SYNC_BYTE) {
endIndex += 1;
startIndex += 1;
continue;
}
const packet = bytes.subarray(startIndex, endIndex);
const pid = (((packet[1] & 0x1f) << 8) | packet[2]);
const hasPusi = !!(packet[1] & 0x40);
const hasAdaptationHeader = (((packet[3] & 0x30) >>> 4) > 0x01);
let payloadOffset = 4 + (hasAdaptationHeader ? (packet[4] + 1) : 0);
if (hasPusi) {
payloadOffset += packet[payloadOffset] + 1;
}
if (pid === 0 && !pmt.pid) {
pmt.pid = (packet[payloadOffset + 10] & 0x1f) << 8 | packet[payloadOffset + 11];
} else if (pmt.pid && pid === pmt.pid && !pmt.streams) {
const isNotForward = packet[payloadOffset + 5] & 0x01;
// ignore forward pmt declarations
if (!isNotForward) {
continue;
}
pmt.streams = {};
const sectionLength = (packet[payloadOffset + 1] & 0x0f) << 8 | packet[payloadOffset + 2];
const tableEnd = 3 + sectionLength - 4;
const programInfoLength = (packet[payloadOffset + 10] & 0x0f) << 8 | packet[payloadOffset + 11];
let offset = 12 + programInfoLength;
while (offset < tableEnd) {
// add an entry that maps the elementary_pid to the stream_type
const i = payloadOffset + offset;
const type = packet[i];
const esPid = (packet[i + 1] & 0x1F) << 8 | packet[i + 2];
const esLength = ((packet[i + 3] & 0x0f) << 8 | (packet[i + 4]));
const esInfo = packet.subarray(i + 5, i + 5 + esLength);
const stream = pmt.streams[esPid] = {
esInfo,
typeNumber: type,
packets: [],
type: '',
codec: ''
};
if (type === 0x06 && bytesMatch(esInfo, [0x4F, 0x70, 0x75, 0x73], {offset: 2})) {
stream.type = 'audio';
stream.codec = 'opus';
} else if (type === 0x1B || type === 0x20) {
stream.type = 'video';
stream.codec = 'avc1';
} else if (type === 0x24) {
stream.type = 'video';
stream.codec = 'hev1';
} else if (type === 0x10) {
stream.type = 'video';
stream.codec = 'mp4v.20';
} else if (type === 0x0F) {
stream.type = 'audio';
stream.codec = 'aac';
} else if (type === 0x81) {
stream.type = 'audio';
stream.codec = 'ac-3';
} else if (type === 0x87) {
stream.type = 'audio';
stream.codec = 'ec-3';
} else if (type === 0x03 || type === 0x04) {
stream.type = 'audio';
stream.codec = 'mp3';
}
offset += esLength + 5;
}
} else if (pmt.pid && pmt.streams) {
pmt.streams[pid].packets.push(packet.subarray(payloadOffset));
pesCount++;
}
startIndex += 188;
endIndex += 188;
}
if (!pmt.streams) {
pmt.streams = {};
}
return pmt;
};
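// Editor's note: a usage sketch, not part of the vendored source. `tsBytes` is
// assumed to hold raw MPEG-TS packets; the pid/codec values in the comment are
// illustrative.
const summarizeTs = (tsBytes) => {
  // stop after collecting 10 PES packets to keep the scan cheap
  const pmt = parseTs(tsBytes, 10);

  return Object.keys(pmt.streams).map((pid) => {
    const {type, codec, packets} = pmt.streams[pid];

    // e.g. "pid 256: video/avc1 (4 packets)"
    return `pid ${pid}: ${type}/${codec} (${packets.length} packets)`;
  });
};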

22
node_modules/@videojs/vhs-utils/src/media-groups.js generated vendored Normal file
View file

@ -0,0 +1,22 @@
/**
* Loops through all supported media groups in master and calls the provided
* callback for each group
*
* @param {Object} master
* The parsed master manifest object
* @param {string[]} groups
* The media groups to call the callback for
* @param {Function} callback
* Callback to call for each media group
*/
export const forEachMediaGroup = (master, groups, callback) => {
groups.forEach((mediaType) => {
for (const groupKey in master.mediaGroups[mediaType]) {
for (const labelKey in master.mediaGroups[mediaType][groupKey]) {
const mediaProperties = master.mediaGroups[mediaType][groupKey][labelKey];
callback(mediaProperties, mediaType, groupKey, labelKey);
}
}
});
};
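// Editor's note: a usage sketch, not part of the vendored source. The master
// object below is a hypothetical, minimal manifest shape; only mediaGroups is
// needed by this helper.
const master = {
  mediaGroups: {
    AUDIO: {
      main: {
        English: {default: true, language: 'en'},
        Spanish: {default: false, language: 'es'}
      }
    },
    SUBTITLES: {}
  }
};

forEachMediaGroup(master, ['AUDIO', 'SUBTITLES'], (properties, mediaType, groupKey, labelKey) => {
  // called twice: ('AUDIO', 'main', 'English') and ('AUDIO', 'main', 'Spanish')
  console.log(mediaType, groupKey, labelKey, properties.default);
});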

36
node_modules/@videojs/vhs-utils/src/media-types.js generated vendored Normal file
View file

@ -0,0 +1,36 @@
const MPEGURL_REGEX = /^(audio|video|application)\/(x-|vnd\.apple\.)?mpegurl/i;
const DASH_REGEX = /^application\/dash\+xml/i;
/**
* Returns a string that describes the type of source based on a video source object's
* media type.
*
* @see {@link https://dev.w3.org/html5/pf-summary/video.html#dom-source-type|Source Type}
*
* @param {string} type
* Video source object media type
* @return {('hls'|'dash'|'vhs-json'|null)}
* VHS source type string
*/
export const simpleTypeFromSourceType = (type) => {
if (MPEGURL_REGEX.test(type)) {
return 'hls';
}
if (DASH_REGEX.test(type)) {
return 'dash';
}
// Denotes the special case of a manifest object passed to http-streaming instead of a
// source URL.
//
// See https://en.wikipedia.org/wiki/Media_type for details on specifying media types.
//
// In this case, vnd stands for vendor, video.js for the organization, VHS for this
// project, and the +json suffix identifies the structure of the media type.
if (type === 'application/vnd.videojs.vhs+json') {
return 'vhs-json';
}
return null;
};
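// Editor's note: a usage sketch, not part of the vendored source.
simpleTypeFromSourceType('application/x-mpegURL'); // 'hls'
simpleTypeFromSourceType('application/vnd.apple.mpegurl'); // 'hls'
simpleTypeFromSourceType('application/dash+xml'); // 'dash'
simpleTypeFromSourceType('application/vnd.videojs.vhs+json'); // 'vhs-json'
simpleTypeFromSourceType('video/mp4'); // null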

564
node_modules/@videojs/vhs-utils/src/mp4-helpers.js generated vendored Normal file
View file

@ -0,0 +1,564 @@
import {
stringToBytes,
toUint8,
bytesMatch,
bytesToString,
toHexString,
padStart,
bytesToNumber
} from './byte-helpers.js';
import {getAvcCodec, getHvcCodec, getAv1Codec} from './codec-helpers.js';
import {parseOpusHead} from './opus-helpers.js';
const normalizePath = function(path) {
if (typeof path === 'string') {
return stringToBytes(path);
}
if (typeof path === 'number') {
return path;
}
return path;
};
const normalizePaths = function(paths) {
if (!Array.isArray(paths)) {
return [normalizePath(paths)];
}
return paths.map((p) => normalizePath(p));
};
let DESCRIPTORS;
export const parseDescriptors = function(bytes) {
bytes = toUint8(bytes);
const results = [];
let i = 0;
while (bytes.length > i) {
const tag = bytes[i];
let size = 0;
let headerSize = 0;
// tag
headerSize++;
let byte = bytes[headerSize];
// first byte
headerSize++;
while (byte & 0x80) {
size = (byte & 0x7F) << 7;
byte = bytes[headerSize];
headerSize++;
}
size += byte & 0x7F;
for (let z = 0; z < DESCRIPTORS.length; z++) {
const {id, parser} = DESCRIPTORS[z];
if (tag === id) {
results.push(parser(bytes.subarray(headerSize, headerSize + size)));
break;
}
}
i += size + headerSize;
}
return results;
};
DESCRIPTORS = [
{id: 0x03, parser(bytes) {
const desc = {
tag: 0x03,
id: bytes[0] << 8 | bytes[1],
flags: bytes[2],
size: 3,
dependsOnEsId: 0,
ocrEsId: 0,
descriptors: [],
url: ''
};
// depends on es id
if (desc.flags & 0x80) {
desc.dependsOnEsId = bytes[desc.size] << 8 | bytes[desc.size + 1];
desc.size += 2;
}
// url
if (desc.flags & 0x40) {
const len = bytes[desc.size];
desc.url = bytesToString(bytes.subarray(desc.size + 1, desc.size + 1 + len));
desc.size += len;
}
// ocr es id
if (desc.flags & 0x20) {
desc.ocrEsId = bytes[desc.size] << 8 | bytes[desc.size + 1];
desc.size += 2;
}
desc.descriptors = parseDescriptors(bytes.subarray(desc.size)) || [];
return desc;
}},
{id: 0x04, parser(bytes) {
// DecoderConfigDescriptor
const desc = {
tag: 0x04,
oti: bytes[0],
streamType: bytes[1],
bufferSize: bytes[2] << 16 | bytes [3] << 8 | bytes[4],
maxBitrate: bytes[5] << 24 | bytes[6] << 16 | bytes [7] << 8 | bytes[8],
avgBitrate: bytes[9] << 24 | bytes[10] << 16 | bytes [11] << 8 | bytes[12],
descriptors: parseDescriptors(bytes.subarray(13))
};
return desc;
}},
{id: 0x05, parser(bytes) {
// DecoderSpecificInfo
return {tag: 0x05, bytes};
}},
{id: 0x06, parser(bytes) {
// SLConfigDescriptor
return {tag: 0x06, bytes};
}}
];
/**
* find any number of boxes by name given a path to it in an iso bmff
* such as mp4.
*
* @param {TypedArray} bytes
* bytes for the iso bmff to search for boxes in
*
* @param {Uint8Array[]|string[]|string|Uint8Array} paths
* An array of paths or a single path representing the names
* of boxes to search for in bytes. Paths may be
* Uint8Arrays (character codes) or strings.
*
* @param {boolean} [complete=false]
* Should we search only for complete boxes on the final path.
* This is very useful when you do not want to get back partial boxes
* in the case of streaming files.
*
* @return {Uint8Array[]}
* An array of the box data found at the end of each matching path.
*/
export const findBox = function(bytes, paths, complete = false) {
paths = normalizePaths(paths);
bytes = toUint8(bytes);
const results = [];
if (!paths.length) {
// short-circuit the search for empty paths
return results;
}
let i = 0;
while (i < bytes.length) {
const size = (bytes[i] << 24 | bytes[i + 1] << 16 | bytes[i + 2] << 8 | bytes[i + 3]) >>> 0;
const type = bytes.subarray(i + 4, i + 8);
// invalid box format.
if (size === 0) {
break;
}
let end = i + size;
if (end > bytes.length) {
// this box is bigger than the number of bytes we have;
// if complete is set, we cannot find any more complete boxes.
if (complete) {
break;
}
end = bytes.length;
}
const data = bytes.subarray(i + 8, end);
if (bytesMatch(type, paths[0])) {
if (paths.length === 1) {
// this is the end of the path and we've found the box we were
// looking for
results.push(data);
} else {
// recursively search for the next box along the path
results.push.apply(results, findBox(data, paths.slice(1), complete));
}
}
i = end;
}
// we've finished searching all of bytes
return results;
};
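// Editor's note: a usage sketch, not part of the vendored source. `mp4Bytes`
// is assumed to hold an ISO-BMFF file; the ['moov', 'trak'] path mirrors the
// one used by parseTracks further down in this file.
const countCompleteTraks = (mp4Bytes) =>
  findBox(mp4Bytes, ['moov', 'trak'], true).length;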
/**
* Search for a single matching box by name in an iso bmff format like
* mp4. This function is useful for finding codec boxes which
* can be placed arbitrarily in sample descriptions depending
* on the version of the file or file type.
*
* @param {TypedArray} bytes
* bytes for the iso bmff to search for boxes in
*
* @param {string|Uint8Array} name
* The name of the box to find.
*
* @return {Uint8Array}
* a subarray of bytes representing the named box we found,
* or an empty subarray when it is not found.
*/
export const findNamedBox = function(bytes, name) {
name = normalizePath(name);
if (!name.length) {
// short-circuit the search for empty paths
return bytes.subarray(bytes.length);
}
let i = 0;
while (i < bytes.length) {
if (bytesMatch(bytes.subarray(i, i + name.length), name)) {
const size = (bytes[i - 4] << 24 | bytes[i - 3] << 16 | bytes[i - 2] << 8 | bytes[i - 1]) >>> 0;
const end = size > 1 ? i + size : bytes.byteLength;
return bytes.subarray(i + 4, end);
}
i++;
}
// we've finished searching all of bytes
return bytes.subarray(bytes.length);
};
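// Editor's note: a usage sketch, not part of the vendored source. Given a
// single sample description entry (as handled by addSampleDescription below),
// findNamedBox locates the avcC configuration record wherever it sits inside
// the entry.
const avcCodecFromSampleEntry = (sampleEntryBytes) => {
  const avcC = findNamedBox(sampleEntryBytes, 'avcC');

  // an empty subarray means no avcC box was found
  return avcC.length ? `avc1.${getAvcCodec(avcC)}` : 'avc1';
};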
const parseSamples = function(data, entrySize = 4, parseEntry = (d) => bytesToNumber(d)) {
const entries = [];
if (!data || !data.length) {
return entries;
}
let entryCount = bytesToNumber(data.subarray(4, 8));
for (let i = 8; entryCount; i += entrySize, entryCount--) {
entries.push(parseEntry(data.subarray(i, i + entrySize)));
}
return entries;
};
export const buildFrameTable = function(stbl, timescale) {
const keySamples = parseSamples(findBox(stbl, ['stss'])[0]);
const chunkOffsets = parseSamples(findBox(stbl, ['stco'])[0]);
const timeToSamples = parseSamples(findBox(stbl, ['stts'])[0], 8, (entry) => ({
sampleCount: bytesToNumber(entry.subarray(0, 4)),
sampleDelta: bytesToNumber(entry.subarray(4, 8))
}));
const samplesToChunks = parseSamples(findBox(stbl, ['stsc'])[0], 12, (entry) => ({
firstChunk: bytesToNumber(entry.subarray(0, 4)),
samplesPerChunk: bytesToNumber(entry.subarray(4, 8)),
sampleDescriptionIndex: bytesToNumber(entry.subarray(8, 12))
}));
const stsz = findBox(stbl, ['stsz'])[0];
// stsz starts with a 4 byte sampleSize which we don't need
const sampleSizes = parseSamples(stsz && stsz.length && stsz.subarray(4) || null);
const frames = [];
for (let chunkIndex = 0; chunkIndex < chunkOffsets.length; chunkIndex++) {
let samplesInChunk;
for (let i = 0; i < samplesToChunks.length; i++) {
const sampleToChunk = samplesToChunks[i];
const isThisOne = (chunkIndex + 1) >= sampleToChunk.firstChunk &&
(i + 1 >= samplesToChunks.length || (chunkIndex + 1) < samplesToChunks[i + 1].firstChunk);
if (isThisOne) {
samplesInChunk = sampleToChunk.samplesPerChunk;
break;
}
}
let chunkOffset = chunkOffsets[chunkIndex];
for (let i = 0; i < samplesInChunk; i++) {
const frameEnd = sampleSizes[frames.length];
// if we don't have key samples every frame is a keyframe
let keyframe = !keySamples.length;
if (keySamples.length && keySamples.indexOf(frames.length + 1) !== -1) {
keyframe = true;
}
const frame = {
keyframe,
start: chunkOffset,
end: chunkOffset + frameEnd
};
for (let k = 0; k < timeToSamples.length; k++) {
const {sampleCount, sampleDelta} = timeToSamples[k];
if ((frames.length) <= sampleCount) {
// convert sampleDelta from timescale units to ms
const lastTimestamp = frames.length ? frames[frames.length - 1].timestamp : 0;
frame.timestamp = lastTimestamp + ((sampleDelta / timescale) * 1000);
frame.duration = sampleDelta;
break;
}
}
frames.push(frame);
chunkOffset += frameEnd;
}
}
return frames;
};
export const addSampleDescription = function(track, bytes) {
let codec = bytesToString(bytes.subarray(0, 4));
if (track.type === 'video') {
track.info = track.info || {};
track.info.width = bytes[28] << 8 | bytes[29];
track.info.height = bytes[30] << 8 | bytes[31];
} else if (track.type === 'audio') {
track.info = track.info || {};
track.info.channels = bytes[20] << 8 | bytes[21];
track.info.bitDepth = bytes[22] << 8 | bytes[23];
track.info.sampleRate = bytes[28] << 8 | bytes[29];
}
if (codec === 'avc1') {
const avcC = findNamedBox(bytes, 'avcC');
// AVCDecoderConfigurationRecord
codec += `.${getAvcCodec(avcC)}`;
track.info.avcC = avcC;
// TODO: do we need to parse all this?
/* {
configurationVersion: avcC[0],
profile: avcC[1],
profileCompatibility: avcC[2],
level: avcC[3],
lengthSizeMinusOne: avcC[4] & 0x3
};
let spsNalUnitCount = avcC[5] & 0x1F;
const spsNalUnits = track.info.avc.spsNalUnits = [];
// past spsNalUnitCount
let offset = 6;
while (spsNalUnitCount--) {
const nalLen = avcC[offset] << 8 | avcC[offset + 1];
spsNalUnits.push(avcC.subarray(offset + 2, offset + 2 + nalLen));
offset += nalLen + 2;
}
let ppsNalUnitCount = avcC[offset];
const ppsNalUnits = track.info.avc.ppsNalUnits = [];
// past ppsNalUnitCount
offset += 1;
while (ppsNalUnitCount--) {
const nalLen = avcC[offset] << 8 | avcC[offset + 1];
ppsNalUnits.push(avcC.subarray(offset + 2, offset + 2 + nalLen));
offset += nalLen + 2;
}*/
// HEVCDecoderConfigurationRecord
} else if (codec === 'hvc1' || codec === 'hev1') {
codec += `.${getHvcCodec(findNamedBox(bytes, 'hvcC'))}`;
} else if (codec === 'mp4a' || codec === 'mp4v') {
const esds = findNamedBox(bytes, 'esds');
const esDescriptor = parseDescriptors(esds.subarray(4))[0];
const decoderConfig = esDescriptor && esDescriptor.descriptors.filter(({tag}) => tag === 0x04)[0];
if (decoderConfig) {
// most codecs do not have a further '.'
// such as 0xa5 for ac-3 and 0xa6 for e-ac-3
codec += '.' + toHexString(decoderConfig.oti);
if (decoderConfig.oti === 0x40) {
codec += '.' + (decoderConfig.descriptors[0].bytes[0] >> 3).toString();
} else if (decoderConfig.oti === 0x20) {
codec += '.' + (decoderConfig.descriptors[0].bytes[4]).toString();
} else if (decoderConfig.oti === 0xdd) {
codec = 'vorbis';
}
} else if (track.type === 'audio') {
codec += '.40.2';
} else {
codec += '.20.9';
}
} else if (codec === 'av01') {
// AV1DecoderConfigurationRecord
codec += `.${getAv1Codec(findNamedBox(bytes, 'av1C'))}`;
} else if (codec === 'vp09') {
// VPCodecConfigurationRecord
const vpcC = findNamedBox(bytes, 'vpcC');
// https://www.webmproject.org/vp9/mp4/
const profile = vpcC[0];
const level = vpcC[1];
const bitDepth = vpcC[2] >> 4;
const chromaSubsampling = (vpcC[2] & 0x0F) >> 1;
const videoFullRangeFlag = (vpcC[2] & 0x0F) >> 3;
const colourPrimaries = vpcC[3];
const transferCharacteristics = vpcC[4];
const matrixCoefficients = vpcC[5];
codec += `.${padStart(profile, 2, '0')}`;
codec += `.${padStart(level, 2, '0')}`;
codec += `.${padStart(bitDepth, 2, '0')}`;
codec += `.${padStart(chromaSubsampling, 2, '0')}`;
codec += `.${padStart(colourPrimaries, 2, '0')}`;
codec += `.${padStart(transferCharacteristics, 2, '0')}`;
codec += `.${padStart(matrixCoefficients, 2, '0')}`;
codec += `.${padStart(videoFullRangeFlag, 2, '0')}`;
} else if (codec === 'theo') {
codec = 'theora';
} else if (codec === 'spex') {
codec = 'speex';
} else if (codec === '.mp3') {
codec = 'mp4a.40.34';
} else if (codec === 'msVo') {
codec = 'vorbis';
} else if (codec === 'Opus') {
codec = 'opus';
const dOps = findNamedBox(bytes, 'dOps');
track.info.opus = parseOpusHead(dOps);
// TODO: should this go into the webm code??
// Firefox requires a codecDelay for opus playback
// see https://bugzilla.mozilla.org/show_bug.cgi?id=1276238
track.info.codecDelay = 6500000;
} else {
codec = codec.toLowerCase();
}
/* eslint-enable */
// flac, ac-3, ec-3, opus
track.codec = codec;
};
export const parseTracks = function(bytes, frameTable = true) {
bytes = toUint8(bytes);
const traks = findBox(bytes, ['moov', 'trak'], true);
const tracks = [];
traks.forEach(function(trak) {
const track = {bytes: trak};
const mdia = findBox(trak, ['mdia'])[0];
const hdlr = findBox(mdia, ['hdlr'])[0];
const trakType = bytesToString(hdlr.subarray(8, 12));
if (trakType === 'soun') {
track.type = 'audio';
} else if (trakType === 'vide') {
track.type = 'video';
} else {
track.type = trakType;
}
const tkhd = findBox(trak, ['tkhd'])[0];
if (tkhd) {
const view = new DataView(tkhd.buffer, tkhd.byteOffset, tkhd.byteLength);
const tkhdVersion = view.getUint8(0);
track.number = (tkhdVersion === 0) ? view.getUint32(12) : view.getUint32(20);
}
const mdhd = findBox(mdia, ['mdhd'])[0];
if (mdhd) {
// mdhd is a FullBox, meaning it will have its own version as the first byte
const version = mdhd[0];
const index = version === 0 ? 12 : 20;
track.timescale = (
mdhd[index] << 24 |
mdhd[index + 1] << 16 |
mdhd[index + 2] << 8 |
mdhd[index + 3]
) >>> 0;
}
const stbl = findBox(mdia, ['minf', 'stbl'])[0];
const stsd = findBox(stbl, ['stsd'])[0];
let descriptionCount = bytesToNumber(stsd.subarray(4, 8));
let offset = 8;
// add codec and codec info
while (descriptionCount--) {
const len = bytesToNumber(stsd.subarray(offset, offset + 4));
const sampleDescriptor = stsd.subarray(offset + 4, offset + 4 + len);
addSampleDescription(track, sampleDescriptor);
offset += 4 + len;
}
if (frameTable) {
track.frameTable = buildFrameTable(stbl, track.timescale);
}
// codec has no sub parameters
tracks.push(track);
});
return tracks;
};
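// Editor's note: a usage sketch, not part of the vendored source. `mp4Bytes`
// is assumed to contain at least the moov box; passing false skips the
// per-sample frame table when only track metadata is needed.
const listTracks = (mp4Bytes) => parseTracks(mp4Bytes, false).map((track) => ({
  number: track.number,
  // 'video', 'audio', or the raw handler type
  type: track.type,
  // e.g. 'avc1.42001e' or 'mp4a.40.2'
  codec: track.codec,
  timescale: track.timescale
}));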
export const parseMediaInfo = function(bytes) {
const mvhd = findBox(bytes, ['moov', 'mvhd'], true)[0];
if (!mvhd || !mvhd.length) {
return;
}
const info = {};
// mvhd v1 has an 8 byte duration and other extended fields
if (mvhd[0] === 1) {
info.timestampScale = bytesToNumber(mvhd.subarray(20, 24));
info.duration = bytesToNumber(mvhd.subarray(24, 32));
} else {
info.timestampScale = bytesToNumber(mvhd.subarray(12, 16));
info.duration = bytesToNumber(mvhd.subarray(16, 20));
}
info.bytes = mvhd;
return info;
};

109
node_modules/@videojs/vhs-utils/src/nal-helpers.js generated vendored Normal file
View file

@ -0,0 +1,109 @@
import {bytesMatch, toUint8} from './byte-helpers.js';
export const NAL_TYPE_ONE = toUint8([0x00, 0x00, 0x00, 0x01]);
export const NAL_TYPE_TWO = toUint8([0x00, 0x00, 0x01]);
export const EMULATION_PREVENTION = toUint8([0x00, 0x00, 0x03]);
/**
* Expunge any "Emulation Prevention" bytes from a "Raw Byte
* Sequence Payload"
*
* @param {Uint8Array} bytes the bytes of an RBSP from a NAL
* unit
* @return {Uint8Array} the RBSP without any Emulation
* Prevention Bytes
*/
export const discardEmulationPreventionBytes = function(bytes) {
const positions = [];
let i = 1;
// Find all `Emulation Prevention Bytes`
while (i < bytes.length - 2) {
if (bytesMatch(bytes.subarray(i, i + 3), EMULATION_PREVENTION)) {
positions.push(i + 2);
i++;
}
i++;
}
// If no Emulation Prevention Bytes were found just return the original
// array
if (positions.length === 0) {
return bytes;
}
// Create a new array to hold the NAL unit data
const newLength = bytes.length - positions.length;
const newData = new Uint8Array(newLength);
let sourceIndex = 0;
for (i = 0; i < newLength; sourceIndex++, i++) {
if (sourceIndex === positions[0]) {
// Skip this byte
sourceIndex++;
// Remove this position index
positions.shift();
}
newData[i] = bytes[sourceIndex];
}
return newData;
};
export const findNal = function(bytes, dataType, types, nalLimit = Infinity) {
bytes = toUint8(bytes);
types = [].concat(types);
let i = 0;
let nalStart;
let nalsFound = 0;
// keep searching until:
// we reach the end of bytes
// we reach the maximum number of nals they want to search
// NOTE: we disregard nalLimit once we have found the start
// of the nal we want so that we can find its end.
while (i < bytes.length && (nalsFound < nalLimit || nalStart)) {
let nalOffset;
if (bytesMatch(bytes.subarray(i), NAL_TYPE_ONE)) {
nalOffset = 4;
} else if (bytesMatch(bytes.subarray(i), NAL_TYPE_TWO)) {
nalOffset = 3;
}
// we are unsynced,
// find the next nal unit
if (!nalOffset) {
i++;
continue;
}
nalsFound++;
if (nalStart) {
return discardEmulationPreventionBytes(bytes.subarray(nalStart, i));
}
let nalType;
if (dataType === 'h264') {
nalType = (bytes[i + nalOffset] & 0x1f);
} else if (dataType === 'h265') {
nalType = (bytes[i + nalOffset] >> 1) & 0x3f;
}
if (types.indexOf(nalType) !== -1) {
nalStart = i + nalOffset;
}
// nal header is 1 length for h264, and 2 for h265
i += nalOffset + (dataType === 'h264' ? 1 : 2);
}
return bytes.subarray(0, 0);
};
export const findH264Nal = (bytes, type, nalLimit) => findNal(bytes, 'h264', type, nalLimit);
export const findH265Nal = (bytes, type, nalLimit) => findNal(bytes, 'h265', type, nalLimit);
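// Editor's note: a usage sketch, not part of the vendored source. The NAL type
// numbers mirror the ones used by the raw h264/h265 parsers earlier in this
// changeset: 7 is an h264 seq_parameter_set_rbsp, 32/33 are h265 VPS/SPS.
const firstH264Sps = (annexBBytes) => {
  // scan at most 3 nal units; emulation prevention bytes are stripped
  const sps = findH264Nal(annexBBytes, 7, 3);

  return sps.length ? sps : null;
};

const firstH265ParameterSet = (annexBBytes) => findH265Nal(annexBBytes, [32, 33], 3);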

27
node_modules/@videojs/vhs-utils/src/ogg-helpers.js generated vendored Normal file
View file

@ -0,0 +1,27 @@
import {bytesMatch, toUint8} from './byte-helpers';
const SYNC_WORD = toUint8([0x4f, 0x67, 0x67, 0x53]);
export const getPages = function(bytes, start, end = Infinity) {
bytes = toUint8(bytes);
const pages = [];
let i = 0;
while (i < bytes.length && pages.length < end) {
// we are unsynced,
// find the next syncword
if (!bytesMatch(bytes, SYNC_WORD, {offset: i})) {
i++;
continue;
}
const segmentLength = bytes[i + 27];
pages.push(bytes.subarray(i, i + 28 + segmentLength));
i += pages[pages.length - 1].length;
}
return pages.slice(start, end);
};
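// Editor's note: a usage sketch, not part of the vendored source. Grabbing the
// first few pages is typically enough to reach the codec identification
// headers of an Ogg stream.
const firstThreePages = (oggBytes) => getPages(oggBytes, 0, 3);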

61
node_modules/@videojs/vhs-utils/src/opus-helpers.js generated vendored Normal file
View file

@ -0,0 +1,61 @@
export const OPUS_HEAD = new Uint8Array([
// O, p, u, s
0x4f, 0x70, 0x75, 0x73,
// H, e, a, d
0x48, 0x65, 0x61, 0x64
]);
// https://wiki.xiph.org/OggOpus
// https://vfrmaniac.fushizen.eu/contents/opus_in_isobmff.html
// https://opus-codec.org/docs/opusfile_api-0.7/structOpusHead.html
export const parseOpusHead = function(bytes) {
const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
const version = view.getUint8(0);
// version 0, from mp4, does not use littleEndian.
const littleEndian = version !== 0;
const config = {
version,
channels: view.getUint8(1),
preSkip: view.getUint16(2, littleEndian),
sampleRate: view.getUint32(4, littleEndian),
outputGain: view.getUint16(8, littleEndian),
channelMappingFamily: view.getUint8(10)
};
if (config.channelMappingFamily > 0 && bytes.length > 10) {
config.streamCount = view.getUint8(11);
config.twoChannelStreamCount = view.getUint8(12);
config.channelMapping = [];
for (let c = 0; c < config.channels; c++) {
config.channelMapping.push(view.getUint8(13 + c));
}
}
return config;
};
export const setOpusHead = function(config) {
const size = config.channelMappingFamily <= 0 ? 11 : (12 + config.channels);
const view = new DataView(new ArrayBuffer(size));
const littleEndian = config.version !== 0;
view.setUint8(0, config.version);
view.setUint8(1, config.channels);
view.setUint16(2, config.preSkip, littleEndian);
view.setUint32(4, config.sampleRate, littleEndian);
view.setUint16(8, config.outputGain, littleEndian);
view.setUint8(10, config.channelMappingFamily);
if (config.channelMappingFamily > 0) {
view.setUint8(11, config.streamCount);
config.channelMapping.forEach(function(cm, i) {
view.setUint8(12 + i, cm);
});
}
return new Uint8Array(view.buffer);
};
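// Editor's note: a usage sketch, not part of the vendored source. It round
// trips a mono, mapping-family-0 OpusHead; version 0 is the big-endian layout
// noted above for mp4 dOps boxes.
const head = setOpusHead({
  version: 0,
  channels: 1,
  preSkip: 312,
  sampleRate: 48000,
  outputGain: 0,
  channelMappingFamily: 0
});

parseOpusHead(head);
// {version: 0, channels: 1, preSkip: 312, sampleRate: 48000, outputGain: 0, channelMappingFamily: 0}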

51
node_modules/@videojs/vhs-utils/src/resolve-url.js generated vendored Normal file
View file

@ -0,0 +1,51 @@
import URLToolkit from 'url-toolkit';
import window from 'global/window';
const DEFAULT_LOCATION = 'http://example.com';
const resolveUrl = (baseUrl, relativeUrl) => {
// return early if we don't need to resolve
if ((/^[a-z]+:/i).test(relativeUrl)) {
return relativeUrl;
}
// if baseUrl is a data URI, ignore it and resolve everything relative to window.location
if ((/^data:/).test(baseUrl)) {
baseUrl = window.location && window.location.href || '';
}
// IE11 supports URL but not the URL constructor
// feature detect the behavior we want
const nativeURL = typeof window.URL === 'function';
const protocolLess = (/^\/\//.test(baseUrl));
// remove location if window.location isn't available (i.e. we're in node)
// and if baseUrl isn't an absolute url
const removeLocation = !window.location && !(/\/\//i).test(baseUrl);
// if the base URL is relative then combine with the current location
if (nativeURL) {
baseUrl = new window.URL(baseUrl, window.location || DEFAULT_LOCATION);
} else if (!(/\/\//i).test(baseUrl)) {
baseUrl = URLToolkit.buildAbsoluteURL(window.location && window.location.href || '', baseUrl);
}
if (nativeURL) {
const newUrl = new URL(relativeUrl, baseUrl);
// if we're a protocol-less url, remove the protocol
// and if we're location-less, remove the location
// otherwise, return the url unmodified
if (removeLocation) {
return newUrl.href.slice(DEFAULT_LOCATION.length);
} else if (protocolLess) {
return newUrl.href.slice(newUrl.protocol.length);
}
return newUrl.href;
}
return URLToolkit.buildAbsoluteURL(baseUrl, relativeUrl);
};
export default resolveUrl;
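// Editor's note: a usage sketch, not part of the vendored source; the URLs are
// hypothetical. Relative URLs resolve against the base, absolute URLs are
// returned untouched.
resolveUrl('https://cdn.example.com/hls/master.m3u8', 'media/segment-1.ts');
// => 'https://cdn.example.com/hls/media/segment-1.ts'
resolveUrl('https://cdn.example.com/hls/master.m3u8', 'https://other.example.com/a.ts');
// => 'https://other.example.com/a.ts'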

75
node_modules/@videojs/vhs-utils/src/riff-helpers.js generated vendored Normal file
View file

@ -0,0 +1,75 @@
import {toUint8, stringToBytes, bytesMatch} from './byte-helpers.js';
const CONSTANTS = {
LIST: toUint8([0x4c, 0x49, 0x53, 0x54]),
RIFF: toUint8([0x52, 0x49, 0x46, 0x46]),
WAVE: toUint8([0x57, 0x41, 0x56, 0x45])
};
const normalizePath = function(path) {
if (typeof path === 'string') {
return stringToBytes(path);
}
if (typeof path === 'number') {
return path;
}
return path;
};
const normalizePaths = function(paths) {
if (!Array.isArray(paths)) {
return [normalizePath(paths)];
}
return paths.map((p) => normalizePath(p));
};
export const findFourCC = function(bytes, paths) {
paths = normalizePaths(paths);
bytes = toUint8(bytes);
let results = [];
if (!paths.length) {
// short-circuit the search for empty paths
return results;
}
let i = 0;
while (i < bytes.length) {
let type = bytes.subarray(i, i + 4);
let size = ((bytes[i + 7] << 24 | bytes[i + 6] << 16 | bytes[i + 5] << 8 | bytes[i + 4]) >>> 0);
// skip LIST/RIFF and get the actual type
if (bytesMatch(type, CONSTANTS.LIST) || bytesMatch(type, CONSTANTS.RIFF) || bytesMatch(type, CONSTANTS.WAVE)) {
type = bytes.subarray(i + 8, i + 12);
i += 4;
size -= 4;
}
const data = bytes.subarray(i + 8, i + 8 + size);
if (bytesMatch(type, paths[0])) {
if (paths.length === 1) {
// this is the end of the path and we've found the box we were
// looking for
results.push(data);
} else {
// recursively search for the next box along the path
const subresults = findFourCC(data, paths.slice(1));
if (subresults.length) {
results = results.concat(subresults);
}
}
}
i += 8 + data.length;
}
// we've finished searching all of bytes
return results;
};
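// Editor's note: a usage sketch, not part of the vendored source. For a
// RIFF/WAVE file the RIFF wrapper is skipped automatically, so the path starts
// at the WAVE form type; 'fmt ' is the standard format chunk id.
const wavFormatChunk = (wavBytes) => {
  const results = findFourCC(wavBytes, ['WAVE', 'fmt ']);

  // the body of the fmt chunk (audio format, channel count, sample rate, ...)
  return results.length ? results[0] : null;
};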

108
node_modules/@videojs/vhs-utils/src/stream.js generated vendored Normal file
View file

@ -0,0 +1,108 @@
/**
* @file stream.js
*/
/**
* A lightweight readable stream implementation that handles event dispatching.
*
* @class Stream
*/
export default class Stream {
constructor() {
this.listeners = {};
}
/**
* Add a listener for a specified event type.
*
* @param {string} type the event name
* @param {Function} listener the callback to be invoked when an event of
* the specified type occurs
*/
on(type, listener) {
if (!this.listeners[type]) {
this.listeners[type] = [];
}
this.listeners[type].push(listener);
}
/**
* Remove a listener for a specified event type.
*
* @param {string} type the event name
* @param {Function} listener a function previously registered for this
* type of event through `on`
* @return {boolean} if we could turn it off or not
*/
off(type, listener) {
if (!this.listeners[type]) {
return false;
}
const index = this.listeners[type].indexOf(listener);
// TODO: which is better?
// In Video.js we slice listener functions
// on trigger so that it does not mess up the order
// while we loop through.
//
// Here we slice on off so that the loop in trigger
// can continue using its old reference to loop without
// messing up the order.
this.listeners[type] = this.listeners[type].slice(0);
this.listeners[type].splice(index, 1);
return index > -1;
}
/**
* Trigger an event of the specified type on this stream. Any additional
* arguments to this function are passed as parameters to event listeners.
*
* @param {string} type the event name
*/
trigger(type) {
const callbacks = this.listeners[type];
if (!callbacks) {
return;
}
// Slicing the arguments on every invocation of this method
// can add a significant amount of overhead. Avoid the
// intermediate object creation for the common case of a
// single callback argument
if (arguments.length === 2) {
const length = callbacks.length;
for (let i = 0; i < length; ++i) {
callbacks[i].call(this, arguments[1]);
}
} else {
const args = Array.prototype.slice.call(arguments, 1);
const length = callbacks.length;
for (let i = 0; i < length; ++i) {
callbacks[i].apply(this, args);
}
}
}
/**
* Destroys the stream and cleans up.
*/
dispose() {
this.listeners = {};
}
/**
* Forwards all `data` events on this stream to the destination stream. The
* destination stream should provide a method `push` to receive the data
* events as they arrive.
*
* @param {Stream} destination the stream that will receive all `data` events
* @see http://nodejs.org/api/stream.html#stream_readable_pipe_destination_options
*/
pipe(destination) {
this.on('data', function(data) {
destination.push(data);
});
}
}
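// Editor's note: a usage sketch, not part of the vendored source; the Doubler
// class is purely illustrative. Subclasses implement push() so that streams
// can be chained together with pipe().
class Doubler extends Stream {
  push(value) {
    // forward a transformed value to anything piped onto this stream
    this.trigger('data', value * 2);
  }
}

const source = new Doubler();
const sink = new Doubler();

source.pipe(sink);
sink.on('data', (value) => console.log(value));
source.push(1); // source emits 2, sink doubles again and logs 4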

321
node_modules/@videojs/vhs-utils/test/byte-helpers.test.js generated vendored Normal file
View file

@ -0,0 +1,321 @@
import QUnit from 'qunit';
import {
bytesToString,
stringToBytes,
toUint8,
concatTypedArrays,
toHexString,
toBinaryString,
bytesToNumber,
numberToBytes,
bytesMatch
} from '../src/byte-helpers.js';
import window from 'global/window';
const arrayNames = [];
const BigInt = window.BigInt;
[
'Array',
'Int8Array',
'Uint8Array',
'Uint8ClampedArray',
'Int16Array',
'Uint16Array',
'Int32Array',
'Uint32Array',
'Float32Array',
'Float64Array'
].forEach(function(name) {
if (window[name]) {
arrayNames.push(name);
}
});
QUnit.module('bytesToString');
const testString = 'hello竜';
const testBytes = toUint8([
// h
0x68,
// e
0x65,
// l
0x6c,
// l
0x6c,
// o
0x6f,
// 竜
0xe7, 0xab, 0x9c
]);
const rawBytes = toUint8([0x47, 0x40, 0x00, 0x10, 0x00, 0x00, 0xb0, 0x0d, 0x00, 0x01]);
QUnit.test('should function as expected', function(assert) {
arrayNames.forEach(function(name) {
const testObj = name === 'Array' ? testBytes : new window[name](testBytes);
assert.equal(bytesToString(testObj), testString, `testString work as a string arg with ${name}`);
assert.equal(bytesToString(new window[name]()), '', `empty ${name} returns empty string`);
});
assert.equal(bytesToString(), '', 'undefined returns empty string');
assert.equal(bytesToString(null), '', 'null returns empty string');
assert.equal(bytesToString(stringToBytes(testString)), testString, 'stringToBytes -> bytesToString works');
});
QUnit.module('stringToBytes');
QUnit.test('should function as expected', function(assert) {
assert.deepEqual(stringToBytes(testString), testBytes, 'returns an array of bytes');
assert.deepEqual(stringToBytes(), toUint8(), 'empty array for undefined');
assert.deepEqual(stringToBytes(null), toUint8(), 'empty array for null');
assert.deepEqual(stringToBytes(''), toUint8(), 'empty array for empty string');
assert.deepEqual(stringToBytes(10), toUint8([0x31, 0x30]), 'converts numbers to strings');
assert.deepEqual(stringToBytes(bytesToString(testBytes)), testBytes, 'bytesToString -> stringToBytes works');
assert.deepEqual(stringToBytes(bytesToString(rawBytes), true), rawBytes, 'equal to original with raw bytes mode');
assert.notDeepEqual(stringToBytes(bytesToString(rawBytes)), rawBytes, 'without raw byte mode works, not equal');
});
QUnit.module('toUint8');
QUnit.test('should function as expected', function(assert) {
const tests = {
undef: {
data: undefined,
expected: new Uint8Array()
},
null: {
data: null,
expected: new Uint8Array()
},
string: {
data: 'foo',
expected: new Uint8Array()
},
NaN: {
data: NaN,
expected: new Uint8Array()
},
object: {
data: {},
expected: new Uint8Array()
},
number: {
data: 0x11,
expected: new Uint8Array([0x11])
}
};
Object.keys(tests).forEach(function(name) {
const {data, expected} = tests[name];
const result = toUint8(data);
assert.ok(result instanceof Uint8Array, `obj is a Uint8Array for ${name}`);
assert.deepEqual(result, expected, `data is as expected for ${name}`);
});
arrayNames.forEach(function(name) {
const testObj = name === 'Array' ? testBytes : new window[name](testBytes);
const uint = toUint8(testObj);
assert.ok(uint instanceof Uint8Array && uint.length > 0, `converted ${name} to Uint8Array`);
});
});
QUnit.module('concatTypedArrays');
QUnit.test('should function as expected', function(assert) {
const tests = {
undef: {
data: concatTypedArrays(),
expected: toUint8([])
},
empty: {
data: concatTypedArrays(toUint8([])),
expected: toUint8([])
},
single: {
data: concatTypedArrays([0x01]),
expected: toUint8([0x01])
},
array: {
data: concatTypedArrays([0x01], [0x02]),
expected: toUint8([0x01, 0x02])
},
uint: {
data: concatTypedArrays(toUint8([0x01]), toUint8([0x02])),
expected: toUint8([0x01, 0x02])
},
buffer: {
data: concatTypedArrays(toUint8([0x01]).buffer, toUint8([0x02]).buffer),
expected: toUint8([0x01, 0x02])
},
manyarray: {
data: concatTypedArrays([0x01], [0x02], [0x03], [0x04]),
expected: toUint8([0x01, 0x02, 0x03, 0x04])
},
manyuint: {
data: concatTypedArrays(toUint8([0x01]), toUint8([0x02]), toUint8([0x03]), toUint8([0x04])),
expected: toUint8([0x01, 0x02, 0x03, 0x04])
}
};
Object.keys(tests).forEach(function(name) {
const {data, expected} = tests[name];
assert.ok(data instanceof Uint8Array, `obj is a Uint8Array for ${name}`);
assert.deepEqual(data, expected, `data is as expected for ${name}`);
});
});
QUnit.module('toHexString');
QUnit.test('should function as expected', function(assert) {
assert.equal(toHexString(0xFF), 'ff', 'works with single value');
assert.equal(toHexString([0xFF, 0xaa]), 'ffaa', 'works with array');
assert.equal(toHexString(toUint8([0xFF, 0xaa])), 'ffaa', 'works with uint8');
assert.equal(toHexString(toUint8([0xFF, 0xaa]).buffer), 'ffaa', 'works with buffer');
assert.equal(toHexString(toUint8([0xFF, 0xaa, 0xbb]).subarray(1, 3)), 'aabb', 'works with subarray');
assert.equal(toHexString([0x01, 0x02, 0x03]), '010203', 'works with single digits');
});
QUnit.module('toBinaryString');
QUnit.test('should function as expected', function(assert) {
const ff = '11111111';
const aa = '10101010';
const bb = '10111011';
const zerof = '00001111';
const one = '00000001';
const zero = '00000000';
const fzero = '11110000';
assert.equal(toBinaryString(0xFF), ff, 'works with single value');
assert.equal(toBinaryString([0xFF, 0xaa]), ff + aa, 'works with array');
assert.equal(toBinaryString(toUint8([0xFF, 0xbb])), ff + bb, 'works with uint8');
assert.equal(toBinaryString(toUint8([0xFF, 0xaa]).buffer), ff + aa, 'works with buffer');
assert.equal(toBinaryString(toUint8([0xFF, 0xaa, 0xbb]).subarray(1, 3)), aa + bb, 'works with subarray');
assert.equal(toBinaryString([0x0F, 0x01, 0xF0, 0x00]), zerof + one + fzero + zero, 'works with varying digits');
});
QUnit.module('bytesToNumber');
QUnit.test('sanity', function(assert) {
assert.equal(bytesToNumber(0xFF), 0xFF, 'single value');
assert.equal(bytesToNumber([0xFF, 0x01]), 0xFF01, 'works with array');
assert.equal(bytesToNumber(toUint8([0xFF, 0xbb])), 0xFFBB, 'works with uint8');
assert.equal(bytesToNumber(toUint8([0xFF, 0xaa]).buffer), 0xFFAA, 'works with buffer');
assert.equal(bytesToNumber(toUint8([0xFF, 0xaa, 0xbb]).subarray(1, 3)), 0xAABB, 'works with subarray');
});
QUnit.test('unsigned and littleEndian work', function(assert) {
// works with any number of bits
assert.equal(bytesToNumber([0xFF]), 0xFF, 'u8');
assert.equal(bytesToNumber([0xFF, 0xAA]), 0xFFAA, 'u16');
assert.equal(bytesToNumber([0xFF, 0xAA, 0xBB]), 0xFFAABB, 'u24');
assert.equal(bytesToNumber([0xFF, 0xAA, 0xBB, 0xCC]), 0xFFAABBCC, 'u32');
assert.equal(bytesToNumber([0xFF, 0xAA, 0xBB, 0xCC, 0xDD]), 0xFFAABBCCDD, 'u40');
assert.equal(bytesToNumber([0xFF, 0xAA, 0xBB, 0xCC, 0xDD, 0xEE]), 0xFFAABBCCDDEE, 'u48');
assert.equal(bytesToNumber([0xFF], {le: true}), 0xFF, 'u8 le');
assert.equal(bytesToNumber([0xFF, 0xAA], {le: true}), 0xAAFF, 'u16 le');
assert.equal(bytesToNumber([0xFF, 0xAA, 0xBB], {le: true}), 0xBBAAFF, 'u24 le');
assert.equal(bytesToNumber([0xFF, 0xAA, 0xBB, 0xCC], {le: true}), 0xCCBBAAFF, 'u32 le');
assert.equal(bytesToNumber([0xFF, 0xAA, 0xBB, 0xCC, 0xDD], {le: true}), 0xDDCCBBAAFF, 'u40 le');
assert.equal(bytesToNumber([0xFF, 0xAA, 0xBB, 0xCC, 0xDD, 0xEE], {le: true}), 0xEEDDCCBBAAFF, 'u48 le');
if (BigInt) {
assert.equal(bytesToNumber([0xFF, 0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0x99]), 0xFFAABBCCDDEE99, 'u56');
assert.equal(bytesToNumber([0xFF, 0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0x99, 0x88]), 0xFFAABBCCDDEE9988, 'u64');
assert.equal(bytesToNumber([0xFF, 0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0x99], {le: true}), 0x99EEDDCCBBAAFF, 'u56 le');
assert.equal(bytesToNumber([0xFF, 0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0x99, 0x88], {le: true}), 0x8899EEDDCCBBAAFF, 'u64 le');
}
});
QUnit.test('signed and littleEndian work', function(assert) {
assert.equal(bytesToNumber([0xF0], {signed: true}), -16, 'i8');
assert.equal(bytesToNumber([0x80, 0x70], {signed: true}), -32656, 'i16');
assert.equal(bytesToNumber([0x80, 0x70, 0x9f], {signed: true}), -8359777, 'i24');
assert.equal(bytesToNumber([0x80, 0x70, 0x9f, 0xFF], {signed: true}), -2140102657, 'i32');
assert.equal(bytesToNumber([0x80, 0x70, 0x9f, 0xFF, 0x10], {signed: true}), -547866280176, 'i40');
assert.equal(bytesToNumber([0x80, 0x70, 0x9f, 0xFF, 0x10, 0x89], {signed: true}), -140253767724919, 'i48');
assert.equal(bytesToNumber([0xF0], {signed: true, le: true}), -16, 'i8 le');
assert.equal(bytesToNumber([0x80, 0x70], {signed: true, le: true}), 0x7080, 'i16 le');
assert.equal(bytesToNumber([0x80, 0x70, 0x9f], {signed: true, le: true}), -6328192, 'i24 le');
assert.equal(bytesToNumber([0x80, 0x70, 0x9f, 0xFF], {signed: true, le: true}), -6328192, 'i32 le');
assert.equal(bytesToNumber([0x80, 0x70, 0x9f, 0xFF, 0x10], {signed: true, le: true}), 73008115840, 'i40 le');
assert.equal(bytesToNumber([0x80, 0x70, 0x9f, 0xFF, 0x10, 0x89], {signed: true, le: true}), -130768875589504, 'i48 le');
if (BigInt) {
assert.equal(bytesToNumber([0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF], {signed: true}), -1, 'i56');
assert.equal(bytesToNumber([0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF], {signed: true}), -1, 'i64');
assert.equal(bytesToNumber([0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF], {signed: true, le: true}), -1, 'i56 le');
assert.equal(bytesToNumber([0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF], {signed: true, le: true}), -1, 'i64 le');
}
});
QUnit.module('numberToBytes');
QUnit.test('unsigned negative and positive', function(assert) {
assert.deepEqual(numberToBytes(), toUint8([0x00]), 'no bytes');
assert.deepEqual(numberToBytes(0xFF), toUint8([0xFF]), 'u8');
assert.deepEqual(numberToBytes(0xFFaa), toUint8([0xFF, 0xaa]), 'u16');
assert.deepEqual(numberToBytes(0xFFaabb), toUint8([0xFF, 0xaa, 0xbb]), 'u24');
assert.deepEqual(numberToBytes(0xFFaabbcc), toUint8([0xFF, 0xaa, 0xbb, 0xcc]), 'u32');
assert.deepEqual(numberToBytes(0xFFaabbccdd), toUint8([0xFF, 0xaa, 0xbb, 0xcc, 0xdd]), 'u40');
assert.deepEqual(numberToBytes(0xFFaabbccddee), toUint8([0xFF, 0xaa, 0xbb, 0xcc, 0xdd, 0xee]), 'u48');
assert.deepEqual(numberToBytes(-16), toUint8([0xF0]), 'negative to u8');
assert.deepEqual(numberToBytes(-32640), toUint8([0x80, 0x80]), 'negative to u16');
assert.deepEqual(numberToBytes(-3264062), toUint8([0xce, 0x31, 0xc2]), 'negative to u24');
assert.deepEqual(numberToBytes(-2139062144), toUint8([0x80, 0x80, 0x80, 0x80]), 'negative to u32');
assert.deepEqual(numberToBytes(-3139062144), toUint8([0xff, 0x44, 0xe5, 0xb6, 0x80]), 'negative u40');
assert.deepEqual(numberToBytes(-3139062144444), toUint8([0xfd, 0x25, 0x21, 0x50, 0xe2, 0x44]), 'negative u48');
if (BigInt) {
assert.deepEqual(numberToBytes(BigInt('0xFFaabbccddee99')), toUint8([0xFF, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0x99]), 'u56');
assert.deepEqual(numberToBytes(BigInt('0xFFaabbccddee9988')), toUint8([0xFF, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0x99, 0x88]), 'u64');
assert.deepEqual(numberToBytes(BigInt('-31390621444448812')), toUint8([0x90, 0x7a, 0x65, 0x67, 0x86, 0x5d, 0xd4]), 'negative to u56');
assert.deepEqual(numberToBytes(BigInt('-9187201950435737472')), toUint8([0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80]), 'u64');
}
});
QUnit.test('unsigned littleEndian negative and positive', function(assert) {
assert.deepEqual(numberToBytes(0xFF, {le: true}), toUint8([0xFF]), 'u8');
assert.deepEqual(numberToBytes(0xFFaa, {le: true}), toUint8([0xaa, 0xFF]), 'u16');
assert.deepEqual(numberToBytes(0xFFaabb, {le: true}), toUint8([0xbb, 0xaa, 0xFF]), 'u24');
assert.deepEqual(numberToBytes(0xFFaabbcc, {le: true}), toUint8([0xcc, 0xbb, 0xaa, 0xff]), 'u32');
assert.deepEqual(numberToBytes(0xFFaabbccdd, {le: true}), toUint8([0xdd, 0xcc, 0xbb, 0xaa, 0xff]), 'u40');
assert.deepEqual(numberToBytes(0xFFaabbccddee, {le: true}), toUint8([0xee, 0xdd, 0xcc, 0xbb, 0xaa, 0xff]), 'u48');
assert.deepEqual(numberToBytes(-16, {le: true}), toUint8([0xF0]), 'negative to u8');
assert.deepEqual(numberToBytes(-32640, {le: true}), toUint8([0x80, 0x80]), 'negative to u16');
assert.deepEqual(numberToBytes(-3264062, {le: true}), toUint8([0xc2, 0x31, 0xce]), 'negative to u24');
assert.deepEqual(numberToBytes(-2139062144, {le: true}), toUint8([0x80, 0x80, 0x80, 0x80]), 'negative to u32');
assert.deepEqual(numberToBytes(-3139062144, {le: true}), toUint8([0x80, 0xb6, 0xe5, 0x44, 0xff]), 'negative u40');
assert.deepEqual(numberToBytes(-3139062144444, {le: true}), toUint8([0x44, 0xe2, 0x50, 0x21, 0x25, 0xfd]), 'negative u48');
if (BigInt) {
assert.deepEqual(numberToBytes(BigInt('0xFFaabbccddee99'), {le: true}), toUint8([0x99, 0xee, 0xdd, 0xcc, 0xbb, 0xaa, 0xff]), 'u56');
assert.deepEqual(numberToBytes(BigInt('0xFFaabbccddee9988'), {le: true}), toUint8([0x88, 0x99, 0xee, 0xdd, 0xcc, 0xbb, 0xaa, 0xff]), 'u64');
assert.deepEqual(numberToBytes(BigInt('-31390621444448812'), {le: true}), toUint8([0xd4, 0x5d, 0x86, 0x67, 0x65, 0x7a, 0x90]), 'negative to u56');
assert.deepEqual(numberToBytes(BigInt('-9187201950435737472'), {le: true}), toUint8([0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80]), 'u64');
}
});
QUnit.module('bytesMatch');
QUnit.test('should function as expected', function(assert) {
assert.equal(bytesMatch(), false, 'no a or b bytes, false');
assert.equal(bytesMatch(null, []), false, 'no a bytes, false');
assert.equal(bytesMatch([]), false, 'no b bytes, false');
assert.equal(bytesMatch([0x00], [0x00, 0x02]), false, 'not enough bytes');
assert.equal(bytesMatch([0x00], [0x00], {offset: 1}), false, 'not due to offset');
assert.equal(bytesMatch([0xbb, 0xaa], [0xaa]), false, 'bytes do not match');
assert.equal(bytesMatch([0xaa], [0xaa], {mask: [0x10]}), false, 'bytes do not match due to mask');
assert.equal(bytesMatch([0xaa], [0xaa]), true, 'bytes match');
assert.equal(bytesMatch([0xbb, 0xaa], [0xbb]), true, 'bytes match more a');
assert.equal(bytesMatch([0xbb, 0xaa], [0xaa], {offset: 1}), true, 'bytes match with offset');
assert.equal(bytesMatch([0xaa], [0x20], {mask: [0x20]}), true, 'bytes match with mask');
assert.equal(bytesMatch([0xbb, 0xaa], [0x20], {mask: [0x20], offset: 1}), true, 'bytes match with mask and offset');
assert.equal(bytesMatch([0xbb, 0xaa, 0xaa], [0x20, 0x20], {mask: [0x20, 0x20], offset: 1}), true, 'bytes match with many masks and offset');
});

472
node_modules/@videojs/vhs-utils/test/codecs.test.js generated vendored Normal file
View file

@ -0,0 +1,472 @@
import window from 'global/window';
import QUnit from 'qunit';
import {
mapLegacyAvcCodecs,
translateLegacyCodecs,
parseCodecs,
codecsFromDefault,
isVideoCodec,
isAudioCodec,
muxerSupportsCodec,
browserSupportsCodec,
getMimeForCodec
} from '../src/codecs';
const supportedMuxerCodecs = [
'mp4a',
'avc1'
];
const unsupportedMuxerCodecs = [
'hvc1',
'ac-3',
'ec-3',
'mp3'
];
QUnit.module('Legacy Codecs');
QUnit.test('maps legacy AVC codecs', function(assert) {
assert.equal(
mapLegacyAvcCodecs('avc1.deadbeef'),
'avc1.deadbeef',
'does nothing for non legacy pattern'
);
assert.equal(
mapLegacyAvcCodecs('avc1.dead.beef, mp4a.something'),
'avc1.dead.beef, mp4a.something',
'does nothing for non legacy pattern'
);
assert.equal(
mapLegacyAvcCodecs('avc1.dead.beef,mp4a.something'),
'avc1.dead.beef,mp4a.something',
'does nothing for non legacy pattern'
);
assert.equal(
mapLegacyAvcCodecs('mp4a.something,avc1.dead.beef'),
'mp4a.something,avc1.dead.beef',
'does nothing for non legacy pattern'
);
assert.equal(
mapLegacyAvcCodecs('mp4a.something, avc1.dead.beef'),
'mp4a.something, avc1.dead.beef',
'does nothing for non legacy pattern'
);
assert.equal(
mapLegacyAvcCodecs('avc1.42001e'),
'avc1.42001e',
'does nothing for non legacy pattern'
);
assert.equal(
mapLegacyAvcCodecs('avc1.4d0020,mp4a.40.2'),
'avc1.4d0020,mp4a.40.2',
'does nothing for non legacy pattern'
);
assert.equal(
mapLegacyAvcCodecs('mp4a.40.2,avc1.4d0020'),
'mp4a.40.2,avc1.4d0020',
'does nothing for non legacy pattern'
);
assert.equal(
mapLegacyAvcCodecs('mp4a.40.40'),
'mp4a.40.40',
'does nothing for non video codecs'
);
assert.equal(
mapLegacyAvcCodecs('avc1.66.30'),
'avc1.42001e',
'translates legacy video codec alone'
);
assert.equal(
mapLegacyAvcCodecs('avc1.66.30, mp4a.40.2'),
'avc1.42001e, mp4a.40.2',
'translates legacy video codec when paired with audio'
);
assert.equal(
mapLegacyAvcCodecs('mp4a.40.2, avc1.66.30'),
'mp4a.40.2, avc1.42001e',
'translates video codec when specified second'
);
});
QUnit.test('translates legacy codecs', function(assert) {
assert.deepEqual(
translateLegacyCodecs(['avc1.66.30', 'avc1.66.30']),
['avc1.42001e', 'avc1.42001e'],
'translates legacy avc1.66.30 codec'
);
assert.deepEqual(
translateLegacyCodecs(['avc1.42C01E', 'avc1.42C01E']),
['avc1.42C01E', 'avc1.42C01E'],
'does not translate modern codecs'
);
assert.deepEqual(
translateLegacyCodecs(['avc1.42C01E', 'avc1.66.30']),
['avc1.42C01E', 'avc1.42001e'],
'only translates legacy codecs when mixed'
);
assert.deepEqual(
translateLegacyCodecs(['avc1.4d0020', 'avc1.100.41', 'avc1.77.41',
'avc1.77.32', 'avc1.77.31', 'avc1.77.30',
'avc1.66.30', 'avc1.66.21', 'avc1.42C01e']),
['avc1.4d0020', 'avc1.640029', 'avc1.4d0029',
'avc1.4d0020', 'avc1.4d001f', 'avc1.4d001e',
'avc1.42001e', 'avc1.420015', 'avc1.42C01e'],
'translates a whole bunch'
);
});
QUnit.module('parseCodecs');
QUnit.test('parses text only codec string', function(assert) {
assert.deepEqual(
parseCodecs('stpp.ttml.im1t'),
[{mediaType: 'text', type: 'stpp.ttml.im1t', details: ''}],
'parsed text only codec string'
);
});
QUnit.test('parses video only codec string', function(assert) {
assert.deepEqual(
parseCodecs('avc1.42001e'),
[{mediaType: 'video', type: 'avc1', details: '.42001e'}],
'parsed video only codec string'
);
});
QUnit.test('parses audio only codec string', function(assert) {
assert.deepEqual(
parseCodecs('mp4a.40.2'),
[{mediaType: 'audio', type: 'mp4a', details: '.40.2'}],
'parsed audio only codec string'
);
});
QUnit.test('parses video, audio, and text codec string', function(assert) {
assert.deepEqual(
parseCodecs('avc1.42001e, mp4a.40.2, stpp.ttml.im1t'),
[
{mediaType: 'video', type: 'avc1', details: '.42001e'},
{mediaType: 'audio', type: 'mp4a', details: '.40.2'},
{mediaType: 'text', type: 'stpp.ttml.im1t', details: ''}
],
'parsed video, audio, and text codec string'
);
});
QUnit.test('parses video, audio, and text codec with mixed case', function(assert) {
assert.deepEqual(
parseCodecs('AvC1.42001E, Mp4A.40.E, stpp.TTML.im1T'),
[
{mediaType: 'video', type: 'AvC1', details: '.42001E'},
{mediaType: 'audio', type: 'Mp4A', details: '.40.E'},
{mediaType: 'text', type: 'stpp.TTML.im1T', details: ''}
],
'parsed video, audio, and text codec string'
);
});
QUnit.test('parses two unknown codec', function(assert) {
assert.deepEqual(
parseCodecs('fake.codec, other-fake'),
[
{mediaType: 'unknown', type: 'fake.codec', details: ''},
{mediaType: 'unknown', type: 'other-fake', details: ''}
],
'parsed fake codecs as unknown'
);
});
QUnit.test('parses an unknown codec with a known audio', function(assert) {
assert.deepEqual(
parseCodecs('fake.codec, mp4a.40.2'),
[
{mediaType: 'unknown', type: 'fake.codec', details: ''},
{mediaType: 'audio', type: 'mp4a', details: '.40.2'}
],
'parsed audio and unknown'
);
});
QUnit.test('parses an unknown codec with a known video', function(assert) {
assert.deepEqual(
parseCodecs('avc1.42001e, other-fake'),
[
{mediaType: 'video', type: 'avc1', details: '.42001e'},
{mediaType: 'unknown', type: 'other-fake', details: ''}
],
'parsed video and unknown'
);
});
QUnit.test('parses an unknown codec with a known text', function(assert) {
assert.deepEqual(
parseCodecs('stpp.ttml.im1t, other-fake'),
[
{mediaType: 'text', type: 'stpp.ttml.im1t', details: ''},
{mediaType: 'unknown', type: 'other-fake', details: ''}
],
'parsed text and unknown'
);
});
QUnit.test('parses an unknown codec with a known audio/video/text', function(assert) {
assert.deepEqual(
parseCodecs('fake.codec, avc1.42001e, mp4a.40.2, stpp.ttml.im1t'),
[
{mediaType: 'unknown', type: 'fake.codec', details: ''},
{mediaType: 'video', type: 'avc1', details: '.42001e'},
{mediaType: 'audio', type: 'mp4a', details: '.40.2'},
{mediaType: 'text', type: 'stpp.ttml.im1t', details: ''}
],
'parsed video/audio/text and unknown codecs'
);
});
QUnit.module('codecsFromDefault');
QUnit.test('returns falsey when no audio group ID', function(assert) {
assert.notOk(
codecsFromDefault(
{ mediaGroups: { AUDIO: {} } },
'',
),
'returns falsey when no audio group ID'
);
});
QUnit.test('returns falsey when no matching audio group', function(assert) {
assert.notOk(
codecsFromDefault(
{
mediaGroups: {
AUDIO: {
au1: {
en: {
default: false,
playlists: [{
attributes: { CODECS: 'mp4a.40.2' }
}]
},
es: {
default: true,
playlists: [{
attributes: { CODECS: 'mp4a.40.5' }
}]
}
}
}
}
},
'au2'
),
'returned falsey when no matching audio group'
);
});
QUnit.test('returns falsey when no default for audio group', function(assert) {
assert.notOk(
codecsFromDefault(
{
mediaGroups: {
AUDIO: {
au1: {
en: {
default: false,
playlists: [{
attributes: { CODECS: 'mp4a.40.2' }
}]
},
es: {
default: false,
playlists: [{
attributes: { CODECS: 'mp4a.40.5' }
}]
}
}
}
}
},
'au1'
),
'returned falsey when no default for audio group'
);
});
QUnit.test('returns parsed audio codecs for default in audio group', function(assert) {
assert.deepEqual(
codecsFromDefault(
{
mediaGroups: {
AUDIO: {
au1: {
en: {
default: false,
playlists: [{
attributes: { CODECS: 'mp4a.40.2, mp4a.40.20' }
}]
},
es: {
default: true,
playlists: [{
attributes: { CODECS: 'mp4a.40.5, mp4a.40.7' }
}]
}
}
}
}
},
'au1'
),
[
{mediaType: 'audio', type: 'mp4a', details: '.40.5'},
{mediaType: 'audio', type: 'mp4a', details: '.40.7'}
],
'returned parsed codec audio profile'
);
});
QUnit.module('isVideoCodec');
QUnit.test('works as expected', function(assert) {
[
'av1',
'avc01',
'avc1',
'avc02',
'avc2',
'vp09',
'vp9',
'vp8',
'vp08',
'hvc1',
'hev1',
'theora',
'mp4v'
].forEach(function(codec) {
assert.ok(isVideoCodec(codec), `"${codec}" is seen as a video codec`);
assert.ok(isVideoCodec(` ${codec} `), `" ${codec} " is seen as video codec`);
assert.ok(isVideoCodec(codec.toUpperCase()), `"${codec.toUpperCase()}" is seen as video codec`);
});
['invalid', 'foo', 'mp4a', 'opus', 'vorbis'].forEach(function(codec) {
assert.notOk(isVideoCodec(codec), `${codec} is not a video codec`);
});
});
QUnit.module('isAudioCodec');
QUnit.test('works as expected', function(assert) {
[
'mp4a',
'flac',
'vorbis',
'opus',
'ac-3',
'ac-4',
'ec-3',
'alac',
'speex',
'aac',
'mp3'
].forEach(function(codec) {
assert.ok(isAudioCodec(codec), `"${codec}" is seen as an audio codec`);
assert.ok(isAudioCodec(` ${codec} `), `" ${codec} " is seen as an audio codec`);
assert.ok(isAudioCodec(codec.toUpperCase()), `"${codec.toUpperCase()}" is seen as an audio codec`);
});
['invalid', 'foo', 'bar', 'avc1', 'av1'].forEach(function(codec) {
assert.notOk(isAudioCodec(codec), `${codec} is not an audio codec`);
});
});
QUnit.module('muxerSupportsCodec');
QUnit.test('works as expected', function(assert) {
const validMuxerCodecs = [];
const invalidMuxerCodecs = [];
unsupportedMuxerCodecs.forEach(function(badCodec) {
invalidMuxerCodecs.push(badCodec);
supportedMuxerCodecs.forEach(function(goodCodec) {
invalidMuxerCodecs.push(`${goodCodec}, ${badCodec}`);
});
});
// generate all combinations of valid codecs
supportedMuxerCodecs.forEach(function(codec, i) {
validMuxerCodecs.push(codec);
supportedMuxerCodecs.forEach(function(_codec, z) {
if (z === i) {
return;
}
validMuxerCodecs.push(`${codec}, ${_codec}`);
validMuxerCodecs.push(`${codec},${_codec}`);
});
});
validMuxerCodecs.forEach(function(codec) {
assert.ok(muxerSupportsCodec(codec), `"${codec}" is supported`);
assert.ok(muxerSupportsCodec(` ${codec} `), `" ${codec} " is supported`);
assert.ok(muxerSupportsCodec(codec.toUpperCase()), `"${codec.toUpperCase()}" is supported`);
});
invalidMuxerCodecs.forEach(function(codec) {
assert.notOk(muxerSupportsCodec(codec), `${codec} not supported`);
});
});
QUnit.module('browserSupportsCodec', {
beforeEach() {
this.oldMediaSource = window.MediaSource;
},
afterEach() {
window.MediaSource = this.oldMediaSource;
}
});
QUnit.test('works as expected', function(assert) {
window.MediaSource = {isTypeSupported: () => true};
assert.ok(browserSupportsCodec('test'), 'isTypeSupported true, browser does support codec');
window.MediaSource = {isTypeSupported: () => false};
assert.notOk(browserSupportsCodec('test'), 'isTypeSupported false, browser does not support codec');
window.MediaSource = null;
assert.notOk(browserSupportsCodec('test'), 'no MediaSource, browser does not support codec');
window.MediaSource = {isTypeSupported: null};
assert.notOk(browserSupportsCodec('test'), 'no isTypeSupported, browser does not support codec');
});
QUnit.module('getMimeForCodec');
QUnit.test('works as expected', function(assert) {
// mp4
assert.equal(getMimeForCodec('vp9,mp4a'), 'video/mp4;codecs="vp9,mp4a"', 'mp4 video/audio works');
assert.equal(getMimeForCodec('vp9'), 'video/mp4;codecs="vp9"', 'mp4 video works');
assert.equal(getMimeForCodec('mp4a'), 'audio/mp4;codecs="mp4a"', 'mp4 audio works');
// webm
assert.equal(getMimeForCodec('vp8,opus'), 'video/webm;codecs="vp8,opus"', 'webm video/audio works');
assert.equal(getMimeForCodec('vp8'), 'video/webm;codecs="vp8"', 'webm video works');
assert.equal(getMimeForCodec('vorbis'), 'audio/webm;codecs="vorbis"', 'webm audio works');
// ogg
assert.equal(getMimeForCodec('theora,vorbis'), 'video/ogg;codecs="theora,vorbis"', 'ogg video/audio works');
assert.equal(getMimeForCodec('theora'), 'video/ogg;codecs="theora"', 'ogg video works');
// ogg will never be selected for audio only
// mixed
assert.equal(getMimeForCodec('opus'), 'audio/mp4;codecs="opus"', 'mp4 takes priority over everything');
assert.equal(getMimeForCodec('vorbis'), 'audio/webm;codecs="vorbis"', 'webm takes priority over ogg');
assert.equal(getMimeForCodec('foo'), 'video/mp4;codecs="foo"', 'mp4 is the default');
assert.notOk(getMimeForCodec(), 'invalid codec returns undefined');
assert.equal(getMimeForCodec('Mp4A.40.2,AvC1.42001E'), 'video/mp4;codecs="Mp4A.40.2,AvC1.42001E"', 'case is preserved');
assert.equal(getMimeForCodec('stpp.ttml.im1t'), 'application/mp4;codecs="stpp.ttml.im1t"', 'text is parsed');
});

197
node_modules/@videojs/vhs-utils/test/container.test.js generated vendored Normal file
View file

@ -0,0 +1,197 @@
import QUnit from 'qunit';
import {detectContainerForBytes, isLikelyFmp4MediaSegment} from '../src/containers.js';
import {stringToBytes, concatTypedArrays, toUint8} from '../src/byte-helpers.js';
const filler = (size) => {
const view = new Uint8Array(size);
for (let i = 0; i < size; i++) {
view[i] = 0;
}
return view;
};
const otherMp4Data = concatTypedArrays([0x00, 0x00, 0x00, 0x00], stringToBytes('stypiso'));
const id3Data = Array.prototype.slice.call(concatTypedArrays(
stringToBytes('ID3'),
// id3 header is 10 bytes without footer
// 10th byte is length 0x23 or 35 in decimal
// so a total length of 45
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x23],
// add in the id3 content
filler(35)
));
const id3DataWithFooter = Array.prototype.slice.call(concatTypedArrays(
stringToBytes('ID3'),
// id3 header is 20 bytes with footer
// "we have a footer" is the sixth byte
// 10th byte is length of 0x23 or 35 in decimal
// so a total length of 55
[0x00, 0x00, 0xFF, 0x00, 0x00, 0x00, 0x23],
// add in the id3 content
filler(45)
));
const testData = {
// EBML tag + dataSize
// followed by DocType + dataSize and then actual data for that tag
'mkv': concatTypedArrays([0x1a, 0x45, 0xdf, 0xa3, 0x99, 0x42, 0x82, 0x88], stringToBytes('matroska')),
'webm': concatTypedArrays([0x1a, 0x45, 0xdf, 0xa3, 0x99, 0x42, 0x82, 0x88], stringToBytes('webm')),
'flac': stringToBytes('fLaC'),
'ogg': stringToBytes('OggS'),
'aac': toUint8([0xFF, 0xF1]),
'ac3': toUint8([0x0B, 0x77]),
'mp3': toUint8([0xFF, 0xFB]),
'3gp': concatTypedArrays([0x00, 0x00, 0x00, 0x00], stringToBytes('ftyp3g')),
'mp4': concatTypedArrays([0x00, 0x00, 0x00, 0x00], stringToBytes('ftypiso')),
'mov': concatTypedArrays([0x00, 0x00, 0x00, 0x00], stringToBytes('ftypqt')),
'avi': toUint8([0x52, 0x49, 0x46, 0x46, 0x00, 0x00, 0x00, 0x00, 0x41, 0x56, 0x49]),
'wav': toUint8([0x52, 0x49, 0x46, 0x46, 0x00, 0x00, 0x00, 0x00, 0x57, 0x41, 0x56, 0x45]),
'ts': toUint8([0x47]),
// seq_parameter_set_rbsp
'h264': toUint8([0x00, 0x00, 0x00, 0x01, 0x67, 0x42, 0xc0, 0x0d, 0xd9, 0x01, 0xa1, 0xfa, 0x10, 0x00, 0x00, 0x03, 0x20, 0x00, 0x00, 0x95, 0xe0, 0xf1, 0x42, 0xa4, 0x80, 0x00, 0x00, 0x00, 0x01]),
// video_parameter_set_rbsp
'h265': toUint8([0x00, 0x00, 0x00, 0x01, 0x40, 0x01, 0x0c, 0x01, 0xff, 0xff, 0x24, 0x08, 0x00, 0x00, 0x00, 0x9c, 0x08, 0x00, 0x00, 0x00, 0x00, 0x78, 0x95, 0x98, 0x09, 0x00, 0x00, 0x00, 0x01])
};
// seq_parameter_set_rbsp
const h265seq = toUint8([
0x00, 0x00, 0x00, 0x01,
0x42, 0x01, 0x01, 0x21,
0x60, 0x00, 0x00, 0x00,
0x90, 0x00, 0x00, 0x00,
0x00, 0x00, 0x78, 0xa0,
0x0d, 0x08, 0x0f, 0x16,
0x59, 0x59, 0xa4, 0x93,
0x2b, 0x9a, 0x02, 0x00,
0x00, 0x00, 0x64, 0x00,
0x00, 0x09, 0x5e, 0x10,
0x00, 0x00, 0x00, 0x01
]);
const h264shortnal = Array.prototype.slice.call(testData.h264);
// drop a 0x00 from the leading 4 byte start code, leaving a 3 byte start code
h264shortnal.splice(0, 1);
// drop a 0x00 from the trailing 4 byte start code, leaving a 3 byte start code
h264shortnal.splice(h264shortnal.length - 2, 1);
const h265shortnal = Array.prototype.slice.call(testData.h265);
// drop a 0x00 from the leading 4 byte start code, leaving a 3 byte start code
h265shortnal.splice(0, 1);
// drop a 0x00 from the trailing 4 byte start code, leaving a 3 byte start code
h265shortnal.splice(h265shortnal.length - 2, 1);
QUnit.module('detectContainerForBytes');
QUnit.test('should identify known types', function(assert) {
Object.keys(testData).forEach(function(key) {
const data = new Uint8Array(testData[key]);
assert.equal(detectContainerForBytes(testData[key]), key, `found ${key} with Array`);
assert.equal(detectContainerForBytes(data.buffer), key, `found ${key} with ArrayBuffer`);
assert.equal(detectContainerForBytes(data), key, `found ${key} with Uint8Array`);
});
const mp4Bytes = concatTypedArrays([0x00, 0x00, 0x00, 0x00], stringToBytes('styp'));
assert.equal(detectContainerForBytes(mp4Bytes), 'mp4', 'styp mp4 detected as mp4');
// mp3/aac/flac/ac3 audio can have id3 data before the
// signature for the file, so we need to handle that.
['mp3', 'aac', 'flac', 'ac3'].forEach(function(type) {
const dataWithId3 = concatTypedArrays(id3Data, testData[type]);
const dataWithId3Footer = concatTypedArrays(id3DataWithFooter, testData[type]);
const recursiveDataWithId3 = concatTypedArrays(
id3Data,
id3Data,
id3Data,
testData[type]
);
const recursiveDataWithId3Footer = concatTypedArrays(
id3DataWithFooter,
id3DataWithFooter,
id3DataWithFooter,
testData[type]
);
const differentId3Sections = concatTypedArrays(
id3DataWithFooter,
id3Data,
id3DataWithFooter,
id3Data,
testData[type]
);
assert.equal(detectContainerForBytes(dataWithId3), type, `id3 skipped and ${type} detected`);
assert.equal(detectContainerForBytes(dataWithId3Footer), type, `id3 + footer skipped and ${type} detected`);
assert.equal(detectContainerForBytes(recursiveDataWithId3), type, `id3 x3 skipped and ${type} detected`);
assert.equal(detectContainerForBytes(recursiveDataWithId3Footer), type, `id3 + footer x3 skipped and ${type} detected`);
assert.equal(detectContainerForBytes(differentId3Sections), type, `id3 with/without footer skipped and ${type} detected`);
});
const notTs = concatTypedArrays(testData.ts, filler(188));
const longTs = concatTypedArrays(testData.ts, filler(187), testData.ts);
const unsyncTs = concatTypedArrays(filler(187), testData.ts, filler(187), testData.ts);
const badTs = concatTypedArrays(filler(188), testData.ts, filler(187), testData.ts);
assert.equal(detectContainerForBytes(longTs), 'ts', 'long ts data is detected');
assert.equal(detectContainerForBytes(unsyncTs), 'ts', 'unsynced ts is detected');
assert.equal(detectContainerForBytes(badTs), '', 'ts without a sync byte in 188 bytes is not detected');
assert.equal(detectContainerForBytes(notTs), '', 'ts missing 0x47 at 188 is not ts at all');
assert.equal(detectContainerForBytes(otherMp4Data), 'mp4', 'fmp4 detected as mp4');
assert.equal(detectContainerForBytes(new Uint8Array()), '', 'no type');
assert.equal(detectContainerForBytes(), '', 'no type');
assert.equal(detectContainerForBytes(h265seq), 'h265', 'h265 with only seq_parameter_set_rbsp, works');
assert.equal(detectContainerForBytes(h265shortnal), 'h265', 'h265 with short nals works');
assert.equal(detectContainerForBytes(h264shortnal), 'h264', 'h264 with short nals works');
});
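// Illustration only (not the library's exact algorithm, and not used by the
// tests): why the ts cases above pass or fail. A sync byte (0x47) has to show
// up within the first 188 bytes, and if the buffer is long enough, another
// 0x47 has to sit exactly one 188 byte packet later.
const looksLikeTs = (bytes) => {
  for (let i = 0; i < 188 && i < bytes.length; i++) {
    if (bytes[i] !== 0x47) {
      continue;
    }
    // buffers shorter than one full packet: a lone sync byte is accepted
    if (i + 188 >= bytes.length) {
      return true;
    }
    return bytes[i + 188] === 0x47;
  }
  return false;
};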
const createBox = function(type) {
const size = 0x20;
return concatTypedArrays(
// size bytes
[0x00, 0x00, 0x00, size],
// box identifier (the given type)
stringToBytes(type),
// filler data for size minus identifier and size bytes
filler(size - 8)
);
};
QUnit.module('isLikelyFmp4MediaSegment');
QUnit.test('works as expected', function(assert) {
const fmp4Data = concatTypedArrays(
createBox('styp'),
createBox('sidx'),
createBox('moof')
);
const mp4Data = concatTypedArrays(
createBox('ftyp'),
createBox('sidx'),
createBox('moov')
);
const fmp4Fake = concatTypedArrays(
createBox('test'),
createBox('moof'),
createBox('fooo'),
createBox('bar')
);
assert.ok(isLikelyFmp4MediaSegment(fmp4Data), 'fmp4 is recognized as fmp4');
assert.ok(isLikelyFmp4MediaSegment(fmp4Fake), 'fmp4 with moof and unknown boxes is still fmp4');
assert.ok(isLikelyFmp4MediaSegment(createBox('moof')), 'moof alone is recognized as fmp4');
assert.notOk(isLikelyFmp4MediaSegment(mp4Data), 'mp4 is not recognized');
assert.notOk(isLikelyFmp4MediaSegment(concatTypedArrays(id3DataWithFooter, testData.mp3)), 'bad data is not recognized');
assert.notOk(isLikelyFmp4MediaSegment(new Uint8Array()), 'no errors on empty data');
assert.notOk(isLikelyFmp4MediaSegment(), 'no errors on empty data');
});
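For context, a minimal usage sketch of the two container helpers exercised above, run against a fetched segment. The package-level import path (`@videojs/vhs-utils/es/containers`) is an assumption and may differ between the cjs and es builds:

```js
import {detectContainerForBytes, isLikelyFmp4MediaSegment} from '@videojs/vhs-utils/es/containers';

const inspectSegment = async (url) => {
  // only the first bytes matter for detection, but fetching the whole
  // segment keeps the sketch simple
  const response = await fetch(url);
  const bytes = new Uint8Array(await response.arrayBuffer());

  // e.g. 'mp4', 'ts', 'webm', or '' when nothing is recognized
  const container = detectContainerForBytes(bytes);

  if (container === 'mp4' && isLikelyFmp4MediaSegment(bytes)) {
    // a moof box is present, so treat this as a fragmented mp4 media segment
  }

  return container;
};
```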

View file

@ -0,0 +1,13 @@
import QUnit from 'qunit';
import decodeB64ToUint8Array from '../src/decode-b64-to-uint8-array.js';
QUnit.module('decodeB64ToUint8Array');
// slightly modified version of m3u8 test
// 'parses Widevine #EXT-X-KEY attributes and attaches to manifest'
QUnit.test('can decode', function(assert) {
const b64 = 'AAAAPnBzc2gAAAAA7e+LqXnWSs6jyCfc1R0h7QAAAB4iFnNoYWthX2NlYzJmNjRhYTc4OTBhMTFI49yVmwY';
const result = decodeB64ToUint8Array(b64);
assert.deepEqual(result.byteLength, 62, 'decoded');
});
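For context, a sketch of how the decoded bytes are typically used: the base64 string above is a Widevine `pssh` box from an HLS `#EXT-X-KEY` URI, and the decoded Uint8Array can be handed to EME as init data. The import path and the surrounding EME setup are assumptions here:

```js
import decodeB64ToUint8Array from '@videojs/vhs-utils/es/decode-b64-to-uint8-array';

// the base64 payload of a data: URI from a Widevine #EXT-X-KEY tag
const psshB64 = 'AAAAPnBzc2gAAAAA7e+LqXnWSs6jyCfc1R0h7QAAAB4iFnNoYWthX2NlYzJmNjRhYTc4OTBhMTFI49yVmwY';
const initData = decodeB64ToUint8Array(psshB64);

// pass the raw pssh bytes to an already-created MediaKeySession, e.g.
// session.generateRequest('cenc', initData);
```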

View file

@ -0,0 +1,57 @@
import QUnit from 'qunit';
import formatFiles from 'create-test-data!formats';
import parsingFiles from 'create-test-data!parsing';
import {parseData} from '../src/ebml-helpers.js';
import {doesCodecMatch, codecsFromFile} from './test-helpers.js';
const files = [];
// select only the webm/mkv files by extension
Object.keys(formatFiles).forEach((file) => {
const extension = file.split('.').pop();
if (extension === 'webm' || extension === 'mkv') {
files.push(file);
}
});
QUnit.module('parseData');
files.forEach((file) => QUnit.test(`${file} can be parsed for tracks and blocks`, function(assert) {
const {blocks, tracks} = parseData(formatFiles[file]());
const codecs = codecsFromFile(file);
assert.equal(tracks.length, Object.keys(codecs).length, 'tracks as expected');
tracks.forEach(function(track) {
assert.ok(doesCodecMatch(track.codec, codecs[track.type]), `${track.codec} is ${codecs[track.type]}`);
});
assert.ok(blocks.length, `has ${blocks.length} blocks`);
assert.notOk(blocks.filter((b) => !b.frames.length).length, 'all blocks have frame data');
}));
QUnit.test('xiph and ebml lacing', function(assert) {
const {blocks} = parseData(parsingFiles['xiph-ebml-lacing.mkv']());
assert.ok(blocks.length, `has ${blocks.length} blocks`);
assert.notOk(blocks.filter((b) => !b.frames.length).length, 'all blocks have frame data');
assert.equal(blocks[1].lacing, 1, 'xiph lacing');
assert.equal(blocks[2].lacing, 3, 'ebml lacing');
});
QUnit.test('fixed lacing', function(assert) {
const {blocks} = parseData(parsingFiles['fixed-lacing.mkv']());
assert.ok(blocks.length, `has ${blocks.length} blocks`);
assert.notOk(blocks.filter((b) => !b.frames.length).length, 'all blocks have frame data');
assert.equal(blocks[12].lacing, 2, 'fixed lacing');
});
QUnit.test('live data', function(assert) {
const {blocks} = parseData(parsingFiles['live.mkv']());
assert.ok(blocks.length, 6, 'has 6 blocks, even with "infinite" cluster dataSize');
assert.notOk(blocks.filter((b) => !b.frames.length).length, 'all blocks have frame data');
});
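For context, a minimal sketch of using `parseData` outside the test suite to inspect a webm/mkv buffer. The import path is an assumption, and the `track.type`/`track.codec` shapes are inferred from the assertions above:

```js
import {parseData} from '@videojs/vhs-utils/es/ebml-helpers';

// log the codec of every track and count the blocks in a webm/mkv buffer
const describeWebm = (bytes) => {
  const {tracks, blocks} = parseData(bytes);

  tracks.forEach((track) => {
    // as asserted above, each track carries a type and a parsed codec string
    console.log(`${track.type} track using ${track.codec}`);
  });

  console.log(`${blocks.length} blocks parsed`);
};
```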

Binary file not shown (repeated for 23 binary files in this diff).

Some files were not shown because too many files have changed in this diff.