First release

This commit is contained in: commit fa6c85266e
2339 changed files with 761050 additions and 0 deletions
node_modules/mux.js/docs/captions.md
# In-Band Captions

Captions come in two varieties, based on their relationship to the video. Typically on the web, captions are delivered as a separate file and associated with a video through the `<track>` element. This type of caption is sometimes referred to as *out-of-band*.

The alternative method involves embedding the caption data directly into the video content and is sometimes called *in-band captions*. In-band captions exist in many videos today that were originally encoded for broadcast, and they are also a standard method used to provide captions for live events. In-band HLS captions follow the CEA-708 standard.

In this project, in-band captions are parsed using a [CaptionStream][caption-stream]. For MPEG2-TS sources, the CaptionStream is used as part of the [Transmuxer TS Pipeline][transmuxer]. For ISOBMFF sources, the CaptionStream is used as part of the [MP4 CaptionParser][mp4-caption-parser].
## Is my stream CEA-608/CEA-708 compatible?

If you are not getting the caption data you expect out of Mux.js, take a look at our [Troubleshooting Guide](/docs/troubleshooting.md#608708-caption-parsing) to ensure your content is compatible.
# Useful Tools

- [CCExtractor][cc-extractor]
- [Thumbcoil][thumbcoil]
# References

- [Rec. ITU-T H.264][h264-spec]: the H.264 video data specification. CEA-708 captions are encapsulated in supplemental enhancement information (SEI) network abstraction layer (NAL) units within the video stream.
- [ANSI/SCTE 128-1][ansi-scte-spec]: the binary encapsulation of caption data within an SEI `user_data_registered_itu_t_t35` payload.
- CEA-708-E: describes the framing and interpretation of caption data reassembled out of the picture user data blobs.
- CEA-608-E: specifies the hex-to-character mapping for extended language characters.
- [Closed Captioning Intro by Technology Connections](https://www.youtube.com/watch?v=6SL6zs2bDks)
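The encapsulation these references describe can be sketched as a small parser. This is an illustrative sketch of the `user_data_registered_itu_t_t35` byte layout per ANSI/SCTE 128-1 only, not mux.js's actual implementation (see `lib/m2ts/caption-stream.js` for the real one); it assumes the payload starts at the country-code byte and that emulation-prevention bytes have already been removed.

```javascript
// Sketch: pull CEA-708 cc_data triples out of an SEI
// user_data_registered_itu_t_t35 payload (byte layout per ANSI/SCTE 128-1).
// Illustrative only -- not mux.js's actual parser.
function parseUserData(bytes) {
  // itu_t_t35_country_code 0xB5 (USA) and provider code 0x0031 (ATSC)
  if (bytes[0] !== 0xb5 || bytes[1] !== 0x00 || bytes[2] !== 0x31) {
    return null;
  }
  // user_identifier must be 'GA94' with user_data_type_code 0x03 (cc_data)
  var id = String.fromCharCode(bytes[3], bytes[4], bytes[5], bytes[6]);
  if (id !== 'GA94' || bytes[7] !== 0x03) {
    return null;
  }
  var ccCount = bytes[8] & 0x1f; // low 5 bits of the flags byte
  var triples = [];
  // offset 9 is em_data; cc_data triples start at offset 10
  for (var i = 0; i < ccCount; i++) {
    var o = 10 + i * 3;
    triples.push({
      ccValid: (bytes[o] & 0x04) !== 0, // cc_valid bit
      ccType: bytes[o] & 0x03,          // 0/1: CEA-608 field 1/2; 2/3: DTVCC
      ccData1: bytes[o + 1],
      ccData2: bytes[o + 2]
    });
  }
  return triples;
}
```

Triples with `ccType` 0 or 1 carry the CEA-608 byte pairs for fields 1 and 2; types 2 and 3 carry CEA-708 (DTVCC) service data.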
[h264-spec]: https://www.itu.int/rec/T-REC-H.264
[ansi-scte-spec]: https://www.scte.org/documents/pdf/Standards/ANSI_SCTE%20128-1%202013.pdf
[caption-stream]: /lib/m2ts/caption-stream.js
[transmuxer]: /lib/mp4/transmuxer.js
[mp4-caption-parser]: /lib/mp4/caption-parser.js
[thumbcoil]: http://thumb.co.il/
[cc-extractor]: https://github.com/CCExtractor/ccextractor
node_modules/mux.js/docs/diagram.png (binary file, 47 KiB; not shown)
node_modules/mux.js/docs/streams.md
# Streams

mux.js uses a concept of `streams` to allow a flexible architecture to read and manipulate video bitstreams. The `streams` are loosely based on [Node Streams][node-streams] and are meant to be composable into a series of streams: a `pipeline`.

## Stream API

Take a look at the base [Stream][stream] to get an idea of the methods and events available. In general, data is `push`ed into a stream and is `flush`ed out of a stream. Streams can be connected by calling `pipe` on the source `Stream` and passing in the destination `Stream`. `data` events correspond to `push`es and `done` events correspond to `flush`es.
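As a concrete illustration of the push/flush/pipe pattern described above, here is a minimal self-contained sketch. It mirrors the shape of the base `Stream` but is not mux.js's actual implementation, and `DoubleStream` is a made-up example stage:

```javascript
// Minimal sketch of the push/flush/pipe pattern -- illustrative only,
// not mux.js's actual Stream implementation.
function Stream() {
  this.listeners = {};
}
Stream.prototype.on = function (type, listener) {
  (this.listeners[type] = this.listeners[type] || []).push(listener);
};
Stream.prototype.trigger = function (type, data) {
  (this.listeners[type] || []).forEach(function (fn) { fn(data); });
};
// data is push()ed in; stages transform it and trigger 'data'
Stream.prototype.push = function (data) {
  this.trigger('data', data);
};
// flush() signals end of input; 'done' propagates down the pipeline
Stream.prototype.flush = function () {
  this.trigger('done');
};
// connect two streams: our 'data'/'done' events feed the destination
Stream.prototype.pipe = function (destination) {
  this.on('data', function (data) { destination.push(data); });
  this.on('done', function () { destination.flush(); });
  return destination;
};

// A made-up pipeline stage that doubles every number pushed into it.
function DoubleStream() {
  Stream.call(this);
}
DoubleStream.prototype = Object.create(Stream.prototype);
DoubleStream.prototype.push = function (n) {
  this.trigger('data', n * 2);
};

var source = new DoubleStream();
var sink = new Stream();
var results = [];
sink.push = function (n) { results.push(n); };
source.pipe(sink);
source.push(21); // results is now [42]
source.flush();  // fires 'done' on source, flushing the sink
```

The real pipeline stages work the same way, but each stage parses or transforms a piece of the bitstream instead of doubling numbers.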
## MP4 Transmuxer

An example of a `pipeline` is contained in the [MP4 Transmuxer][mp4-transmuxer]. This diagram shows the whole pipeline, including the flow of data from beginning to end:



You can gain a better understanding of what is going on by using our [debug page][debug-demo] and following the bytes through the pipeline:

```bash
npm start
```

and going to `http://localhost:9999/debug/`.
[node-streams]: https://nodejs.org/api/stream.html
[mp4-transmuxer]: ../lib/mp4/transmuxer.js
[stream]: ../lib/utils/stream.js
[debug-demo]: ../debug/index.html
node_modules/mux.js/docs/test-content.md
# Creating Test Content

## Table of Contents

- [CEA-608 Content](#creating-cea-608-content)

## Creating CEA-608 Content

- Use ffmpeg to create an MP4 file to start with:

  `ffmpeg -f lavfi -i testsrc=duration=300:size=1280x720:rate=30 -profile:v baseline -pix_fmt yuv420p output.mp4` (no audio)

  `ffmpeg -f lavfi -i testsrc=duration=300:size=1280x720:rate=30 -profile:v baseline -pix_fmt yuv420p -filter_complex "anoisesrc=d=300" output.mp4` (audio + video)

  This uses ffmpeg's built-in `testsrc` source, which generates a test video pattern with a color pattern and timestamp. For this example, we use a duration of `300` seconds, a size of `1280x720`, and a framerate of `30fps`. We also specify the extra settings `profile` and `pix_fmt` to force the output to be encoded as `avc1.42C01F`.
- Create an [srt file][srt] with the captions you would like to see and their timestamps.
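  For example, a minimal `captions.srt` (the cue text and timing values here are arbitrary placeholders; note that SubRip uses a comma as the decimal separator):

  ```
  1
  00:00:01,000 --> 00:00:04,000
  First test caption

  2
  00:00:05,000 --> 00:00:08,000
  Second test caption
  ```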
- Use ffmpeg to convert `output.mp4` to an flv file:

  `ffmpeg -i output.mp4 -acodec copy -vcodec copy output.flv`

- Use [libcaption][] to embed the captions into the flv:

  `flv+srt output.flv captions.srt with-captions.flv`

- Use ffmpeg to convert `with-captions.flv` to mp4:

  `ffmpeg -i with-captions.flv -acodec copy -vcodec copy with-captions.mp4`

- Use [Bento4][bento4] to convert the file into an FMP4 file:

  `bento4 mp4fragment with-captions.mp4 \
  --verbosity 3 \
  --fragment-duration 4000 \
  --timescale 90000 \
  with-captions-fragment.mf4`
Then do *either* of the following:

- Use [Bento4][bento4] to split the file into an init segment and FMP4 media segments:

  `bento4 mp4split --verbose \
  --init-segment with-captions-init.mp4 \
  --media-segment segs/with-captions-segment-%llu.m4s \
  with-captions-fragment.mf4`

- Use [Bento4][bento4] to create a DASH manifest:

  `bento4 mp4dash -v \
  --mpd-name=with-captions.mpd \
  --init-segment=with-captions-init.mp4 \
  --subtitles \
  with-captions-fragment.mf4`

  This will create a DASH MPD and media segments in a new directory called `output`.
[srt]: https://en.wikipedia.org/wiki/SubRip#SubRip_text_file_format
[libcaption]: https://github.com/szatmary/libcaption
[bento4]: https://www.bento4.com/documentation/
node_modules/mux.js/docs/troubleshooting.md
# Troubleshooting Guide

## Table of Contents

- [608/708 Caption Parsing][caption-parsing]

## 608/708 Caption Parsing
**I have a stream with caption data in more than one field, but only captions from one field are being returned**

You may want to confirm that the SEI NAL units are constructed according to the CEA-608 or CEA-708 specification. Specifically:

- control codes/commands are doubled
- control codes from 0x14, 0x20 through 0x14, 0x2f in field 1 are replaced with 0x15, 0x20 through 0x15, 0x2f when used in field 2
- control codes from 0x1c, 0x20 through 0x1c, 0x2f in field 1 are replaced with 0x1d, 0x20 through 0x1d, 0x2f when used in field 2
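The substitution in the last two bullets amounts to flipping the first byte of the control-code pair. A tiny illustrative helper (not part of mux.js):

```javascript
// First byte of a CEA-608 control-code pair: 0x14/0x1c in field 1
// become 0x15/0x1d when the same command is carried in field 2.
// Illustrative helper only -- not part of mux.js.
function field2ControlByte(byte1) {
  if (byte1 === 0x14) return 0x15;
  if (byte1 === 0x1c) return 0x1d;
  return byte1;
}
```

The second byte of the pair (0x20 through 0x2f) is unchanged between fields.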
**I am getting the wrong times for some captions, while others are correct**

If your content includes B-frames, there is an existing [issue](https://github.com/videojs/mux.js/issues/214) tracking this problem. We recommend ensuring that all segments in your source start with keyframes, to allow proper caption timing.
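The underlying issue is that caption SEI payloads arrive in decode order, which can differ from presentation order when B-frames are present. As a naive illustration (not what mux.js does internally, and the `startPts` field name is hypothetical), out-of-order cues could be re-sorted by presentation timestamp:

```javascript
// With B-frames, captions parsed in decode order may be out of
// presentation order. Re-sorting by presentation timestamp is a naive
// illustration only; 'startPts' is a hypothetical field name.
function sortByPresentationTime(cues) {
  return cues.slice().sort(function (a, b) {
    return a.startPts - b.startPts;
  });
}
```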
[caption-parsing]: /docs/troubleshooting.md#608708-caption-parsing