Online Video Streaming Dictionary: Common Terms

Adaptive Streaming

Adaptive streaming is a technique used in video streaming that dynamically adjusts the quality of the video stream based on the viewer's internet connection speed. This ensures a smooth viewing experience with minimal buffering or interruptions.
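As a rough sketch, the quality switch a player makes can be modeled as picking the highest bitrate that fits the measured connection speed. The bitrate ladder and the 80% safety margin below are illustrative assumptions, not part of any standard; real players use more sophisticated heuristics.

```python
# Illustrative rendition selection for adaptive streaming.
# LADDER_KBPS and the 0.8 margin are assumptions for this sketch.

LADDER_KBPS = [400, 1200, 2500, 5000]  # available renditions, low to high

def pick_rendition(measured_throughput_kbps: float) -> int:
    """Return the highest bitrate that fits within 80% of measured throughput."""
    budget = measured_throughput_kbps * 0.8  # leave headroom for fluctuations
    candidates = [b for b in LADDER_KBPS if b <= budget]
    return max(candidates) if candidates else LADDER_KBPS[0]

print(pick_rendition(3500))  # 2500 (budget is 2800, so 5000 is too high)
print(pick_rendition(300))   # 400 (falls back to the lowest rendition)
```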


Analytics

Analytics in video streaming involves collecting, analyzing, and interpreting data on viewer behavior, engagement, and video performance. This information helps content creators optimize their streams and make data-driven decisions to enhance user experience.

Aspect Ratio

The aspect ratio of a video refers to the ratio between its width and height. Common aspect ratios include 16:9, which is widescreen, and 4:3, which is more traditional. This ratio affects how a video is displayed on different screens.
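For example, a resolution can be reduced to its aspect ratio by dividing both dimensions by their greatest common divisor:

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    """Reduce a resolution to its simplest width:height ratio."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

print(aspect_ratio(1920, 1080))  # 16:9 (widescreen)
print(aspect_ratio(640, 480))    # 4:3 (traditional)
```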

Audio Bitrate

Audio bitrate measures the amount of data used to encode audio per second, typically expressed in kilobits per second (kbps). Higher bitrates generally provide better sound quality, but they also require more bandwidth.
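As a back-of-the-envelope illustration, bitrate multiplied by duration gives the approximate file size:

```python
def audio_size_mb(bitrate_kbps: int, duration_s: int) -> float:
    """Approximate audio file size: bits per second x seconds, converted to MB."""
    bits = bitrate_kbps * 1000 * duration_s
    return bits / 8 / 1_000_000  # bits -> bytes -> megabytes

# A 3-minute track encoded at 128 kbps:
print(audio_size_mb(128, 180))  # 2.88 (MB)
```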


Bandwidth

Bandwidth refers to the maximum rate of data transfer across a network, typically measured in megabits per second (Mbps). In video streaming, higher bandwidth allows for faster data transfer, enabling smoother and higher-quality video playback.


Bit Depth

Bit depth refers to the number of bits used to represent the color of each pixel in a video or image. Higher bit depth allows for more color variations and greater detail, resulting in better overall image quality.
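The number of representable colors grows exponentially with bit depth; for an RGB image this works out as follows:

```python
def distinct_colors(bits_per_channel: int, channels: int = 3) -> int:
    """Number of representable colors for a given bit depth (RGB assumed)."""
    return 2 ** (bits_per_channel * channels)

print(distinct_colors(8))   # 16777216   (8-bit: ~16.7 million colors)
print(distinct_colors(10))  # 1073741824 (10-bit: ~1.07 billion colors)
```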


Bonding

Bonding in the context of streaming involves combining multiple internet connections to increase the overall bandwidth and provide a more stable and reliable connection. This is especially useful for live streaming in areas with poor internet coverage.


Buffer

A buffer in video streaming is a temporary storage area in the device's memory where video data is held before being played. This helps ensure continuous playback by compensating for fluctuations in internet speed.

Capture Cards

Capture cards are hardware devices used to convert video signals from cameras, game consoles, or other sources into a digital format that can be recorded or streamed. They are essential for high-quality video capture and streaming setups.

Closed Captions

Closed captions are textual representations of the spoken dialogue and other audio elements in a video. They appear on the screen to assist viewers who are deaf or hard of hearing, and can also be useful in noisy environments.

Cloud Multi-Platform Streaming

Cloud multi-platform streaming allows content creators to stream their videos to multiple platforms simultaneously using cloud-based services. This simplifies the process of reaching audiences on various social media and streaming platforms.


Codec

A codec is a software or hardware tool used to compress and decompress digital video and audio files. Codecs reduce file size and make it easier to transmit and store media. Common codecs include H.264, H.265, and VP9.

Content Delivery Network (CDN)

A Content Delivery Network (CDN) is a network of distributed servers that deliver web content, including video, more efficiently to users based on their geographic location. CDNs help reduce latency and improve load times.


DSLR

DSLR stands for Digital Single-Lens Reflex, a type of camera known for its high image quality and versatility. DSLRs are often used in professional video production and live streaming due to their interchangeable lenses and manual controls.

Data Rate

The data rate, or bitrate, of a video stream refers to the amount of data transferred per second, usually measured in megabits per second (Mbps). Higher data rates can result in better video quality but require more bandwidth.

Digital Rights Management (DRM)

Digital Rights Management (DRM) is a set of technologies used to protect copyrighted digital content from unauthorized use. DRM controls how digital media, like videos and music, can be accessed, copied, and distributed.


Embedding

Embedding involves integrating a video player or live stream into a web page using HTML code. This allows viewers to watch videos directly on the website without needing to navigate to an external platform.


Encoding

Encoding in video streaming is the process of converting raw video files into a digital format that is compatible with various devices and platforms. This process compresses the video data to reduce file size while maintaining quality.


Encryption

Encryption in the context of video streaming is the process of converting video data into a secure format that can only be accessed by authorized users. This ensures that the video content is protected from unauthorized viewing.


Frame

A frame in video refers to a single image in a sequence of images that make up a video. Videos are typically displayed at a rate of multiple frames per second (fps), creating the illusion of motion. Higher frame rates result in smoother motion.


Geoblocking

Geoblocking is a technology used to restrict access to internet content based on the user's geographical location. This is often used to enforce regional licensing agreements or content restrictions. For example, some video streaming services may only be available in certain countries.


H.264 (AVC)

H.264, also known as AVC (Advanced Video Coding), is a widely used video compression standard that provides high-quality video at relatively low bitrates. It is commonly used for streaming, recording, and distributing high-definition video content across various devices and platforms.


H.265 (HEVC)

H.265, also known as HEVC (High Efficiency Video Coding), is a video compression standard that offers better compression efficiency than H.264, allowing for higher quality video at lower bitrates.


HLS (HTTP Live Streaming)

HLS (HTTP Live Streaming) is a streaming protocol developed by Apple for delivering live and on-demand video content. It works by breaking the video stream into small segments and delivering them over HTTP, ensuring smooth playback.

HTML5 Video

HTML5 Video refers to the video element introduced in HTML5, which allows for embedding video content directly into web pages without the need for external plugins like Flash. It supports various video formats and provides a standardized way to play videos across different web browsers.
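A minimal embed using the HTML5 video element looks like this (the file name is a placeholder):

```html
<!-- Minimal HTML5 video embed; "movie.mp4" is a placeholder file name. -->
<video controls width="640">
  <source src="movie.mp4" type="video/mp4">
  Your browser does not support the video tag.
</video>
```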

Input Device

An input device in the context of video streaming is any hardware used to capture video or audio signals. Examples include cameras, microphones, and capture cards. These devices convert real-world audio and video into digital formats that can be streamed or recorded.

Integration API

An Integration API (Application Programming Interface) allows different software applications to communicate and work together. In video streaming, APIs enable integration with various platforms, services, and tools for enhanced functionality.

Internet Protocol (IP)

Internet Protocol (IP) is a set of rules governing the format of data sent over the internet or local network. It enables devices to communicate with each other by defining how data packets should be addressed, transmitted, and received. IP addresses are used to identify devices on the network.

Live Encoding

Live encoding is the process of converting live video and audio signals into digital formats in real-time for streaming over the internet. This involves compressing the data to reduce its size while maintaining quality, enabling it to be transmitted efficiently to viewers.

Lossless Compression

Lossless compression is a method of data compression that reduces file size without losing any original data. When the file is decompressed, it is restored to its original state. This is essential for applications where preserving the exact original quality is important, such as in archiving or professional video editing.
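This round-trip guarantee can be demonstrated with any lossless compressor, for example Python's built-in zlib:

```python
import zlib

# Repetitive data compresses well; any input would round-trip exactly.
original = b"lossless compression preserves every byte " * 100
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

print(len(compressed) < len(original))  # True: the compressed form is smaller
print(restored == original)             # True: decompression is exact
```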

Lossy Compression

Lossy compression reduces file size by permanently eliminating certain data, which can result in a loss of quality. This method is commonly used for streaming and storing media where some loss of detail is acceptable to achieve smaller file sizes and faster transmission speeds.

Lower Thirds

Lower thirds are graphical overlays placed in the lower portion of a video screen, often used to display text information such as names, titles, and other relevant details. They are commonly used in news broadcasts and live streams.


M3U8

M3U8 is a file format used for HLS streaming that contains the playlist of video segments. It directs the video player to the locations of the media files for smooth playback of the video stream.
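A minimal media playlist might look like this (segment names and durations are placeholders):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:6.0,
segment0.ts
#EXTINF:6.0,
segment1.ts
#EXT-X-ENDLIST
```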


MPEG-DASH

MPEG-DASH (Dynamic Adaptive Streaming over HTTP) is a streaming protocol that enables adaptive bitrate streaming of video content over the internet. It works by segmenting the video and delivering it over HTTP, similar to HLS.

Managed Cloud Interface

A managed cloud interface is a web-based platform that allows users to manage and control their cloud-based video streaming services. It provides tools for uploading, encoding, and distributing video content.


Metadata

Metadata is data that provides information about other data. In video streaming, metadata can include details such as the title, description, duration, and tags of a video. This information helps organize, search, and manage video content more effectively.

Multi-bitrate Streaming

Multi-bitrate streaming delivers video streams at multiple bitrates simultaneously, allowing the video player to switch between different quality levels based on the viewer's internet connection. This ensures an optimal viewing experience for all users.


Multicast

Multicast is a method of transmitting data to multiple recipients simultaneously over a network. Instead of sending individual streams to each user, a single stream is distributed to a group of users, which is more efficient and reduces bandwidth usage compared to unicast transmission.
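The bandwidth saving is easy to see: with unicast the source sends one copy per viewer, while with multicast it sends a single copy that the network replicates toward the group. A toy calculation:

```python
def source_egress_mbps(bitrate_mbps: float, viewers: int, multicast: bool) -> float:
    """Total outgoing bandwidth at the source for a live stream."""
    # Unicast: one copy per viewer. Multicast: a single copy.
    return bitrate_mbps if multicast else bitrate_mbps * viewers

print(source_egress_mbps(5.0, 1000, multicast=False))  # 5000.0 Mbps
print(source_egress_mbps(5.0, 1000, multicast=True))   # 5.0 Mbps
```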

OBS Studio

OBS Studio (Open Broadcaster Software) is a free and open-source software for video recording and live streaming. It provides powerful features for mixing video and audio, adding overlays, and streaming to multiple platforms.


Overlay

An overlay in video streaming refers to additional elements, such as text, graphics, or animations, that are displayed on top of the video content. Overlays are often used for branding, providing information, or enhancing the visual experience of the stream.

Packet Loss

Packet loss occurs when data packets traveling across a network fail to reach their destination. In video streaming, this can result in interruptions, buffering, or degraded video quality. Causes of packet loss include network congestion, faulty hardware, and poor signal strength.


Pixel

A pixel is the smallest unit of a digital image or video display. It is a tiny square of color that, when combined with millions of others, forms the complete image. The quality and detail of an image or video are determined by the number of pixels it contains, commonly referred to as resolution.

Platform-as-a-Service (PaaS)

Platform-as-a-Service (PaaS) is a cloud computing model that provides a platform and environment for developers to build, deploy, and manage applications. PaaS simplifies the development process by handling infrastructure and runtime management.


Playback

Playback refers to the act of watching or listening to a video or audio file. In the context of streaming, it involves retrieving the media data from a server and presenting it to the viewer in real-time or on-demand. The quality of playback can be affected by factors like internet speed and buffering.


Post-production

Post-production is the stage in video production that occurs after the initial recording. It involves editing, adding special effects, color correction, sound mixing, and other processes to enhance the final video product. This stage is crucial for ensuring the video meets the desired quality and artistic vision.


Protocol

A protocol is a set of rules and conventions for transmitting data across a network. In video streaming, protocols like RTMP, HLS, and WebRTC define how video data is packaged, transmitted, and received. These protocols ensure that video content is delivered smoothly and efficiently to viewers.


RTMP (Real-Time Messaging Protocol)

RTMP (Real-Time Messaging Protocol) is a protocol used for live streaming video, audio, and data over the internet. It is commonly used for transmitting video from an encoder to a streaming server or platform.


Resolution

Resolution refers to the number of pixels displayed on a screen, typically measured as width x height (e.g., 1920x1080). Higher resolution means more pixels and better image clarity, which results in sharper and more detailed video quality.
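Multiplying width by height gives the total pixel count:

```python
def pixel_count(width: int, height: int) -> int:
    """Total pixels in a frame at the given resolution."""
    return width * height

print(pixel_count(1920, 1080))  # 2073600  (~2.1 megapixels, Full HD)
print(pixel_count(3840, 2160))  # 8294400  (~8.3 megapixels, 4K UHD)
```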

Return Audio Channel (IFB)

An IFB (Interruptible Foldback) is a return audio channel used in broadcasting to provide direct communication between the producer and on-air talent. It allows real-time instructions and feedback during live broadcasts.


SDK (Software Development Kit)

An SDK (Software Development Kit) is a collection of software tools and libraries that developers use to create applications for specific platforms. In video streaming, SDKs provide functionalities for integrating streaming capabilities into apps.


SRT (Secure Reliable Transport)

SRT (Secure Reliable Transport) is a protocol for high-quality, low-latency video streaming over unreliable networks. It ensures secure and efficient transmission, making it ideal for live streaming applications.


Simulcast

Simulcast is the simultaneous broadcasting of the same video stream to multiple platforms or channels. This allows content creators to reach a wider audience by streaming live content on different websites or services at the same time.

Stream Health

Stream health indicates the overall quality and stability of a live video stream. It involves metrics such as bitrate, frame rate, latency, and buffering. Good stream health ensures a smooth and uninterrupted viewing experience for the audience.

Streaming Protocol

A streaming protocol is a set of rules that defines how video data is transmitted over the internet. Examples include RTMP, HLS, and WebRTC. These protocols ensure efficient delivery of video content from the server to the viewer's device.


Subtitles

Subtitles are text overlays on a video that provide a transcription or translation of the spoken dialogue. They assist viewers who are deaf or hard of hearing and can also be used for translating content into different languages.


Thumbnails

Thumbnails are small, representative images of videos used as previews. They provide a quick visual summary of the video content, helping viewers decide whether to watch the video. Thumbnails are often displayed on video platforms and in search results.


Transcoder

A transcoder is a tool or software that converts video files from one format to another. This process, known as transcoding, allows videos to be played on various devices and platforms by changing the file format, bitrate, or resolution.


Unicast

Unicast is a method of sending data over a network where each packet is transmitted from the source to a single, specific destination. In video streaming, unicast means each viewer receives an individual stream, as opposed to multicast, which sends a single stream to multiple users.

Upload Speed

Upload speed is the rate at which data is sent from a user's device to the internet, typically measured in megabits per second (Mbps). In video streaming, a higher upload speed is essential for transmitting high-quality video without interruptions or buffering.
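As a rough rule of thumb (a common recommendation, not a standard), streamers often aim for upload capacity of about 1.5x their stream's total bitrate to absorb fluctuations:

```python
def upload_is_sufficient(upload_mbps: float, stream_bitrate_mbps: float,
                         headroom: float = 1.5) -> bool:
    """True if the connection can carry the stream with a safety margin.

    The 1.5x headroom default is an illustrative assumption.
    """
    return upload_mbps >= stream_bitrate_mbps * headroom

print(upload_is_sufficient(10.0, 6.0))  # True  (needs 9.0 Mbps)
print(upload_is_sufficient(5.0, 6.0))   # False (needs 9.0 Mbps)
```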

User Experience (UX)

User Experience (UX) refers to the overall experience and satisfaction a user has when interacting with a product or service. In video streaming, UX includes the ease of navigation, video quality, load times, and the effectiveness of features like search and recommendations.


vMix

vMix is a software-based live video production tool that enables mixing, switching, recording, and live streaming of video content. It offers advanced features for creating professional-quality live productions.

Video Bitrate

Video bitrate is the amount of data processed per unit of time in a video stream, usually measured in kilobits per second (kbps) or megabits per second (Mbps). Higher bitrates generally result in better video quality but require more bandwidth.
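Bitrate multiplied by duration gives an estimate of recording size; for example, one hour of video at 5 Mbps:

```python
def video_size_gb(bitrate_mbps: float, duration_s: int) -> float:
    """Approximate recording size: megabits per second x seconds -> gigabytes."""
    megabits = bitrate_mbps * duration_s
    return megabits / 8 / 1000  # megabits -> megabytes -> gigabytes

print(video_size_gb(5.0, 3600))  # 2.25 (GB)
```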

Video Capture

Video capture involves recording video from a source, such as a camera, game console, or computer screen, and converting it into a digital format for streaming or storage. Capture devices and software are used to facilitate this process.

Video Player

A video player is software or hardware that allows users to view digital video files. It supports various video formats and provides controls for playing, pausing, rewinding, and adjusting playback settings. Examples include VLC Media Player and embedded web players.

Video Transcoding

Video transcoding is the process of converting a video file from one format or codec to another. This allows the video to be compatible with different devices, platforms, and quality requirements, ensuring broad accessibility.


Watermarking

Watermarking is the process of embedding a logo, text, or other identifier into a video to indicate ownership or provide branding. Watermarks can be visible or invisible and are used to protect content from unauthorized use and to promote brand recognition.


WebRTC

WebRTC (Web Real-Time Communication) is a technology that enables real-time audio, video, and data communication directly between web browsers and devices without the need for plugins. It is widely used for video conferencing and live streaming.


Webcast

A webcast is a live or on-demand broadcast of video content over the internet. Webcasts are commonly used for events like conferences, webinars, and live performances, allowing viewers to watch the content remotely in real-time or at their convenience.

White Label

White label streaming refers to a service that allows companies to brand and customize the streaming platform as their own. This means the streaming service can be fully branded with the company's logo, colors, and domain, providing a seamless experience for their audience.