
Our JIT Packager—A New Solution for Low-Latency HTTP Streaming

  • By Gcore
  • May 31, 2023
  • 7 min read

We are excited to announce a new Gcore development: our JIT (Just-In-Time) Packager. This solution facilitates simultaneous streaming across six protocols: HLS, DASH, L-HLS, Chunked CMAF/DASH, Apple Low Latency HLS, and HESP. In this article, we’ll explain why HLS and DASH streaming make low latency a challenge, dive into alternative technologies with exciting latency reduction potential, and then tell you about our JIT Packager—why we developed it, how it works, its capabilities, benefits, and results.

Why Is It Difficult to Achieve Low Latency Using Standard HLS and DASH Technologies?

The difficulty in achieving low latency with standard HLS and DASH technologies stems from their recommended segment length and buffer size guidelines, which can result in latency of twenty seconds or more. Let’s explore why this is the case.

In conventional internet streaming, technologies such as HLS (HTTP Live Streaming) and DASH (Dynamic Adaptive Streaming over HTTP) are commonly employed. These protocols are based on HTTP and divide video and audio content into small segments spanning a few seconds. This segmentation facilitates fast navigation, bitrate switching, and caching. The client receives a textual document containing the sequence of segments, their addresses, and additional metadata such as resolution, codecs, bitrate, duration, and language. However, following the protocols' recommended segment length and buffer size means the player starts playback several segments behind the live edge, which is where latency of twenty seconds or more comes from.

Here’s an example: let’s say we initiate transcoding and create segments with a duration of 6 seconds, then start playing the stream. The player first needs to fill its buffer by loading three segments. Suppose that at that moment the three fully formed segments closest to real time are segments 3, 4, and 5. Playback will therefore begin with segment 3, and the delay is easy to calculate from the segment duration: a minimum of 3 × 6 = 18 seconds.

Using shorter segments, such as 1–2 seconds, we can reduce the delay: segments with a duration of 2 seconds would result in a minimum delay of 6 seconds. However, this would require reducing the GOP (group of pictures) size. Reducing GOP size lowers encoding efficiency and increases traffic overhead, because each segment contains not only video and audio but also additional container overhead. There is also overhead from the HTTP protocol with each segment request.

This means that shorter segments lead to a larger number of segments and, consequently, higher overhead. With a large number of viewers constantly requesting segments, this would result in significant traffic consumption.
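
To make the arithmetic concrete, here is a minimal sketch. The three-segment startup buffer comes from the example above; the function name is ours.

```python
def min_startup_latency(segment_duration_s: float, buffered_segments: int = 3) -> float:
    """Minimum delay if the player buffers `buffered_segments`
    full segments before playback starts."""
    return segment_duration_s * buffered_segments

print(min_startup_latency(6))  # 18.0 seconds with 6-second segments
print(min_startup_latency(2))  # 6.0 seconds with 2-second segments
```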

Which Technologies Make It Possible to Reduce Latency?

To achieve lower latency in streaming, several specialized solutions can be utilized:

  1. L-HLS
  2. Chunked CMAF/DASH
  3. Apple Low Latency HLS
  4. HESP

These solutions differ from traditional HLS and DASH protocols in that they are specifically tailored for low-latency streaming.

Now, let’s dive into these protocols in more detail.

L-HLS

In the case of L-HLS, the client receives new fragments of the last segment as they become available. This is achieved by declaring the address of that segment in the playlist using a special PREFETCH tag. This makes it possible to significantly reduce latency, and the data path is shortened as follows:

  1. The server declares the address of the new live segment in the playlist in advance.
  2. The player requests this segment and starts receiving its first chunk as soon as it becomes available on the server.
  3. Without waiting for the next set of data, the player proceeds to play the received chunk.
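
On the player side, the effect of receiving a segment chunk by chunk can be sketched with the `requests` library. This is an illustration only; the URL is hypothetical, and a real player would append each chunk to its decode buffer instead of printing.

```python
import requests

# Hypothetical address of the live segment announced via the PREFETCH tag.
SEGMENT_URL = "https://origin.example.com/live/stream/last_segment.m4s"

# stream=True lets us read the chunked response as the origin produces it,
# instead of waiting for the whole segment to finish forming.
with requests.get(SEGMENT_URL, stream=True, timeout=10) as response:
    response.raise_for_status()
    for chunk in response.iter_content(chunk_size=None):
        print(f"received {len(chunk)} bytes before the segment is complete")
```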

Chunked CMAF/DASH

When it comes to chunked CMAF/DASH, the standard includes fields that control the timeline, update frequency, delay, and distance to the live edge of the playlist. The key enhancements in the dash.js v2.6.8 reference player are support for chunked transfer encoding and the Fetch API wherever possible, as well as delivery of data to the player as soon as it becomes available.

A low-latency stream is signaled using the Latency target and availabilityTimeOffset fields, which specify the target delay and allow fragment loading to begin before the full segment has finished forming.

By utilizing these technologies, it is possible to achieve delays in the range of 2–6 seconds, depending on the configuration and settings of both the server-side and player-side components. Furthermore, there is backward compatibility, allowing devices that do not understand low-latency formats to play back full segments as before.
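
As a rough, simplified sketch of what availabilityTimeOffset does (our own illustration, not an MPD parser): in live DASH a segment normally becomes requestable only once it has been fully produced, and the offset lets the request start that many seconds earlier.

```python
def earliest_request_time(segment_end_time_s: float,
                          availability_time_offset_s: float = 0.0) -> float:
    """Seconds after the availability start time at which a live segment
    that finishes encoding at `segment_end_time_s` may be requested."""
    return segment_end_time_s - availability_time_offset_s

# A segment ending at t=30 s is requestable at t=30 without the offset,
# or already at t=25 with availabilityTimeOffset="5".
print(earliest_request_time(30.0))       # 30.0
print(earliest_request_time(30.0, 5.0))  # 25.0
```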

Apple Low Latency HLS

Apple LL-HLS offers several latency optimization solutions, including:

  1. Generating partial segments as short as 200 ms, marked by X-PART in the playlist and available before the full segment forms. Outdated partial segments are regularly removed and replaced by full ones.
  2. Holding a playlist request on the server and responding only once the next update is available, rather than returning the current playlist immediately, so the server controls when updates are delivered.
  3. Transmitting only playlist differences to reduce data transfer volume.
  4. Announcing soon-to-be-available partial segments with the new PRELOAD-HINT tag, enabling clients to request early and servers to respond once data is available.
  5. Facilitating faster video quality switching with the RENDITION-REPORT tag, which records information about the last segments and partial segments of adjacent renditions' playlists.
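
For illustration, here is a sketch that assembles a heavily simplified LL-HLS media playlist using the tags mentioned above. Real playlists carry more tags and attributes; the segment names, durations, and values here are made up.

```python
def build_ll_hls_playlist(media_sequence: int, full_segments: list[str],
                          parts: list[str], next_part: str) -> str:
    """Assemble a minimal, simplified LL-HLS media playlist."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:9",
        "#EXT-X-TARGETDURATION:6",
        "#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=0.6",
        "#EXT-X-PART-INF:PART-TARGET=0.2",
        f"#EXT-X-MEDIA-SEQUENCE:{media_sequence}",
    ]
    for uri in full_segments:  # completed 6-second segments
        lines += ["#EXTINF:6.0,", uri]
    for uri in parts:          # 200 ms partial segments of the segment in progress
        lines.append(f'#EXT-X-PART:DURATION=0.2,URI="{uri}"')
    # Announce the partial segment that will become available next.
    lines.append(f'#EXT-X-PRELOAD-HINT:TYPE=PART,URI="{next_part}"')
    return "\n".join(lines) + "\n"

print(build_ll_hls_playlist(
    media_sequence=120,
    full_segments=["seg120.m4s", "seg121.m4s"],
    parts=["seg122.part0.m4s", "seg122.part1.m4s"],
    next_part="seg122.part2.m4s",
))
```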

Only Apple LL-HLS works natively on Apple devices, making its implementation necessary for low-latency streaming on these devices.

HESP

HESP (High Efficiency Stream Protocol) is an adaptive video streaming protocol based on HTTP, designed for ultra-low-latency streaming. It is capable of delivering video with a delay of up to 2 seconds. Unlike the previous solutions, HESP requires 10–20% less bandwidth for streaming because it allows the use of longer GOP (group of pictures) durations.

The player first receives a JSON manifest containing stream information and timing; the media itself is delivered using chunked transfer encoding. Streaming takes place over two streams: the Initialization Stream and the Continuation Stream.

The Initialization Stream contains only I-frames (keyframes), so the player can request an image from it at any point in time to initiate playback. As soon as it has received any image from the Initialization Stream, playback can begin, and from then on the Continuation Stream is used.

This enables fast and uninterrupted video transmission and playback in the user’s player, as well as seamless quality switching. The illustration demonstrates an example where one video quality is initially played and then switched to another, with the Initialization Stream requested once.
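
A minimal sketch of the startup sequence described above, with hypothetical URLs and stand-in print statements; this illustrates the idea of the two streams, not the HESP specification itself.

```python
import requests

# Hypothetical HESP stream URLs for one rendition.
INIT_STREAM_URL = "https://origin.example.com/hesp/720p/init"
CONTINUATION_STREAM_URL = "https://origin.example.com/hesp/720p/cont"

# 1. Fetch a single I-frame from the Initialization Stream; any point in
#    that stream is a valid entry point, so playback can start at once.
keyframe = requests.get(INIT_STREAM_URL, timeout=10).content
print(f"starting playback from a {len(keyframe)}-byte keyframe")

# 2. Switch to the Continuation Stream and feed it to the decoder chunk
#    by chunk as it arrives over chunked transfer encoding.
with requests.get(CONTINUATION_STREAM_URL, stream=True, timeout=10) as cont:
    for chunk in cont.iter_content(chunk_size=None):
        print(f"appending {len(chunk)} bytes to the decode buffer")
```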

Why We Decided to Develop Our Own JIT Packager

To implement all these protocols, we decided to create our own solution. There are several reasons behind this decision:

  1. Independence from vendors: Relying on the quality of a third-party solution comes with challenges. For example, if any issues were to arise, we would be unable to address them until the vendor resolves them—assuming they are even willing to make the necessary changes and/or improvements.
  2. Gcore’s infrastructure: We have our own global infrastructure, which spans from processing servers to content delivery network. Our development team has the expertise and resources needed to implement our own solution.
  3. Common features among the integrated technologies: The shared characteristics of the technologies we evaluated allow for seamless integration within a unified system.
  4. Customizable metrics and monitoring: With our own solution, we can set up metrics and monitoring according to our preferences and with our own customization options.
  5. Adaptability to our and our clients’ needs: Having our own solution enables us to quickly adapt and customize it to specific tasks and client requirements.
  6. Future development opportunities: Developing our own solution empowers us to evolve in any direction. As new protocols and technologies emerge, we can seamlessly add them to our existing stack.
  7. Backward compatibility with existing solutions: Maintaining backward compatibility is essential, and with our own solution we can carefully assess how any new changes may impact clients who previously relied on our prior solution.

When considering the specific technologies, not all third-party solutions support Apple LL-HLS and HESP. For instance, the Apple Media Stream Segmenter is limited to MPEG-2 TS over UDP, only runs on macOS, and writes files to the file system. The HESP packager + HTTP origin, on the other hand, transmits files via Redis and is written in TypeScript.

It’s important to note that relying on these external solutions consumes resources, introduces delays and dependencies, and can impact parallelism and scalability. Moreover, managing a diverse array of solutions can complicate maintenance and support.

Operation Principle of Our JIT Packager

The operation of our JIT Packager can be outlined as follows:

  1. The transcoder sends streams to our Packager.
  2. The Packager generates all the necessary segments and playlists on the fly.
  3. Clients request streams from an edge node of the CDN.
  4. Thanks to the API, the CDN already knows which server to fetch the content from and retrieves it.
  5. The response is cached in the chunked-proxy’s memory.
  6. For all other clients, it is served directly from the cache.

On average, we achieved a caching rate of approximately 80%.

Results

Let’s take a look at what we have accomplished with our JIT Packager.

Simultaneous Video Streaming in HLS, DASH, and Low-Latency Formats

We have successfully developed a unique JIT Packager capable of simultaneously streaming video in HLS, DASH, and all currently available low-latency streaming formats. It accepts video and audio streams in fragmented MP4 format from the transcoder. The server directly extracts all necessary media data from the MP4 files and dynamically generates initialization segments, corresponding playlists, and video fragments for streaming in all the mentioned modes with minimal delays. Subsequently, the streams become available for distribution via a CDN.
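
The fragmented MP4 input is a sequence of ISO-BMFF boxes: ftyp and moov for initialization, then repeating moof/mdat pairs per fragment. As a hedged illustration of the kind of parsing involved (deliberately simplified, not our actual packager code), here is a minimal top-level box walker; the file name is hypothetical.

```python
import struct
from typing import BinaryIO, Iterator, Tuple

def iter_boxes(stream: BinaryIO) -> Iterator[Tuple[str, bytes]]:
    """Yield (box_type, payload) for each top-level ISO-BMFF box.
    Simplified: does not handle size == 0 (box extends to end of file)."""
    while True:
        header = stream.read(8)
        if len(header) < 8:
            return
        size, box_type = struct.unpack(">I4s", header)
        if size == 1:
            # size == 1 means a 64-bit "largesize" field follows the type.
            size = struct.unpack(">Q", stream.read(8))[0]
            payload = stream.read(size - 16)
        else:
            payload = stream.read(size - 8)
        yield box_type.decode("ascii", errors="replace"), payload

# Example: list the boxes of a fragment (path is hypothetical).
with open("fragment.m4s", "rb") as f:
    for box_type, payload in iter_boxes(f):
        print(box_type, len(payload))
```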

HTTP Server for Video Distribution in Various Streaming Formats

Our solution operates within an internal network using HTTP/1.1 without TLS. TLS offers no benefits in this context and would only introduce unnecessary overhead, requiring us to encrypt the entire stream once again. Instead, data is transmitted using chunked transfer encoding.

As a result, we have not only developed a Packager but also an HTTP server capable of delivering video in all the previously mentioned formats. Moreover, the same video and audio streams are utilized for each format, ensuring efficient resource utilization.
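
As a rough sketch of delivering data over plain HTTP/1.1 with chunked transfer encoding, using only the Python standard library; the fragment generator is a stand-in for media coming from the packager, not our production server.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def fragment_source():
    """Stand-in for media fragments arriving from the packager."""
    for i in range(5):
        yield f"fragment {i}".encode()

class ChunkedHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # required for chunked transfer encoding

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "video/mp4")
        self.send_header("Transfer-Encoding", "chunked")
        self.end_headers()
        for fragment in fragment_source():
            # Each chunk: hex length, CRLF, data, CRLF.
            self.wfile.write(f"{len(fragment):X}\r\n".encode())
            self.wfile.write(fragment + b"\r\n")
        self.wfile.write(b"0\r\n\r\n")  # terminating zero-length chunk

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ChunkedHandler).serve_forever()
```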

DVR Functionality for Catch-Up Viewing

We have implemented DVR functionality so that users who have missed a live broadcast can rewind and catch up. All microsegments are stored in a separate cache in the server’s memory. Subsequently, they are merged and cached on disk as complete video fragments. These complete segments are then served during rewind (catch-up) playback. DVR segments are automatically deleted after a certain period of time has elapsed.
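
A simplified sketch of the DVR idea described above: microsegments held in memory are concatenated into a complete fragment on disk, and fragments older than a retention window are removed. The directory, file naming, and four-hour window are assumptions for illustration only.

```python
import time
from pathlib import Path

DVR_DIR = Path("/var/cache/dvr")     # hypothetical on-disk DVR cache
RETENTION_SECONDS = 4 * 60 * 60      # hypothetical 4-hour DVR window

def merge_parts(parts: list[bytes], segment_name: str) -> Path:
    """Merge in-memory microsegments into one complete segment on disk."""
    DVR_DIR.mkdir(parents=True, exist_ok=True)
    target = DVR_DIR / segment_name
    with open(target, "wb") as out:
        for part in parts:
            out.write(part)
    return target

def expire_old_segments() -> None:
    """Delete DVR segments older than the retention window."""
    now = time.time()
    for path in DVR_DIR.glob("*.m4s"):
        if now - path.stat().st_mtime > RETENTION_SECONDS:
            path.unlink()
```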

Efficient Caching with Chunked-Proxy on CDN Nodes

When it comes to protocols utilizing chunked transfer encoding, it is important to note that not all CDNs support caching files before they are fully downloaded from the origin server. While nginx, acting as a proxy server, is capable of handling sources with chunked transfer and proxying their responses chunk by chunk, subsequent requests are bypassed and sent directly to the source until the entire response is completed. The cache is only utilized when the complete response is available. However, this approach proves ineffective for efficient scaling of low-latency video streaming, particularly when a significant number of viewers are likely to access the last segment simultaneously.

To address this challenge, we have implemented a separate caching service for chunked-proxy requests on each CDN node. Its key feature lies in the ability to cache partial HTTP responses. This means that while the first client initiating the request to the source receives its response, any number of clients desiring the same response will be served by our server with minimal overall delay. The already-received portions will be immediately delivered, while the rest will be provided as they arrive from the source. This caching service stores the passing requests in the server’s memory, allowing us to reduce latency compared to storing fragments on disk.

Memory usage limits are also taken into account: if the total cache size reaches the limit, entries are evicted in least-recently-used order. Furthermore, we have developed a specialized API that enables CDN edge nodes to determine the content’s location in advance.
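
To illustrate the core idea of caching a response that is still being received, here is a simplified, single-threaded sketch. The real service additionally handles concurrency, HTTP headers, streaming to waiting clients, and precise memory accounting; the class and method names are ours.

```python
from collections import OrderedDict

class PartialResponseCache:
    """Cache that can serve the already-received part of a response while
    the rest is still arriving, evicting least recently used entries."""

    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self.entries: OrderedDict[str, dict] = OrderedDict()

    def start(self, url: str) -> None:
        """Register a response that has just started arriving from the origin."""
        self.entries[url] = {"chunks": [], "complete": False, "size": 0}

    def append(self, url: str, chunk: bytes, complete: bool = False) -> None:
        """Store the next chunk of a response; mark it complete when done."""
        entry = self.entries[url]
        entry["chunks"].append(chunk)
        entry["size"] += len(chunk)
        entry["complete"] = complete
        self._evict()

    def read(self, url: str):
        """Return the chunks received so far and a completeness flag. A real
        service would also push new chunks to waiting clients as they arrive."""
        self.entries.move_to_end(url)  # mark as recently used
        entry = self.entries[url]
        return entry["chunks"], entry["complete"]

    def _evict(self) -> None:
        """Drop least recently used entries once the total size exceeds the limit."""
        while len(self.entries) > 1 and \
                sum(e["size"] for e in self.entries.values()) > self.max_bytes:
            self.entries.popitem(last=False)
```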

Conclusion

The development of our JIT Packager has allowed us to achieve our goals in low-latency streaming. We can stream through multiple advanced protocols simultaneously without relying on third-party vendors, significantly improving the user experience. We can promptly respond to incidents and adapt the solution to meet client needs more efficiently.

But we’re not stopping there. Our plans include further reducing latency while maintaining quality and playback stability. We are also working on optimizing the system as a whole, adding more metrics for monitoring and control, and continuing to push the boundaries of innovation in the field.

We are excited about the possibilities ahead and remain dedicated to delivering our users high-quality, low-latency streaming experiences.
