
Posts tagged as “hlg”

HDR Brightness Standards vs. Reality

花花:

For 1000-nit HLG, the recommended paper white (95% reflectance) is only 203 nits, leaving about 2.5 stops for highlights. But 203 nits is far below the typical brightness of ordinary SDR monitors (300-450 nits), which makes people wonder why HDR looks so dim. Apple's 1600-nit MBP XDR screen shows a 75% HLG signal at roughly 450 nits, leaving only about 2 stops for highlights, which I personally find not quite enough. Meanwhile the latest iPad Pro and Studio Display XDR push SDR all the way to 1000 nits, with HDR at 2000 nits, a gap of only 1 stop (though paper white in HDR mode presumably won't reach 1000 nits). What's your take on this?

Gemini:

This question goes right to the heart of HDR's current growing pains. What you've observed is exactly the intense friction between film-industry standards and consumer displays (especially brightness-chasing mobile and PC devices).

We can break the problem into three layers: the disconnect between the spec and the viewing environment, the compromises vendors make, and what HDR should look like in the high-brightness era.

1. Spec vs. environment: why does 203 nits feel dim?

ITU-R BT.2408 puts HDR reference white (diffuse white, or paper white) at 203 nits and reserves a large amount of headroom above it for specular highlights. This standard carries a strict hidden premise: a dark viewing environment (typically 5 nits of ambient light or less).

In a standards-compliant grading suite, a 203-nit paper white is very comfortable and offers plenty of contrast. The problem is that the vast majority of consumers use their screens in bright offices, living rooms, or even outdoors. Under high ambient light the pupil contracts, and people have long been used to cranking their SDR monitors to 300-450 nits or more to fight reflections.

When HDR video mastered strictly to a 203-nit paper white is played back, the average picture level (APL) instantly drops to less than half of the SDR brightness viewers are used to. So the average viewer's most immediate reaction is: "Why does the picture get darker the moment I switch to HDR?"

2. Apple's compromise: trading dynamic range for ambient adaptability

You mentioned that the 1600-nit MBP XDR screen pushes a 75% HLG signal to about 450 nits. That is exactly Apple's forceful intervention against the pain point above.

It is essentially a tone-mapping strategy driven by ambient light and device characteristics. Apple knows that users in everyday environments will not accept a 203-nit paper white, so they raise the APL to make mid-gray and paper white look as punchy as high-brightness SDR.

But, as you keenly noticed, this comes at a heavy cost: the highlight headroom gets severely squeezed.

  • With paper white at 203 nits and a 1600-nit peak, you have nearly 3 stops of highlight dynamic range; sunlight, metallic reflections, and flames in the frame have real punch and dimensionality.
  • With paper white pushed to 450 nits and the same 1600-nit peak, less than 2 stops remain.

This robs HDR of its original meaning: it is no longer "high dynamic range" but "globally brighter SDR", losing the sense of presence that comes from solid midtones set against dazzling highlights.
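The headroom figures above are just base-2 logarithms of the peak-to-paper-white ratio; a quick sketch using the nit values from the discussion (the function name is mine):

```python
import math

def highlight_headroom_stops(peak_nits: float, paper_white_nits: float) -> float:
    """Stops of highlight headroom between paper white and the panel peak."""
    return math.log2(peak_nits / paper_white_nits)

print(round(highlight_headroom_stops(1600, 203), 2))   # 2.98 - nearly 3 stops
print(round(highlight_headroom_stops(1600, 450), 2))   # 1.83 - under 2 stops
print(round(highlight_headroom_stops(2000, 1000), 2))  # 1.0  - a single stop
```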

3. A 1000-nit paper white? The HDR paradox of the high-brightness panel era

As for the latest iPad Pro (tandem OLED) and the Pro Display XDR, SDR can be pushed to 1000 nits full-screen, with an HDR peak of 2000 nits. If paper white really were placed at 1000 nits, only a measly 1 stop would remain to the peak.

But from the standpoint of image creation and grading logic, paper white must never be pushed to 1000 nits.

  • The essence of HDR is contrast, not searing brightness: a 1000-nit diffuse white (say, a sheet of paper or a white shirt in the frame) looks harsh and unnatural. A panel that can sustain 1000 nits full-screen exists so the content stays legible outdoors in strong sunlight (fighting extreme ambient light), or to faithfully reproduce very-high-APL scenes such as snowfields and bright skies, not to raise the brightness of the UI or of baseline skin tones.

  • The correct mapping logic: even on a screen capable of a 2000-nit peak, if you are doing cinematic work, baseline skin tones and paper white should still be held around 200-300 nits (with at most slight dynamic compensation for ambient light). Only then do you keep a full 3-4 stops of headroom to render the real light sources and highlight detail in the frame.

Summary

What you are seeing is really a clash between runaway display hardware and the intent of image creators. To make their products look bright and clear under all kinds of lighting, panel vendors (such as Apple) often tamper with the mapping curves and raise paper white, but that destroys the carefully designed contrast of light and shadow in cinematography.

For anyone used to professional imaging standards, sacrificing highlight detail to cater to mass-market perception does indeed feel "not quite enough", even regrettable.

Nikon Z8/Z9 HLG Deep Dive | HLG Explained vs. N-Log

Material on HLG is relatively scarce online, let alone for the Nikon Z series specifically. I made a simple straight-out-of-camera HLG video before, but nobody was interested…

First of all, HLG is a delivery gamma curve that is backward compatible with SDR content, which inherently caps its ceiling; that is why most people are not interested in it (the other reason being that it is HDR). HLG's dynamic range (especially in the highlights) cannot match N-Log, let alone S-Log3, though its shadow performance is better than the other log curves. Its biggest advantage is what-you-see-is-what-you-get: no LUT is needed for restoration and (basically) no exposing to the right, making it ideal for straight-out-of-camera HDR. That said, if you only deliver SDR, pick N-Log. Still, it is 2025: shooting N-Log/N-RAW to capture 12+ stops of dynamic range only to squash it down to 8 stops; the baggage of history weighs heavy. See also: 再谈苹果XDR显示器与HDR之殇 (Revisiting Apple's XDR Display and the Plight of HDR)

Response Curve

I measured the response curve on the Z8. Method: sweep the exposure, record the IRE of middle gray at each exposure value, and compare against the theoretical curve. For consistency I exposed middle gray (stop 0) to 36% IRE. Note: the theoretical value for HLG middle gray is 38%, but the difference is small, roughly 0.2 stops. When I then measured N-Log with the same zebra level of 95 (37.2%), the resulting exposure came out at 38%; close enough, so I let it be.

The measured Z8 HLG curve tracks the theoretical curve very well over the [-7, +3] stop range. The measured HLG noise floor is about 3%. The two differ noticeably in the highlights: in theory HLG turns logarithmic one stop above middle gray, making IRE linear in stops, but my measurements show that Nikon applies some highlight protection above +3 stops, smoothing the curve and buying roughly one extra stop of highlights; full clipping only occurs 5.3 stops above middle gray. That is still almost a stop short of N-Log. Also, in HLG mode the maximum IRE is not 100 but about 97, so zebras must be set to 245 to have any effect.
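As a cross-check against these measurements, the theoretical BT.2100 HLG OETF can be evaluated directly. A sketch (it ignores Nikon's extra highlight roll-off above +3 stops):

```python
import math

# BT.2100 HLG OETF constants
A, B, C = 0.17883277, 0.28466892, 0.55991073

def hlg_oetf(e: float) -> float:
    """Map normalized scene-linear light e (0..1) to an HLG signal (0..1)."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return A * math.log(12 * e - B) + C

# Middle gray at scene-linear ~0.048 lands near 38% IRE, as BT.2408 suggests
print(round(hlg_oetf(0.048) * 100, 1))  # 37.9

# The signal reaches 100% at e = 1, i.e. log2(1/0.048) stops above middle
# gray - about 4.4, one stop less than the ~5.3 measured on the Z8
print(round(math.log2(1 / 0.048), 1))   # 4.4
```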

The N-Log curve also tracks theory reasonably well. The noise floor is about 13%, and the measured curve hugs the theoretical one from -7 stops up, with only a small wobble around +1 to +2 stops. The slope in the highlights is still too steep, even steeper than theory. It clips 6 stops above middle gray, less than a stop better than HLG.

Nominal ISO

As is well known, N-Log's base ISO is 800 with a second base at ISO 4000, while HLG's base ISO is 400 with a second base at ISO 2000. For both N-Log and HLG these are nominal, or equivalent, ISOs. Under identical illumination I shot a gray card with identical settings (1/400s, f/4, ISO 800): N-Log and HLG both read about 36% IRE, while SDR read 52%. After restoration all three render middle gray at roughly the same brightness (SDR slightly brighter), confirming the "nominal" ISO. But what ISO is the sensor actually using? Around ISO 100 in every case. How to prove it? Shoot the same white card and record the exposure settings at the point where it just clips.
N-LOG: 1/30s F/4 ISO 800
HLG: 1/30s F/4 ISO 400
SDR: 1/30s F/4 ISO 80 (NL profile, Active D-lighting off)
With the same amount of light, the sensor clipped at the same point in all three cases, showing that the actual ISO is identical.
The equivalent ISO can also be understood this way:
N-Log underexposes by 3 stops at capture and lifts middle gray by 3 stops in post, in order to protect the highlights.
HLG underexposes by 2 stops at capture and lifts middle gray by 2 stops in post, in order to protect the highlights.
This is one reason why, exposed to the standard (middle gray at 36% IRE), N-Log's shadows explode with noise and exposing to the right becomes necessary.
HLG strikes a balance: 2 stops under is equivalent to shooting video at ISO 400, which is just about acceptable on full frame.
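The equivalence above is just powers of two; a toy sketch (the function name is mine, not Nikon's terminology):

```python
def nominal_iso(sensor_iso: int, protection_stops: int) -> int:
    """Equivalent ISO when shooting N stops under and lifting mid-gray in post."""
    return sensor_iso * 2 ** protection_stops

print(nominal_iso(100, 2))  # 400 - HLG: ISO 100 sensor, 2 stops of protection
print(nominal_iso(100, 3))  # 800 - N-Log: ISO 100 sensor, 3 stops of protection
```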

HLG Quality Settings

Since HLG is a delivery curve aimed at straight-out-of-camera use, only H.265 encoding supports it. Nikon thoughtfully provides an HLG quality option, but there is a gotcha: at the default settings the camera sharpens the HLG image, which causes over-sharpening with the already razor-sharp Z-mount lenses. Set Quick Sharp to -1 to get an unsharpened image. If needed, you can also lower contrast and saturation a little; -1 is recommended for both.

Waveforms under different Quick Sharp settings

Adjusting contrast

Adjusting saturation

Why SDR Looks So Good on Apple's XDR Display That It Even Overshadows HDR

Apple released its Apple Silicon based MacBook Pro on October 26, 2021, which includes a new mini-LED display that they call XDR, with a peak brightness of 1600 nits. Three years later it is still one of the best displays for both consuming and creating SDR and HDR content. Yet SDR content looks so good on the XDR display that I sometimes think it is HDR content. Why is that? Let's figure it out together.

Presets

The new XDR display includes a few presets:

  • Apple XDR Display (P3-1600 nits)

The default for daily use, with a peak brightness of 1600 nits for HDR content and 500 nits for SDR / UI.

  • Apple Display (P3-500 nits)

Peak brightness of 500 nits.

  • HDR Video (P3-ST 2084)

HDR reference mode. The brightness cannot be adjusted; it peaks at ~1000 nits for HDR content and ~100 nits for SDR content / UI.

  • HDTV Video (BT.709 – BT.1886)

SDR reference mode. The brightness cannot be adjusted; it peaks at ~100 nits for all content and UI.

500 nits for SDR?

Apple has been using 500 nits for SDR / UI for a very long time. Wait, shouldn't SDR max out at 100 nits? Yes, in theory, and in some reference modes. But modern displays have peak brightnesses of 300+ nits, not to mention the latest M4 iPad Pro, which reaches 1000 nits in SDR! By today's standards, 100 nits is too dark to watch even in a normally lit indoor environment.

Let's see how Apple displays SDR content on its XDR displays:

Setup: I created a video with a black square and increased its IRE value until it became white in the Rec. 709 colorspace / gamma, then used an SM208 screen luminance meter to measure the brightness of the XDR display under different presets.

Here are the curves of screen brightness vs. Rec. 709 IRE value:

After the test I found that Apple XDR Display (P3-1600 nits) and Apple Display (P3-500 nits) have the same response curve for SDR content, so I drew only one line here. They (the red curve) peak around 450 nits, close to the claimed 500 nits (my screen might have degraded a bit after two and a half years); middle gray (~40% IRE) is about 77 nits; black (0% IRE) is 0.05 nits.

In the SDR reference mode (HDTV BT.709-BT.1886 preset), the blue curve, peak brightness is 94 nits, very close to the 100 nits expected for SDR. Middle gray is a little darker at 11 nits, and black (0% IRE) is all the way down to 0.02 nits!

I also plotted a gamma 2.4 curve for a 100-nit reference monitor (the yellow curve). It overlaps well with the SDR reference mode in the bright region (60%+ IRE) and lies between the two measured modes below that; in the deep shadows (<1% IRE) it is brighter than both modes, and it would be brighter still on a real CRT monitor, the display the SDR standard was designed for.

For comparison I also measured my BenQ PD3200U, which still looks great for most SDR content. In sRGB mode it peaks at 350 nits; the 1% IRE signal is clearly visible and measures 0.91 nits; pure black is 0.34 nits; the contrast ratio is just over 1000:1. The lack of true black is the major drawback of LCD displays. The XDR is brighter in the highlights and darker in the shadows.

The curve itself explains why properly graded SDR (Rec. 709) content looks so good on Apple's XDR displays: it tracks the gamma curve pretty well for most of the range (30%+ IRE), pure white is very bright (450+ nits), and pure black is very dark (0.05 nits). The contrast ratio is around 9000:1 (13+ stops of dynamic range).
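A sketch of the arithmetic behind these numbers, assuming an idealized gamma-2.4 display (the measured values quoted above differ slightly):

```python
import math

def display_luminance(ire: float, peak: float, black: float = 0.0, gamma: float = 2.4) -> float:
    """Simplified BT.1886-style response: luminance at a given IRE level."""
    return black + (peak - black) * (ire / 100) ** gamma

# 100-nit reference display: middle gray (~40% IRE) lands near 11 nits
print(round(display_luminance(40, 100), 1))  # 11.1

def contrast_stops(peak: float, black: float) -> float:
    """On/off contrast ratio expressed in stops."""
    return math.log2(peak / black)

# XDR in SDR mode: 450-nit white over 0.05-nit black -> ~13 stops
print(round(contrast_stops(450, 0.05), 1))   # 13.1
```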

What about HDR?

I did a similar test for HDR, using the ITU-R BT.2100 (HLG) gamma. Here are the results:

Let's first look at the HDR reference mode (HDR Video P3-ST 2084 preset), the yellow curve. It tracks the HLG curve very well and is a straight line (in log scale) above 50% IRE. The HDR reference mode peaks at 881 nits (100% IRE) with diffuse white at 183 nits (75% IRE), both a little darker than the reference values of 1000 nits and 203 nits respectively. Middle gray is accurate, though, at around 26 nits (38% IRE), and black is 0.02 nits (0% IRE)! That gives a contrast ratio of ~44000:1, 15+ stops of dynamic range.

The XDR mode (red curve) is always brighter than the HDR reference mode: 2+ stops in the blacks and deep shadows (0-10% IRE), 1+ stops over most of the range (10-85% IRE), and <1 stop in the highlights (85%+ IRE). It rolls off after 95% IRE and peaks at 1450 nits (100% IRE). Diffuse white is 410 nits (75% IRE), close to SDR/UI white (450 nits); middle gray is around 68 nits (38% IRE), a little darker than SDR mode; black is 0.03 nits (0% IRE). That gives a contrast ratio of ~50000:1, also 15+ stops of dynamic range.

Since HLG is a backward-compatible curve, I also tested the Apple Display 500-nit mode (the blue curve). It lies between the XDR and HDR reference modes for most of the range and clips after 90% IRE with a peak brightness of 450 nits (the same as for SDR content). Diffuse white is 219 nits (75% IRE), middle gray is 38 nits (38% IRE), and black is 0.04 nits (0% IRE).
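These HLG numbers can be sanity-checked against the BT.2100 HLG EOTF (inverse OETF followed by the system-gamma OOTF). A sketch for a 1000-nit reference display, assuming the nominal system gamma of 1.2:

```python
import math

A, B, C = 0.17883277, 0.28466892, 0.55991073  # BT.2100 HLG constants

def hlg_inverse_oetf(signal: float) -> float:
    """HLG signal (0..1) back to normalized scene-linear light."""
    if signal <= 0.5:
        return signal ** 2 / 3
    return (math.exp((signal - C) / A) + B) / 12

def hlg_display_nits(signal: float, peak: float = 1000.0, gamma: float = 1.2) -> float:
    """Display luminance: the OOTF applies the system gamma to scene light."""
    return peak * hlg_inverse_oetf(signal) ** gamma

print(round(hlg_display_nits(0.75)))  # 203 - diffuse white per BT.2408
print(round(hlg_display_nits(0.38)))  # 26  - middle gray, matching the measurement
```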

PQ is another story

The HDR reference mode tracks the PQ curve perfectly: it is a straight line from 0 to 1000 nits and clips after that.

Neither the 1600-nit nor the 500-nit mode does a good job: both are about one stop too bright in the shadows and then gradually roll off.
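For reference, the SMPTE ST 2084 (PQ) EOTF that the reference mode tracks is an absolute curve; a sketch, with the reference mode modeled as a hard clip at its 1000-nit ceiling:

```python
# SMPTE ST 2084 (PQ) EOTF constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_eotf_nits(signal: float) -> float:
    """PQ signal (0..1) to absolute display luminance in nits (0..10000)."""
    p = signal ** (1 / M2)
    return 10000 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

def reference_mode_nits(signal: float, ceiling: float = 1000.0) -> float:
    """HDR reference mode behavior: exact PQ tracking, hard clip at the ceiling."""
    return min(pq_eotf_nits(signal), ceiling)

print(round(pq_eotf_nits(1.0)))         # 10000 - full code value
print(round(reference_mode_nits(1.0)))  # 1000  - clipped at the panel ceiling
```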

Conclusion

Apple's XDR Display is fantastic. It is very good in SDR mode: 450+ nits of peak brightness, true blacks, and 13+ stops of dynamic range put a lot of "HDR displays" to shame. You can definitely call it HDR, since 100-nit-max, 200:1-contrast-ratio SDR has been dead for many years. HDR content in XDR mode is also great, with a peak brightness of 1450+ nits and 15+ stops of dynamic range. However, highlights are only 1.7 stops brighter than UI/SDR white. Ideally, highlights should be at least 3 stops brighter than diffuse white; since people are already used to 500 nits for UI/SDR white, that calls for a display with 4000+ nits of peak brightness. Setting diffuse white to 203 nits (as recommended for HLG mastered at 1000 nits) drops the requirement to 1600 nits (not a coincidence), but then (diffuse) white will look gray, since it is 1.3 stops darker than the UI.

I know this post is "xxx nits" everywhere; that's because human eyes are much more sensitive to luminance than to color, and a lot of "HDR" content is way oversaturated anyway!

Nikon Z8 HLG / N-Log Stops vs IRE

Recently I switched from N-RAW to H.265 / HLG for a quicker HDR workflow in Final Cut Pro: basically just drop the clips into an HLG timeline and that's it, just like the iPhone's Dolby Vision (HLG-based) footage.

I did some tests on IRE vs. stops in HLG mode and compared the results with the theoretical N-Log curve.

Setup: I exposed 18% middle gray to 36% IRE in HLG mode to match it with N-Log, adjusting exposure time first, then aperture and ISO, to over- and underexpose middle gray for the IRE readings.

Note: technically you should expose middle gray to 38% IRE in HLG mode (ref); diffuse white then sits around 75% IRE, and everything above 75% is reserved for specular highlights (0-3 stops above diffuse white).

Stops vs IRE

First of all, we can see that the HLG curve is more contrasty than N-Log. Highlights clip 5 stops above middle gray, which I think is OK for a lot of scenes. N-Log retains more information in the highlights (if not overexposed), up to 6.3 stops, at the cost of noisier shadows. HLG keeps more detail in the shadows: the noise floor is about 2% at ISO 400 (12.5% for N-Log).

Another interesting thing is that the HLG curve is very steep from -1 to +3 stops but saturates quickly afterwards (it is no longer linear); highlights are compressed and clip around 96% IRE, so make sure to set the high zebra to 245 when shooting HLG.
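The zebra setting and the highlight gap quoted above come straight from the arithmetic; a small sketch (the ~96% ceiling is the measured Z8 behavior, not part of the HLG spec):

```python
# 8-bit zebra level corresponding to the measured ~96% IRE ceiling in HLG mode
zebra_level = 245
print(round(zebra_level / 255 * 100, 1))  # 96.1

# How much more highlight light N-Log retains: +6.3 vs +5.0 stops over midgray
print(round(2 ** (6.3 - 5.0), 2))         # 2.46 (x more linear light)
```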

Base ISO for HLG and N-Log

Nikon Z8's HLG has a "nominal" base ISO of 400. Under the hood the sensor is working at ISO 100; shadows are boosted by 2 stops while highlights are left intact. Basically you are shooting at ISO 100, 2 stops under, to protect the highlights.

N-Log has a "nominal" base ISO of 800. Same as HLG, the sensor is working at ISO 100; shadows are boosted by 3 stops while highlights are left intact, so basically you are shooting at ISO 100, 3 stops under, to protect the highlights. However, you will see a lot of noise in the shadows due to insufficient light, and 1.3-1.7 stops of "overexposure" is usually needed to compensate and get a clean image.

Conclusion

HLG strikes a good balance between highlights and shadows and is very user-friendly for an HDR workflow. Unlike N-Log, there is no need to overexpose (even 1 stop under looks good to me), no camera LUT is needed for restoration, and zero or minimal color grading is required. Another advantage of HLG over N-Log is monitoring: most of the time you can rely on the screen/viewfinder (with zebras on) to set exposure, and only open the waveform when needed.