Rolling Scanline Simulation (future improvements) #16373
The current problem is that we don't -know- a good way to improve it that doesn't have fairly bad artifacting or other major issues of its own. I personally think the rolling scan feature, as it is now, will scare people off BFI, thinking it's an entirely useless/broken feature. But I didn't want to stand in the way of merging either, as it isn't my place, and as this code should not inhibit the existing full-frame BFI/shader sub-frame code paths from working as intended. The best mitigations we know of for this feature's issues are hiding the joint lines behind scanlines in CRT filters, and overlapping the rolling scan sections with brightness adjustment, which replaces some of the tearing problem with horizontal strips of reduced motion blur reduction - itself a pretty apparent visual artifact. Also, a front-end solution like this won't be aware of what shaders are in use, and the screen resolution and Hz being used will also change where those rolling scan joint lines land in the image. So any front-end code, or any shader specifically meant to be used in conjunction with this feature, would need to account for a LOT of different joint line possibilities. If anyone can provide a solution where the artifacting is minimal enough to compete with the existing full-frame BFI - which has zero inherent artifacting (other than strobing itself being a little annoying, obviously) - I am all for it though. There are a few side benefits to the rolling scan method over full-frame BFI when/if it works well. This is where @mdrejhon would be very handy. :) |
For the record, I find a double ON to be much less obtrusive than a double OFF flicker. |
Did you mean this response for my last reply on the previous PR regarding the 120hz bfi workaround? |
Yeah, I just put it here instead of there so we could close the lid on that one and continue discussion of improvements here. |
A sub-frame shader solution (to that 120Hz workaround) wouldn't be able to inject an 'extra' sub-frame like a driver solution could. But I still think it might be better to 'hide' a feature that purposefully injects noticeable, annoying artifacting in a shader rather than offer it as a front-end option. So you'd maybe do something more like a (100-0)-(100-0)-(50-50)-(0-100)-(0-100) style phase shift keyed on framecount % (an adjustable number of frames between phase shifts). And keep in mind framecount intentionally doesn't increment on sub-frames, or sub-frames would mess with anything older that looks at framecount but isn't sub-frame aware. The 50-50 transition frame might be a less noticeable/annoying transition than just a straight flip like 100-0-0-100? It trades some of the very noticeable change in instantaneous average brightness for some transient motion blur, still annoying but maybe a -little- less distracting. |
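For illustration, here is a minimal Shadertoy-style GLSL sketch of that kind of phase-shifted cadence with a 50/50 transition frame. The constants, the period, and the use of iFrame as a 120Hz refresh counter are assumptions for the example, not RetroArch's actual sub-frame API:

```glsl
// Phase-shifted 120 Hz BFI cadence with a 50/50 transition frame.
// iFrame is treated as a 120 Hz refresh counter (two refreshes per 60 fps frame).
const int PHASE_PERIOD = 1200;           // flip cadence every ~10 s at 120 Hz (illustrative)

float bfiWeight(int refresh)
{
    int  phase  = (refresh / PHASE_PERIOD) & 1;   // 0 = bright-first, 1 = dark-first
    bool atFlip = (refresh % PHASE_PERIOD) < 2;   // the two refreshes of the transition frame
    if (atFlip) return 0.5;                       // 50/50: swaps cadence without a brightness step
    return ((refresh & 1) == phase) ? 1.0 : 0.0;  // bright half vs. dark half of the 60 fps frame
}

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv  = fragCoord / iResolution.xy;
    vec3 src = texture(iChannel0, uv).rgb;        // stand-in for the emulated frame
    fragColor = vec4(src * bfiWeight(iFrame), 1.0);
}
```

Each emulated 60fps frame spans two 120Hz refreshes; flipping `phase` swaps which refresh is lit, and the single 0.5/0.5 frame at the flip keeps the per-frame light output constant while the cadence reverses.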
Hi Roc-Y, I presume this only happens when rolling scanline is turned on?
On Fri, 16 Aug 2024, 17:42, Roc-Y wrote:
I don't know why this causes wide black bands in the shader I developed,
but I think if the Rolling Scanline Simulation feature only handles the
native resolution (e.g. 256*244), then my shader will behave normally.
20240817_003458.jpg (view on web)
<https://github.com/user-attachments/assets/546e0f9c-5d53-4801-a4f1-ca496e18e89b>
|
There are no black lines after turning off the shader. It seems that as long as the resolution is enlarged in the shader, there will be black lines. It has nothing to do with whether it is a CRT shader. |
BTW, in fast horizontal scrolling, there can be tearing artifacts with rolling-scan. You need motion sufficiently fast (about 8 retropixels/frame or faster, which produces 2-pixel offsets for a 4-segment sharp-boundary rolling scan). This is fixed by using alphablend overlaps. However, gamma-correcting the overlaps so that all pixels emit the same number of photons is challenging. The same goes for adding fadebehind effects (so that a short-shutter photo of the rolling scan looks more similar to a short-shutter photo of a CRT). And even LCD GtG distorts the alphablend overlaps. So alphablend overlaps work best on OLEDs of a known gamma (doing gamma correction and disabling ABL). For LCD, sharp-boundary rolling scan is better (tolerating the tearing artifacts during fast platformers). Then again, using HDR ABL is wonderful, because you can convert SDR into HDR and use the 25% window size to make the rolling-scan strobe much brighter. This improves a lot if you use 8-segment rolling scan (60fps at 480Hz OLED) to reduce the HDR window size per refresh cycle, allowing HDR OLED to have a much brighter picture during rolling BFI! Also, I have a TestUFO version of rolling scan BFI under development that actually simulates the behavior of a CRT beam more accurately (including the phosphor fadebehind effect). Related: #10757 |
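As a rough illustration of the alphablend-overlap idea, with the weighting done in linear light so each scanline integrates to the same photon count per frame, here is a minimal Shadertoy-style GLSL sketch. The segment count, overlap width, triangular blend window, and simple 2.2 power-law gamma are all assumptions for the example, not any particular shader's actual scheme:

```glsl
// Rolling-scan BFI with alphablended overlap between segments, weighted in
// linear light so every scanline integrates to the same photon count per frame.
const float SEGMENTS = 4.0;   // rolling-scan slices per emulated frame (240 Hz for 60 fps)
const float OVERLAP  = 0.25;  // fraction of a slice blended into each neighbour
const float GAMMA    = 2.2;   // simple power-law display gamma (assumption)

// Triangular blend window: weight of scanline y (0..1) during subframe s of n.
float sliceWeight(float y, float s, float n)
{
    float center = (s + 0.5) / n;                      // centre of this slice (in screen space)
    float d = abs(fract(y - center + 0.5) - 0.5) * n;  // wrapped distance, in slice units
    return clamp(1.0 - (d - 0.5 + OVERLAP) / (2.0 * OVERLAP), 0.0, 1.0);
}

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2  uv  = fragCoord / iResolution.xy;
    float sub = mod(float(iFrame), SEGMENTS);          // which rolling-scan subframe we are on
    float w   = sliceWeight(uv.y, sub, SEGMENTS);

    vec3 srcLin = pow(texture(iChannel0, uv).rgb, vec3(GAMMA));  // gamma -> linear
    fragColor   = vec4(pow(srcLin * w, vec3(1.0 / GAMMA)), 1.0); // weight photons, then back to gamma
}
```

Adjacent slices' weights sum to 1 for every scanline, which is what keeps total brightness uniform; on a real LCD, slow GtG would distort these partial-brightness transitions, as noted above.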
@songokukakaroto I've been working on subframe BFI shaders that you can try. There's a 240 Hz rolling scan one, but it's not great. As mdrejhon mentioned, the gamma issue means the fades aren't perfect. |
I have a surprise coming December 24, 2024 -- the world's most accurate CRT electron beam simulator shader, the "Temporal HLSL" complete enough to close BFIv3 (except for the beamrace part) if integrated into RetroArch. It looks better than this simulation. MIT license. @hizzlekizzle, you probably can port it quickly into Retroarch. You're welcome. |
@mdrejhon sounds exciting. I can't wait to check it out :D |
Even MAME could port it in, if they wish -- @cuavas put a spike on it mamedev/mame#6762 because they said it was a pipe dream. Ah well, RetroArch code contribution it is!
It's a reality today. I just finished working on a new GLSL shader for the most accurate CRT electron beam simulator I've ever seen, especially when run on a 240Hz OLED. I'll publicize it with an MIT source license via a github you can fork or port on 24th December 2024 -- my release date. When run on a 240Hz-480Hz OLED to emulate a 60Hz tube, it looks good enough that my shader might become the genesis of CRT-replacement OLEDs for arcade machines of the 2030s, when it's cheaper to buy these soon-cheaper OLEDs than to source nearly-extinct tubes 10 years from now. |
Sounds great! |
Sneak preview! Slow mo version of realtime shader. These are genuine screenshots, played back in slow-motion, of it running in real time doing 60fps at 240Hz. Phosphor trailing is adjustable, for brightness-vs-motionclarity. Still works at 2xInHz (Min ~100Hz to emulate PAL CRT, min ~120Hz to emulate NTSC CRT) and up, scales infinitely (>1000Hz+) Yes, it looks like a slow motion video of a CRT tube. But these are PrintScreen screenshots! |
It's out now! I've published the article: I've released a MIT-licensed open source shader: Shadertoy animation demo (for 240Hz) Can easily adjust settings for 120Hz or 144Hz or 165Hz or 360Hz or 480Hz or 540Hz! Please implement it into Retroarch. Pretty please. |
Discussion also at #10757 |
Porting to RetroArch was a breeze. It's available here now: https://github.com/libretro/slang-shaders/blob/master/subframe-bfi/shaders/crt-beam-simulator.slang and will show up in the online updater in a few minutes/hours. I replaced the edge-blended version I had made with it, since this one is superior in every way lol. |
That was damn fast! Nice Christmas surprise. And I can combine CRT filters simultaneously too? Neat! You should rename the menu in RetroArch if possible, to at least catch attention -- that it is a new, better shader. Also, eventually I'll add a phase-offset, since I can reduce the latency of this CRT simulator by probably 1 frameslice (1 native refresh cycle) simply by adding +time as a constant. I need to experiment with my changes in ShaderToy in the coming week (it's Christmas). But it's absurdly fantastic to see an actual deployment the same day I released my simulation shader! Which releases will have it? PC, Mac, Linux? Can it also be ported to the mobile app for 120Hz OLED iPhone/iPad too? I notice that the shadertoy works great on those, even if not as good as 240Hz. EDIT (TechSpot readers): TechSpot posted some publicity that contained a permalink to this comment. If you're looking for the original main copy of the shader that will get an improved version in January 2025, please go to my repository: www.github.com/blurbusters/crt-beam-simulator |
I just asked our Apple guy and he says the subframe stuff is available on nightly builds for iOS but will be included in the upcoming release. It doesn't persist, so you have to re-enable it on each launch, which is a drag, but nothing worth doing is ever easy :) But yeah, Mac/Win/Lin should be covered. Thanks for working on this and for designing (and licensing) it around sharing and easy integration into other projects. It was a breeze to port thanks to that foresight and generosity. |
Tim made one of the most important contributions to keep it bright and seam-free (variable-MPRT algorithm). Niche algorithms tend to be ignored by the display industry, so it's nice we could BYOA (Bring Your Own Algorithm) straight into RetroArch, just supply generic Hz, and the software can do the rest. And nice that you kept the LCD Saver Mode (maybe add a boolean toggle for it). OLEDs do not require that, and I kind of prefer it be done at the application level to avoid the slewing latency effect [0...1 native Hz]. Not a biggie for 240-480Hz, but turning it off will create constant latency for evenly-divisible refresh rates. |
Done! libretro/slang-shaders#668 I'm having fun running my subframes up higher than my monitor can push and setting the "FPS Divisor" up accordingly. It looks just like slow-motion camera footage of CRTs. You can get some pretty believable slo-mo captures by pairing it with p_mailn's metaCRT: |
We'd need to see a log of it failing to load to even guess, I'm afraid. This sort of issue is usually handled more effectively via forum/discord/subreddit, though, if you can pop over to one of those. |
How do you load this in RetroArch? When I load the presets nothing happens. I have a 240Hz LCD monitor; what other options must I change to make it work? |
@Tasosgemah Enable shader sub-frames in the settings. |
Thanks, it works now. But I assume my monitor isn't good enough for it, because even though the motion blur is reduced, it looks really bad. All the colors are very dark, there's some minor ghosting, some noticeable transparent horizontal stripes, and random flickering that comes and goes. |
@Tasosgemah Something else must be wrong, I have a 160hz monitor set to 120hz for this and it looks super clean and I experience none of this. How many Shader Sub-Frames did you enable? |
It's not my RetroArch settings. I'm getting the same exact issues with the shader toy preview link. |
@pxdl, I'm having the exact same problem but I cannot find a place to enable sub-frames, where is that option at? Nevermind, fixed it. For anyone else in the future, if you've had retroarch a while, do a fresh install. I was missing the newer menu options. |
I have a new troubleshooting HOWTO: "For CRT Simulator Artifacts: Fix banding / flicker / chroma ghosting" |
Can this be ported to the android build too please? New Android handhelds are coming very soon with 120hz OLED displays and this is simply the best thing happening to emulation since forever! |
We had previously disabled all BFI-type effects on Android simply because we didn't want to deal with people freaking out about how we ruined their $1k phone with "burn-in" and most weren't capable of 120+ Hz anyway, but now that most are AM/OLED and support high refresh rates, we're looking into re-enabling those features. |
Please, yes. Plus, why not do my best practices?
|
I have also long liked the idea of a duplicate frame every X seconds as a safety measure for traditional software BFI in RetroArch in general. It should be hardly noticeable, and it's always better to be safe than sorry when working with hardware that can cost anywhere from a couple hundred to over a thousand dollars, and with a lot of potentially uneducated people. |
While it's not part of the BFI options in settings > video, I've incorporated cadence-shifting every X seconds in several of the BFI shaders. |
The only problem is the slewing-latency effect, so automatically disable this when not necessary - like for OLEDs, to lower BFI latency, or when native:emulated is odd or is not an exact even integer ratio (an exact even integer ratio is the ONLY time on LCD you need to actively flip the phase of the pixel-inversion AC voltage to prevent BFI static electricity buildup and its resulting image retention). |
Yeah, please -don't- make the cadence shifting universal. I heavily disagree that it is hardly noticeable. Without BFI, sure, a single dropped or extra frame is pretty hard to pick out, but our eyes are much, much more sensitive to a very short uneven brightness flicker than they are to a very short temporal stutter with even brightness. Also, for the regular BFI, 120Hz on an LCD is the only time it is truly necessary currently. Not even the other even multiples, since the number of bright vs dark frames is adjustable. I.e.: 240Hz at ON(+)-ON(-)-OFF(+)-OFF(-) is still perfectly safe on an LCD. |
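For reference, a minimal sketch of what that adjustable cadence looks like in a sub-frame shader, with a comment on the polarity bookkeeping. Shadertoy-style GLSL; the constant names are illustrative, and the polarity reasoning assumes the common per-refresh inversion sequence rather than longer sequences some panels switch to:

```glsl
// Full-frame BFI with an adjustable number of bright subframes per 60 fps frame.
const int TOTAL_SUBFRAMES  = 4;  // e.g. 240 Hz panel showing 60 fps content
const int BRIGHT_SUBFRAMES = 2;  // ON,ON,OFF,OFF cadence

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv  = fragCoord / iResolution.xy;
    int  sub = iFrame % TOTAL_SUBFRAMES;   // position within the emulated frame

    // With the common per-refresh inversion sequence (+,-,+,-,...), this cadence
    // puts bright subframes on both polarities (+,-) and dark subframes on both
    // polarities (+,-) every frame, so no net DC bias accumulates -- unlike
    // 120 Hz ON,OFF where the bright frame always lands on the same polarity.
    // (Panels that switch to a 4-refresh inversion sequence are a different story.)
    float w = (sub < BRIGHT_SUBFRAMES) ? 1.0 : 0.0;

    fragColor = vec4(texture(iChannel0, uv).rgb * w, 1.0);
}
```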
> Yeah, please -don't- make the cadence shifting universal. I heavily disagree that it is hardly noticeable. Without bfi, sure, a single dropped or extra frame is pretty hard to pick out, but our eyes are much, much more sensitive to a very short uneven brightness flicker, than they are to a very short temporal stutter with even brightness.
Not necessarily. That pattern will create image retention on some of MY own LCDs. Not all of them, but some of them. Sometimes, but I wouldn't trust it. I've been working with LCD inversion for 10 years and I work with display manufacturers: https://www.blurbusters.com/area51 -- I am in over 30 research papers. EDIT: On the display side, LCD inversion algorithms can go to a 4-frame sequence instead of a 2-frame sequence, as some LCDs automatically do when showing stereoscopic shutter-glasses content. Also, sometimes BFI content "looks" like stereoscopic shutter-glasses content to these LCDs by accident too. From a Display Industry Veteran:
Always safe
Risky
Potentially Risky
Mitigation
|
Not necessarily what? That 240Hz at ON(+)-ON(-)-OFF(+)-OFF(-) isn't guaranteed safe on an LCD? I only have ~4 240Hz-capable LCD screens, which granted is probably much, much less than you, hah. I was the primary author of the current RA BFI & sub-frame back-end stuff, so I did test quite thoroughly, however. But at least out of that sample size, all 4 LCDs that did have issues at 120Hz were 100% retention-free at 240Hz at ON(+)-ON(-)-OFF(+)-OFF(-), and on my one 360Hz-capable screen at ON(+)-ON(-)-OFF(+)-OFF(-)-OFF(+)-OFF(-). If you did manage to get one that had problems with a well-paired +/- strobe length at 240Hz or any other even multiple (not including 120Hz), you could still adjust the strobe length away from (what should be) that safe value, and then one of the other lengths should be safe for your strange screen. It shouldn't be possible to be unevenly building up charge at -all- different strobe lengths when some of them will have an even number of on-off frames and some will have odd; the hardware screen inversion algorithm would have to dynamically change to match the output for it to interfere with both, wouldn't it? Anyway, I'd take adjusting the strobe length (or more accurately just moving to an odd Hz multiple, but that's not the point for this conversation), and maybe having to sacrifice a bit of clarity, over the inserted or dropped frames any day. Others might feel differently, but that's why I'd certainly prefer it to stay a choice and not be forced either way. Also, what I was talking about was not forcing cadence shifting for the full-frame BFI implementation only, of course. As I understand it, though I haven't tested it out myself yet, your rolling scan algorithm just shifts the scan-out line slowly to avoid the voltage retention without the flicker of the full-screen BFI solution. |
Due to the 3D glasses era (frame-sequential stereoscopic glasses), some LCD vendors had to sometimes switch to an EVEN:EVEN:ODD:ODD algorithm rather than EVEN:ODD:EVEN:ODD. LCD inversion algorithms can switch away from their standards, so there's still a risk. However, yes, a lot of LCDs are safe with the 1:1:0:0 sequence; I just can't guarantee it's the bog-standard inversion algorithm. We're lucky they've never used odd-number cadences (it's technically possible, but unlikely, due to the fact there's only two voltage polarities, which lends itself well to even cadences like x2 or x4). |
The "smart" and "safe" 120 Hz BFI shaders swap cadence on a timer, and the "smart" one will also swap whenever the screen transitions to black, so you don't see the stutter (uses shader feedback to store and check the cadence over time): https://github.com/libretro/slang-shaders/blob/master/subframe-bfi/shaders/120hz-smart-BFI/calculations.slang |
That's actually pretty neat too. Remember, it doesn't have to be a black frame -- any imbalance of any kind counts, e.g. the average brightness of even frames drifting away from the average brightness of odd frames. Remember, this is a problem with rolling BFI too! So top half black / bottom half not-black, alternating: that will create image retention too. So in theory, a watchdog shader could track even/odd balance on a per-pixel basis... but that's a lot of compute wasted. Heh. All that just to dodge image retention from LCD inversion, when we could just go with simpler algorithms. Also, you can use temporal scaling to remove the need for an integer native:simulated divisor ratio, e.g. 60Hz BFI at 280Hz is possible using temporal scaling tricks. Think of bilinear interpolation in spatials, except applied temporally instead. Basically, alphablend between the black frame and the visible frames using a linear-correct gamma blend. (It has to be a linear-space alphablend; not the regular blending built into GPUs, but a custom shader fragment instead, to comply with the Talbot-Plateau law.) The problem with temporal scaling algorithms like the one I did for CRT is slow LCD GtG, so use OLED, and also do the gamma2linear / linear2gamma Talbot-Plateau corrections to prevent objectionable flicker. |
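A minimal sketch of that temporal-scaling idea for a non-integer ratio like 60fps on 280Hz, with the straddling refresh alphablended toward black in linear light (Talbot-Plateau). The constants, the 2.2 gamma, and the simplification of leaving the refresh that straddles the frame start dark are assumptions for the example:

```glsl
// BFI without an integer native:emulated Hz ratio (e.g. 60 fps at 280 Hz):
// the refresh that straddles the on/off boundary is alphablended toward black
// in linear light, so the lit window can end mid-refresh without a hard step.
const float NATIVE_HZ   = 280.0;
const float CONTENT_HZ  = 60.0;
const float ON_FRACTION = 0.25;   // portion of each emulated frame kept lit
const float GAMMA       = 2.2;    // simple power-law display gamma (assumption)

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2  uv       = fragCoord / iResolution.xy;
    float perFrame = NATIVE_HZ / CONTENT_HZ;            // ~4.667 refreshes per emulated frame
    float phase    = mod(float(iFrame), perFrame);      // this refresh's position in the frame
    float onEnd    = ON_FRACTION * perFrame;            // end of the lit window, in refreshes

    // Fraction of this refresh covered by the lit window: 1 fully lit, 0 dark,
    // fractional for the straddling refresh. (Simplified: a refresh straddling
    // the emulated-frame start is left dark.)
    float coverage = clamp(onEnd - phase, 0.0, 1.0);

    vec3 srcLin = pow(texture(iChannel0, uv).rgb, vec3(GAMMA));         // gamma -> linear
    fragColor   = vec4(pow(srcLin * coverage, vec3(1.0 / GAMMA)), 1.0); // linear -> gamma
}
```

The blend has to happen on linearized values, as described above; multiplying the gamma-encoded pixel directly would make the partial refresh emit the wrong number of photons and produce visible flicker or banding.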
That, I am agreed on. I want to be able to force "LCD Saver" off, even for LCDs. Some cadences are safe on many LCDs, like the 1:1:0:0 cadence for BFI (even if 1:0:0:0 or 1:1:1:0 isn't safe). Okay, maybe call it the "Let my LCDs burn, baby" mode, for testing out whether the LCD has better inversion algorithms that takes 2 minutes instead of 2 seconds to show retention, etc. |
Any news on that front by chance? Ayn is waiting for the Android implementation to try this new feature on their upcoming 120hz oled handheld if you need any feedback. |
At first, I thought this project was to reduce the macroscopic brightness flicker, so as to protect the eyes. Because in my opinion, the biggest problem of 60Hz BFI is that flicker hurts the eyes. |
Thank you for noting that it's an opinion; so kudos. But to mic-drop the other Armchair Artifactsplainers (1000x, been there, done that) putting "You Prefer This" in other people's mouths unceremoniously -- I referee, mythbust & scientifically yank that out -- and I remind people that people have preferences, you know. Some of us are unusually sensitive to tearing. Some of us are unusually sensitive to flicker. Some of us are unusually sensitive to motion sickness. Not all of us see the same way you do. I've seen <1% of people get motion sick from tearing (e.g. vertigo triggered by tearing artifacts). Sometimes it's an "I don't care" <-> "I notice" <-> "It bothers me" <-> "It makes me motionsick" <-> "It gives me migraines" continuum. You categorize blurs as one thing, but tearing as another -- but for a different person, it's different. Everybody has different triggers; you don't care, but others may care. Everybody wears different eyeglasses. 12% are color blind. Different people have different motion sensitivities. Not everybody sees identically. You know -- my namesake -- Blur Busters & its science -- means I am a beacon for people who get headaches from motion blur. I've got a supertanker full of anecdotes in my mailbox, buddies... As the (display+temporal) entity, specializing in blurs, GtG, ghosting, tearing, original frames, fake frames, display simulators, input latency, BFI, inversion algorithms, framerate, Hz, and anything that involves a (screen)+(time-dimension), with cites in over 30 research papers, I've become the known Hz Einstein authority in this matter. No worry, I know it's a user preference -- tearing can be preferred! -- but not by all. 👽
Depends on the content. Actually, the CRT simulator is preferable to BFI according to hundreds of people telling me. My algorithm, combined with Timothy Lottes' algorithm, was a marriage made in heaven and lowered the "looks better than BFI" bar all the way down to mere 120Hz LCDs (as long as reasonably fast IPS). It may not happen on your LCD (especially if you're using a 6-bit TN LCD), but there are LCDs and OLEDs where the CRT simulator looks better than BFI in the total package deal (comfort / flicker / motion blur reduction). The seams are tiny enough apparently! The jello effect, while extant, is very minor on some content. It's a problem at Sonic Hedgehog speeds, but not all games are Sonic Hedgehog, demanding a 240-480Hz OLED to fix the jello effect. There's been a big surge of Retrotink 4K (my logo is on the bottom) thanks to the CRT simulator now released in an early beta on the box. Many 120Hz users are raving about my CRT simulator now being better than BFI on average, even if some turn it off and use BFI instead for faster content. Also, I have a global phosphor-BFI mode coming to the CRT simulator (infinite velocity scanout), to solve the jello effect problem too, so the scan velocity adjustment (able to go to infinity) will allow you to zero out the jello effect. The neat thing about shaders is that I can invent nonexistent displays, in addition to standard CRT display simulators and plasma display simulators. I can invent a FED-SED-DreamDisplay globalPhosphor BFI without a native:simulated Hz integer requirement, thanks to temporal scaling. For 120Hz globalphosphor variable-MPRT BFI (where 75% of pixels are blur busted and 25% of pixels have a very slight dimframe), to get brighter than regular BFI, I will have to use the alternate LCD Saver algorithm of a sudden extra frame once every 15 seconds (ish); I'm still deciding how to implement it (perhaps as a gamma2linear balanced alphablend). As the (display+temporal) genius, my plan is to follow the Blur Busters Open Source Display Shaders Initiative, releasing more shaders over 2025-2030. A single 1000Hz OLED can do all of this eventually. As you may already know, I already have a VRR simulator on TestUFO, etc. (if not, then read the article). All of that can go into a shader and simulate VRR on a fixed-Hz display too, via temporal scaling algorithms that look good up to approximately 1/2.5th of the native Hz. Yes, CRT-VRR too. A software-based GSYNC Pulsar, temporally scaling, perhaps a ~48-400Hz VRR range remapped onto a 1000Hz fixed-Hz OLED. As you can see from my shadertoy, my CRT simulator looks smooth doing 60fps at 280Hz -- just play with the shader variables. My CRT simulator has no integer-divisor native:simulated Hz ratio! My goal is opensourcing all my display algorithms. You can see the reasons why in Version 1.01 of the Open Source Display Shaders spec (scroll down to the bottom half of the BlurBusters OSD Initiative page) -- about the cesspool state of the display industry and how it's time to shake things up a bit with Bring Your Own Algorithm approaches. |
We can re-enable it but people will have to use it entirely at their own risk and they'll have to know what they are doing. So if someone is going to try to turn BFI on with some 60Hz LCD phone and their screen keeps being weird for a while after using the BFI feature, that's not our fault. There should maybe be an extra warning added to the sublabel so the user is aware of this. |
Is this shader going to be compatible with librashader in the future? |
librashader author was checking with us about doing arbitrary framerates instead of integer multiples of content framerate. We're not going to be able to do non-integer without completely reworking the entire concept on our end, unfortunately, so that's probably going to be a no-go for us. |
That looks good. I do not get that bad banding on my display, but halftoning / dithering is a good way to get around display limitations to reduce the bands/seams. If you'd like to do a pull request at the github, let me know, as that is a useful change. If you made any source code changes, let me know. I don't have a bright or dark area when the CRT simulator runs on my screen, so it's hard for me to fix banding that isn't noticeable on my display. What model is your display? As a general rule of thumb, what I am getting is less banding than even that halftone image, but I know that some displays are not as lucky and need the help... |
Hello, Don't forget, if you hate chroma ghosting in saturated games (and don't want to wait for the Jan 2025 update to CRT simulator algorithm),
This forces the MPRT to finish in the first Hz of the series, e.g. the CRT simulator keeps better MPRT symmetry on all color channels. It's much darker but avoids the majority of chroma ghosting. Then work your way back upwards until the minor chroma ghosting reappears. Everybody has different preference thresholds, and it's less necessary for desaturated games (Tomb Raider) than for saturated games (Super Mario 64). cc: @hizzlekizzle (I'm having difficulty setting it to 0.125 ... why? Did you range-limit the setting?) |
I set the granularity to 0.1 so it wouldn't take forever to cycle through it, but I can increase the precision if needed. Conversely, I could change it to a switch case if those are optimal values. That is, just cycle through integer values that would set it to your above values. Or we could make it automatic based on the number of subframes. |
There's little need to have granularity less than (simulated/native). In other words, no need to go less than exactly 0.125 for 480:60 ratio. Also 0.1 granularity step is too coarse for 360-480Hz OLEDs, you should try 0.025 steps instead of 0.1 steps. Hopefully my new version of CRT simulator will be available in a week, with a bunch of new settings. Also, as an interim stopgap for 120Hz banding, try setting SLEW to 1.01 or 1.05 instead of 1.001. This will prevent LCD-retention-related-gamma-shifts creating banding on excessively-sensitive LCDs, during 120 and 240Hz operation. Alternatively, use 180Hz with a 240Hz LCD. UPDATE: I got a success report that a larger LCD_INVERSION_COMPENSATION_SLEW helped some 120Hz+240Hz users having unsolvable band problems. The band got much fainter when the band scrolled faster. Most LCDs work fine with 1.001 but some may need 1.01 or even 1.05 |
|
Inviting all stakeholders @MajorPainTheCactus @Ophidon @mdrejhon and others.
This is to discuss further improving the initial groundwork done in this PR - #16282