This actually is possible. I’ve placed an oscilloscope probe close to the display, where it acts as an antenna and picks up voltage spikes in sync with the display’s internal clock. I don’t remember for sure, but I think the spike frequency matched the horizontal row timing.
However, to make use of this you would need pickup, amplification and decoding circuitry, which would probably be (unjustifiably) expensive to add and take up a fair amount of real estate.
Hi all, apologies for bringing this back from the dead (lies, I totally loved it) but I have updated the demo in this repo and I think some of you might be interested.
By using the zoom function of the display controller (doubling the line size), the demo can now show 2-bit grayscale fullscreen with the same RAM footprint for the framebuffer. The demo also cycles through different modes when the B button is pressed: half-screen centered -> half-screen top -> half-screen bottom -> half-screen bouncing up and down -> full-screen.
The demo uses 1/3 less CPU by keeping one of the half-frames up twice as long. It uses the display controller’s RAM as the frame buffer and switches between the two halves of that buffer using controller commands only, so as long as the image doesn’t change, the CPU doesn’t have to do much.
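For the curious, the controller-side trick is tiny. Here is a minimal sketch, assuming an SSD1306-class controller (command bytes per its datasheet) and a hypothetical sendCommand() helper, so the repo’s exact sequence may differ:

```cpp
// Minimal sketch. sendCommand() is a hypothetical helper that pulls the
// D/C line low and shifts one byte out over SPI.
void sendCommand(uint8_t cmd);           // provided elsewhere (hypothetical)

// One-time setup: double every row so 32 GDDRAM rows fill the 64-row panel.
void enableZoom() {
  sendCommand(0xD6);                     // Set Zoom In...
  sendCommand(0x01);                     // ...enabled
}

// Per swap: choose which 32-row half of GDDRAM gets scanned out.
// 0x40 | line is the one-byte Set Display Start Line command; the repo may
// use Set Display Offset (0xD3, offset) instead -- a two-byte command.
void showHalf(uint8_t half) {            // half: 0 or 1
  sendCommand(0x40 | (half ? 32 : 0));
}
```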
I have a better phone so this time I could capture a video. Still, it does look better on a real device!
Yes, but it’s not that bad, because the entire screen is swapped by the controller during vsync. Please try it and let me know what you think. The flickering could be further reduced by adding an interactive calibration screen (I tried this already) for the user to adjust a time constant whenever they feel there is too much flickering. Upon exiting the calibration screen, the time constant is saved to EEPROM.
It allows the current-size framebuffer (1024 bytes) to be used for a 128x32 pixel grayscale screen mode. The mode would need a separate set of drawing and SPI transfer methods because the layout of the framebuffer is different, but it costs very few extra CPU cycles compared to monochrome: while the number of pixels to be drawn/transferred is the same as in monochrome mode, switching between buffers on the controller is done with a two-byte controller command.
Another advantage is that the game’s frame rate is decoupled from the display updates that achieve the gray effect, i.e. a game can run at any frame rate, even a very low one, while the buffer switching on the display happens at a (fixed) high speed to achieve grayscale display.
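To make the decoupling concrete, here is a hedged sketch of a main loop where the game repaints at whatever rate it likes while the swap runs at a fixed pace; showHalf() is the controller swap from the earlier sketch, and the other helper names are made up for illustration, not the repo’s API:

```cpp
// Sketch only: game logic runs at its own rate; the half-buffer swap runs
// at a fixed, faster pace.
const uint32_t swapPeriodUs = 7200;      // placeholder; tuned by calibration
uint32_t lastSwapUs = 0;
uint8_t  visibleHalf = 0;

void loop() {
  if (micros() - lastSwapUs >= swapPeriodUs) {  // fixed high-speed swap
    lastSwapUs += swapPeriodUs;
    visibleHalf ^= 1;
    showHalf(visibleHalf);               // cheap: no pixel data is moved
  }
  if (gameWantsToDraw()) {               // hypothetical; may fire at 1 fps or 60
    paintFrame();                        // draw into the MCU-side buffer
    postFrame();                         // flag it for upload to the hidden half
  }
}
```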
The technique boils down to the following (a sketch follows the list):
1- Start with a good default for the display buffer swap time constant and use the value from EEPROM if available.
2- Provide a calibration screen to fine-tune the display buffer swap time constant and save it to EEPROM. The user will use the calibration screen when needed.
3- Use a timer with a delay derived from the display buffer swap time constant to swap the half buffer being displayed (using a short display controller command). If a new framebuffer update has been posted by the application, transfer the buffer to the display controller, taking care to write to the half buffer not currently being displayed.
4- The user application paints the framebuffer in the MCU’s RAM and posts it.
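Here is the promised sketch of steps 1-3. It assumes an AVR Arduino core and the standard EEPROM library; names like transferPlane() and the default value are illustrative, not the repo’s actual code:

```cpp
#include <EEPROM.h>

// Hypothetical helpers: showHalf() from the earlier sketch; transferPlane(h)
// uploads one 512-byte plane into half h of the controller's GDDRAM.
void showHalf(uint8_t half);
void transferPlane(uint8_t half);

const uint16_t DEFAULT_SWAP_US  = 7200;  // step 1: placeholder default
const int      EEPROM_SWAP_ADDR = 0;     // illustrative EEPROM slot

uint16_t swapPeriodUs;
volatile uint8_t planesPending = 0;      // postFrame() sets this to 2 (step 4)
uint8_t visibleHalf = 0;

void loadSwapPeriod() {                  // step 1
  uint16_t stored;
  EEPROM.get(EEPROM_SWAP_ADDR, stored);
  swapPeriodUs = (stored == 0xFFFF) ? DEFAULT_SWAP_US : stored; // erased cell -> default
}

void saveSwapPeriod() {                  // step 2: on leaving the calibration screen
  EEPROM.put(EEPROM_SWAP_ADDR, swapPeriodUs); // put() only rewrites changed bytes
}

void onSwapTimer() {                     // step 3: runs every swapPeriodUs
  visibleHalf ^= 1;
  showHalf(visibleHalf);                 // short controller command, no pixel data
  if (planesPending) {
    transferPlane(visibleHalf ^ 1);      // only ever write the hidden half
    planesPending--;
  }
}
```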
I have updated the repository with a couple of changes:
1- The A button toggles suspended rendering, i.e. the display’s frame buffer is no longer updated. The bouncing demo mode still bounces the screen up/down because the effect is achieved via display controller commands instead of using the CPU.
2- The UP/DOWN buttons adjust the display controller buffer swap period, making the flicker almost disappear. New video attached.
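In case it helps anyone reading along, the adjustment presumably reduces to something this simple; buttonPressed() and the button names here are placeholders, not the actual library API:

```cpp
// Sketch of the calibration tweak: UP/DOWN nudge the swap period by one
// microsecond per press; swapPeriodUs and saveSwapPeriod() are from the
// earlier sketch.
enum Button { UP_BUTTON, DOWN_BUTTON };  // illustrative
bool buttonPressed(Button b);            // hypothetical helper

void handleCalibration() {
  if (buttonPressed(UP_BUTTON))   swapPeriodUs++;  // swap a touch later
  if (buttonPressed(DOWN_BUTTON)) swapPeriodUs--;  // swap a touch sooner
}
// saveSwapPeriod() persists the tuned value to EEPROM on exit.
```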
Don’t say this; it’s misleading. The entire screen might be swapped by the controller (which seems to be a win), but there is no way you can guarantee it happens during vsync (unless the controller magically does it, in which case there would be no flickering or need to calibrate anything). A software sync can’t stay in sync because the CPU speed and display frequency actually change with the amount of charge in the battery. I.e., the frequencies you are trying to synchronize are NOT constant.
This [decoupling] can be done in either case, just at the expense of CPU. You can have a frame rate of 10 and paint the screen 120 times a second, decoupled. Doing it via the zoom mode just makes it a VERY low CPU operation, which is interesting.
You can’t “adjust the sync”. You’re adjusting the speed. There is no real sync. And the problem isn’t the speed, it’s the SYNC. Without true sync you’re fracked. Even if you invented a perfect mapping table and everything was exact, it’s too easy to get out of SYNC… and once you’re out of sync, even if your timing is PERFECT, it looks even WORSE.
Perfect timing with NO sync is worse than random timing. Play with all the demos and get your timing just a little off, then get it perfect again. You can be left with ugly artifacts on the screen for MINUTES - and the closer you are to the perfect speed, the longer your display stays borked.
What you’re suggesting only makes sense if you can get a TRUE sync PERIODICALLY. Like if you could sync up every 60 seconds… could you keep the display stable in between with a carefully calibrated timer? That might be worth a solid effort… but with NO way to capture a true sync, you’re just a ship without a rudder.
Call me crazy, but that sounds as if the more out of sync it is, the better the gray will display. It’s obvious that I don’t have any real idea of how this really works.
You’re not crazy. The earlier examples that showed grayscale by attempting this artificial SYNC also suffered from this same artifact, and it would manifest as a slowly moving scan line. So you are trading what would otherwise look like random flicker for something that is traceable with the eye.
This example here is novel in that it saves on both CPU overhead and RAM.
There is some truth in that. Too bad that “make it perfectly out of sync” is the same type of problem as “make it perfectly in sync”, LOL. You can’t do either well without a sync signal.
I do not believe it is misleading, for two reasons: first, because I explain in the source code how this works and the fact that flickering cannot be avoided (without VSYNC). It’s just less flickering with less CPU usage (after manual calibration). And second:
because this is exactly what the controller does (and, from memory, I believe I even wrote that in the source code). Also, even when the controller swaps the buffer during vsync, flickering doesn’t go away completely, because without VSYNC you cannot guarantee you will send the commands to swap the screen at the right time, resulting in some frames staying visible longer than they should.
I know this and I have posted on this topic before. I am one of those annoying “it can’t be done without VSYNC” people. What I published here is a low-overhead, full-screen (albeit half-vertical-resolution) gray technique based on double buffering on the display controller, which results in less flickering compared to relying on the CPU pushing updates while racing the display refresh.
FYI, I never claimed the decoupling is only possible this way. What I said is that the decoupling can be done with low CPU usage. And, if you involve the CPU, the frequency and intensity of glitches is likely to increase.
Phew I did it again: I wrote too much!
Did anyone actually try the demo? While far from perfect or convenient (because of the calibration step), I think it may be usable, especially when the screen is not zoomed (i.e. half-height): even without calibration the flickering is not very annoying, because the controller refreshes the display twice as fast (there are half as many scanlines).
In pixels it’s 128x32, but with 2 bits per pixel there are different ways of organizing the buffers. The demo uses two 128x4-byte buffers. Given a gray level encoded as two bits b1b0, buffer0 holds the value of b0 for all pixels and buffer1 holds the value of b1 for all pixels. Grayscale drawing functions need to take that into account. Note that you could use existing functions to draw the same primitive twice in two different buffers, but that is wasteful because the coordinate-to-offset computation would happen twice, while a layout-aware function can find the position in the second buffer by adding a constant offset.
Conversely, if the gray level were encoded in adjacent bits of the same byte (4 pixels per byte), both the SPI transfer function (because it would have to unpack/select the bits) and the grayscale drawing functions would have to be designed to support that specific layout.
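For illustration, a layout-aware pixel write for the planar arrangement could look roughly like this (names are mine, not the demo’s): the offset is computed once, and the matching byte in the second plane is a constant 512 bytes away.

```cpp
// Sketch only. Plane 0 (bit b0) occupies bytes 0..511 of the 1024-byte
// framebuffer, plane 1 (bit b1) bytes 512..1023; each plane is laid out as
// 128 columns x 4 pages, like a normal 1bpp SSD1306 buffer.
uint8_t framebuf[1024];
const uint16_t PLANE_SIZE = 128 * 4;     // 512 bytes per plane

void drawPixelGray(uint8_t x, uint8_t y, uint8_t gray) { // gray: 0..3
  uint16_t offset = (y >> 3) * 128 + x;  // computed once for both planes
  uint8_t  mask   = 1 << (y & 7);
  if (gray & 1) framebuf[offset] |= mask;
  else          framebuf[offset] &= ~mask;
  if (gray & 2) framebuf[offset + PLANE_SIZE] |= mask;
  else          framebuf[offset + PLANE_SIZE] &= ~mask;
}
```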
Yes, it feels that way to me too. Which reminded me (if memory serves) that the Commodore 64 had a multicolor sprite mode that resulted in sprites with half the (horizontal) pixel count.
If speed mattered, the only way to do this would be separate sprites and buffers (as you say you’re doing already). One benefit is that you don’t necessarily need any new drawing code. Bit-wise operations (shifting) on AVR are ridiculously slow; mixing the buffers would require a lot of additional effort at render time to tear the buffers back apart and render just half the content.
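To make the cost difference concrete, here is a rough illustration, with a made-up interleaved layout rather than anything from the repo:

```cpp
#include <SPI.h>

// With separate (planar) buffers the SPI transfer is a plain byte stream:
void transferPlanePlanar(const uint8_t *plane) {
  for (uint16_t i = 0; i < 512; i++) SPI.transfer(plane[i]);
}

// With an interleaved 2bpp layout (4 pixels per byte -- hypothetical),
// every byte sent to the controller must first be reassembled from shifted
// bit pairs, and AVR shifts cost roughly one cycle per shifted position:
uint8_t gatherPlane0(const uint8_t in[2]) { // 2 bytes = 8 interleaved pixels
  uint8_t out = 0;
  for (uint8_t p = 0; p < 8; p++)
    out |= ((in[p >> 2] >> ((p & 3) * 2)) & 1) << p; // extract bit b0 of pixel p
  return out;
}
```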
Of course in a lot of cases speed doesn’t matter so much.