SVGA support aside, it will never cease to astound me that you can load Windows 3.x on a modern standards-supporting PC and get basic VGA working out-of-the-box but you cannot do the same on modern Linux/BSD to get a basic software-rendered VGA framebuffer supported by Xorg/Wayland if you don't have the right drivers installed (and the correct configuration files manually set up).
(the dead xfree86 project was probably the closest attempt to making this "just work" though it had a long way to go; this approach was not preserved in the Xorg fork)
Xfree86 never did anything different to Xorg here. If you boot via CSM on a modern PC (which is really not something you should do, but, well) you should get Xorg running with the VBE backend using x86emu to execute the video BIOS, and if you boot via EFI you should get modesetting running on top of efifb using whatever mode your firmware and bootloader left you in.
But note that this is actually easier for 16 or 32 bit operating systems! Setting VESA modes involves making a real mode 16 bit call (there's nominally a 32 bit entry point for VESA but it was specced late in the standard's life and basically nobody implements a working version), and once you're running in 64-bit mode you've lost the ability to do vm86 so calling 16 bit code from userland becomes impossible. This is why x86emu is required (basically we read the video BIOS code and then run it under an x86 emulator), and it's not always perfect.
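For concreteness, here is a minimal sketch of the 16-bit real-mode call being described, assuming a Borland/Open Watcom-style DOS C compiler with dos.h's int86(); the helper name and the choice of mode 0x101 are just illustrative. On a 64-bit OS this exact call can no longer be made from userland, which is where x86emu comes in.

    /* Sketch: set a VESA mode the 16-bit way, via INT 10h into the video BIOS. */
    #include <dos.h>
    #include <stdio.h>

    static int vbe_set_mode(unsigned short mode)
    {
        union REGS r;
        r.x.ax = 0x4F02;            /* VBE function 02h: set SuperVGA mode     */
        r.x.bx = mode | 0x4000;     /* bit 14 requests the linear framebuffer  */
        int86(0x10, &r, &r);        /* real-mode call into the video BIOS      */
        return r.x.ax == 0x004F;    /* AL=4Fh + AH=00h means supported and OK  */
    }

    int main(void)
    {
        if (!vbe_set_mode(0x101))   /* 0x101: 640x480, 256 colours */
            puts("VBE mode set failed");
        return 0;
    }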
I just wanna say that "we read the video BIOS code and then run it under an x86 emulator" sounds like some truly heroic effort by a bunch of engineers and I'm glad they did the work. I hope the people involved are proud of it.
IIRC, the reason it needed to be a VM in the first place was because X (simplifying a bit) "started" at Sun, running on PowerPC boxen. (Here's another subthread mentioning the same thing - https://news.ycombinator.com/item?id=42597613)
I agree it was an engineering marvel, IMO only less impressive than the CPU implemented in JPEG instructions for Pegasus.
X was originally an MIT development, and had no intrinsic ties to the underlying hardware (Sun implemented something called NeWS which ended up losing to X in the long run). Different vendors took the reference code and added device specific code to work, but this was an era where your hardware vendor was also your OS vendor so that was fairly transparent for most users. At this point almost every CPU architecture had their own expansion bus so there wasn't really any way you could plug a card intended for one machine into another. The vendor X server worked just fine.
And then PCI became ubiquitous and it was much cheaper to plug a PC card into a machine than buy an overpriced one from Sun or DEC or whatever, and people were starting to run Linux or *BSD instead of the vendor OS, and suddenly there was an incentive to be able to run graphics card x86 init code even on other CPU architectures.
I ended up stealing the concept and the code to make Ubuntu's usplash boot splash app work on 64-bit, which is an entirely different story.
> X (simplifying a bit) "started" at Sun, running on PowerPC boxen.
It didn’t. The X Window System (https://en.wikipedia.org/wiki/X_Window_System) started at MIT and is from 1984; PowerPC (https://en.wikipedia.org/wiki/PowerPC) is from 1992.
Wrong on both counts; neither Sun nor PowerPC was relevant to the appearance of X Windows in the computing world.
Windows NT did this too. And it was the work of one engineer. (although that engineer was Dave Cutler…)
Linux supports vgafb/vesafb so this is possible if the distribution is configured appropriately.
I think some/most distributions might not enable it out of the box because it would generally result in a low performance/quality experience and the user not realizing what the problem is, and nowadays almost all GPUs are supported natively, so nobody has invested in writing code to show a "Using unaccelerated VGA/VESA, you may want to fix this" popup.
It's been a very long time, but I recall X having a generic VGA driver that "just worked". Are you saying that's not there anymore?
Generic VGA doesn't generally exist on EFI platforms, the firmware doesn't program the card into a state where the VGA registers are going to do anything useful. You should get a working unaccelerated framebuffer from the firmware, though.
Not only that, I remember X.org source tree had a bult-in x86 emulator so VGA bios of some PCI video cards could be run on Solaris.
Technically not related to Solaris, since Solaris x86 existed and you wouldn't need it there, but yes, this was used on Sparc and any other CPU unable to run the card BIOS (including 64-bit x86 Linux, since virtual 8086 mode goes away when you're on long mode)
I remember the "generic" VGA driver having horrible default settings that resulted in 300x200 at 16 colors.
I just had a look at the install of ancient NetBSD I have running on my Pocket386, and the X server in there (which is XFree86 3.x) does have a generic VGA driver that does 640x480 at 16 colors just fine. I'm not sure what the "default settings" would be in this context since X that old doesn't even start without a proper XF86Config.
Old would mean mid-90s XFree86.
That's exactly what I'm talking about here, as well. NetBSD 1.2 is from 1996.
320x200 at 256 colors. X doesn't support any depth lower than 8bit, so VGA 640x480 wouldn't work.
X absolutely supports depths of 1 and 4 bits per pixel, along with 8bpp, with its VGA server: https://www.xfree86.org/4.8.0/vga.4.html
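For reference, a sketch of the XF86Config-4/xorg.conf sections that select the generic driver at 4bpp, per the man page linked above; the identifiers are arbitrary and the mode listed is just an example.

    Section "Device"
        Identifier "GenericVGA"
        Driver     "vga"            # the generic VGA driver
    EndSection

    Section "Screen"
        Identifier   "Screen0"
        Device       "GenericVGA"
        DefaultDepth 4              # 16-colour planar VGA
        SubSection "Display"
            Depth 4
            Modes "640x480"
        EndSubSection
    EndSection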
Thanks for the correction.
I hadn't thought of 1bpp because it isn't interesting on VGA hardware. But you're absolutely right about 4bpp having been a thing.
X does, modern X apps may not.
I remember that Xfree86 allowed 1 bit (B&W) modes
X11 has had a "VESA" driver since forever; however, its performance scales badly, since the number of pixels to process grows rapidly with the resolution.
Bootable Linux distributions I use at work (GRML and Clonezilla mainly) automatically resize to the native resolution of the screen or virtual KVM during boot with KMS support, and they work really well. Anaconda (RedHat and derivatives' installer) and Debian's installer also scale to native resolution on boot.
GUI installers use VESA over X11 directly.
> xfree86 project was probably the closest attempt to making this "just work"...
The XOrg fork has had "configless boot" for a long time. I haven't maintained a config file in a very long time now, and I'm happier than ever (see https://www.xkcd.com/963/).
Xorg is Xfree86 with a cleaned up build system.
Xorg is a fork of XFree86 with many, many changes. Some XFree86 code is retained as one possible backend.
One of the most user visible things they did, though, was make it work better if you didn't have a config file.
The fun of manually writing modelines and hoping for the best with startx.
It's specialization. Windows 3.x runs with a GUI on an IBM PC. XFree86-era Linux runs with any UI on an IBM PC. Xorg-era Linux runs with any UI on any computer. Therefore you have to specify what kind of configuration you require, and don't get it out of the box. But note that every distribution that runs on IBM PCs with a VGA GUI does come with that configuration out of the box, because they are re-specialized for that system.
And Wayland requires a GPU as far as I know, since the entire protocol is based on passing GPU memory buffers around and compositing them. Deleting the backwards compatibility for software rendering was half the point of creating Wayland.
Wayland does not require a GPU, and the baseline protocol is just shared memory buffers. The protocol requires a file descriptor for said buffers, and getting GPU drivers to support mapping GPU memory to file descriptors is what allowed Wayland to become efficient, through zero copy and just passing handles from clients to compositors.
The Wayland base protocol is completely unusable since it does nothing by itself. Doing useful work with Wayland requires more stuff, like a GPU driver.
It requires a GPU in the same sense that windows 3.1 or dos or whatever else requires a GPU to convert the contents of a memory buffer to a signal that a display can actually display.
But you can also just encode the result of your composition to an H.264 stream and send that over the network if you so desire. No GPU required in this case.
> The simplest means of getting pixels from client to compositor, and the only one enshrined in wayland.xml, is wl_shm — shared memory. Simply put, it allows you to transfer a file descriptor for the compositor to mmap with MAP_SHARED, then share pixel buffers out of this pool. Add some simple synchronization primitives to keep everyone from fighting over each buffer, and you have a workable — and portable — solution.
https://wayland-book.com/surfaces/shared-memory.html
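A minimal client-side sketch of that shm path in C, assuming a struct wl_shm * has already been bound from the registry; the helper name is illustrative and error handling is omitted.

    /* Create a wl_buffer backed by plain shared memory - no GPU involved,
       just memfd + mmap + two protocol requests. */
    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <unistd.h>
    #include <wayland-client.h>

    struct wl_buffer *make_shm_buffer(struct wl_shm *shm,
                                      int width, int height, void **pixels)
    {
        int stride = width * 4;                  /* XRGB8888: 4 bytes per pixel */
        int size   = stride * height;

        int fd = memfd_create("wl-shm-pool", 0); /* anonymous shared memory     */
        ftruncate(fd, size);
        *pixels = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
        struct wl_buffer *buf = wl_shm_pool_create_buffer(
            pool, 0, width, height, stride, WL_SHM_FORMAT_XRGB8888);

        wl_shm_pool_destroy(pool);               /* buffer keeps the pool alive */
        close(fd);                               /* compositor has its own fd   */
        return buf;                              /* attach to a wl_surface      */
    }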
> [weston] Available back-ends:
> drm – run stand-alone on DRM/KMS and evdev (recommend) (DRM kernel doc)
> wayland – run as a Wayland application, nested in another Wayland compositor instance
> x11 – run as a x11 application, nested in a X11 display server instance
> rdp – run as an RDP server without local input or output
> headless – run without input or output, useful for test suite
> pipewire – run without input, output into a PipeWire node
https://wayland.pages.freedesktop.org/weston/toc/running-wes...
That old Windows 3.1 GUI looks so much more intuitive, efficient, and usable compared to what we've got today.
https://wuffs.org/user/pages/02.blog/windows-3x-graphics/640...
What would Win11 even look like on a lower-resolution display such as in TFA?
The Win11 Start Menu is borderline unusable beyond typing in a keyword and praying to the circuits. What happened?
Naive hypothesis: Windows NT and 2k hit a sweet spot, then product managers have been working their magic ever since.
KDE and Gnome are looking more and more appealing over time, despite not changing much. :)
Flat design has been an absolute pox on the industry. Skeuomorphism was also pretty bad (since it was basically just flat design with pictures).
Windows Forms got a lot right: I think the one missing symbol was "active but not editable".
Blinking cursor in textbox with gray background? That would be one case of "active and not editable".
In Windows, the text cursor no longer blinks (more accurately it stops blinking after a few seconds, becoming static) in the interests of battery saving. At least I think it was battery saving, been months since I read it and my memory is foggy because it's fucking stupid.
Even on desktops that don't have batteries.
The problem is grey usually means "disabled" which also implies that data may be stale.
As it is, you can set a text box read-only but active, but it wastes time since it doesn't visually convey that it's not interactable (i.e. you have to make up a scheme).
Basically it would've been nice to have a consistent background color or other visual cue which says "this will update, this is live, but you can't type here".
I'd go with Windows 7 as the last "good" version. It is similar to NT/2k, but prettier, thanks to improved graphics hardware. Same thing for XP before it. Even Vista had a decent UI, its flaws were elsewhere.
Windows 8 broke everything and Windows never recovered. I blame it on two reasons: the rise of mobile platforms, and laziness from Microsoft.
For the first point, we have now what seems like an unsolvable problem. Many apps now come with a desktop version and a mobile version. Completely different user experience. One has a large screen, a keyboard and mouse. The other has a small touchscreen. A good desktop app would be completely different from a good mobile app. However, you don't want that either, as you want your users familiar with one version to feel at ease with the other. It means that even when you do your best, there are compromises.
Microsoft probably could have done a decent job, but they didn't. They just became lazy. It is evident by looking at the control panel. The new control panel (Settings) dates back to Windows 8, 12 years ago, and they still haven't ported all the features from the old one to the new one, so you need both. They intended a complete switch a few months ago; they weren't ready. Will they ever be? In addition they regularly remove popular customization options, the style is inconsistent between the bundled apps, etc. This is not just controversial, it is objectively bad.
Another factor, though Microsoft and Windows are not to blame here, is that app developers prioritize branding and internal consistency over OS integration. Many modern UIs are just a web page rendered with a browser engine (e.g. Electron). They don't use native OS controls, ignore theming, draw their own window decorations, etc. Maybe the OS is inconsistent, but app developers don't help.
FWIW mobile and desktop aren't even the only distinct "UI environments". Steam's had a "big picture mode" aka "tenfoot mode" for a long time which is designed for media center PCs (same hardware, but you're sitting on the other side of the room, ten feet away, so everything had better be big) and of course there are things like VR.
Notably, the way Steam implements its tenfoot experience is an entirely separate UI instead of a single jack-of-all-trades UI. I've never used it since I don't have that type of setup so I can't say how well it works for people accustomed to the desktop UI.
It does seem that as annoying as it might be to get used to a new UI for mobile and desktop, using the same UI for both is even worse.
Again we see this theme, as in other parts of software engineering, that people try too hard to invent abstractions instead of just grinding out the work (a separate design for each platform).
Big picture mode is gone. It might be in there somewhere, but the 10' UI has been replaced by the Steam Deck UI. Not a bad replacement, honestly.
The Steam Deck UI has also solved a problem that Microsoft appears entirely unable to address: mixed input modes. The entire UI can be accessed through the touch screen or the physical controls. Microsoft has been failing horribly at this since Win8, and apparently operate under the impression that every user has a 40" touch screen on their desktop computer and doesn't know what a mouse is.
Good UI is achievable if your goal is to make useful software. That simply isn't what Microsoft's goal is anymore. They're just broadcom now: completely abandoned the product to focus on squeezing the users for everything they're worth.
The long migration from control panel to settings is one of the big tells to me that MS doesn't have the windows UI anywhere near a top priority.
They have a large amount of potential resources to throw at any problem they like; they could have done a v1.0 for Win8 that provided a complete set of equivalents in the new style when they judged the OS was ready to release, then iterated from that v1.0 in later releases as they have with other aspects of the OS. If anything, MS seem to keep parts more fluid than long-term stable now and aren't afraid to make major changes in service packs; Win10 reaching EOL is quite different to the initial release, and similarly for Win11.
I can appreciate the UI being 'uneven' in a project with loose coordination, as you find in a lot of Linux distros, but for Windows it could be better than it is, like an orchestra playing together.
Lore is that Windows 11 is Windows 10X built for phones and tablets too. The start menu really appears influenced by Android, where it just flat shows everything installed and you're more inclined to search instead.
I do agree however - a beta for Win10 was a hybrid of tiles mixed with a Windows 7 list that throws back to Windows 2000 and I set that as my peak - you could have the best of both. The notifications area has always been weak and the settings panel is crap compared to the control panel (though debatable if control panel ever was the best we could have since it's somewhat cluttered and complex)
Kind of, Windows 10X was going to be the final reboot, also with Win32 sandboxing.
Instead they have been bringing UWP infrastructure into the Win32 execution environment, but hardly anyone cares nowadays, unless there is no way to reach the same functionality with classical Win32/COM APIs.
However, due to security issues like last year's CrowdStrike incident, Win32 sandboxing is pretty much part of the Windows roadmap.
The whole flat thing was pioneered by Microsoft. Boggles the mind as to why Apple and Android copied this nonsense. https://en.wikipedia.org/wiki/Metro_(design_language)
The author mentions the screen being corrupted when a DOS prompt is opened in windowed mode. This can happen because the DOS prompt runs in a separate VM (in V86 mode), and makes calls into the VGA ROM BIOS via INT 10h. The VGA ROM BIOS on this machine is probably a wrapper over VBE; that is, it probably contains IN and OUT instructions that talk to the VBE I/O ports, 0x1CE and 0x1CF. These reads and writes from the DOS VM will, by default, be allowed to reach the physical hardware if they are not virtualized by the VMM.
This is a common problem that authors of Windows 3.x/9x display drivers had to handle, although the specific I/O port numbers to be virtualized vary by graphics adapter. There are samples in the Win95 DDK that show how to use the VMM services Install_IO_Handler and Enable/Disable_Global_Trapping to set up I/O port traps, and VDD_Get_VM_Info from within the trap handler to determine the VM that currently owns the CRTC. This allows the trap handler to reach a decision about whether to allow an I/O to reach the hardware, or to virtualize the access in some way. A good virtualization policy to start with is probably just to drop any writes from non-CRTC-owner VMs. Any additional needed complexity can be added from there.
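A policy-only sketch of the trap handler's decision, not a buildable VxD: in a real driver this function would be registered for ports 0x1CE/0x1CF via Install_IO_Handler and the owner looked up with VDD_Get_VM_Info, as above; the C types and names below are hypothetical stand-ins.

    /* Hypothetical trap policy: forward I/O only for the VM that owns the CRTC,
       drop writes from everyone else, and feed non-owner reads a dummy value. */
    typedef unsigned long VMHANDLE;      /* stand-in for a real VM handle        */

    extern VMHANDLE crtc_owner;          /* would come from VDD_Get_VM_Info      */

    /* Returns nonzero if the access should reach the physical hardware. */
    int vbe_port_trap(VMHANDLE vm, unsigned port, int is_write, unsigned *value)
    {
        (void)port;                      /* same policy for index and data ports */

        if (vm == crtc_owner)
            return 1;                    /* owner VM: pass straight through      */

        if (is_write)
            return 0;                    /* non-owner write: silently dropped    */

        *value = 0xFF;                   /* non-owner read: fake a value...      */
        return 0;                        /* ...and leave the hardware alone      */
    }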
The Virtual Display Device (VDD) runs as part of the underlying virtual machine manager, and acts somewhat like a multiplexer for the video hardware. If a DOS app is full screen, its commands are directly sent to the ‘real’ VGA adapter; otherwise, they’re emulated by the VDD.
It's interesting to see others (re)discover this architecture, which IMHO was quite ahead of its time, since it predates modern hypervisors with hardware pass-through. The Windows 3.x GUI itself, including its preemptively-multitasked processes, effectively runs as an extended (protected-mode) DOS process inside a VM running DOS, and the hypervisor kernel, VMM32, is what multiplexes between it and other VMs running DOS processes. Thus one part of the display driver sits under GDI and interacts with the "hardware", while the other part in ring 0 virtualises the hardware and multiplexes it with the other VMs.
This would fix it for DOSBox, but that fix would be tied to whichever video adapter it's emulating. I don't want that, I'm trying to make the generic VBE patch work better!
Having written a Win9x VESA framebuffer driver for the Intel GMA950 as well, and added basic acceleration (blitter and fill commands), and encountering basically the same issue, I realised what may be the reason Win9x never came with a generic VESA driver: the VDD needs to know how to save and restore the GPU state, the details of which are obviously vendor-dependent. I did come up with some ideas on how that could be done generically, i.e. emulate/trace the VBIOS and see which ports/MMIOs it touches on each mode switch, but never got around to implementing it.
While on DOSBox I get text mode with lots of corrupted characters, on the Eee PC I get a broken version of the GUI where some of the colours have disappeared.
That looks like the palette registers didn't get saved and restored correctly. Also, the corruption at the top of the screen can be avoided by moving the high-res display plane up by 256K, so as to leave the first 256K of VRAM for the VGA plane and VGA emulation. Fortunately the Intel GMA has a bunch of publicly available documentation (although not for the 900 nor 950, and only for the 810/815 and 965+ but the majority of the registers and commands have not changed) you can refer to for the details.
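For illustration, "save and restore the palette" at the register level is just a walk over the three standard VGA DAC ports. A userland-style sketch using Linux's sys/io.h helpers (a real Win9x VDD would do the equivalent from ring 0; from a normal process you need ioperm() and root):

    /* Save/restore the 256-entry VGA DAC palette.
       0x3C7 = read index, 0x3C8 = write index, 0x3C9 = data (auto-increments). */
    #include <sys/io.h>

    static unsigned char pal[256 * 3];          /* 6-bit R,G,B per entry */

    void save_palette(void)
    {
        int i;
        outb(0, 0x3C7);                         /* start reading at entry 0 */
        for (i = 0; i < 256 * 3; i++)
            pal[i] = inb(0x3C9);
    }

    void restore_palette(void)
    {
        int i;
        outb(0, 0x3C8);                         /* start writing at entry 0 */
        for (i = 0; i < 256 * 3; i++)
            outb(pal[i], 0x3C9);
    }

    /* Call ioperm(0x3C7, 3, 1) first to get access to ports 0x3C7-0x3C9. */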
> it can't even run most up-to-date Linux distros due to its lack of x86_64 support
My Eee is successfully chugging along with 32bit Debian. Firefox is too heavyweight to do much but lag, but mpv works well enough to stream video. But I mostly use it when I'm running behind on a book and basically need a typewriter that can run pandoc, and fewer distractions.
I totally agree on having a computer with minimum distractions when writing -- for that I use a PS/2 386SX running WordPerfect 5.1.
I loved my EEE, super portable, but the keyboard is too small for serious typing IMO.
Is that a PS/2 55SX?
That was the computer I had between ages 9 and 14. I wish I had one now.
No, it's a PS/2 50Z, which comes with a 10MHz 286. It was my first computer, second hand from my dad. When I got it, I saved up and bought the 386SX/Now!, a 25MHz 386SX that fits into a 286 slot.
Oh, I had that too, just never upgraded the CPU. Selling it in 1996 for a few bucks was probably the worst mistake I did in my computer life.
The second worst error was getting rid of my Asus Eee, also featured in the OP.
> Selling it in 1996 for a few bucks was probably the worst mistake I did in my computer life.
Why do you say that? They are great machines (mine has lasted forever) but finding parts was hard even when they were new due to the microchannel architecture.
So the interesting thing about that is there is now a dedicated team of enthusiasts porting and cloning cards on MCA[1]. Still not cheap... but at least you can get a sound card now.
[1] https://texelec.com/product-category/mca-bus/
was that a machine you had lying around, or did you seek it out?
It was my first computer.
Everything still works great, except the original monitor died many years ago. I hope to replace it one day, but $200 for a used 12" monitor is more than I want to spend right now (I have a tractor I am restoring that takes precedence).
So basically you use it with no web or modern apps. That’d be an interesting use case for something such as Haiku OS.
I tried it on my PC. There is obviously not enough available software for it to be a real daily driver, but it struck me as an OS that would be very enjoyable for low-connectivity needs like a typewriter and mail.
I loved how coherent it felt: the UI, the base software and even the filesystem. If I understood correctly, the filesystem is a representation of all your data, "files" can have arbitrary metadata, and you can do pretty much everything from the file manager. It's like your whole filesystem is a NoSQL database and apps are OK with that. Your contacts are "files" in a folder, your mails are "files" in a folder, etc.
I never touched BeOS back in the day, and I can totally see how this paradigm could have worked well in the 90s with low connectivity. Being able to write a mail "file" without internet, drag and drop it onto a floppy drive and then onto another computer, or send it over the Internet, all with the file manager. It felt amazingly coherent.
Unfortunately this paradigm ceases to be useful when you need interoperability with other computers that aren’t compatible with BeOS/Haiku filesystem which means statistically _any_ computer.
But on your typewriter machine, that could be interesting.
> So basically you use it with no web or modern apps.
I use it with a lot of modern things, actually. And across the 'net.
My books are built with a pandoc/lua scripting system, and not an ancient version. I use the latest features. I also synchronise the sources via git, and the built bundles via an SSH pipeline to my beta readers.
I also already mentioned mpv - I use it to stream music, because I tend to listen whilst writing. Often the one song on repeat for three hours, but I do.
How long ago did you try it out? Haiku is daily-driver-ready for a lot of people, especially now that Firefox has been ported (under the name of Iceweasel).
Maybe a year ago
You should really try it again! Haiku has come quite far since a year ago.
Debian is removing support for 32-bit x86 in the next release I think.
They're removing new kernel versions, but existing packages will continue [0]. As the Linux kernel team stopped official support back in 2012, it makes sense.
But as I have an existing install... It should just keep grinding on for a bit longer yet.
[0] https://lists.debian.org/debian-devel-announce/2023/12/msg00...
I had the 1215B, but it died last year; now an Android tablet has taken over.
It seems the tablets have wiped out the netbook market segment.
While they still exist as ultraportables and 2-in-1s, that is the other side of the price segment.
That title was... slightly confusing. That said, I will always feel a sense of awe when I read about how those old, DOS-based versions of Windows worked behind the scenes. Everything is held together by software duct tape, yet somehow it works.
> That title was... slightly confusing.
There's nothing in the rules that says a dog can't use Ghidra.
"Hey look! It didn't crash!" "I guess Toonces can write a driver!" "Yeah! Just not very well!"
https://www.youtube.com/watch?v=5fvsItXYgzk
Finally an Air Bud movie I'd watch.
I was expecting something about the dog from Microsoft Bob, like a war story from an MS veteran about making the characters work on Windows 3.1
Alright; I’ll bite. Where’s the dog? I’m still confused.
The author is a furry, and the dog being referred to is himself
(Do other kink subcultures do this stuff??)
Furries do tend to be more open about kink (as part of the community’s general culture of acceptance), but to describe furry as a “kink subculture” is misleadingly reductive at best.
It reminded me of the old comic with the phrase "on the Internet, nobody knows you're a dog".
Less kink and more “if you’re on the internet why NOT pretend to be a walking talking dog?”
See also, VTubers, manga artists (who often represent themselves as characters), and our tendency to anthropomorphize computers (“it’s gotta think”)
It's not a kink subculture. There are kink subcultures within it, but the only common factor among all the subcultures inside it is an appreciation for anthropomorphism.
Hi fox.
The author has a fursona, so I think the author himself is the dog.
Famed for the "when you like my post, it's like you're putting a treat in my mouth" tweet and pictures of lonely abandoned objects.
It’s the author.
Go listen to Casey Muratori talk. We've created a giant pile of abstractions, but we don't actually need them, and all they do is make performance worse.
We would already be much better off if folks actually bothered to learn a bit about data structures and algorithms, and didn't ship Electron garbage.
Abstractions aren't the problem, rather how they are used nowadays, by plenty of folks that only went through JavaScript bootcamps.
To be clear, they aren't at fault; rather, those that teach at those factories are.
Now I'm interested, which talk do you mean?
Edit: I might have read that wrong, but I'm interested in specific recommendations nonetheless.
See for example this playlist of videos of his with some videos from 2021 where he implements a terminal emulator. As he says himself, this project is not so much about terminals as it is about software development practices in general.
https://youtube.com/playlist?list=PLEMXAbCVnmY6zCgpCFlgggRkr...
See also this video of his: “Clean” Code, Horrible Performance
https://youtu.be/tD5NrevFtbU
I remember when the ET4000H came out and it was not supported by Windows 3.1 at the time; I had to call MS tech support and they sent me a driver disk that arrived 8 hours later.
Best support I've ever had for a pirated product.
To clarify... the ET4000H was the same ET4000ax that was supported by Windows 3.0 and 3.1 already, but with the HiDAC (15/16bit truecolour) fitted instead of the 256 colour DAC. My recollection is fuzzy, but ISTR that it worked with the default driver in 16 colour mode, but not 256 or better, resolution selection may have been limited too.
MS claims that the driver that supported HiDAC came out the 3rd week of April 1992, which would have been 1-2 weeks after the time period I'm remembering, so it sounds about right.
That is fun, I have an EEEPC 207g, one of those smaller ones. It still works, but I never thought about retro gaming with it - mine is just "collecting dust"; it would be fun to try these things on it!
You mean the 701? 207g isn't a model of eeepc.
It may be a 701, but I need to find which box it's in to actually check. It has a 7-inch screen; it's old but still holds a charge in the battery for 2h at least, and has a long charging cable. My only "mod" was to upgrade the RAM to 2GB, and it has a 4GB SSD I think - I have a card in it with 16GB and it runs a never-updated version of crunchbang that was installed I really don't remember when - it doesn't go online ever, so it's fine. The last thing I did on it was to replay Illusion of Gaia using an emulator.
Idly comparing the silly little annotations, I notice the following state changes you've probably also stared at to the point of semantic satiation...
Pattern analysis:
- Broken DOS and Broken GUI are 200 or 250, functional are 100 or 050. What's that address?
- Broken GUI is somehow in M_VGA mode instead of LIN8. How and why did it get like that, and is that related to why it somehow got into 400x600, horizontally half of 800x600 (??). (True "textmode" is actually 720x400, as seen in both DOS modes.)
Unrelated to the actual article, but so refreshing seeing a website with a design/structure that reminded me of the best years of internet. How I miss left sidebars, tables and more!
I appreciated this too. First time I've seen a dark/light selector on the right side, which also highlights how you can blend old and new very effectively.
Check out Neocities — lots of options when your domain is limited to 2..3 megabytes