The Agony, the Ecstasy, the Dual Monitors

I am composing this blog entry on the right-hand screen of a brand shiny new dual-monitor rig that took me the best part of a week to get working. I’m going to describe what I went through to get here, because I think it contains some useful tips and cautions for the unwary.

I started thinking seriously about upgrading to dual monitors when A&D regular HedgeMage turned me on to the i3 tiling window manager. The thing I like about tiling window managers is that your screen is nearly all working surface; this makes me a bit unlike many of their advocates, who seem more focused on interfaces that are entirely keyboard-driven and allow one to unplug one’s mouse. The thing I like about i3 is that it seems to be the best in class for UI polish and documentation. And one of the things she told me was that i3 does multi-monitor very well.

So, Monday morning I went out and bought a twin of the Auria EQ276W flatscreen I already have. I like this display a lot; it’s bright, crisp, and high-contrast. HedgeMage had recommended a particular Radeon-7750-based card available from Newegg under the ungainly designation “VGA HIS|H775FS2G”, but I didn’t want to wait the two days for shipping, so I asked the tech at my friendly local computer shop to recommend something. After googling for Linux compatibility I bought an nVidia GeForce GT640.

That was my first mistake. And my most severe. I’m going to explain how I screwed up so you won’t make the same error.

For years I’ve been listening to lots of people sing hosannahs about how much better the nVidia proprietary blobs are than their open-source competition – enough better that you shouldn’t really mind that they’re closed-source and taint your kernel. And so much easier to configure because of the nvidia-settings tool, and generally shiny.

So when the tech pushed an nVidia card at me and I had googled to find reports of Linux people using it, I thought “OK, how bad can it be?”. He didn’t have any ATI dual-head cards. I wanted instant gratification. I didn’t listen to the well-honed instincts that said “closed source – do not trust”, in part because I like to think of myself as a reasonable guy rather than an ideologue and closed-source graphics drivers are low on my harm scale. I took it.

Then I went home and descended into hell.

I’m still not certain I understand all the causal relationships among the symptoms I saw during the next three days. There is a post, with comments, on G+ about these events; I won’t rehash them all here, but do look at the picture.

That bar-chart-like crud on the left-hand flatscreen? For a day and a half I thought it was the result of some sort of configuration error, a mode mismatch or something. It had appeared right after I installed the GT640. I mean immediately on first powerup.

Then, after giving up on the GT640, because nothing I could do would make it do anything with the second head but echo the first, I dropped my single-head card back in. And saw the same garbage.

From the timing, the least hypothesis is that the first time the GT640 powered up, it somehow trashed my left-hand flatscreen. How, I don’t know – overvoltage on some critical pin, maybe? Everything else, including my complete inability to get the setup to enter any dual-head mode over the next 36 hours no matter how ingeniously I poked at it with xrandr, follows logically. I should have smelled a bigger rat when I noticed that xrandr wasn’t reporting a 2560×1440 mode for one of the displays – I think after the left one got trashed it was reporting invalid EDID data.

But I kept assuming I was seeing a software-level problem that, given sufficient ingenuity, I could configure my way out of. Until I dropped back to my single-head card and still saw the garbage.

Should I also mention that the much-vaunted nvidia-settings utility was completely useless? It thought I wasn’t running the nVidia drivers and refused to do a damn thing. It has since been suggested that I wasn’t in fact running the nVidia drivers, but if that’s so it’s because nVidia’s own installation package didn’t push nouveau (the open-source driver) properly out of the way. Either way, nVidia FAIL.

So, I ordered the Radeon card off Newegg (paying $20 for next-day shipping), got my monitor exchanged, got a refund on the never-to-be-sufficiently-damned GT640, and waited.

The combination of an unfried monitor and a graphics card that isn’t an insidiously destructive hell-bitch worked much better. But it still took a little hackery to get things really working. The major problem was that the combined pixel size of the two 2560×1440 displays won’t fit in X’s default 2560×2560 virtual screen size; this side-by-side configuration needs a (2×2560)×1440 = 5120×1440 virtual screen.

OK, so three questions immediately occur. First, if X’s default virtual screen is going to be larger than 2560×1440, why is it not 2x that size already? It’s not like 2560×1440 displays are rare creatures any more.

Second, why doesn’t xrandr just set the virtual-screen size larger itself when it needs to? It’s not like computing a bounding box for the layout is actually difficult.

Third, if there’s some bizarre but valid reason for xrandr not to do this, why doesn’t it have an option to let you force the virtual-screen size?

But no. You have to edit your xorg.conf, or create a custom one, to up that size to the required value. Here’s what I ended up with:

# Config file for snark using a VGA HIS|H775FS2G and two Auria EQ276W
# displays.
#
# Unless the virtual screen size is increased, X cannot map both
# monitors onto screen 0. 
#
# The card is dual-head.
# DFP1 goes out the card's DVI jack, DFP2 out the HDMI jack.
#

Section "Screen"
	Identifier	"Screen0"
	Device    	"Card0"
	SubSection "Display"
		Virtual		5120 1440
	EndSubSection
EndSection

Section "Monitor"
  Identifier     "Monitor0"
EndSection

Section "Monitor"
  Identifier     "Monitor1"
  Option         "RightOf" "Monitor0" 
EndSection

Section "Device"
   Identifier   "Card0"
   Option	"Monitor-DFP2" "Monitor0" 
   Option       "Monitor-DFP1" "Monitor1" 
 EndSection

That finally got things working the way I want them.
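
(For anyone replicating this, a quick sanity check after restarting X is to ask xrandr for the geometry; the first line of its output should report the 5120×1440 virtual screen, and a single xrandr call can rearrange the heads on the fly if you ever need to. The output names below are illustrative only: the DFP names follow the config above, but the open-source radeon driver may report them as DVI-0 and HDMI-0 or similar, so substitute whatever xrandr actually shows.)

xrandr --query | head -n 1
xrandr --output DFP2 --auto --output DFP1 --auto --right-of DFP2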

What are our lessons for today, class?

Here’s the big one: I will never again install an nVidia card unless forced at gunpoint, and if that happens I will find a way to make my assailant eat the fucking gun afterwards. I had lots better uses for 3.5 days than tearing my hair out over this.

When your instincts tell you not to trust closed source, pay attention. Even if it means you don’t get instant gratification.

While X is 10,000 percent more autoconfiguring than it used to be, it still has embarrassing gaps. The requirement that I manually adjust the virtual-screen size was stupid.

UPDATE: My friend Paula Matuszek rightly comments: “You missed a lesson: When you have a problem in a complex system, the first thing to do is check each component individually, in isolation from as much else as possible. Yes, even if they were working before.”

Now I must get back to doing real work.

147 comments

  1. I am reminded of “The Luxury of Ignorance”, from years ago. How many times do you have to get burned by the Linux/X combination before you give up on it? Even if this was nVidia’s fault, what sort of operating system gives some random driver designer enough control over the hardware to fry your monitor? (And on nVidia’s part, why build hardware that can even do that?) It’s enough to make me want to haul out the Unix-Haters Handbook again.

    It was a few years ago but I installed Mandrake, I think it was, for a particular task, a couple of years after reading “The Luxury of Ignorance”, and was quite impressed with how well the hardware auto-detection worked, KDE set itself up, coped with a virtualised server, all that sort of stuff. So much better than the Yggdrasil distro from my teens. Right up until I tried to do the automatic thing, change the resolution from the default 640×480 to something sensible. Greyed out. That’s strange. Couldn’t find any options to change it. After some Google searching, found the configuration file – ten layers down in the directory hierarchy – where I could turn on “allow desktop reconfiguration”. That was disabled by default in that particular distro, for fuck’s sake. Not in the previous one or the next. I gave up on Linux on the desktop there and then.

    1. >How many times do you have to get burned by the Linux/X combination before you give up on it?

      This is actually the first real Linux+X nightmare I’ve had, though I’ve read about others. Configuring modelines by hand back before EDID was a pain but nowhere near as nasty as this clusterfuck.

      But really, where could I go if I gave up? My experience with closed-source OSes warns me that almost everything would be this bad, all the time. In different ways, sure, but tearing my hair out over registry snafus and malware and data jailed by proprietary products and formats (among hundreds of other evils) would still be tearing my hair out.

  2. Unfortunate you had problems with the nVidia. I myself have had exceptionally good luck with them, dual-head especially. It sounds to me like you ran into a bevy of issues all at the same time, and depending on your order of operations I could make some logical suggestions as to what actually went wrong.
    Did you explicitly blacklist the nouveau driver? If you’re already running it, the nVidia module installer will (should; it does for me) tell you to blacklist it and make sure it isn’t resident before it will proceed.
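
    On most distros the blacklisting itself is just a couple of lines in a modprobe config, roughly like the following for what it’s worth (the filename is arbitrary, and you’ll want to regenerate the initramfs and reboot afterwards):

    # /etc/modprobe.d/blacklist-nouveau.conf
    blacklist nouveau
    options nouveau modeset=0

    # then, e.g., on Debian/Ubuntu:
    sudo update-initramfs -u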

    Thanks for the virtual display pointer; never had to mess with it but I can see it biting me in the future.

  3. I’ve had decent luck with the one nvidia card I’ve ever installed. I just retained the nouveau driver, and it more or less Just Worked. (Of course, I only thought to do this after munging through kernel config for about three recompiles, but that’s thoroughly my fault rather than the software’s, and shouldn’t come up at all if you’re not doing manual install of everything.)

  4. Your experience is pretty much the opposite of mine. Whether it was a PCI-E card or part of an AMD chipset, I’ve had almost nothing but headaches from AMD/ATI. I’ve seen Catalyst work correctly exactly once, and that was on old Pardus – which had too many other problems for me to keep using it. I had two monitors on my main system, or tried to. AMD/ATI’s stuff kept crashing. I could see my old settings, but couldn’t change them or else *poof*. I gave up after a few days, and started saving up.

    When I swapped up to a GTX570, things “just worked.” In fact, so well that I was stunned. “Hey, I found this other monitor, wanna use it?” It was truly night and day. I have exactly zero plans to ever pay for a Radeon again that isn’t built into an AMD CPU for a laptop or such.

  5. I’ve been using 3 displays on Linux since 2006.

    NVidia GPUs can only handle 2 displays; for more than 2 displays you need a multi-GPU NVidia board, like the NVidia Quadro FX. Quadro FX boards work only in Xinerama mode on Linux, with a separate X server running for each connected display, which effectively renders the entire desktop on each X server separately, causing a rather noticeable display lag.

    AMD GPUs have for a few years now provided Eyefinity technology, with which one GPU can handle up to 6 screens. Eyefinity works seamlessly in Linux with both the open source AMD driver and the proprietary AMD Catalyst binary blob. I have three 90°-rotated screens connected to a 5000 series AMD board that I bought for £50 two years ago, forming a neat 3840×1024 desktop.

    Unlike your experience I did not have to touch Xorg config files at all, I set it up in the standard KDE Display control panel.

  6. Like many, I mostly care about 2D, and for that the open source radeon drivers are glorious. I actually replaced all my nVidia cards by AMD cards!

    I experienced the Virtual issue too a few years ago, but now I don’t. I’m not sure why. Also, the trick is to run the xrandr command in one go.

    I have two barbaric commands that could be simpler, but they work, first one being the work setup and second one being the home setup.

    xrandr --output HDMI3 --primary --rotate left --mode 1920x1080 --output HDMI1 --rotate left --mode 1920x1080 --right-of HDMI3

    xrandr --output DVI-1 --primary --mode 1920x1080 --rotate normal --pos 0x0 --output DVI-0 --mode 1920x1080 --rotate normal --pos 1920x0

  7. What’s the real shame here is that the open source drivers suck for 3D applications. Unfortunately, the only manufacturer that does completely open source drivers – Intel – has thoroughly sucky hardware. Both AMD and Nvidia put a lot of their technology in the driver itself, and that means that open-sourcing it would cost them a competitive advantage.

    This means that people who do 3D gaming are stuck with closed source drivers if they want any sort of performance.

    Open source may itself be a competitive advantage, but evidently not much of one…

  8. You got bad hardware. Ranting about nVidia’s drivers in this context makes you seem kinda stupid.

  9. Sitting in front of a pair of Samsung monitors which provide 1920×1200 each, I am lost when I have to work on the laptop. My main problem nowadays is that the ‘auto-configure’ is totally OTT much of the time. My computers sit in a rack in the workshop and I just have a nice extended KVM to access them. Something which modern Linux installs are totally unable to cope with, it seems :( The setup works perfectly, but -NOMODESET does not allow that to be maintained when the monitors are not plugged direct into each computer!
    I still have a 10 screen setup running on W98, something which later versions of windows have lost as well. Why does all progress have to be backwards … even the latest ‘standards’ such as bootstrap seem to have lost the plot on things like COLOUR icons?

  10. @esr: Could you explain numbers in xorg.conf file? You are writing about 2560×2880 virtual screen, but ‘SubSection “Display”‘ has ‘Virtual 5120 1440’

    1. >Could you explain numbers in xorg.conf file?

      Those numbers are correct; I was severely exhausted when I wrote the text description. I have corrected it.

  11. Could you explain numbers in xorg.conf file? You are writing about 2560×2880 virtual screen, but ‘SubSection “Display”’ has ‘Virtual 5120 1440’?

    I would trust the config file. It’s easy to forget which dimension to multiply when you’re already frazzled. At least one other person in the comment thread has already made a similar error.

  12. As I’ve always had much better results with NVidia products on Linux, Windows, and OpenBSD, this has me intrigued.

    Apologies if I missed it somewhere, but which one was the broken monitor-on-the-left? The existing one, or the new one? I’m trying to think of a way it’s possible for a malfunctioning driver to take out the EDID function of a monitor, and I’m just not coming up with a way of doing so. Yet.

    1. >Apologies if I missed it somewhere, but which one was the broken monitor-on-the-left?

      It was the old one…which fortunately was still under warranty. It had been working for a couple months, which is the main reason I believe the GT640 roached it.

  13. I love i3, but I’m having terrible difficulty getting workspaces to span monitors…perhaps this is just something that it is not designed to do?

  14. You missed a lesson: When you have a problem in a complex system, the _first_ thing to do is check each component individually, in isolation from as much else as possible. Yes, even if they were working before.

  15. Time is money…or more accurately…I’m too damned old to want to waste 3 days on adding a monitor to Linux anymore. Or debugging audio. Or futzing around with X.

    Hence the superiority of OSX for me. Unix without the insane headaches that even someone like esr runs into. I’m running dual head right now and it took no more time than plugging the monitors into the wall and into the computer.

  16. Hm, I have good luck with the NVidia blobs on my Fedora system, but I’m using RPMFusion’s akmods package which automates the install and the updates, automatically recompiling the NVidia mod when the kernel is upgraded and etc.

    Sorry you had problems.

  17. But really, where could I go if I gave up? My experience with closed-source OSes warns me that almost everything would be this bad, all the time. In different ways, sure, but tearing my hair out over registry snafus and malware and data jailed by proprietary products and formats (among hundreds of other evils) would still be tearing my hair out.

    I sort of see your point, and I’ve been there and done that. But I can plug a second monitor in and it just works. And that’s on XP. Like nigel, I’d rather not spend three days mucking around getting something working. If you’re nice to Windows machines (don’t install every fucking power tool that takes your fancy for instance) they run just fine now – ok that’s a relatively recent development – and you never have to know the registry exists. A Linux server, absolutely, my mail/file server was always such when I was working at home (cheap hardware, and only needed to be rebooted twice a year). If you’re worried about data jailing then why not promote open source projects on a platform that actually does UI well? In the days of data in the cloud and web-based applications, who cares what the underlying OS is, as long as it works?

  18. I can sympathize with you w/r/t multimonitor setups in linux. The advice above about blacklisting the nouveau drivers is spot on when you’re trying to use the binary blobs for Nvidia cards.

    I’m still a fan of NVidia, but mainly through inertia of having used nothing but nvidia for a decade or more. I do like the CUDA side of things, but even that’s falling away as OpenCL picks up steam.

    As for closed source systems (Windows / OS-X), the only thing I’ll agree with you on is “data jailed by proprietary products and formats”. I spend a significant amount of time fighting with data interchange issues, especially among the content creation apps. There’s no excuse for there to be anything BUT a seamless workflow and data flow between professional apps of any kind. That’s a soap box for another day though.

    Glad you found a config that works for you, and ultimately, that’s all that matters. The logo on the box is irrelevant once it’s in the case and powered up. All that matters is that it works.

    Scott

  19. @esr:

    In one of the comments to “Out on the Tiles” you mentioned that Xrandr support was a killer feature for you that caused you to take i3 over StumpWM.

    The thing is, according to http://i3wm.org/docs/multi-monitor.html, NVidia’s binary blobs don’t support Xrandr, so you need to configure i3 to use Xinerama instead.

    Then again, NVidia’s binary blobs are apparently supposed to have supported Xrandr for the past year or so, so that may or may not be the issue.

  20. This is actually the first real Linux+X nightmare I’ve had, though I’ve read about others. Configuring modelines by hand back before EDID was a pain but nowhere near as nasty as this clusterfuck.

    X is soon (this year or next) to be supplanted by Wayland on most major distributions. (No, I honestly don’t think Mir will last.) So hopefully this nightmare will go away when X does. However, I’m not exactly thrumming with faith that the Wayland devs will make things much better.

    But really, where could I go if I gave up? My experience with closed-source OSes warns me that almost everything would be this bad, all the time.

    Seriously, Eric, just get a Mac already. Mac OS X doesn’t have “registry snafus” and it has barely any malware. As for data jailing, you don’t have to use iTunes if you don’t want to — though it’s a great option if your time is more valuable than the fight for total digital freedom. But multimonitor, sound, and wireless Just Work on OS X, which is still much, much more than I can say for Linux.

  21. I’ve always managed to get my video cards working eventually, but I’m used to budgeting a week of evenings fiddling until things work. Getting an nVidia card to use its proprietary drivers under Linux does work, but it’s rarely painless. The one that I purchased in my last iBuyPower was no exception. No hardware problems, but major pain getting kernel and drivers working smoothly together.

    On the other hand, I remember back in the Windows 95 days when an upgrade to motherboard and video card worked flawlessly on Linux, but I never did get it to work on Windows 95. Some kind of driver problem seemed to be the flaw, and I had to buy Windows 98 to get Windows working (and it was significantly slower than 95 had been, so I was unhappy about this).

    My own biggest takeaway from my similar experiences: On Linux, you want to do serious research before buying any hardware to confirm that other people have gotten it to work on Linux (preferably on the distro that you run). You can never take it for granted that “standard” PC hardware will work on operating systems with <10% market share.

    For me, it's worth spending a week getting something running that will give me good performance for years afterward. YMMV.

  22. @Scott Bragg: “I can sympathize with you w/r/t multimonitor setups in linux. The advice above about blacklisting the nouveau drivers is spot on when you’re trying to use the binary blobs for Nvidia cards. I’m still a fan of NVidia, but mainly through inertia of having used nothing but nvidia for a decade or more. I do like the CUDA side of things, but even that’s falling away as OpenCL picks up steam.”

    Personally, I am waiting for a company to start selling video cards that (1) match or exceed nVidia performance on all metrics and (2) use completely open source drivers. I will immediately switch to that firm’s products when they’re available. And waiting, and waiting…in the meantime, I manage as best I can.

    I’ll never understand why these firms are so reluctant to put their proprietary magic in firmware and let the drivers just be a thin layer of minimalist interface. I couldn’t care less whether video card firmware is open source.

  23. I’m on Windows, admittedly, but when I set up an nVidia card with dual monitors, it worked instantly. A couple minor config issues required a reboot to work properly, but it’s been utterly flawless since. You may have just gotten a defective card, or a defective screen?

  24. Where could you go? Just about anywhere. Linux wasn’t designed for, and will never be good at, being a desktop platform. I’m quite impressed at how far the community has managed to bring it, they’ve certainly tried hard, but someone like you shouldn’t be tearing your hair out over a simple thing like a monitor upgrade. OS failure.

    1. >OS failure.

      There was actually no point at which Linux could be said to be a problem. I grant you the bit about having to force the virtual screen size was irritating, but that was an easy fix. No; it was some combination of bad nVidia hardware and bad nVidia software that put me in hell for three days; once I removed nVidia from the mix, my troubles were almost over.

      If I had gone the full open-source route from the beginning, total time and hassle expended would have been for running xrandr once, noticing the virtual screen size problem, and writing that xorg.conf. 90 minutes of minor irritation rather than three days of hell.

      nVidia’s decision to publish neither open-source drivers nor enough API information to support us doing our own isn’t an OS fail, it’s an nVidia fail. End of story.

  25. I’ll never understand why these firms are so reluctant to put their proprietary magic in firmware and let the drivers just be a thin layer of minimalist interface. I couldn’t care less whether video card firmware is open source.

    Cost. Software is much cheaper — and less risky — than even firmware. It’s much easier to ship a relatively dumb device and have the OS manage all the smarts on the CPU than to add the complexity and material cost of building intelligence into the device itself.

    Plus, software allows you to add performance tweaks and enhancements later that would be risky to flash onto a PROM in the card. In the case of video cards, both NVIDIA and AMD proprietary drivers come with rendering “cheats” that make specific games and benchmarks run faster; these get updated from time to time and a firmware solution would make them very difficult to implement and keep up to date.

    Virtually everyone interested in high-performance graphics uses Windows, so these are sound business decisions to make. Do not expect any changes to the model in the foreseeable future.

  26. I went from nVidia, to AMD, back to nVidia. The open source drivers for the Radeon HD5570 had power management problems, it ran hot, hot, hot, and I’ve had better luck with the nVidia proprietary drivers than with AMD’s. That was a couple of years ago. My next step is to go with onboard Intel HD Graphics 4000, or whatever is out there, next time I upgrade my hardware. But I haven’t felt that need in the last several years.

  27. @Ltw:
    >If you’re worried about data jailing then why not promote open source projects on a platform that actually does UI well?

    Assuming you actually mean “does UI well” and not “does graphics drivers well”, the answer is “at this point in time, the only projects that have well designed UI’s run on top of X”. Apple, for all their reputation, has always had a horrible UI, and while Microsoft *used* to have a well designed UI, their UI quality slid through the 2000’s and then took a complete dive with Win8.

    If you meant “does graphics drivers well”, see below:

    @Cathy:
    >On Linux, you want to do serious research before buying any hardware to confirm that other people have gotten it to work on Linux (preferably on the distro that you run).

    Even better, get your machine from a vendor that sells machines with Linux pre-installed (my experience with System76 has been good, albeit with a sample size of 1). People tend to compare their experience with Windows / OS X machines purchased pre-configured from the vendor (“It works out of the box!”) with their experience wiping a machine that came with a proprietary system, or building a machine from scratch, and installing and configuring Linux themselves.

    I’ve had a few issues with my laptop, but in general not nearly as many (or as severe) as I’ve had with machines I did ground-up installs on.

  28. > First, if X’s default virtual screen is going to be larger than 2560×1440, why is it not 2x that size already?

    Seems obvious to me: the reason it is 2560×2560, specifically, is that a square ratio appeals to someone’s misguided sense of aesthetics. The reason the width is 2560 is to fit two 1280×1024 screens side-by-side, from back when that was an impressive resolution.

    Unless there’s a good reason for it not to be, they should make the default something stupidly large like 65536×65536.

  29. I combine your description with my experience, and arrive at different conclusions.

    From the photo, I’d swear someone dropped or crushed the monitor on the left. It has the same cracked-glass shape and pattern of light, dark, and noisy areas as my own broken LCD flat panels, and I know how those happened because I was the one who dropped and/or crushed them. Digital video signal quality issues and GPU rendering issues have many faces, but they usually look nothing like that pattern.

    One rather bizarre side-effect of panel breakage is that EDID queries stop working at the same time. I don’t know why this happens (surely the EDID data is on a dedicated chip that is isolated from the LCD panel proper?), but I’ve seen it happen more than once.

    Video cards do fail and do arrive DoA. They are supplied with a lot of energy for their physical size and their failure modes can be subtle or spectacular. I’ve seen video cards literally explode spontaneously under normal operating conditions, or take six months to gradually melt their DACs because of a software bug, or desolder their own GPU chips because they run hot enough to physically do that given a workload like xscreensaver with a short cycle interval. I wouldn’t blame the video card for anything until I’d tested with at least two, preferably with widely separated date codes.

    One thing I have not seen yet is a video card kill a LCD panel monitor. I had some Matrox cards in the 1990’s that would blow protection fuses on cheap CRT monitors during the BIOS power-on self-test, but LCD panels don’t usually have components that can be hurt by a mere video signal. Differential signalling on digital video connectors should make them even less susceptible to out-of-range electrical values than monitors with analog VGA and DVI-A inputs.

  30. @Jon Brase: “People tend to compare their experience with Windows / OS X machines purchased pre-configured from the vendor (“It works out of the box!”) with their experience wiping a machine that came with a proprietary system, or building a machine from scratch, and installing and configuring Linux themselves.”

    Yes. My experience with installing Windows and Linux onto custom-built hardware says that there wasn’t that much difference between them even in the 90’s. But most Windows users have no idea what it’s like to install Windows. Linux, OTOH, is very easy to use once it’s installed and running. Most of the articles that complained about unfriendliness of Linux were really complaining about installation and configuration.

  31. “nVidia’s decision to publish neither open-source drivers nor enough API information to support us doing our own isn’t an OS fail, it’s an nVidia fail. End of story.”

    Agreed. There’s a market niche here that isn’t being filled properly.

  32. If I had gone the full open-source route from the beginning, total time and hassle expended would have been for running xrandr once, noticing the virtual screen size problem, and writing that xorg.conf. 90 minutes of minor irritation rather than three days of hell.

    Or you could just plug it in. To a computer that already knows how to recognise it. Even 90 minutes is too much, you shouldn’t have to go stuffing around with something called xorg.conf to get a monitor working! So even if the hardware and software weren’t bad, you would have had to hand edit a configuration file to make it work? Not good enough.

    To clarify “OS failure”, I know that the Unix way is to regard the UI and device drivers as separate from the OS. But to an end user, that’s not important – an X problem or a driver problem is an OS problem. I’d argue that insistence on separation is one of the structural failures of Linux as a desktop platform.

    while Microsoft *used* to have a well designed UI, their UI quality slid through the 2000’s and then took a complete dive with Win8

    Which, Jon, is why Microsoft have been struggling to end-of-life XP for so long. Businesses simply refused to upgrade to Vista/Win7/Win8. Anyone with any sense has stuck with XP for precisely that reason.

  33. There is no way to win the graphics game. There are too many players involved, and even if they did collaborate with each other–and the incentives around collaboration are generally neutral or negative–getting those systems fully debugged is a billion-dollar R&D project: doable, maybe, but not many can afford to try (though maybe more do try than most people realize).

    The manual for a modern display controller–the simple component that implements xrandr and clocks out pixels from your RAM to your DVI port–typically has over a thousand pages of documentation, 80% of which is tables of interdependent register details. Every one of those pages matters in some corner case or other, and there can be undocumented interactions with other parts of the system (especially parts supplied by some other vendor, as is common on PC hardware). An error at any point will give you an unusable display or crash your system.

    GPUs are orders of magnitude more complex than display controllers, and even if you had the driver developer’s documentation, you’d need the hardware in machine-readable form for simulation to fix any non-trivial bug before the hardware it occurs on becomes obsolete. Modern GPU hardware has memory management and multiple parallel threads of execution implemented in hardware–and all the bugs any software developer would expect to arise from those things.

    By contrast, a network or memory controller can be documented in 150 pages or so.

    Artificial limitations on virtual display size usually come about because some variant of the hardware involved has a limitation or needs a workaround (usually because someone failed to collaborate). Most of my configuration work for a new machine is systematically cleaning out all the crappy workarounds that the Linux distro has put in the way of working code, and the rest is learning to live within surprising hardware limitations, like GPUs that are incapable of understanding a buffer more than 2048 pixels wide because some-but-not-all of their registers don’t have enough bits. Even Intel hardware with working xrandr support needs end-user help now and then, if e.g. the people who built the laptop around the hardware failed to disclose (or perhaps had no way to communicate to software) what clock speeds are actually possible on the VGA port.

    Buying an integrated system from a vendor who can afford to own board manufacturing, OS, CPU, and GPU components is a good start–in exchange for considerable concentration of agency in a corporation and an extremely limited set of hardware choices, now you can maybe get working graphics, and only have to fix everything else that is broken.

  34. > Even 90 minutes is too much

    The specific job would probably normally take about 10-15 and is a somewhat-strange edge case anyway. I suspect the main reason it took so long was that Eric was already quite tired and frustrated and took some wrong turns when investigating.

    1. >I suspect the main reason it took so long was that Eric was already quite tired and frustrated and took some wrong turns when investigating.

      That’s possible, but I said 90 minutes because I was counting the time required to notice the xrandr error message, Google for relevant stuff, read documentation, figure out the fix, and do one or two tests due to typos. Actually writing the custom xorg.conf was the shortest part of the process.

  35. I propose an alternate theory: It was not any harm or poor quality inherent in nvidia’s drivers, tools or hardware that caused this nightmare. Rather, it was the unreasonableness of GPL ideologues who consider closed-source video drivers to be just as high or higher on the harm scale as proprietary data jails and other sins.

    The symbols in the kernel to do mode switching are GPL-only. Thus, the nvidia closed-source driver cannot make use of KMS, and has to do its mode switching in user space. If you want to have nice boot splash screens, or high resolution frame buffer text consoles that play nice with graphical modes under X, you have to use KMS. If you want KMS with an nvidia card, you have to use the open-source nouveau drivers. Ubuntu started using nouveau by default with the 10.04 release so they *could* have nice boot splash screens and high resolution text consoles that play nice with X. If the kernel devs hadn’t made KMS GPL-only, nvidia could make use of KMS in their driver, and there would be no need to use nouveau beyond ideological purity.

    As has been mentioned here in the comments, and on the G+ post as well, a common failure mode in trying to get the proprietary drivers/tools to work is a conflict with the nouveau driver. That would explain your trouble with nvidia’s tools — likely the nouveau and nvidia drivers were stepping on each other’s toes. I’d argue such isn’t nvidia’s failure either. Package management varies from distro to distro and is the responsibility of the provider of that distro, not upstream. So the nouveau driver should have been shoved aside unless there were a failure on the part of Ubuntu’s install package for nvidia.

    I consider it to be highly unlikely that the nvidia card fried your monitor. More likely, the monitor decided to die at the same time because you’d done something to get the attention of Finagle — something beyond just the hardware change or working with proprietary software. Maybe you didn’t offer the hardware gods a large enough blood sacrifice, I dunno. If I had to guess, a static zap just under the perception threshold caught the monitor in just the right way to cause EDID to malfunction.

    All that said, I plan to use AMD cards on my next build, precisely because I want KMS, shiny splash screens and large frame buffer consoles. I’ll do my 3d gaming on a wintendo.

  36. Plug it in. Works for me. My laptop happily migrates between two docking stations and a couple of other monitors, most of which have different resolutions and aspect ratios. If it was even 10 minutes to reconfigure every time, I’d be monumentally pissed off. Let’s say I know about it and what to do, that cuts it down to a couple of minutes. I’d still be pissed off.

  37. Sorry, my previous comment was a reply to jsk.
    However, Jeremy “I consider it to be highly unlikely that the nvidia card fried your monitor.” – I agree.

  38. “To clarify ‘OS failure’, I know that the Unix way is to regard the UI and device drivers as separate from the OS. But to an end user, that’s not important – an X problem or a driver problem is an OS problem. I’d argue that insistence on separation is one of the structural failures of Linux as a desktop platform.”

    I strongly disagree. Tight integration of OS and UI is a structural failure of the Windows platform. It prevents the equivalent of sshing in from another machine to tweak parameters. It prevents me from running a headless server (such as an older machine) without the overhead of the UI, which is very significant. It makes it more difficult to write and port cross-platform code, because standard Windows apps litter the code with UI-specific calls and do not keep such calls out of core functional code. And so on.

    The question of why X can’t do a better job of plug and play is worth discussion, but the solution is NOT to move to tighter integration with the OS.

    1. >The question of why X can’t do a better job of plug and play is worth discussion, but the solution is NOT to move to tighter integration with the OS.

      Indeed not. More monolithic, tightly-integrated code leads to more complex cross-module dependencies, which in turn leads to brittleness and higher bug loads. It would be absolutely the wrong direction to go.

      Anyway, the level of integration between the driver and the OS, or between either and X, wasn’t any part of the real issues in this mess, and arguing about it is therefore pointless. Almost all my problems, except for the small and relatively painless bit of X configuration at the end, were caused by some combination of bad nVidia hardware and a bad nVidia driver package.

      This is true regardless of whether you think I was actually running the proprietary blob (most likely) or nouveau (possible) when the EDID-killing damage to my left-hand monitor occurred. In either case, to the extent the problem was bad hardware, it can be laid directly at nVidia’s feet. To the extent the problem was software, it was either directly due to nVidia botching its own drivers or failing to get the packaging right so nouveau was blacklisted out of the way.

      The claim that GPL zealots not allowing nVidia the use of KMS was anything like a primary cause is also transparently bogus. The root cause of the entire software shambles is nVidia’s refusal to publish either open-source drivers or the specifications for their APIs, thereby necessitating the teetering pile of kluges and workarounds that I collided with. To claim anything else is to mistake symptoms and secondary effects as the actual disease.

  39. “I propose an alternate theory: It was not any harm or poor quality inherent in nvidia’s drivers, tools or hardware that caused this nightmare. Rather, it was the unreasonableness of GPL ideologues who consider closed-source video drivers to be just as high or higher on the harm scale as proprietary data jails and other sins.”

    Ugh. Thanks for nothing, Alan Cox et al., for making me futz around with Bumblebee to get functionality out of my Optimus setup.

  40. Mandriva, OpenSUSE, and Fedora have all detected and autoconfigured my dual monitor setups for many years now. Mostly NVidia cards. The Linux nouveau driver seems to support everything I do with a video card, so I haven’t had to bother with the proprietary blob in years.

  41. When you hate closed source software, it hates you back ;)

    May I ask, which distribution of Linux caused you all this? My experience is that Ubuntu, Gentoo and Fedora never screw your drivers like that. And I have always used Nvidia binary blobs, doing all kinds of weird things, dual monitors included. My current workstation has a GT660 which works flawlessly, tried both with the binary and Nouveau drivers. I even do some 3D gaming when using the binary blob.

    I imagine that I have some kind of special “training” in using proprietary software with Linux for so long. I always make sure to read on “what is the tricky part” and “which is the battle tested way that works for everyone”. For example, choosing Ubuntu (automatic proprietary driver configuration) over Debian is usually worth it, in order to save you some white hairs. It’s much easier to “reshape” old known variables like your editor, init scripts etc., instead of spending days fighting undocumented proprietary monsters…

    This is the reason I think that being too “pure” like Debian, actually does us all disservice, especially in fields that are hopelessly lost causes… like Nvidia drivers. No problem should have to be solved twice. Problems with proprietary software too… By not supporting the proprietary solutions, distributions force us to solve those problems again and again, everyone on their own.

    So I think that things like Ubuntu are the “sweet spot”. Too bad that Canonical are starting to walk the “arrogant enterprise” alley lately…
    It’s a sad situation :(

    1. >May I ask, which distribution of Linux caused you all this? My experience is that all of Ubuntu, Gentoo and Fedora never screw your drivers like that.

      Ubuntu 12.04. I upgraded to 12.10 while trying to get things working.

  42. He didn’t have any ATI dual-head cards.

    … did he just not have any ATI cards at all?

    By which I mean – and maybe this is just a side-effect of using my PC for gaming and not as a “lowest cost unix machine” – I didn’t think ATI even made (well, I mean, the makers using ATI chipsets, not ATI themselves) cards that didn’t do dual-head.

    I just did a quick sanity check, and yeah … even the cards NewEgg sells for $34.99 are dual-head cards.

  43. Also, “In either case, to the extent the problem was bad hardware, it can be laid directly at nVidia’s feet”?

    nVidia doesn’t sell its own nVidia-branded cards, does it?

    I don’t think the Kepler chipset in the GT640 is a monitor-killer in itself.

    Assuming there was a hardware-level problem, the blame is probably best put on the OEM, whoever it was, for their shoddy quality control.

    (I mean, I’ve been an ATI guy for a long time … but I don’t think this specific hardware problem is nVidia’s fault.)

  44. “To the extent the problem was software, it was either directly due to nVidia botching its own drivers or failing to get the packaging right so nouveau was blacklisted out of the way.”

    I disagree. Unless you downloaded the driver package directly from nvidia’s ftp site and attempted to install it by hand, the fault is at least partially Canonical’s. Canonical is responsible for Ubuntu’s package manager, and its content. That includes ensuring the packages they publish play nice with the rest of the system and install properly.

  45. @LTW:
    >To clarify “OS failure”, I know that the Unix way is to regard the UI and device drivers as separate from the OS.

    UI certainly. I’m not so sure about device drivers. Most Unices aren’t microkernels, and Linux certainly isn’t. Driver code generally doesn’t run in user mode or have its own process.

    About the only way that drivers in Linux could be regarded as “separate from the OS” is that Linux can load kernel modules written by hardware manufacturers. But Windows can do this too, and probably depends on it more than Linux does (I bet that Microsoft develops far fewer Windows drivers internally than Linux developers write open source drivers for proprietary hardware). And I have had Windows bluescreen on me because of buggy drivers written by hardware manufacturers, and I don’t blame Windows any more than I’d blame Linux if my NVidia or ATI driver caused a kernel panic.

  46. “The root cause of the entire software shambles is nVidia’s refusal to publish either open-source drivers or the specifications for their APIs, thereby necessitating the teetering pile of kluges and workarounds that I collided with.”

    Sorry, Eric, but I gotta disagree here. You’re demanding that they give away the keys to the kingdom for anyone – specifically including their competition – to use to beat them with. Why should a company spend gigabucks to help out their competition?

    Personally, I strongly doubt we’ll ever see good 3D performance in an open-source driver. Period. There’s too much expense that has to be recouped, and the Linux market is vanishingly small by comparison to recoup it in. The Linux downloads of the Second Life viewer I use are about 1% of the total.

    1. >You’re demanding that they give away the keys to the kingdom for anyone – specifically including their competition – to use to beat them with.

      Nonsense. They could put the sooper sekrit sauce (the value of which they no doubt grossly overestimate) in on-board firmware, then publish the firmware’s API for open-source drivers to use.

  47. I’m running the open source ATI drivers on my laptop right now. For some reason, while power management on it works just fine, the system defaults the ATI card at full power all the time. Since I never really use the 3D in Linux, I run “echo low > /sys/class/drm/card0/device/power_profile” and that locks it to low power mode, which is still more power than you can use if you’re not using 3D. Also, lately WebGL has started working too. Performance is probably not as great as it could be, but it’s usable on most of the demos being pushed now. (I’m getting 22fps on http://webglsamples.googlecode.com/hg/dynamic-cubemap/dynamic-cubemap.html on low power, and very inconsistent fps on high power, zooming up to 60 and bouncing all around; this is just as likely to be JS as the graphics card. And I’m definitely not bleeding edge with all the pieces here, either, just “whatever happens to be latest ubuntu without trying too hard”.)

  48. “Unless you downloaded the driver package directly from nvidia’s ftp site and attempted to install it by hand”

    I have found that this is usually the best approach, actually.

  49. “Personally, I strongly doubt we’ll ever see good 3D performance in an open-source driver. Period. There’s too much expense that has to be recouped.”

    Why has the open source community had so much trouble writing an Open Source driver that has good performance? Are there some magical algorithms that only nVidia knows about? I find that hard to believe. Between reverse-engineering the existing nVidia drivers (legal in some jurisdictions) to figure out the API and the general expertise of the Open Source community in graphics (ample proof of this is out there), I really am surprised that better performance is not available without the proprietary drivers.

  50. “You’re demanding that they give away the keys to the kingdom for anyone – specifically including their competition – to use to beat them with.”

    See my comments above about putting the keys to the kingdom in firmware and a thin API layer in the driver. I would pay a premium for such a card, since it would always have up-to-date drivers in every new kernel release.

  51. (the value of which they no doubt grossly underestimate)

    Typo, or did Ballmer get a hold of your mind control ray?

    — Foo Quuxman

  52. I strongly disagree. Tight integration of OS and UI is a structural failure of the Windows platform.

    Not, as I think I’ve stressed repeatedly, for a desktop platform. Headless server? By all means, use something else, and I’ve done exactly that. Sshing in? There are plenty of remote access options for Windows; I regularly hold phone conference reviews with multiple people VNCed into my machine. Yes, there is a hit to portability, but that is a cost-benefit tradeoff for better plug and play. Unix designers have always thought in terms of “write the application, then layer the UI on top of it”. Then you get all these problems bubbling up through the stack because the underlying code wasn’t designed with the user in mind. You can separate UI/functionality code and that’s good practice, sure, but start with the interface and write the application to support it from underneath.

    except for the small and relatively painless bit of X configuration at the end

    After the bad hardware/driver was identified and replaced, it was still 90 minutes! Not good enough. Still having to edit configuration files for something that simple is failure for a desktop platform.

  53. The question of why X can’t do a better job of plug and play is worth discussion, but the solution is NOT to move to tighter integration with the OS.

    And in saying that, you’ve just ruled out the only solution that will work, so there’s not much point in having the discussion. The Apple reputation for “it just works” is overblown, but their control over the hardware/software combination is legendary, and how they built their particular competitive advantage. Personally I’ve always preferred the Windows half-way house for anything desktoppy, some freedom but in general things work without excessive friction. How many printing/sound/USB/graphics tales of woe does it take to show that Linux is a server platform with a thin layer of UI on top of it, and that that’s in the very nature of its design?

  54. “You can separate UI/functionality code and that’s good practice, sure, but start with the interface and write the application to support it from underneath.”

    Here there be dragons. You need to design the interfaces between the UI and the application code very carefully, and don’t let the UI API design them for you.

    I’ve noticed that Windows app developers, even experienced ones, tend to be very sloppy about preserving this UI/function separation, whereas Unix developers are fanatical about it. Guess which one is more likely to have solid unit tests and regression tests in the source tree?

  55. @Cathy: “The question of why X can’t do a better job of plug and play is worth discussion, but the solution is NOT to move to tighter integration with the OS.”

    “And in saying that, you’ve just ruled out the only solution that will work”

    You assert this, but I see no evidence presented.

  56. X is soon (this year or next) to be supplanted by Wayland on most major distributions.

    Delivered by flying pigs, no doubt.

  57. Guess which one is more likely to have solid unit tests and regression tests in the source tree?

    And guess which one results in a system where you can’t plug in a second monitor without editing a text configuration file? That’s my evidence. Where’s yours? You’re asserting that being fanatical about separating UI and functionality is good – and I’m not seeing a benefit for the application I’m talking about.

    I’m sure their unit tests are solid. I’m also sure that the requirements they’re testing against were badly (or never) written or thought about. Which makes the testing largely a waste of time. Regression testing is handy for proving the software is just as broken as it used to be though.

  58. On a tangent, I’ve never had much respect for unit and regression testing. On a major infrastructure project I worked on, I instituted a semi-formal procedure for rolling out new builds that the software developers hated. It got installed, we ran the official test procedure, and they passed everything – but it didn’t stay in until my boss had sat down and operated it for a bit. He routinely broke it in less than five minutes. I selected him because I was too close to the development and knew where to “click, wait now until it’s ready”, “no that colour doesn’t mean the fan is actually running, that’s a known fault so ignore it” – that sort of thing. I was the wrong person to test something for use by a non-technical user. I had to operate it for the tunnel smoke test because no one else could make the fucker work. The difference between me and most Unix developers is I recognise that.
    If it survived 15 minutes we declared it successful enough to stay in production.

    1. >I had to operate it for the tunnel smoke test because no one else could make the fucker work.

      Your report reveals that you don’t know how to write a decent test suite. That’s your problem; it doesn’t mean the rest of us don’t know how, nor does it mean that those who don’t know how shouldn’t learn.

  59. esr: “They could put the sooper sekrit sauce (the value of which they no doubt grossly overerestimate) in on-board firmware, then publish the firmware’s API for open-source drivers to use.”

    What if they have proprietary methods for managing CPU-side resources that are dependent on the driver architecture?

    1. >What if they have proprietary methods for managing CPU-side resources that are dependent on the driver architecture?

      I don’t understand the question. The firmware is running on a Turing machine.

  60. “They could put the sooper sekrit sauce (the value of which they no doubt grossly overerestimate) in on-board firmware, then publish the firmware’s API for open-source drivers to use.”

    There are a few problems with this…

    1) The driver is big – the current 32-bit Nvidia Windows driver is about 167 MB, and the 64-bit one is 215 MB. That means that putting it in firmware would eat up lots of physical address space, even if not all of that would go in ROM.

    2) It changes frequently. The AMD driver on Windows has a monthly release schedule, more or less; I’m pretty sure the Nvidia one does as well. They’re highly tuned and optimized for the games that use them, and vary with the popularity of those games (an optimization for a game that isn’t played as much any more may be removed if it interferes with an optimization for a newly popular game). This means that updating it suddenly requires a ROM flash cycle, which is something that raises the danger level of the update process substantially.

    3) It’s much more complex than what you or I would think of as a hardware driver. In particular, it includes an OpenGL compiler: part of a game’s initialization is passing OpenGL shader programs and the like to the driver in source form, and the driver compiles them and returns a handle to the module. The module can also be recompiled on the fly if desired.

    4) And about the parenthetical: When you spend a few billion dollars on developing something, it tends to color your perception of its value.

    1. >There are a few problems with this…

      Possibly, but at least three of the problems you think you’re pointing out are obviously bogus. They depend on a hidden assumption that it’s possible, in a market with optimizations turning over every month, for any of those optimizations to be actually worth keeping secret.

      This is nonsense; by the time the competition can copy game-specific hacks into a shipping product, their net present value will have gone to zero, and it’s even possible it will have become a pessimization and have negative value. You actually want your competition to copy stuff like this; it wastes their NRE relative to where you are now. I call this the “Mary Gloster” effect, after the lines by Kipling: “They copied all they could follow, but they couldn’t copy my mind / And I left ’em sweating and stealing a year and a half behind.”

      Some techniques (though much less commonly than is generally believed) have secrecy value over longer timescales. Those are the ones you put in firmware. You throw the Mary Gloster stuff into open source. What’s left between the two? Probably nothing.

      It’s not economic minimaxing that leads to secrecy in markets like this, it’s irrational territoriality and stupid bad habits.

  61. Interestingly, I’ve had similar issues with the Radeon card built into my laptop – not the monitor frying, but getting the damn thing to recognize a second monitor under Linux (Xubuntu 12.04) and display them side by side.

    This was exacerbated by trying to use the proprietary driver and its stupid “Catalyst Control Center”, which for some unknown reason tries to use its own version of gksu to start up and which (on my Ubuntu) fails miserably whenever it does so. After a considerable amount of faffing I did get it to reconfigure xorg.conf sensibly, and ever after I just use xrandr to switch. But it took me longer than I wanted, and I went back and forth between the proprietary and open-source drivers to get a result that worked. Proprietary seems better, but it still won’t log me in and start the monitors in side-by-side mode without my running a custom startup script. The startup script is very simple: run xrandr, check whether two monitors are connected, and if so run xrandr --output <other connector> --left-of LVDS.
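
    For the curious, a minimal sketch of that script – the connector names (LVDS for the panel, whatever else xrandr reports for the external head) are guesses and vary between drivers:

        #!/bin/sh
        # Sketch only: find the first connected output that isn't the laptop panel
        # and place it to its left. Connector names vary (LVDS, LVDS1, eDP-1, ...).
        PANEL=LVDS
        EXT=$(xrandr --query | awk -v p="$PANEL" '$2 == "connected" && $1 != p {print $1; exit}')
        if [ -n "$EXT" ]; then
            xrandr --output "$EXT" --auto --left-of "$PANEL"
        fi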

  62. There’s a distinct possibility here that the underlying issue was a bad GPU/chipset interaction rather than NVidia’s fault. I’ve seen some seriously strange behaviour in cases where a newer video card was used on a motherboard designed for an older revision of the PCI Express spec. This can happen with both ATI and NVidia cards; I’ve usually found the easiest fix to be replacing the card with a comparable one from a different vendor.

    Generally I run into these issues when adding a low-end NVidia or ATI card into an older system to add dual-head support. I’m expecting to see this again with the new PCI Express 3.0 cards going into 2.1 and 2.0 revision systems.

    And X’s auto-detect for multi-head is lousy. This is something Apple figured out with the Mac II back in 1987 (at one point I had a Mac II with 2 Toby cards and an 8.24GC running 3 displays, all plug & play aside from the 8.24GC’s setup utility, and 3 more displays were possible).

  63. How many printing/sound/USB/graphics tales of woe does it take to show that Linux is a server platform with a thin layer of UI on top of it, and that that’s in the very nature of its design?

    I wish people would actually start thinking of Linux as being just for hackers, hobbyists, and servers; maybe they’d stop trying to fix what ain’t broke which is where a lot of the cruftiness comes from.

    An example: Sound used to Just Work under Linux. For many well-known sound cards, it was even leaps and bounds ahead of Windows, at least back in the 90s. Then along came PulseAudio and blew our house down.

    Actually the problems started earlier — with ALSA. Instead of just opening the sound device and blasting audio to it, sound now had an API. It did provide things that OSS (Open Sound System) didn’t — like dmix — but there appear to be holes in dmix’s design: for one, the OSS compatibility mode simply doesn’t have it. I think this may have been deliberately left in to discourage people from writing apps against OSS. Nevertheless, it was one of the major things that PulseAudio “fixed” — by monkeypatching open(2)!

    It would have been possible to add software mixing to the existing OSS infrastructure — by, say, providing pseudo sound outputs the way we have pseudo-ttys, or just allowing multiple opens of /dev/dsp like the AWE64 drivers did — but we got stuck with the second-system effect that is ALSA, and following on from that, PulseAudio. (My buttcheeks clenched together when I read that Steam for Linux required PulseAudio — mercifully I think they took that requirement out.)

  64. @Jay Maynard:

    >There are a few problems with this…

    >1) The driver is big – the current 32-bit Nvidia Windows driver is about 167 MB, and the 64-bit one is 215 MB. That means that putting it in firmware would eat up lots of physical address space, even if not all of that would go in ROM.

    That’s the size of the deb package that contains the driver. Most of that is userspace libraries, with both 32 and 64 bit libraries included in the 64-bit package. The “sooper sekrit sauce” is the file “nv-kernel.o” in the /usr/src/$DRIVER_PACKAGE_NAME-$DRIVER_VERSION/ directory. nv-kernel.o weighs about 15 megs.

  65. @Jeff Read
    I sympathize with your proposal to make Linux a pure OS for hackers and hobbyists.
    But may I remind you that Linux did start as a pure hacker OS and didn’t stay that way, because it is in the nature of hackers to bend technology far from its intended purpose – like bending Linux to be an OS for non-hackers as well. So your proposal doesn’t even have a theoretical chance of succeeding.

  66. Between reverse-engineering the existing nVidia drivers (legal in some jurisdictions) to figure out the API and the general expertise of the Open Source community in graphics (ample proof of this is out there), I really am surprised that better performance is not available without the proprietary drivers.

    Time, effort, and risk are the limiting factors here. I don’t care if you’re Fabrice fucking Bellard; RE’ing a binary is difficult, time-consuming work, especially if it’s big like even nv-kernel.o is. I’d say it’s well beyond the “for shits and giggles” threshold; it’s the sort of work that even great hackers expect some sort of payment for undertaking. That’s why RE’ing binaries is traditionally considered a malware-related activity; at least malware authors get paid (with funds from your stolen credit card).

    Beyond that there is HUGE legal risk. If you do anything related to video decoding, you owe royalties to MPEG-LA. If you use the standard texture compression (DXTC/S3TC), that’s patented as well. And these are just two of the things we know about. There could be all sorts of hidden patent gotchas in the peculiar way rasterization or tessellation is done on NVIDIA’s platform. It’s a minefield, and no one wants to take the risk unless a huge corporation with a crack legal team can pay them, and indemnify them from legal action.

    In short: forget it. High-performing 3D drivers are outside the purview of open source.

  67. >Unix designers have always thought in terms of “write the application,
    >then layer the UI on top of it”. Then you get all these problems bubbling up
    >through the stack because the underlying code wasn’t designed with the user in mind.

    All abstractions are leaky. The consequence of the Unix way is that there are more leaks, but those leaks are easier to fix when you encounter them. By contrast, OS X and Windows chose a different tradeoff: they prefer fewer leaks in exchange for those leaks being extremely unlikely to be solved by a user with a text editor.

    >But really, where could I go if I gave up? My experience with closed-source OSes
    >warns me that almost everything would be this bad, all the time. In different ways, sure,
    >but tearing my hair out over registry snafus and malware and data jailed by proprietary
    >products and formats (among hundreds of other evils) would still be tearing my hair out.

    Honestly, I think you might be overestimating the frustrations you would find in a modern OS X system. I’d be honestly interested in seeing how you fared with an OS X system for a few months, especially from the standpoint of trying to ensure that all of your data is kept out of data jails while still enjoying the benefits the OS provides. I would imagine that you could find open source replacements for almost your entire workflow, even if the specific program isn’t available. Heck, there even seem to be some people compiling i3 for OS X (http://infra.in.zekjur.net/pipermail/i3-discuss/2012-August/000855.html), though I don’t know how much of the OS X benefit you lose when you drop out of Aqua. Other than the general dislike of closed source, the only true non-starter I could think of might be if you rely on X forwarding a lot. You can enable it for X11 if you want, but you can’t do X forwarding for Aqua/Cocoa apps. If nothing else I think the experience would be interesting for you, and the associated commentary would be interesting and potentially enlightening for us. Something to think about next time you’re provided the opportunity to acquire a Mac?

    1. >Something to think about next time you’re provided the opportunity to acquire a mac?

      No, not really.

      I’ll grant you the premise that I would gain some occasional convenience from Mac OS/X, like avoiding the graphics-card horror of the last week.

      I still won’t do it, because I think what I’d be paying for that convenience is exposure to much larger risks. See my essay Evaluating the harm from closed source for discussion.

      And it’s not just that I don’t want to incur those risks for myself. I refuse to be an accessory before the fact in inflicting them on others. I’ll play closed-source games because they’re not very harmful, and it doesn’t bother me when I ride an elevator with closed-source firmware, but I will not – ever, under any circumstances, for any reason – give Apple my money or my approval so that it can lock more people into its beautifully-decorated jails.

  68. Eric: “They depend on a hidden assumption that it’s possible, in a market with optimizations turning over every month, for any of those optimizations to be actually worth keeping secret.

    This is nonsense; by the time the competition can copy game-specific hacks into a shipping product, their net present value will have gone to zero, and it’s even possible it will have become a pessimization and have negative value.”

    With all due respect, Eric, I don’t see how you can come to this conclusion with any degree of confidence unless you’ve seen the code. You don’t know what’s in the secret sauce. It may be just ketchup, mayonnaise, and pickle relish. It might be complete wizardly unobtainium. It might be (probably is) somewhere in between. Even if it’s obsolete for you, moreover, that doesn’t mean it won’t help your competition: it might be just the clue they need to leapfrog you when added to their existing technology. Can a corporation with fiduciary duties to its shareholders to preserve its intellectual property take that gamble?

    Jon: “The “sooper sekrit sauce” is the file “nv-kernel.o” in the /usr/src/$DRIVER_PACKAGE_NAME-$DRIVER_VERSION/ directory. nv-kernel.o weighs about 15 megs.”

    Uhm, no. There’s lots of secret sauce in the userspace libraries, too, and while not all of it would need to be moved into ROM, a big chunk of it would.

    1. >With all due respect, Eric, I don’t see how you can come to this conclusion with any degree of confidence unless you’ve seen the code.

      Quite easily, thank you, just by thinking about the economics of NRE and discounted net present value. The logic is insensitive to where the code is on the ketchup/unobtainium spectrum except at extremes you almost certainly cannot reach in a market where dwell time of any given code component is this short.

      I see you really don’t understand the Mary Gloster effect. The key to it is that copying has a cost, and that cost has to be paid out of limited resources that then can’t be spent on something that isn’t copying. Copying competes with innovation; in fact, it drives out innovation.

      Um, I was going to write a more detailed explanation with stuff about how different timescales and cost gradients affect the payoff, but I think it should be a blog post.

  69. X is soon (this year or next) to be supplanted by Wayland on most major distributions.

    Delivered by flying pigs, no doubt.

    By the way, I do not doubt that some distros will try to push Wayland to center stage. But in those cases it will be supplementing X, not supplanting it. Large numbers of apps will still be X apps running on X forwarded to Wayland. Which means you inherit the complexity of X, plus the complexities of Wayland, plus the complexity of the glue code. All the fun of running an X app on Mac OS X, or on Windows under Cygwin/X, but now on Linux.

    It’s rather unfortunate. An actual X12, made with portability to things like Solaris and the BSDs in mind, could have been a useful step forward.

  70. esr: “I don’t understand the question. The firmware is running on a Turing machine.”

    So? If the firmware is managing things on the CPU-side then it has to fit into a driver architecture. It’s not as simple as telling the GPU to draw some stuff – there are huge opportunities for optimisation on the CPU-side. How can this code be made independent of the driver architecture?

    1. >How can this code be made independent of the driver architecture?

      I’m still not understanding the question, sorry. I find your description so vague and confusing that I’m not even sure how to ask for clarification.

  71. “Copying competes with innovation; in fact, it drives out innovation.”

    And lots of people hate innovation. I would even hazard to say that more than 80% of the market hates innovation; they want incremental change. See also: the Unity interface, the Windows 8 interface, and the fact that iOS is basically the Newton interface with better graphics rendering – and so, for that matter, is Android.

    Copying leads to commoditization. This is, indeed, your central thesis on the theme of Android Uber Alles…and why most of the Android UI stack has remained in stasis once they copied iOS’s UI stack to the point where only purists like Jeff Read notice the differential any longer.

    Now, put yourself in nVidia’s or AMD’s board’s position.

    Copying leads to commoditization. It takes roughly 2.5 years of development time to get a video chipset out, largely because nVidia and AMD are trying really hard NOT to be the commodity chipset manufacturers; they’re the fabless wunderkind.

    When a chipset is released, it’s got a definite life cycle. When I was writing about them regularly, you’d see Release Date +30 to Release Date +45, during which whatever chipset they had just released was more appealing than sexual reproduction. Then their competitor would release a new product. Then there’d be the same hardware run at a slightly higher clock speed. Then there’d be a fine-tuned version of the hardware running at the original clock speed, and usually with a better driver.

    Then the lead in the race would swap back to their competitor, and vice versa – but for 2.5 years of development lead time, you have a window to make your investment back that’s roughly 9-10 months – you parallel-develop so that you’re constantly developing three chipsets at different stages of ‘burn to silicon’ and you’re constantly tweaking the drivers.

    All of that costs engineering time and money. You have shareholders to answer to.

    And copying leads to commoditization, in an industry where your window of profitable sales is about 9 months after a new piece of hardware is released, and you really really hope that your esteemed competitor doesn’t have something hotter/cooler/more buzz-worthy coming out, and you’re really depending on those manufacturing and licensing royalties.

    The reason why ATI got engulfed by AMD is because nVidia beat them, badly, for three release cycles in a row. And AMD bought ATI for cheap.

    nVidia may not survive the current computing sales apocalypse. Depending on whose numbers you dip into salt and chew, 1Q non-tablet computer sales for 2013 are down by 14% or 17% from 1Q sales in 2012, and 2012 was a bleak year.

    1. >Copying leads to commoditization.

      Yes it does. And copying techniques with an NPV of zero – or, worse, negative because they were optimizations in the last cycle but pessimizations in this one – has negative value. Because you spent NRE on that rather than on getting ahead of your competitor on something they don’t know how to do.

      You’ve actually done an excellent job of describing the kind of market in which the Mary Gloster trap bites the hardest. Shorter product cycles make it worse; higher capital costs make it worse; shortages of qualified engineers make it worse. It was already enough of a killer to destroy steamship companies around the turn of the last century; today the hidden penalty for copying is much, much higher.

  72. esr: “I’m still not understanding the question, sorry. I find your description so vague and confusing that I’m not even sure how to ask for clarification.”

    I can’t tell which part of this is unclear. Graphics drivers have to manage the split between resources on the CPU side and the GPU side. For example, at various times the same geometry might reside in system memory or in graphics memory. Moving data to and from the graphics card is the major bottleneck in high performance consumer graphics. How can this be done efficiently without being dependent on the driver architecture? For example, you potentially have to know about the OS’ process model since the resources on the graphics card are shared between processes. Different driver architectures will share resources between processes in different ways. How can all this management be written into firmware when you need to know about the driver architecture, which varies between operating systems?

    1. >Graphics drivers have to manage the split between resources on the CPU side and the GPU side.

      Sure, but there isn’t any magic there; virtual-paging and TLBs are very well-understood techniques. There’s no reason other than territorial habit for that code to be secret – nobody is going to steal any competitive advantage from you by looking at yours. In fact, rationally, you should want them to burn resources on that kind of futile effort.

      The actual secret sauce in these cards is in things like shader algorithms and texture mapping – fastest possible implementation of Mesa primitives, basically. That stuff really does have secrecy value, but it’s also an excellent candidate to go to firmware.

  73. @ESR
    The thing about MacOS (and even Windows these days) is that the OS is *not* the application layer.

    Because of what I do, and who I do it for, I routinely have to use Windows at work. I also have MacOS at home, and use Linux and occasionally BSD at both. I use almost exactly the same application stack on whatever OS I’m using: Firefox, Thunderbird, Evernote (or Nixnote on Linux), Libre/Open Office (unless there’s a requirement for MS Office at work) and GIMP. On Windows I install Cygwin to get my precious bash shell, but it’s there by default on MacOS.

    I completely understand your concern about data jails, but on MacOS it’s just not a concern.

    OTOH, I’ve run dual-headed Linux/X workstations for well over a decade now, including using the NVIDIA drivers, and the only time I had anywhere near your level of difficulty was in getting a 30-inch HP monitor running – I had to find (couldn’t buy) just the right card.

    1. >I completely understand your concern about data jails, but on MacOS it’s just not a concern.

      Sorry, that’s impossible in principle. Where source is closed it must be a concern – unless there is an open, redistributable standard describing each and every detail of every format such that a conforming implementation could be built from the standard. And there isn’t, is there?

  74. esr:>
    > but tearing my hair out over registry snafus and malware and data jailed by proprietary products and formats (among hundreds of other evils) would still be tearing my hair out.

    The windows registry is less evil than the immense and uncontrollable pile of configuration files in linux.

    Once in a while, when the user-friendly graphical configuration wizard is totally stuck, you have to dive in and fix up whatever is wrong in the registry; but, on the whole, the registry seems more logical than Linux config files, more like the product of a single mind. Cathedrals are apt to be more nicely laid out than bazaars.

    Also, when Windows switched from config files to the registry, they got a chance to rewrite the immense and uncontrollable pile of configuration files in Windows. In place of an unholy mess, we got a considerably less unholy mess.

  75. @JAD

    >on the whole, the registry seems more logical than linux config files, more like the product of a single mind.

    This is laughable. As Hans Reiser put it so well, the Registry is conceptually just another filesystem, but with a completely separate API for manipulating it. Microsoft actively discouraged the use of .ini files for program configuration in favor of the Registry, because their filesystem architecture wastes way too much space storing small config files.

    In contrast, the *nix configs can be manipulated with any text editor, as well as with tools like sed/grep/awk/perl, and the Filesystem Hierarchy Standard allows these configs to be stored in net-shared filesystems where appropriate, making Linux system administration for large numbers of machines much more efficient than on Windows.
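
    To make that concrete – the host names and the particular edit below are made up for illustration, not anyone’s actual setup:

        # Flip one setting in a plain-text config across a handful of machines.
        # Hypothetical hosts and file; any text-based config works the same way.
        for host in web1 web2 db1; do
            ssh "$host" "sed -i.bak 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config"
        done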

    And the way the Registry handles settings for 32-bit software on 64-bit versions of Windows is exactly umop apisdn—instead of leaving everything where it was and creating a new branch for 64-bit-specific software, they moved the 32-bit stuff to a new key, and stuck the 64-bit stuff where the 32-bit stuff was. So anything that looked for those settings under the old key would be broken by this change. Stupid, stupid, stupid.

  76. Your report reveals that you don’t know how to write a decent test suite.

    Very funny. Nice of you to avoid my point – that we test against requirements, and that if the requirements are badly written then no test suite will save you. You may pass everything, but the end product will still be completely useless.

    For a server side program, those requirements should be functionality based. For a user program, UI based.

  77. esr: “Sure, but there isn’t any magic there; virtual-paging and TLBs are very well-understood techniques. There’s no reason other than territorial habit for that code to be secret – nobody is going to steal any competitive advantage from you by looking at yours. In fact, rationally, you should want them to burn resources on that kind of futile effort.”

    This is not about implementing virtual paging etc in the OS. It is about implementing CPU-side optimisations – and why they can’t just be stuffed into firmware. What does virtual paging being “well understood” have to do with whether or not there is room for CPU-side optimisations? Your answers sound like guesswork from someone who hasn’t looked at the graphics pipeline in depth for years.

    esr: “The actual secret sauce in these cards is in things like shader algorithms and texture mapping – fastest possible implementation of Mesa primitives, basically. That stuff really does have secrecy value, but it’s also an excellent candidate to go to firmware.”

    The major bottleneck in current high performance consumer graphics is data transfer from the CPU to the GPU, not “shader algorithms”. In fact, shaders are almost universally user-defined via HLSL or GLSL (including texture mapping). No major game engine uses a fixed pipeline anymore. If your driver is clever about when it chooses to send data to the GPU it will outperform a competing driver.

    1. >Your answers sound like guesswork from someone who hasn’t looked at the graphics pipeline in depth for years.

      Maybe they sound like guesswork because you keep changing the subject. Let’s see, first it was “driver architecture”, then it was “CPU-side optimizations”, now it’s “data transfer from the CPU to the GPU”. That seems like progress from my POV, actually; it’s an argument for moving more processing on-chip, into firmware, so fewer intermediate representations get thrown back and forth.

      But you’re right that my knowledge of the graphics pipeline is stale. Waving your hands and a lot of vague terminology at me won’t help.

  78. In contrast, the *nix configs can be manipulated with any text editor

    We’re still having this argument? Correction, “In contrast, the *nix configs can be screwed up with any text editor, assuming you can find them”.

    Fine, it’s nice to have access to the underlying configuration files. But it shouldn’t be a requirement to get something basic running! Linux is an incredible achievement, not least because the underlying architecture is so flawed. Text configuration files are fragile – one character out of place and you’re stuffed.

  79. >Sorry, that’s impossible in principle. Where source is closed it must
    > be a concern – unless there is an open, redistributable standard
    >describing each and every detail of every format such that a
    >conforming implementation could be built from the standard.
    >And there isn’t, is there?

    I’m not sure what the concern here is. I perfectly understand the hack-the-hardware and underlying-OS concerns that would come from using a closed-source OS, but what data do you envision storing on your computer that you think you will be forced to enter into a data jail on OS X or, for that matter, Windows?

    1. >what data do you envision storing on your computer that you think you will be forced to enter into a data jail on osx or for that matter, windows.

      If I knew the answer to that in advance, I’d be less worried.

      The biggest problem with having proprietary anything underneath you in the stack isn’t necessarily the lock-in you can predict in advance, it’s the unpredictable kind that you don’t notice you’re getting more involved with as your workflow gradually changes. Until the day it chews a chunk out of your ass.

  80. > If I knew the answer to that in advance, I’d be less worried.

    From a developer standpoint, basically anything. Having to port anything to OSX gives me the willies; the way they lay out the frameworks and whatnot is highly convoluted if you’re not already embedded in that culture, and having to build your local code around it will make it more painful to get away, especially if you do anything graphical.
    You could rely on something like wxwindows or equivalent I suppose, but I don’t know how realistic that is.

  81. Actual personal experience: once had the xcode linker refuse to link my code, period, because it didn’t like the kernel version the OS was reporting. There was no actual problem, it was purely I-Know-Better-Than-You watchdog behavior. But I couldn’t compile a damn thing at that point. Never did fix it, either. Scrapped that install entirely.

  82. My religion requires that at least some of the time, I be off-topic and funny.

    I am afraid that Eris is still having her way with my personal life, but some folks may be interested in knowing how I came to be the Patron Saint of Unintelligible Greek Post-it Notes. I invite one and all to my What Would Eris Do? website at http://www.wwed.info, and the Holy Names page in particular.

    The website has grown in general, and I sort of have the basics covered enough that I can focus on Operation Mindfuck and Jakes (an area that hasn’t changed since the multi-page website went up).

    Hail Eris! All Hail Discordia!

  83. ESR, I see your comment about “90 minutes of minor irritation”. You’re talking about adding a monitor. On a Windows box, I can literally do it in half that… including the time I have to spend driving to the store to buy the monitor in the first place. Once it’s out of the box, you plug it in and you’re pretty much done. Maybe you’ll need to reboot if you want to use any fancy features (dual-screen wallpapers were the one thing that didn’t work last time, IIRC), but generally not. How exactly is taking 90 minutes to plug in a basic peripheral a sign of a good OS?

  84. @BRM
    In the interest of lunatic fraternity, a hearty greeting, well met and all that fnord, on this miscellaneous hour of what I’m pretty sure is Sunday (or rather, Prickle-Prickle, Dis 31), from a fellow Erisian traveller, AKA, SOS, QED, Pope Rykex Hysteria, Metagame Whiplash and Episkopos, Presbyteros, &c., Two Horses Cabal. Sainthood patent pending. Hail Eris fnord, and now back to our regularly scheduled whatevers.

  85. Your suggestion that the nVidia card “roached” the display reminded me of this passage by Peter van der Linden from his book “Expert C Programming”:

    “The original IBM PC monitor operated at a horizontal scan rate provided by the video controller chip. The flyback transformer (the gadget that produces the high voltage needed to accelerate the electrons to light up the phosphors on the monitor) relied on this being a reasonable frequency.

    “However, it was possible, in software, to set the video chip scan rate to zero, thus feeding a constant voltage into the primary side of the transformer. It then acted as a resistor, and dissipated its power as heat rather than transforming it up onto the screen. This burned the monitor out in seconds. Voila: undefined software behavior causes system meltdown!”

  86. esr: “Maybe they sound like guesswork because you keep changing the subject. Let’s see, first it was “driver architecture”, then it was “CPU-side optimizations”, now it’s “data transfer from the CPU to the GPU”. That seems like progress from my POV, actually; it’s an argument for moving more processing on-chip, into firmware, so fewer intermediate representations get thrown back and forth.”

    It sounds like “changing the subject” because you don’t understand any of the things you put in scare quotes, and consequently you don’t see the relationship or the thrust of the conversation. You say a desire to move processing on-chip suggests moving driver code into firmware, when one function of the drivers is to optimise things on the CPU side so that – among other things – more processing happens on-chip! In fact, this function is the subject of this thread of conversation. You’re talking about moving texture-mapping algorithms into firmware… if only you knew how utterly ludicrous this suggestion is! You could understand all this easily by actually reading something about graphics pipelines instead of lecturing others on how graphics drivers should be designed when you don’t know the first thing about current graphics pipelines. What a surreal conversation.

    1. >You could understand all this easily by actually reading something about graphics pipelines instead of lecturing others on how graphics drivers should be designed

      You might try not blowing smoke about what I know or don’t know. The stuff at your link is dated 2011; unless something has changed radically since, the pipeline is not very different than it was when I was last reasonably current on this stuff, around 2008. Which, actually, I find a little surprising; I was expecting it to be less familiar. That’s useful information; it may mean the dev cycles in this market are not quite as fast as I thought.

      I’ve never actually coded in such an environment, mind you, but I’m guessing you haven’t either.

      What piqued my interest, back then, was when a friend told me that this hardware had been required to develop what amounted to multitasking operating systems. I’m not a graphics wizard (well, not in 3D, anyway) but I understand OS design well enough that I thought that literacy would help me understand some of the rest of what was going on down there. It did.

  87. What’s interesting about your post is that you forgot to explain the main point being discussed, namely: how to move all the optimisations from the driver into firmware. I await your explanation of how this is supposed to work, but I won’t hold my breath – since I know (and knew from the start of this conversation) that it won’t work, and consequently you will be unable to produce an answer. I feel sorry for you – not being able to admit you were wrong as a fully-grown man is a handicap (and an irritation to others). I suppose that’s why you accuse me of “hand-waving” when you have yet to produce a single word of substance.

    1. >What’s interesting about your post is you forgot to explain the main point being discussed, namely: how to move all the optimisations from the driver into firmware.

      Of course I don’t know how to do it! I haven’t specialized there enough. But they’re both Turing machines. QED.

      I’ll cheerfully admit I was wrong about one thing. I thought you might manage not to go the full angry-shadow-autist megillah in this conversation. Oh, well.

  88. esr: “Of course I don’t know how to do it! I haven’t specialized there enough. But they’re both Turing machines. QED.”

    What does this have to do with anything? Turing machines don’t model bandwidth between system components, so this has little bearing on the conversation. During this whole conversation you have failed miserably to substantiate your claim (not yet retracted), and predictably we arrive at you casting aspersions on my behavior.

  89. Roger, for what it might be worth, I’m with Eric on this one: while there are substantial obstacles to putting the secret sauce in firmware, the differences between OSes are not part of them. A properly designed API to the secret sauce would be OS-agnostic, leaving the details of getting data from the application to the firmware to the OS-level driver. Indeed, I’ll be more than a little surprised if the drivers aren’t architected this way already, out of sheer self-defense.

  90. @Ltw
    > Text configuration files are fragile – one character out of place and you’re stuffed.

    As if the Registry weren’t even more fragile. Have you ever tried to repair “one character out of place” in the Registry, which prevents Windows from booting properly? It’s usually easier to just roll back the entire Registry from a backup made when things were working correctly.

    If the analogous problem occurs in Linux, you can boot from a CD, mount the /var filesystem, tail messages, determine what broke, pull up the corresponding config in your editor of choice, and fix the damned thing.
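
    Roughly this, with placeholder device and file names – your partition layout and the config at fault will differ:

        # From the live CD's shell; /dev/sda2 stands in for the real root partition.
        mount /dev/sda2 /mnt
        tail -n 200 /mnt/var/log/messages   # find out what broke at the last boot
        vi /mnt/etc/X11/xorg.conf           # or whichever config turned out to be at fault
        umount /mnt
        reboot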

    >Can’t stop laughing.
    I fail to see what’s so funny about a computer’s configuration being editable not only by humans who might make the sort of “one character” error you’ve just described, but also by automated tools that have safeguards against those errors. In fact, most of the executable files in a *nix system are not binaries, but interpreted scripts, many of which encapsulate the knowledge about how to edit various configs safely. This is a Good Thing, because it is a big part of what lets Linux be ported to a different processor architecture so easily.

    If you think the ability to automate editing is funny, you’ve obviously never done enough system administration on *nix to have a clue.

  91. If the analogous problem occurs in Linux, you can boot from a CD, mount the /var filesystem, tail messages, determine what broke, pull up the corresponding config in your editor of choice, and fix the damned thing.

    Still laughing, even more. As I’ve said numerous times, I’m talking about a desktop environment. Good luck explaining that over the phone.

    If you think the ability to automate editing is funny, you’ve obviously never done enough system administration on *nix to have a clue.

    I’ve been there and done that. And that’s exactly why I think Unix is a useless desktop platform (which is what I keep saying). A good, scalable, efficient server yes. But no end-user should ever have to do system administration. Why do sysadmins constantly think that what works for them is what should work for an end-user? Horses for courses territory.

    In fact, most of the executable files in a *nix system are not binaries, but interpreted scripts, many of which encapsulate the knowledge about how to edit various configs safely.

    Isn’t that a Bad Thing? Yes, the code is the documentation. That’s just great for Aunt Tilley. Even better, those scripts all break every time a tool changes its undocumented behaviour. On a server the cost-benefit analysis works, because the control over the system is worth it even after you pay all the sysadmins that you need to nursemaid it. Desktop? Ideally, you wouldn’t even know they exist.

  92. I am mystified by this notion of moving graphics drivers to firmware. We would laugh as we reflexively dismiss the notion of moving a filesystem or network stack or compiler toolchain into firmware (except for the trivial special case of copying a Linux system into a ROM chip), but we want to do this with graphics drivers? This makes no sense.

    There are some blobs of code in a graphics driver that make sense to embed in ROM: BIOS POST code and FPGA configuration required to glue the board together. Most of the time on PC-type machines these blobs are already in ROM and executed before loading any OS, so we’re done. In these cases the “secret sauce” is secret because the manufacturers don’t want to answer inane questions from users, like “what happens if I disable the thermal limiting circuit in the FPGA blob?” or “I disabled the thermal limiting circuit in the FPGA blob, and now all the smoke leaked out of my graphics card. Is the consequential damage to my hardware covered under the warranty?”

    GPU drivers do run code and they’re Turing complete, but there are some gotchas:

    The code the GPU runs is supplied by the application. GL is implemented on modern hardware by translating the GL calls into GPU machine code (warning: gross oversimplification here), which the GPU then executes independently of the CPU. For non-trivial shader programs there will also be compilation and optimization of application code. Turning that code into firmware doesn’t make sense, in the same sense that turning your C compiler or SQL database server into firmware makes no sense.

    The GPU isn’t good at things the CPU does well and vice versa. (warning: simplification by analogy) Imagine coding on a CPU with add/subtract opcodes that do not work when one of the operands is between 2 and 8. Then imagine the effort required to make an optimizing compiler for a C-like language run on that CPU, when there’s a perfectly good (if not better) CPU sitting on the host system which can do a small amount of preprocessing to make application code fit into the GPU’s limitations. Yes, it would be possible to fix the GPU’s ALU, but in doing so you’d add a bunch of hardware (and therefore space, heat, and power footprint, and probably also latency) to the system that only the GPU would be able to use.

  93. Nobody reverse engineers graphics drivers for competitive copying reasons. If you already know how to write a graphics driver from whole cloth, there’s little or nothing useful you can learn from reverse engineering someone else’s (doubly so when someone else’s is tied instruction-by-instruction to the details of hardware, so any information you might learn can’t be used until you’ve also reverse-engineered the hardware).

    If you don’t already know how to write a graphics driver from whole cloth, you are not a competitive threat, and probably also don’t have the domain expertise to understand what a disassembler will tell you about a graphics driver anyway.

    The big players in graphics hardware are well aware of what their competitors’ drivers are doing, but it’s not because they bother with (much) reverse engineering. They have the same customers, and those customers file bug reports of the form “feature X works well on company Y’s driver but not yours.”

  94. Jay: “A properly designed API to the secret sauce would be OS-agnostic, leaving the details of getting data from the application to the firmware to the OS-level driver. Indeed, I’ll be more than a little surprised if the drivers aren’t architected this way already, out of sheer self-defense.”

    The “OS-level” driver (of which there may be several) is what we are talking about here, so this response doesn’t make sense.

  95. Roger, sure it does. Basically, there are two pieces to this puzzle: the part that talks to the OS, and the part that talks to the GPU hardware. Sure, you can mingle the two, but then you have to do a lot of rewriting to have your driver work on three different OSes. The part that talks to the GPU hardware is part of the OS driver package, but it could be moved to firmware easily enough, subject to the other problems we’ve been talking about. It’s just a matter of clean design.

  96. @Ltw

    > That’s just great for Aunt Tilley.

    For Aunt Till(e)y, we should be providing configuration front-ends so that she never sees the config files themselves. In fact, there is no reason why a single config tool could not be produced that has the schemata for every config file in the system that Aunt Till(e)y might ever need to tweak. If such a tool were created, app developers would be able to create their own schema files and package them in .rpm / .deb, so that changes to programs installed outside of a distro’s package manager would be automatically picked up by the config manager.

    > Even better, those scripts all break every time a tool changes its undocumented behaviour.

    Not if those scripts are maintained by the people who maintain the code that has to read those config files. People like Red Hat and Canonical make sure those tools are updated, so that the rest of us don’t have to worry about such details.

  97. we should be providing configuration front-ends so that she never sees the config files themselves… If such a tool were created

    Well, that would be great. Except that it hasn’t been done – not comprehensively, anyway. So, not what I would consider a desktop platform for end users. And your approach goes precisely to my point about what’s wrong with it. You want to paste another layer of abstraction over the top of the system to hide it, rather than change it. OK, so the user ticks a box on your lovely config tool. In the background, you edit text in a file, save the file, probably send the process a SIGHUP to force a re-read – how many things can go wrong in that chain? Leaky abstractions all over again.
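
    Spelled out – with a made-up daemon, setting and pid path – the chain behind that one tick-box looks something like this:

        # What the pretty front end has to get right, every single time:
        cp /etc/foo.conf /etc/foo.conf.bak                              # hope it remembered a backup
        sed -i 's/^ENABLE_WIDGET=.*/ENABLE_WIDGET=yes/' /etc/foo.conf   # rewrite the text file
        kill -HUP "$(cat /var/run/foo.pid)"                             # poke the daemon to re-read it

    A typo in the edit, a missing pid file, a daemon that ignores SIGHUP – any one of those and the abstraction leaks.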

    Look, I like Linux, have used it for 20 years, and it has a lot going for it. And yes, I can usually puzzle out configuration issues with some help from Google. But for day to day, non-dev related use, I prefer Windows. Less annoying friction, and a far better user experience. That wasn’t always true, of course. XP was the watershed.

  98. @Ltw
    Seems like your arguments assume “Linux” means every distribution taken as a gestalt, counting failures over successes, rather than picking implementations where things do work well.

    If a Windows-style registry gets corrupted, you’re pretty hosed, and Aunt Tilly is going to have to get help re-installing. Even slightly more savvy users will likely be in a position where they have no recourse but to format/reinstall. Without another system to recover data, it’s no different from losing everything.

    With text-file based system configuration, there’s always a recovery path (unless something truly catastrophic happened), and if Auntie is having to get help anyway, better that it’s able to be done in a sane way that actually allows for quick recovery by someone who does know how to do it.

  99. Yes, I’m ignoring distros, because I’m talking about the basic architecture they all share. I haven’t seen a Windows registry get corrupted in ten years, except by hardware failure (that did happen to me a few years ago: a hard-used laptop slowly died on me and needed to be rebooted 3-4 times a day, probably from an accumulating disk failure; I don’t know, because I tossed it and got it replaced). You’re thinking of Win95/98, or possibly the abortion that was Win Me. I’m not buying it. We’re still talking about plugging in a basic peripheral and having to edit text files to get it working, aren’t we? Not catastrophic system failure? Of course an end-user will need help with the latter. They shouldn’t need it for the former. I keep saying the same things, and everyone keeps shifting the ground.

  100. Jay: “Roger, sure it does. Basically, there are two pieces to this puzzle: the part that talks to the OS, and the part that talks to the GPU hardware. Sure, you can mingle the two, but then you have to do a lot of rewriting to have your driver work on three different OSes. The part that talks to the GPU hardware is part of the OS driver package, but it could be moved to firmware easily enough, subject to the other problems we’ve been talking about. It’s just a matter of clean design.”

    Basically, you are just guessing. Do you have any idea how these drivers are structured? How is the part that talks to the GPU hardware going to do so without talking to the OS? That makes no sense at all. For a start, it needs to talk to the OS to know what it’s supposed to be telling the card to do (at the _absolute minimum_).

    The driver architecture argument is only relevant if we are talking about storing drivers on the firmware and later executing them on the CPU – the kinds of functions we are talking about cannot be controlled from the GPU itself since they have to respond speedily to the demands of programs running on the CPU. This is where Eric is plainly mistaken – he thinks you can run every performance-critical, optimisable function on the firmware. And there is no secrecy gain to be had in storing driver code on the firmware and later executing it on CPU. In fact, this just isolates all your sensitive code with a nice interface allowing it to be strapped into a test harness and reverse-engineered with the greatest of ease. Why don’t you stick some neon lights on it too? Same goes for partial binary drivers in software.

    As Zygo points out, even for the parts that could potentially be moved into firmware (e.g. shader compilation) the benefit of doing so is dubious given the difficulty of doing so. Why would you stick things that might improve performance in future software updates in the firmware where they are harder to update?

  101. @Ltw
    I was replying to a specific post of yours.

    Speaking of X, I’ve never had any problems with multi-head video. The last time I even had to look at xorg.conf was when I was doing something hackery with a special setup I was using.

  102. I was misconstruing Jay’s use of the word ‘part’; he is talking about an intra-driver interface, not about driver architecture. So apologies to Jay, and consider my first paragraph above retracted.

  103. In any case Jay, let’s return to the driver architecture argument (a diversion from the firmware nonsense). Essentially what you are arguing for is selectively binarizing parts of the driver. Of course, you might be able to get this to work given some known selection of driver architectures. But it would *not* be independent of the driver architecture! For example, the driver arch might require a user/kernel split in the driver. So your blob must be structured so that parts of it can be split across user and kernel land.

  104. Roger: “As Zygo points out, even for the parts that could potentially be moved into firmware (e.g. shader compilation) the benefit of doing so is dubious given the difficulty of doing so. Why would you stick things that might improve performance in future software updates in the firmware where they are harder to update?”

    I wouldn’t. As I pointed out in an earlier posting, there are lots of problems with this approach. Still, we should be arguing about real issues, not ones that can be solved with good design.

    The open source folks’ argument is that the current driver package can be split into two parts, one part that has the stuff that manufacturers believe they need to keep secret to protect their investment in development, and another that is just interfaces to that; the latter can be open-sourced, and the former moved into firmware.

    You’re arguing that the split is not possible. I am arguing that it is, and that the reasons for not using that split to move the secret sauce into firmware are independent of that. The latter is sufficient to argue against the whole idea; the former gives the open source advocates a convenient target to shoot at.

  105. I am only in this discussion to debate particular technical points about the importance of the contents of the drivers. Frankly, I think we would all be better off if they open sourced them and documented their interfaces like good citizens. But the technical realities can’t change to suit my point of view, and if anyone wants to say the important stuff in the drivers can be shoved off into firmware, let him tell us what the contents of the drivers are and then tell us how each part can be replicated in the firmware.

  106. @Roger Phillips
    > But it would *not* be independent of the driver architecture!

    The solution is to implement a single driver architecture everywhere. There was a brief attempt to do this with network card drivers; you may have heard of ndiswrapper.

    Unfortunately, it is probably not possible to do this and still have good performance.

  107. “I am reminded of the “The Luxury of Ignorance”, from years ago. How many times do you have to get burned by the Linux/X combination before you give up on it?”

    STRAIGHT to the point.

    This is exactly the reason I love my Mac and my Windows PC: there are constant problems with graphics and audio in Linux. The linux folks blame the hardware vendors for not being open enough, the hardware vendors blame Desktop Linux for having outdated or plain broken graphics (X.org) and audio (PulseAudio) stacks, and the user is caught somewhere in the middle.

    Do I look like I care? I want things to work; that’s why I stay out of Linux, although I like Unity. Distro vendors are the underdog here, not Nvidia/AMD/Intel, so it’s the job of the distro vendors to make sure hardware vendors can push proprietary blobs for Linux, by not breaking compatibility.

    PS: Oooh, two sins in one post. Calling X.org and PulseAudio broken (the inconvenient truth linuxeros will never admit), and admitting I like Unity. I don’t expect ESR to answer.

    PPS: Have a look here http://linuxhaters.blogspot.gr/2008/10/pulse-my-audio.html and here http://linuxhaters.blogspot.gr/2008/06/nitty-gritty-shit-on-open-source.html for the gory details regarding the audio and graphics mess in Linux, and why, in the end, it’s mostly Linux’s fault and not the hardware vendors’.

  108. “The solution is to implement a single driver architecture everywhere. There was a brief attempt to do this with network card drivers, you may have heard of ndiswrapper.”

    @Random832: IIRC, this is presently done with the Nvidia driver, and has been done since the TNT line of graphics cards from the late 90s. Or at the very least, Nvidia does a decent job of providing the illusion of such. For as long as I can remember, the Nvidia driver has been but a single package, regardless of what platform you’re on or what chip of theirs your video card uses.

    “You’re arguing that the split is not possible. I am arguing that it is, and that the reasons for not using that split to move the secret sauce into firmware are independent of that. The latter is sufficient to argue against the whole idea; the former gives the open source advocates a convenient target to shoot at.”

    @Jay Maynard: If I’m understanding you correctly, not only is such possible, it is already being done with the AMD/ATI driver, or at least was. A few years back when I was building my present system, I had to fiddle with the open-source ATI driver to get on-board graphics to work before the Nvidia card from my then-current system could be transplanted. Part of the process involved getting a firmware binary blob and pointing the kernel config to it. The rest of the driver was in the kernel source tree if I recall.

  109. @Ltw

    >You want to paste another layer of abstraction over the top of the system to hide it, rather than change it.

    That’s precisely the “layer of abstraction” that exists in Windows between the user and the Registry, as well as between the iOS and Android UIs and whatever they respectively use under the hood. A computer needs some way to store configuration data, and it needs a way for Aunt Tilly to modify it. The two aren’t necessarily related. Tilly doesn’t care if the config is a text file of VAR=VAL pairs, XML, JSON, or some impenetrable proprietary binary format. The advantage of storing it in editable text files is that you aren’t forced to use a particular front end to manipulate it, although you do have to accept responsibility for screwing things up if you bypass a front end that “knows” about the config format – which is why we only want Tilly working with such front ends.

    The disadvantages of editable text files:
    1) They can be much larger than binary blobs. One way to mitigate this is to use a well-defined compression tool like gzip or (pk)zip, as Open Office does.
    2) The programmers who write the software tend to think they’re so good they don’t need a front end, and therefore may not write one. Think of about:config here. This can be fixed by writing that Grand Unified Configuration Tool to which I referred above.

  110. > Calling X.org and PulseAudio broken (the inconvinient truth linuxeros will never admit)

    You must not spend much time talking to “linuxeros.”

  111. I have no dog in this fight (I have no idea what video card is in my PC, nor do I care what OS I am running) but the fact that your monitor was working fine for a few months has absolutely NOTHING to do with the likelihood of it being the culprit.

    The act of plugging and unplugging the video cable is what tends to trigger faults. This could have just as easily have been a marginal circuit board in your monitor, or static electricity on your part.

    The same applies to networking. Install a new switch and suddenly see a server NIC start misbehaving. In my experience the NIC (that had been working just fine prior to install) is at fault 3 to 1.

    Correlation != Causality

    You know this.

  112. Jeremy, it is not within Nvidia’s power to replace the OS’s driver architecture. They can do something like Jay is suggesting, but that is not independent of the underlying driver model. There is no dispute over whether you can stick a blob into firmware and call into it. But you can’t assume you can take all the important stuff from the driver, stick it in a blob, and have it be efficient on – or even compatible with – all driver architectures. And again the point is absolutely moot, since putting such blobs in firmware for secrecy is utterly pointless under the scenario being discussed: the open source code will have to get access to it. It is also a hard technical fact (for the time being) that you cannot perform all the performance-critical functions of the driver executing on the card. So the whole idea of “just stick it in firmware” as suggested by Eric does not work for the purposes he was proposing it for.

  113. I have never had to edit the windows registry to plug in a basic peripheral. With great regularity, I have had to edit linux config files to plug in a basic peripheral.

  114. @The Monster
    > The programmers who write the software tend to think they’re so good they don’t need a front end, and therefore may not write one. Think of about:config here.

    Am I missing something? Firefox does have a preferences window. There are settings it does not expose, but having settings you don’t want normal users to change is not the same thing as expecting normal users to directly edit raw configuration data. Firefox would not be a fundamentally different product for “Aunt Tilly” if about:config did not exist at all and none of those settings were alterable without modifying the source and recompiling.

    If you are suggesting your “Grand Unified Configuration Tool” ought to present “user-friendly” descriptions for everything that’s in about:config (and what should it do for programs whose configuration language is turing-complete? Actually, Firefox’s is as well, but even about:config hides that.), that’s flatly impossible – the sheer number means it won’t be user-friendly.

  115. Delivered by flying pigs, no doubt.

    KDE is switching to Wayland for version 5. From the sound of it they may not even support X11 anymore, because it doesn’t meet their needs.

    The decision to deprecate X and move to Wayland is pretty much a done deal at this point; now it’s just a simple matter of hacking to get there.

  116. Jeff Read on Wednesday, April 17 2013 at 5:20 pm said:
    > KDE is switchng to Wayland for version 5. From the sound of it they may not even support X11 anymore because it doesn’t meet their needs.

    X11 is designed to provide a window at the other end of a long, slow, narrow, unreliable Ethernet connection from the program using the window.

    It therefore fails to take full advantage of the normal case, which is a short, fast, fat, reliable memory-mapped connection between program and screen.
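
    The case it was designed for is still a one-liner, to be fair:

        # Run the program over there, get the window over here; "build-box" is a placeholder host name.
        ssh -X build-box xterm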

  117. Which is the oddest thing because the “modern graphics hardware” framework for Linux, OpenGL, has a display model much more like X’s than like the dumb framebuffer model favored by Wayland. Actually it’s a bit like NeWS in that you can send data and executable code to the server, then send draw commands that use that data and code as parameters.

    More evidence that “modern toolkits” are doing things the stupid way around.

  118. For what it’s worth, I had no problems setting up a dual monitor display on Mageia 2 and KDE. The machine is a Thinkpad W510 with a built-in Nvidia display chip. There is a nifty little configuration tool which is so easy to use that a Mac or Windows user would take very little time to master it.

    I must say, though, that I did battle a bit when I adopted my usual method of trying to edit the xorg.conf file, etc. It was only when I was ready to bash my head against both screens that I stumbled upon the nifty little “Configure your desktop” tool at the bottom of the KDE screen. It’s easier than cursing after you have typed :wq in some wysiwyg editor.
