What is so fundamentally different about drivers in Linux and Windows?
Specifically, video card drivers have always frustrated my understanding of what's going on under the hood. Say I have a nice video card from ATI. I need to install some cool drivers from ATI in order to make the card work at its best and in order to do any cool things like dual monitors. I download these drivers from the company's website, install them on my machine, and I'm off and running. Assuming all goes according to plan.
That's all fine. Now, this is what confuses me. In Windows, I'm done forever at this point. I've never had a problem, nor heard of a problem, where I had to mysteriously reinstall the drivers for my video card after an Automatic Update. Not so in Linux. Every kernel update has me wondering if I'll have to reinstall the drivers for my video card. It seems like sometimes I do and sometimes I don't. I just reboot and pray that my video card is still working afterwards.
Not knowing a great deal about how drivers really work in Linux or Windows, I can only really conclude that either Microsoft never updates the Windows kernel (at least not in a way that screws with driver interfaces), or there is something very different about how the two operating systems handle drivers. Can anyone shed some light on the subject for me?
Thanks, bit
On Wed, 2007-12-26 at 12:06 -0500, Bit wrote:
What is so fundamentally different about drivers in Linux and Windows?
<snip>
Not knowing a great deal about how drivers really work in Linux or Windows, I can only really conclude that either Microsoft never updates the Windows kernel (at least not in a way that screws with driver interfaces), or there is something very different about how the two operating systems handle drivers. Can anyone shed some light on the subject for me?
Thanks, bit
<snip sig stuff>
It really comes down to the difference between the "business models". Windows: closed, proprietary, major market share; *IX (a lot of them): open, non-proprietary, lesser market share.
The developers of the hardware almost always provide drivers for W*dows. They know their hardware intimately, and they have full-blown development teams and systems with access to the necessary source in W*dows, etc.
For *IX, only some hardware developers provide drivers. The rest are developed by community members, often based only on specs from the hardware developers, and sometimes on no specs at all. Specs may be erroneous, incomplete and/or late. There are no in-house development systems or teams.
W*dows drivers are available upon release of the hardware; *IX drivers often necessarily come later due to the items mentioned above.
You can avoid a lot of this by just running all your video adapters in VGA mode, which is relatively static and supported with very little change needed as new kernels and hardware become available.
William L. Maltby wrote:
On Wed, 2007-12-26 at 12:06 -0500, Bit wrote:
What is so fundamentally different about drivers in Linux and Windows?
<snip>
Not knowing a great deal about how drivers really work in Linux or Windows, I can only really conclude that either Microsoft never updates the Windows kernel (at least not in a way that screws with driver interfaces), or there is something very different about how the two operating systems handle drivers. Can anyone shed some light on the subject for me?
Thanks, bit
<snip sig stuff>
It really comes down to the difference between the "business models". Windows: closed, proprietary, major market share; *IX (a lot of them): open, non-proprietary, lesser market share.
The developers of the hardware almost always provide drivers for W*dows. They know their hardware intimately, and they have full-blown development teams and systems with access to the necessary source in W*dows, etc.
For *IX, only some hardware developers provide drivers. The rest are developed by community members, often based only on specs from the hardware developers, and sometimes on no specs at all. Specs may be erroneous, incomplete and/or late. There are no in-house development systems or teams.
W*dows drivers are available upon release of the hardware; *IX drivers often necessarily come later due to the items mentioned above.
You can avoid a lot of this by just running all your video adapters in VGA mode, which is relatively static and supported with very little change needed as new kernels and hardware become available.
Thanks to both of you for the reply. Good information, but that still doesn't really answer my question. I'm more interested in the technical side of things. What I really want to understand boils down to this:
Why is it that in Windows I can install ATI drivers once and never worry about it again, while in Linux I may have to *reinstall* the drivers at a later date after a system update to get my card working with them again? Experience has proven to me that in Windows I can install the ATI drivers once, leave those same drivers on there for eternity, update the system over and over with Automatic Updates, and never worry about it breaking my video card. In Linux, every time I see a kernel update, I've learned to be braced for impact and just be ready with my ATI drivers to reinstall to get my card working again. I've never understood this. I'd like a technical explanation for why this is so.
Thanks, bit
On Wed, 2007-12-26 at 12:48 -0500, Bit wrote:
<snip>
Thanks to both of you for the reply. Good information, but that still doesn't really answer my question. I'm more interested in the technical side of things. What I really want to understand boils down to this:
Why is it that in Windows I can install ATI drivers once and never worry about it again, while in Linux I may have to *reinstall* the drivers at a later date after a system update to get my card working with them again? Experience has proven to me that in Windows I can install the ATI drivers once, leave those same drivers on there for eternity, update the system over and over with Automatic Updates, and never worry about it breaking my video card. In Linux, every time I see a kernel update, I've learned to be braced for impact and just be ready with my ATI drivers to reinstall to get my card working again. I've never understood this. I'd like a technical explanation for why this is so.
If your drivers come originally from the OS repo, you should not have to do this. If they come from elsewhere, they may get stepped on when you do a yum update. It depends on the structure of the components (directories, etc.).
Something that may help is to make sure they come from some kind of RPM. After updates, "updatedb" followed by "locate rpmsave rpmnew" may also help. If your driver is from an external source, maybe all that happened is that your (X?) config file was (not) replaced?
If the kernel API hasn't changed and your driver is from outside the repo of the distribution you use, you shouldn't have to re-install. Just reference it in the appropriate config file (xorg.conf, modprobe.conf? etc.).
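For example, a minimal xorg.conf Device section pointing X at the proprietary driver looks something like this (a sketch only; "fglrx" is the name of ATI's proprietary X driver, and the Identifier string here is made up):

    Section "Device"
        Identifier "ATI Card"
        Driver     "fglrx"
    EndSection

As long as that stanza survives the update, X will keep trying to load the same driver; the kernel-side module is what the rest of this thread is about.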
There is also dkms available to automatically remake the drivers, and another pkg I can't remember. Search with Google using site:centos.org for dkms and you'll see the thread that mentions the other system too.
Oh! http://lists.centos.org/pipermail/centos/2007-October/087623.html
kmod is the preferred system.
Thanks, bit
<snip sig stuff>
On Dec 26, 2007 12:48 PM, Bit <bit2300@gmail.com> wrote:
Why is it that in Windows I can install ATI drivers once and never worry about it again, while in Linux I may have to *reinstall* the drivers at a later date after a system update to get my card working with them again? Experience has proven to me that in Windows I can install the ATI drivers once, leave those same drivers on there for eternity, update the system over and over with Automatic Updates, and never worry about it breaking my video card. In Linux, every time I see a kernel update, I've learned to be braced for impact and just be ready with my ATI drivers to reinstall to get my card working again. I've never understood this. I'd like a technical explanation for why this is so.
In Windows, you're not swapping out the kernel tree with somewhat regular frequency. You're patching it, or applying patches to other bits that it calls. The proprietary driver vendors (ATI/Nvidia) have put more time and development resources into ensuring that the transitions here work fine with no need to re-install. Such is not the case with Linux, which is only now gaining enough market share to force them to notice.
In some instances, things like dkms can be used, which remove the irritation you're seeing, as the kernel module is updated/rebuilt for each newer kernel if it's detected to not already be present. ATI and Nvidia, however, do not use this packaging method. Dag and other 3rd-party packagers have taken up the charge on some of this; however, it's still mostly up to the community to improve what the vendors dole out.
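To give a feel for it, this is roughly what the dkms round-trip looks like (a sketch only; the "mydriver"/"1.0" names are invented, and as noted the ATI/Nvidia installers don't ship this way). The module source is unpacked under /usr/src/mydriver-1.0 with a dkms.conf, then:

    dkms add     -m mydriver -v 1.0
    dkms build   -m mydriver -v 1.0
    dkms install -m mydriver -v 1.0
    dkms status

After a kernel update, dkms can rebuild and reinstall the module for the new kernel, which is exactly the irritation being described.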
On Wed, Dec 26, 2007 at 12:48:52PM -0500, Bit wrote:
Thanks to both of you for the reply. Good information, but that still doesn't really answer my question. I'm more interested in the technical side of things. What I really want to understand boils down to this:
Why is it that in Windows I can install ATI drivers once and never worry about it again, while in Linux I may have to *reinstall* the drivers at a later date after a system update to get my card working with them again? Experience has proven to me that in Windows I can install the ATI drivers once, leave those same drivers on there for eternity, update the system over and over with Automatic Updates, and never worry about it breaking my video card. In Linux, every time I see a kernel update, I've learned to be braced for impact and just be ready with my ATI drivers to reinstall to get my card working again. I've never understood this. I'd like a technical explanation for why this is so.
Linux doesn't have a stable ABI (for drivers; userland is a different thing), but Windows does.
That means that drivers compiled for your kernel today may not install on newer (or older) kernels. You'll have to recompile them. Also, changes like support for more than 4GB, how the lower 4GB is split, architecture options, gcc version, function calling conventions, etc., create dependencies that have to be met by the binary driver.
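You can see those dependencies recorded in the module itself. For example (the version strings below are made up for illustration), compare the running kernel against a module's "vermagic":

    $ uname -r
    2.6.18-53.el5
    $ /sbin/modinfo -F vermagic fglrx.ko
    2.6.18-8.el5 SMP mod_unload 686 REGPARM 4KSTACKS gcc-4.1

If those don't agree, the kernel will refuse to load the module.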
Windows guarantees that the exposed interface doesn't change, so there's no need to recompile things if something internal changes.
But Linux doesn't even have a stable API, so the module may not compile on your newer kernels.
Please see Documentation/stable_api_nonsense.txt, in the kernel sources, or online at: http://scienceblogs.com/gregladen/2007/12/linux_stable_api_vs_not.php
Note that without a stable API, there is no chance of a stable ABI.
Luciano Rocha wrote:
Please see Documentation/stable_api_nonsense.txt, in the kernel sources, or online at: http://scienceblogs.com/gregladen/2007/12/linux_stable_api_vs_not.php
And keep in mind when you read it that other popular operating systems do manage to specify and maintain stable interfaces and cooperate with vendors that provide working drivers. And in spite of the spin this article applies to the deficiency in Linux, you may end up needing one of those other operating systems for some things.
Luciano Rocha wrote:
On Wed, Dec 26, 2007 at 12:48:52PM -0500, Bit wrote:
Thanks to both of you for the reply. Good information, but that still doesn't really answer my question. I'm more interested in the technical side of things. What I really want to understand boils down to this:
Why is it that in Windows I can install ATI drivers once and never worry about it again, while in Linux I may have to *reinstall* the drivers at a later date after a system update to get my card working with them again? Experience has proven to me that in Windows I can install the ATI drivers once, leave those same drivers on there for eternity, update the system over and over with Automatic Updates, and never worry about it breaking my video card. In Linux, every time I see a kernel update, I've learned to be braced for impact and just be ready with my ATI drivers to reinstall to get my card working again. I've never understood this. I'd like a technical explanation for why this is so.
Linux doesn't have a stable ABI (for drivers; userland is a different thing), but Windows does.
That means that drivers compiled for your kernel today may not install on newer (or older) kernels. You'll have to recompile them. Also, changes like support for more than 4GB, how the lower 4GB is split, architecture options, gcc version, function calling conventions, etc., create dependencies that have to be met by the binary driver.
Windows guarantees that the exposed interface doesn't change, so there's no need to recompile things if something internal changes.
But Linux doesn't even have a stable API, so the module may not compile on your newer kernels.
Please see Documentation/stable_api_nonsense.txt, in the kernel sources, or online at: http://scienceblogs.com/gregladen/2007/12/linux_stable_api_vs_not.php
Note that without a stable API, there is no chance of a stable ABI.
Luciano, thank you very much. I read your post, the link you provided, and a few other things from that link, and I at least understood enough to realize that it answers my question. I think I *kind of* get it now.
I think understanding the answer to my question really revolves around understanding an API and an ABI. Would you please read the following and let me know if I at least get the gist of what these two things are?
An API influences what your source code will look like. If they change the Linux kernel API, then you may need to make changes to your source code such as making "myLinuxKernelAPIFunctionCall( myparam1, myparam2 )" look something more like "myUpdatedLinuxKernelAPIFunctionCall( myparam1 )" in order to even make your code compile.
The ABI is the interface between a compiled binary kernel module and the kernel. It determines if an already compiled binary will properly interface with the kernel and run. If the ABI changes and you find your kernel module won't run properly, you just need to recompile from source to get a kernel module that will run. Hopefully the API hasn't changed and you won't need to change your source code to make it recompile...
BOTH kinds of changes happen with some degree of frequency in Linux.
Did I get at least this much right?
On Wed, Dec 26, 2007 at 04:01:22PM -0500, Bit wrote:
Luciano Rocha wrote:
On Wed, Dec 26, 2007 at 12:48:52PM -0500, Bit wrote:
Thanks to both of you for the reply. Good information, but that still doesn't really answer my question. I'm more interested in the technical side of things. What I really want to understand boils down to this:
Why is it that in Windows I can install ATI drivers once and never worry about it again, while in Linux I may have to *reinstall* the drivers at a later date after a system update to get my card working with them again? Experience has proven to me that in Windows I can install the ATI drivers once, leave those same drivers on there for eternity, update the system over and over with Automatic Updates, and never worry about it breaking my video card. In Linux, every time I see a kernel update, I've learned to be braced for impact and just be ready with my ATI drivers to reinstall to get my card working again. I've never understood this. I'd like a technical explanation for why this is so.
Linux doesn't have a stable ABI (for drivers; userland is a different thing), but Windows does.
That means that drivers compiled for your kernel today may not install on newer (or older) kernels. You'll have to recompile them. Also, changes like support for more than 4GB, how the lower 4GB is split, architecture options, gcc version, function calling conventions, etc., create dependencies that have to be met by the binary driver.
Windows guarantees that the exposed interface doesn't change, so there's no need to recompile things if something internal changes.
But Linux doesn't even have a stable API, so the module may not compile on your newer kernels.
Please see Documentation/stable_api_nonsense.txt, in the kernel sources, or online at: http://scienceblogs.com/gregladen/2007/12/linux_stable_api_vs_not.php
Note that without a stable API, there is no chance of a stable ABI.
Luciano, thank you very much. I read your post, the link you provided, and a few other things from that link, and I at least understood enough to realize that it answers my question. I think I *kind of* get it now.
I think understanding the answer to my question really revolves around understanding an API and an ABI. Would you please read the following and let me know if I at least get the gist of what these two things are?
FYI, when in doubt about these acronyms, search for "define: ABI", for instance, in google.
An API influences what your source code will look like. If they change the Linux kernel API, then you may need to make changes to your source code such as making "myLinuxKernelAPIFunctionCall( myparam1, myparam2 )" look something more like "myUpdatedLinuxKernelAPIFunctionCall( myparam1 )" in order to even make your code compile.
Yes, but also more important things, like changing the use of semaphores to mutexes where appropriate (both "lock" something; a mutex allows only one holder at a time, while a semaphore can allow N), or changing the way stuff is exported to userland (sysfs, configfs, debugfs, relayfs, procfs), etc.
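To make that concrete, here's a contrived fragment (not from any real driver; the "mydrv" names are invented) showing the same critical section written against the older semaphore calls and the newer mutex API. A source-level change like this is exactly an API change:

    /* Old style: a struct semaphore used purely for mutual exclusion. */
    static DECLARE_MUTEX(mydrv_sem);          /* declares a struct semaphore */

    static void mydrv_old_style(void)
    {
            down(&mydrv_sem);                 /* acquire */
            /* ... touch shared driver state ... */
            up(&mydrv_sem);                   /* release */
    }

    /* Newer 2.6 style: a dedicated struct mutex from <linux/mutex.h>. */
    static DEFINE_MUTEX(mydrv_lock);

    static void mydrv_new_style(void)
    {
            mutex_lock(&mydrv_lock);
            /* ... touch shared driver state ... */
            mutex_unlock(&mydrv_lock);
    }

A driver written around the first form has to be edited, not just recompiled, once the kernel switches over to the second.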
The ABI is the interface between a compiled binary kernel module and the kernel. It determines if an already compiled binary will properly interface with the kernel and run. If the ABI changes and you find your kernel module won't run properly, you just need to recompile from source to get a kernel module that will run. Hopefully the API hasn't changed and you won't need to change your source code to make it recompile...
Yes, that's it. The compiled modules include the dependency info, so that you won't be able to insert it in another kernel:

$ modinfo ext2.ko
filename:       ext2.ko
license:        GPL
description:    Second Extended Filesystem
author:         Remy Card and others
depends:
vermagic:       2.6.23.12lcfs1 preempt mod_unload PENTIUM4 4KSTACKS
BOTH kinds of changes happen with some degree of frequency in Linux.
Yes. Due to the nature of the kernel (open-source, GPL), and the current policy, changes occur *very* frequently, especially in the 2.6.x series.
Did I get at least this much right?
I think you're doing fine. Note that this state of affairs is due more to the kernel developers' philosophy and way of working than to technical limitations, as Les Mikesell pointed out.
Anyway, nifty things have come from this development method in the latest Linux kernels.
Also, there are some stable APIs/ABIs: userland drivers (for simple devices where no DMA is possible, for instance), and userland access to USB devices (mostly used for printers, scanners and non-standard USB memory sticks/MP3 players).
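As an example of that second case, userland code can talk to a USB device through libusb without any custom kernel module. A minimal sketch against the libusb-0.1 API of the day (just enumerating devices; compile with -lusb):

    /* list the vendor/product IDs of every USB device visible to libusb */
    #include <stdio.h>
    #include <usb.h>

    int main(void)
    {
            struct usb_bus *bus;
            struct usb_device *dev;

            usb_init();
            usb_find_busses();
            usb_find_devices();

            for (bus = usb_get_busses(); bus; bus = bus->next)
                    for (dev = bus->devices; dev; dev = dev->next)
                            printf("%04x:%04x\n",
                                   dev->descriptor.idVendor,
                                   dev->descriptor.idProduct);
            return 0;
    }

Because that interface lives entirely in userland, it doesn't care which kernel you booted.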
Luciano Rocha wrote:
On Wed, Dec 26, 2007 at 04:01:22PM -0500, Bit wrote:
Luciano Rocha wrote:
On Wed, Dec 26, 2007 at 12:48:52PM -0500, Bit wrote:
Thanks to both of you for the reply. Good information, but that still doesn't really answer my question. I'm more interested in the technical side of things. What I really want to understand boils down to this:
Why is it that in Windows I can install ATI drivers once and never worry about it again, while in Linux I may have to *reinstall* the drivers at a later date after a system update to get my card working with them again? Experience has proven to me that in Windows I can install the ATI drivers once, leave those same drivers on there for eternity, update the system over and over with Automatic Updates, and never worry about it breaking my video card. In Linux, every time I see a kernel update, I've learned to be braced for impact and just be ready with my ATI drivers to reinstall to get my card working again. I've never understood this. I'd like a technical explanation for why this is so.
Linux doesn't have a stable ABI (for drivers; userland is a different thing), but Windows does.
That means that drivers compiled for your kernel today may not install on newer (or older) kernels. You'll have to recompile them. Also, changes like support for more than 4GB, how the lower 4GB is split, architecture options, gcc version, function calling conventions, etc., create dependencies that have to be met by the binary driver.
Windows guarantees that the exposed interface doesn't change, so there's no need to recompile things if something internal changes.
But Linux doesn't even have a stable API, so the module may not compile on your newer kernels.
Please see Documentation/stable_api_nonsense.txt, in the kernel sources, or online at: http://scienceblogs.com/gregladen/2007/12/linux_stable_api_vs_not.php
Note that without a stable API, there is no chance of a stable ABI.
Luciano, thank you very much. I read your post, the link you provided, and a few other things from that link, and I at least understood enough to realize that it answers my question. I think I *kind of* get it now.
I think understanding the answer to my question really revolves around understanding an API and an ABI. Would you please read the following and let me know if I at least get the gist of what these two things are?
FYI, when in doubt about these acronyms, search for "define: ABI", for instance, in google.
An API influences what your source code will look like. If they change the Linux kernel API, then you may need to make changes to your source code such as making "myLinuxKernelAPIFunctionCall( myparam1, myparam2 )" look something more like "myUpdatedLinuxKernelAPIFunctionCall( myparam1 )" in order to even make your code compile.
Yes, but also more important things, like changing the use of semaphores to mutexes where appropriate (both "lock" something; a mutex allows only one holder at a time, while a semaphore can allow N), or changing the way stuff is exported to userland (sysfs, configfs, debugfs, relayfs, procfs), etc.
The ABI is the interface between a compiled binary kernel module and the kernel. It determines if an already compiled binary will properly interface with the kernel and run. If the ABI changes and you find your kernel module won't run properly, you just need to recompile from source to get a kernel module that will run. Hopefully the API hasn't changed and you won't need to change your source code to make it recompile...
Yes, that's it. The compiled modules include the dependency info, so that you won't be able to insert it in another kernel:

$ modinfo ext2.ko
filename:       ext2.ko
license:        GPL
description:    Second Extended Filesystem
author:         Remy Card and others
depends:
vermagic:       2.6.23.12lcfs1 preempt mod_unload PENTIUM4 4KSTACKS
BOTH kinds of changes happen with some degree of frequency in Linux.
Yes. Due to the nature of the kernel (open-source, GPL), and the current policy, changes occur *very* frequently, especially in the 2.6.x series.
Did I get at least this much right?
I think you're doing fine. Note that this state of affairs is due more to the kernel developers' philosophy and way of working than to technical limitations, as Les Mikesell pointed out.
Anyway, nifty things have come from this development method in the latest Linux kernels.
Also, there are some stable APIs/ABIs: userland drivers (for simple devices where no DMA is possible, for instance), and userland access to USB devices (mostly used for printers, scanners and non-standard USB memory sticks/MP3 players).
Thank you again for your help! You've given me a place to really get started and this doesn't seem quite so mysterious to me anymore. I have one last related burning question for now that I was hoping you might be able to answer.
ATI drivers are proprietary and closed-source. So, for example, on my current desktop, I download the Linux drivers for my card from the link below and run the installer as per their instructions. http://ati.amd.com/support/drivers/linux/linux-radeon.html
It's doing *something* to make a kernel module that will insert into and work with my current running kernel. At one time, I thought that it was compiling a module from source code, probably by invoking make, in much the same way I might download and install any open-source software in Linux from a tarball.
However, I realized that this doesn't make sense since ATI's drivers are proprietary and closed-source. So the installer I download can't possibly be compiling anything from source code, because that would mean I could almost certainly read the source code, which they don't want. Which leaves me wondering what the installer is really doing. Any ideas?
Thanks! bit
On Wed, Dec 26, 2007 at 05:01:54PM -0500, Bit wrote:
ATI drivers are proprietary and closed-source. So, for example, on my current desktop, I download the Linux drivers for my card from the link below and run the installer as per their instructions. http://ati.amd.com/support/drivers/linux/linux-radeon.html
It's doing *something* to make a kernel module that will insert into and work with my current running kernel. At one time, I thought that it was compiling a module from source code, probably by invoking make, in much the same way I might download and install any open-source software in Linux from a tarball.
However, I realized that this doesn't make sense since ATI's drivers are proprietary and closed-source. So the installer I download can't possibly be compiling anything from source code, because that would mean I could almost certainly read the source code, which they don't want. Which leaves me wondering what the installer is really doing. Any ideas?
The drivers are composed of two things:
1. The X driver and OpenGL library, usually in binary form only;
2. The kernel driver for accessing and controlling the hardware.
Usually, the X driver/OpenGL library does most of the "3D" work, but that isn't necessarily so.
Now, about the "can't possibly be compiling anything from source code".
Assuming you have compiled or developed a few things, you should know that the final program is composed of several object files (.o).
Kernel drivers/modules aren't any different. What happens is that there's at least one binary .o, without any source code, already compiled in the installer/package.
There's also what is usually called a shim: a piece of source code that bridges your kernel and the real code. The real code is thus somewhat abstracted from the kernel API, though not entirely, as was attested by recent breakages in the nVidia driver with new kernels. But they are usually quick to respond to those changes.
So, there _is_ a make and compile involved, but it is usually the compilation of a small amount of code, linked with the big binary blob in a .o.
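As a rough sketch of how such a package hangs together (all names here are invented; real vendor installers are considerably more elaborate), the kbuild makefile for a module split into an open shim plus a pre-compiled blob might look like:

    # the final module, vendordrv.ko, is linked from two objects:
    # shim.o is built from shim.c on your machine; core_blob.o ships
    # pre-compiled, with no source, inside the installer
    obj-m          := vendordrv.o
    vendordrv-objs := shim.o core_blob.o

    # built against the headers of the currently running kernel with:
    #   make -C /lib/modules/$(uname -r)/build M=$(pwd) modules

Only shim.c gets recompiled for each new kernel; the blob stays the same, which is why the shim has to absorb any API changes.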
Luciano Rocha wrote:
On Wed, Dec 26, 2007 at 05:01:54PM -0500, Bit wrote:
ATI drivers are proprietary and closed-source. So, for example, on my current desktop, I download the Linux drivers for my card from the link below and run the installer as per their instructions. http://ati.amd.com/support/drivers/linux/linux-radeon.html
It's doing *something* to make a kernel module that will insert into and work with my current running kernel. At one time, I thought that it was compiling a module from source code, probably by invoking make, in much the same way I might download and install any open-source software in Linux from a tarball.
However, I realized that this doesn't make sense since ATI's drivers are proprietary and closed-source. So the installer I download can't possibly be compiling anything from source code, because that would mean I could almost certainly read the source code, which they don't want. Which leaves me wondering what the installer is really doing. Any ideas?
The drivers are composed of two things:
- The X driver and OpenGL library, usually in binary form only;
- The kernel driver for accessing and controlling the hardware.
Usually, the X driver/OpenGL library does most of the "3D" work, but that isn't necessarily so.
Now, about the "can't possibly be compiling anything from source code".
Assuming you have compiled or developed a few things, you should know that the final program is composed of several object files (.o).
Kernel drivers/modules aren't any different. What happens is that there's at least one binary .o, without any source code, already compiled in the installer/package.
There's also what is usually called a shim: a piece of source code that bridges your kernel and the real code. The real code is thus somewhat abstracted from the kernel API, though not entirely, as was attested by recent breakages in the nVidia driver with new kernels. But they are usually quick to respond to those changes.
So, there _is_ a make and compile involved, but it is usually the compilation of a small amount of code, linked with the big binary blob in a .o.
Thanks! I appreciate you taking the time to explain all that. I have some moderate amount of programming experience, so that did (for the most part) make sense. =)
Thanks to everyone else who helped out too. This has probably been the most helpful thread I've ever read. One of those little things constantly at the back of my mind finally put to rest. =P
Bit wrote:
Luciano Rocha wrote:
On Wed, Dec 26, 2007 at 04:01:22PM -0500, Bit wrote:
Luciano Rocha wrote:
On Wed, Dec 26, 2007 at 12:48:52PM -0500, Bit wrote:
Thanks to both of you for the reply. Good information, but that still doesn't really answer my question. I'm more interested in the technical side of things. What I really want to understand boils down to this:
Why is it that in Windows I can install ATI drivers once and never worry about it again, while in Linux I may have to *reinstall* the drivers at a later date after a system update to get my card working with them again? Experience has proven to me that in Windows I can install the ATI drivers once, leave those same drivers on there for eternity, update the system over and over with Automatic Updates, and never worry about it breaking my video card. In Linux, every time I see a kernel update, I've learned to be braced for impact and just be ready with my ATI drivers to reinstall to get my card working again. I've never understood this. I'd like a technical explanation for why this is so.
Linux doesn't have a stable ABI (for drivers; userland is a different thing), but Windows does.
That means that drivers compiled for your kernel today may not install on newer (or older) kernels. You'll have to recompile them. Also, changes like support for more than 4GB, how the lower 4GB is split, architecture options, gcc version, function calling conventions, etc., create dependencies that have to be met by the binary driver.
Windows guarantees that the exposed interface doesn't change, so there's no need to recompile things if something internal changes.
But Linux doesn't even have a stable API, so the module may not compile on your newer kernels.
Please see Documentation/stable_api_nonsense.txt, in the kernel sources, or online at: http://scienceblogs.com/gregladen/2007/12/linux_stable_api_vs_not.php
Note that without a stable API, there is no chance of a stable ABI.
Luciano, thank you very much. I read your post, the link you provided, and a few other things from that link, and I at least understood enough to realize that it answers my question. I think I *kind of* get it now.
I think understanding the answer to my question really revolves around understanding an API and an ABI. Would you please read the following and let me know if I at least get the gist of what these two things are?
FYI, when in doubt about these acronyms, search for "define: ABI", for instance, in google.
An API influences what your source code will look like. If they change the Linux kernel API, then you may need to make changes to your source code such as making "myLinuxKernelAPIFunctionCall( myparam1, myparam2 )" look something more like "myUpdatedLinuxKernelAPIFunctionCall( myparam1 )" in order to even make your code compile.
Yes, but also more important things, like changing the use of semaphores to mutexes where appropriate (both "lock" something; a mutex allows only one holder at a time, while a semaphore can allow N), or changing the way stuff is exported to userland (sysfs, configfs, debugfs, relayfs, procfs), etc.
The ABI is the interface between a compiled binary kernel module and the kernel. It determines if an already compiled binary will properly interface with the kernel and run. If the ABI changes and you find your kernel module won't run properly, you just need to recompile from source to get a kernel module that will run. Hopefully the API hasn't changed and you won't need to change your source code to make it recompile...
Yes, that's it. The compiled modules include the dependency info, so that you won't be able to insert it in another kernel:

$ modinfo ext2.ko
filename:       ext2.ko
license:        GPL
description:    Second Extended Filesystem
author:         Remy Card and others
depends:
vermagic:       2.6.23.12lcfs1 preempt mod_unload PENTIUM4 4KSTACKS
BOTH kinds of changes happen with some degree of frequency in Linux.
Yes. Due to the nature of the kernel (open-source, GPL), and the current policy, changes occur *very* frequently, especially in the 2.6.x series.
Did I get at least this much right?
I think you're doing fine. Note that this state of affairs is due more to the kernel developers' philosophy and way of working than to technical limitations, as Les Mikesell pointed out.
Anyway, nifty things have come from this development method in the latest Linux kernels.
Also, there are some stable APIs/ABIs: userland drivers (for simple devices where no DMA is possible, for instance), and userland access to USB devices (mostly used for printers, scanners and non-standard USB memory sticks/MP3 players).
Thank you again for your help! You've given me a place to really get started and this doesn't seem quite so mysterious to me anymore. I have one last related burning question for now that I was hoping you might be able to answer.
ATI drivers are proprietary and closed-source. So, for example, on my current desktop, I download the Linux drivers for my card from the link below and run the installer as per their instructions. http://ati.amd.com/support/drivers/linux/linux-radeon.html
It's doing *something* to make a kernel module that will insert into and work with my current running kernel. At one time, I thought that it was compiling a module from source code, probably by invoking make, in much the same way I might download and install any open-source software in Linux from a tarball.
However, I realized that this doesn't make sense since ATI's drivers are proprietary and closed-source. So the installer I download can't possibly be compiling anything from source code, because that would mean I could almost certainly read the source code, which they don't want. Which leaves me wondering what the installer is really doing. Any ideas?
It is compiling a module against the kernel source code ... part of the process also LINKS in PRE-compiled object files (also known as .o files) and/or shared object files (also known as .so files).
ATI (and nvidia) provide pre-compiled .o files to link to ... without providing the source code to build the .o files.
The newer ATI drivers are really open source, so they provide the source code to build the .o files.
On Wed, Dec 26, 2007, Bit wrote:
What is so fundamentally different about drivers in Linux and Windows?
A major difference with Vista is that their drivers are more concerned with DMCA copyright protection than performance, and this goes down into the hardware as well.
http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.html
Specifically, video card drivers have always frustrated my understanding of what's going on under the hood. Say I have a nice video card from ATI. I need to install some cool drivers from ATI in order to make the card work at its best and in order to do any cool things like dual monitors. I download these drivers from the company's website, install them on my machine, and I'm off and running. Assuming all goes according to plan.
Video card manufacturers have a long history of changing things, with no documentation of course as they're proprietary. So long as they provide Windows drivers, they figure their job is done.
Linux doesn't have enough market share to really get their attention, and developing these for Vista is made more expensive (see the article above), so they concentrate their efforts where they will get the most return.
Bill