Hello, Colleagues!
The new service for tracking ABI changes in various C/C++ libraries is now available for Linux distribution maintainers and upstream developers: "Upstream Tracker". It may be helpful for analyzing the risks of library updates in CentOS and other systems. The service covers more than 100 libraries at the moment: OpenSSL, ALSA, glib, cairo, libssh, fontconfig, etc.
The service is freely available at: http://linuxtesting.org/upstream-tracker/
Suggestions for library inclusion and feature/bug requests are very welcome. Thanks!
Hi Andrey,
On 07/14/2010 01:30 PM, Andrey Ponomarenko wrote:
The service is freely available at: http://linuxtesting.org/upstream-tracker/
This looks really good, thanks for getting it online. I was trying to track down the components and get my head around how it all works.
Is there any way to generate the Library Descriptors on the fly / via code only? I can see the upstream-tracker / ABI compliance testing script being a fantastic addition to the post-build testing that we do, but it would need to be completely automated. Getting the Version, Libs and Headers info should be straightforward, but the ABI sanity-testing snippets in the XML look hard to autogenerate. Or am I just missing something?
Given that you have most likely already spent some time on this issue, would you have any comments about this?
- KB
On 07/17/2010 12:16 PM, Karanbir Singh wrote:
Hi Andrey,
On 07/14/2010 01:30 PM, Andrey Ponomarenko wrote:
The service is freely available at: http://linuxtesting.org/upstream-tracker/
This looks really good, thanks for getting it online. I was trying to track down the components and get my head around how it all works.
Is there any way to generate the Library Descriptors on the fly / via code only? I can see the upstream-tracker / ABI compliance testing script being a fantastic addition to the post-build testing that we do, but it would need to be completely automated. Getting the Version, Libs and Headers info should be straightforward,
A library descriptor (XML) containing only the paths to the libs and headers plus the version number is enough to execute both core tools, "ABI Compliance Checker" and "API Sanity Autotest". The other sections of the descriptor are not necessary.
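For illustration, such a descriptor can be generated on the fly from a shell script; the library name, version and paths below are hypothetical, and the checker's option spellings may differ between releases:

    #!/bin/sh
    # Generate a minimal library descriptor on the fly.
    # Only the <version>, <headers> and <libs> sections are needed;
    # the name, version and paths here are a hypothetical example.
    VERSION=1.2.3
    cat > libfoo-$VERSION.xml <<EOF
    <version>
        $VERSION
    </version>

    <headers>
        /usr/include/libfoo
    </headers>

    <libs>
        /usr/lib/libfoo.so
    </libs>
    EOF

    # Compare against the descriptor for the previous version
    # (option spellings may vary between tool releases):
    abi-compliance-checker -l libfoo -d1 libfoo-1.2.2.xml -d2 libfoo-$VERSION.xml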
Examples of integration: http://packages.debian.org/experimental/apt http://rpm5.org/ http://linuxtesting.org/upstream-tracker/
Our team is now working on integrating the tools into the OBS (openSUSE Build Service) at the package (RPM) build stage.
All of these examples show that it is very easy to integrate the tools anywhere.
but the ABI sanity-testing snippets in the XML look hard to autogenerate. Or am I just missing something?
Given that you have most likely already spent some time on this issue, would you have any comments about this?
- KB
Hi,
On 07/20/2010 10:31 AM, Andrey Ponomarenko wrote:
Examples of integration: http://packages.debian.org/experimental/apt http://rpm5.org/ http://linuxtesting.org/upstream-tracker/
looks good.
Our team is now working on integrating the tools into the OBS (openSUSE Build Service) at the package (RPM) build stage.
That sounds interesting, and is exactly what I mentioned in my previous email that I'd like to consider for the CentOS buildsystem as well. It would quite nicely add changeset-between-updates sanity testing (in as much as what sanity means to the CentOS userbase => things haven't changed).
We don't do any testing of that nature at the moment; we rely on upstream having done such heavy lifting, and we just aim to make sure we are as close to their builds as possible.
- KB
On Tue, 20 Jul 2010, Karanbir Singh wrote:
On 07/20/2010 10:31 AM, Andrey Ponomarenko wrote:
Our team is now working on integrating the tools into the OBS (openSUSE Build Service) at the package (RPM) build stage.
That sounds interesting, and is exactly what I mentioned in my previous email that I'd like to consider for the CentOS buildsystem as well. It would quite nicely add changeset-between-updates sanity testing (in as much as what sanity means to the CentOS userbase => things haven't changed).
Is the CentOS buildsystem available somewhere for people to hack on?
On 07/20/2010 01:43 PM, Dag Wieers wrote:
changeset-between-updates sanity testing (in as much as what sanity means to the CentOS userbase => things haven't changed).
Is the CentOS buildsystem available somewhere for people to hack on?
It's effectively plague, which is public (although not going anywhere).
- KB
On Tue, 2010-07-20 at 13:59 +0100, Karanbir Singh wrote:
On 07/20/2010 01:43 PM, Dag Wieers wrote:
changeset-between-updates sanity testing (in as much as what sanity means to the CentOS userbase => things haven't changed).
Is the CentOS buildsystem available somewhere for people to hack on?
It's effectively plague, which is public (although not going anywhere).
Are there any patches to it to use a 100% NFS build root, instead of having to use a local disk for the yum database locking? I'm curious if you have a way around this.
John
On Jul 20, 2010, at 9:35 AM, JohnS wrote:
On Tue, 2010-07-20 at 13:59 +0100, Karanbir Singh wrote:
On 07/20/2010 01:43 PM, Dag Wieers wrote:
changeset-between-updates sanity testing (in as much as what sanity means to the CentOS userbase => things haven't changed).
Is the CentOS buildsystem available somewhere for people to hack on?
It's effectively plague, which is public (although not going anywhere).
Are there any patches to it to use a 100% NFS build root, instead of having to use a local disk for the yum database locking? I'm curious if you have a way around this.
Yum or rpmdb locking? If rpmdb, the Berkeley DB can be mapped locally, or mapped to an RDONLY store on NFS that NEVER changes, so that rpmdb locks can be safely disabled.
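A rough sketch of the "mapped locally" option (all paths hypothetical): copy the Berkeley DB files off the read-only store and point rpm at the local copy via --dbpath:

    # Hypothetical paths: the rpmdb lives on a read-only NFS export
    # that never changes, so locking there is moot; take a local copy
    # and point rpm at it.
    cp -a /nfs/rpmdb-snapshot /var/tmp/rpmdb-local
    rpm -qa --dbpath /var/tmp/rpmdb-local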
Yum can fix its own issues.
73 de Jeff
On Tue, 2010-07-20 at 10:07 -0400, Jeff Johnson wrote:
On Jul 20, 2010, at 9:35 AM, JohnS wrote:
Yum or rpmdb locking? If rpmdb, the Berkeley DB can be mapped locally, or mapped to an RDONLY store on NFS that NEVER changes, so that rpmdb locks can be safely disabled.
Yum can fix its own issues.
73 de Jeff
It would be a yum locking problem when using plague or revisor. So how can yum obtain a lock over a "rw" NFS root? I was interested in how Karan was doing this.
    # Working config
    self.destination_directory = "/home/ethan/x86_64/BUILDER/JE2"   # NFS root
    self.working_directory = "/BUILDER"                             # local directory

    # Non-working: I would like this in the NFS root as well,
    # but I have the yum lock issue.
    self.working_directory = "/home/ethan/x86_64/BUILDER/Working/JE2"
John
On Jul 20, 2010, at 10:45 AM, JohnS wrote:
On Tue, 2010-07-20 at 10:07 -0400, Jeff Johnson wrote:
On Jul 20, 2010, at 9:35 AM, JohnS wrote:
Yum or rpmdb locking? If rpmdb, the Berkeley DB can be mapped locally, or mapped to an RDONLY store on NFS that NEVER changes, so that rpmdb locks can be safely disabled.
Yum can fix its own issues.
73 de Jeff
It would be a yum locking problem when using plague or revisor. So how can yum obtain a lock over a "rw" NFS root? I was interested in how Karan was doing this.
OK.
    # Working config
    self.destination_directory = "/home/ethan/x86_64/BUILDER/JE2"   # NFS root
    self.working_directory = "/BUILDER"                             # local directory

    # Non-working: I would like this in the NFS root as well,
    # but I have the yum lock issue.
    self.working_directory = "/home/ethan/x86_64/BUILDER/Working/JE2"
There's nothing in yum but fcntl locks, last I looked (perhaps a year ago, but there are so many incompatible versions around that YMMV).
Smells like you need NFS locking configured. Do you have NFS configured for locking? Likely the best verification is to use an NFS stats collector and confirm whether you see lockd RPC packets. tcpdump can be used in a pinch too.
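For example (server name hypothetical), one can check that the lock manager is registered on the server and then watch the wire:

    # Is the NFS lock manager (nlockmgr, i.e. lockd) registered?
    rpcinfo -p nfsserver | grep nlockmgr

    # Watch NFS traffic in a pinch; lockd itself uses a
    # portmapper-assigned port, NFS proper is on 2049.
    tcpdump -n host nfsserver and port 2049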
73 de Jeff
On Tue, 2010-07-20 at 10:50 -0400, Jeff Johnson wrote:
On Jul 20, 2010, at 10:45 AM, JohnS wrote:
It would be a yum locking problem when using plague or revisor. So how can yum obtain a lock over a "rw" NFS root? I was interested in how Karan was doing this.
OK.
    # Working config
    self.destination_directory = "/home/ethan/x86_64/BUILDER/JE2"   # NFS root
    self.working_directory = "/BUILDER"                             # local directory

    # Non-working: I would like this in the NFS root as well,
    # but I have the yum lock issue.
    self.working_directory = "/home/ethan/x86_64/BUILDER/Working/JE2"
There's nothing in yum but fcntl locks, last I looked (perhaps a year ago, but there are so many incompatible versions around that YMMV).
The exact problem is that yum can't get a lock on the metadata in an NFS root. I have browsed the yum list and seen the same problem reported, with no fix for it yet, nor any word on whether it can even be fixed.
Smells like you need NFS locking configured. Do you have NFS configured for locking? Likely the best verification is to use an NFS stats collector and confirm whether you see lockd RPC packets. tcpdump can be used in a pinch too.
I tried both with and without locking for the NFS root. I've seen reports that this was a UDP problem, but it persists whether UDP or TCP is used. I wanted plague to run on a fully NFS-hosted file root so multiple machines could have access at once to transfer the needed files (grid style). I suspect Karan is doing it on local disk, whereas I want it my way. Maybe a thread subject change is in order if he has any ideas, and thanks for yours also.
John
On Tue, 2010-07-20 at 11:37 -0400, JohnS wrote:
The exact problem is that yum can't get a lock on the metadata in an NFS root. I have browsed the yum list and seen the same problem reported, with no fix for it yet, nor any word on whether it can even be fixed.
You'd be best off asking on the yum or yum-devel lists and not on centos-devel.
thanks, -sv
On Tue, 2010-07-20 at 12:58 -0400, seth vidal wrote:
On Tue, 2010-07-20 at 11:37 -0400, JohnS wrote:
The exact problem is that yum can't get a lock on the metadata in an NFS root. I have browsed the yum list and seen the same problem reported, with no fix for it yet, nor any word on whether it can even be fixed.
You'd be best off asking on the yum or yum-devel lists and not on centos-devel.
thanks, -sv
Thank you.
John
On 07/20/2010 02:35 PM, JohnS wrote:
Are there any patches to it to use a 100% NFS build root, instead of having to use a local disk for the yum database locking? I'm curious if you have a way around this.
I don't, and I don't know anyone, using chroots hosted over NFS. That would be slightly counterproductive! I guess it depends on how many and what kind of packages you are building.
E.g. a couple of the CentOS builders run off tmpfs-hosted roots. No yum issues there.
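A minimal sketch of that arrangement (mount point and size hypothetical): the buildroot lives on a tmpfs mount that is created per build and simply discarded afterwards:

    # Hypothetical per-build tmpfs buildroot.
    mkdir -p /var/lib/buildroots/job42
    mount -t tmpfs -o size=8g,mode=0755 tmpfs /var/lib/buildroots/job42
    # ... populate the chroot and run the package build in it ...
    umount /var/lib/buildroots/job42   # the whole root vanishes with the mount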
- KB
On Wed, 2010-07-21 at 10:43 +0100, Karanbir Singh wrote:
On 07/20/2010 02:35 PM, JohnS wrote:
Are there any patches to it to use a 100% NFS build root, instead of having to use a local disk for the yum database locking? I'm curious if you have a way around this.
I don't, and I don't know anyone, using chroots hosted over NFS. That would be slightly counterproductive! I guess it depends on how many and what kind of packages you are building.
E.g. a couple of the CentOS builders run off tmpfs-hosted roots. No yum issues there.
- KB
Thank you very much for the reply.
John
On 07/20/2010 02:35 PM, JohnS wrote:
Are there any patches to it to use a 100% NFS build root, instead of having to use a local disk for the yum database locking? I'm curious if you have a way around this.
Karanbir Singh wrote:
I don't, and I don't know anyone, using chroots hosted over NFS. That would be slightly counterproductive! I guess it depends on how many and what kind of packages you are building.
E.g. a couple of the CentOS builders run off tmpfs-hosted roots. No yum issues there.
NFS locking, delay, and clock skew issues on the NFS servers vs. the build unit make building in an NFS mount a painful experience. Adding a chroot just makes what needs to be a robust process more fragile.
There is little percentage in NFS-based build filesystems unless it is one's only choice ... and with the way RAM prices have moved in the last few years, tmpfs is clearly the better choice.
-- Russ herrold
On Thu, 2010-07-22 at 15:30 -0400, R P Herrold wrote:
NFS locking, delay, and clock skew issues on the NFS servers vs. the build unit make building in an NFS mount a painful experience. Adding a chroot just makes what needs to be a robust process more fragile.
I agree in one way, but in another I do not. With a standard kernel I see timing problems. On the other hand, with a very different kernel, let's just say you can heartbeat to a tier-1 atomic clock. The standard CentOS pool uses stratum 2, which means the end-user host is stratum 3. I actually use that same kernel you've seen on a host with a stratum-2 clock. The killer is not the machine itself; it is the latency of the network. I can say that as of now there has been no timestamp problem yet (knock on wood). You know, it's like the stock market: latency loses all the time.
As for the total chroot, that's in the back of my mind, with nothing firm on it yet. I don't have good thoughts on that. I started a local-disk chroot to build a specific project over yonder to see if it would return repeatable results under CentOS. I think a bot picked that up somewhere.
There is little percentage in NFS-based build filesystems unless it is one's only choice ... and with the way RAM prices have moved in the last few years, tmpfs is clearly the better choice.
NFS, exported from my storage server, gives me multiple redundant data copies.
tmpfs scares me unless I have a machine capable of RAM RAID 1 + multi ECC, and I have only one at the moment. It would be a much better way; I can't argue that point.
John
On Thu, 22 Jul 2010, JohnS wrote:
tmpfs scares me unless I have a machine capable of RAM RAID 1 + multi ECC, and I have only one at the moment. It would be a much better way; I can't argue that point.
You are missing the point that building a distribution, or even just a few packages, is going to produce (and had better produce) a binary-idempotent result. If a machine goes down, wipe the failed partial build and do it again.
You just don't need multiply-redundant backups and six-nines hardware, unless you need to heat your office.
-- Russ herrold
On Fri, 2010-07-23 at 00:25 -0400, R P Herrold wrote:
On Thu, 22 Jul 2010, JohnS wrote:
tmpfs scares me unless I have a machine capable of RAM RAID 1 + multi ECC, and I have only one at the moment. It would be a much better way; I can't argue that point.
You are missing the point that building a distribution, or even just a few packages, is going to produce (and had better produce) a binary-idempotent result. If a machine goes down, wipe the failed partial build and do it again.
I'm quite sure you know how it feels to have to start over again. Time to fully script the process. I'll bang it around this weekend and take it for a run. I'll use /foo/bar/tmpfs for everything except the build output directory and go from there. I'll take your way and see how it goes.
You just don't need multiply-redundant backups and six-nines hardware, unless you need to heat your office.
No, but I need a backup or two, and maybe some notes taken here and there. You're correct about heating the office/computer room. It was 108°F today.
John
On 07/23/2010 06:21 AM, JohnS wrote:
tmpfs scares me unless I have a machine capable of RAM RAID 1 + multi ECC, and I have only one at the moment. It would be a much better way; I can't argue that point.
It's only the buildroots that are hosted in tmpfs, not the scripts or the process around them. The buildroots are rebuilt from scratch on each package build, so if the machine reboots you only lose the currently-being-built package, nothing else around it.
- KB
On Fri, 2010-07-23 at 13:48 +0100, Karanbir Singh wrote:
On 07/23/2010 06:21 AM, JohnS wrote:
tmpfs scares me unless I have a machine capable of RAM RAID 1 + multi ECC, and I have only one at the moment. It would be a much better way; I can't argue that point.
It's only the buildroots that are hosted in tmpfs, not the scripts or the process around them. The buildroots are rebuilt from scratch on each package build, so if the machine reboots you only lose the currently-being-built package, nothing else around it.
- KB
Thank you, Karan.
John
On Jul 20, 2010, at 8:38 AM, Karanbir Singh wrote:
Hi,
On 07/20/2010 10:31 AM, Andrey Ponomarenko wrote:
Examples of integration: http://packages.debian.org/experimental/apt http://rpm5.org/ http://linuxtesting.org/upstream-tracker/
looks good.
Our team is now working on integrating the tools into the OBS (openSUSE Build Service) at the package (RPM) build stage.
That sounds interesting, and is exactly what I mentioned in my previous email that I'd like to consider for the CentOS buildsystem as well. It would quite nicely add changeset-between-updates sanity testing (in as much as what sanity means to the CentOS userbase => things haven't changed).
Adding to CentOS post-build tests isn't the right use case for upstream-tracker, IMHO.
The information in the upstream tracker is based on "upstream". There are any number of downstream modifications, such as ECC crypto being ripped out everywhere @redhat.com, and the fact that there are no "reproducible builds" as long as AutoFu exists.
(Aside) But there's no reason why additional interfaces cannot be analyzed during rpmbuild, or post-build for the RPM-upgrade challenged (it's really no harder than invoking an additional script at the end of %install).
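A minimal sketch of such a hook (the symbol dump and file names are hypothetical, not any tool's actual integration): a few lines at the end of %install that record the dynamic symbols of the freshly built libraries for a later ABI diff:

    %install
    # ... the normal install steps ...

    # Hypothetical post-build hook: record the exported dynamic symbols
    # of the just-built shared libraries for later ABI comparison.
    find %{buildroot} -name '*.so*' -type f | while read lib; do
        nm -D --defined-only "$lib"
    done > %{_builddir}/%{name}-exported-symbols.txt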
We don't do any testing of that nature at the moment; we rely on upstream having done such heavy lifting, and we just aim to make sure we are as close to their builds as possible.
You should be careful to point out that "upstream" for CentOS is @redhat.com. That's a different meaning than is commonly used for software distribution.
What I like about upstream-tracker is the deep vertical history. It's quite difficult, as a developer, to assess portability and determine the "minimal necessary version". One ends up getting blasted back to ancient releases with poorly understood de facto mixtures of "stuff". The information in upstream-tracker is easily examined and reasonable (from personal experience maintaining 2 of the projects being tracked, upstream-tracker is identifying issues reliably; sure, there are both false positives and false negatives, but that can't be helped).
The equivalent developer decision for distros is choosing when to, say, upgrade to openssl-1.0.0, and itemizing a check-list of known issues that need to be fixed moving forward. ATM, distro decisions are largely ad hoc: the old version is moved to compat-foo (which largely guarantees no ABI breakage), but the issues in moving to a newer version of a package (and ensuring that all other packages have been updated) are handled by largely implicit rawhide-model processes, which the details in the vertical history in upstream-tracker can help identify.
73 de Jeff