Hi, I have a bundle of armv7 machines here and quite some interest in running CentOS7 on them at a point not too far into the future.
So, given that we have more hardware than sysadmin-time available, but can invest some of our time to this,
What can we do to assist / help?
Is anyone helped by access to hardware? If so, we can set up the machines to tftp boot and give development access to them over ssh.
Are you more helped by setting up automated build systems? If so, how do I get started? (A deployment guide, documentation, what has to be installed, and where to start would be welcome.)
Are things at the point where it can already run and work? If so, where do I find binaries / images to use and test with?
Questions, questions; looking forward to hearing from you.
//D.S.
Hi,
On 07/01/2014 04:22 PM, D.S. Ljungmark wrote:
Hi, I have a bundle of armv7 machines here and quite some interest in running CentOS7 on them at a point not too far into the future.
nice!
So, given that we have more hardware than sysadmin-time available, but can invest some of our time to this,
What can we do to assist / help?
get a fedora19 armv7 repo up locally, write a mock config to point at it, and then run that mock builder against the centos7 srpms.
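As a rough illustration only (not a config from this thread; the config name, local repo URL and group name below are placeholders), such a mock config might look something like:

cat > /etc/mock/c7-armv7hl-bootstrap.cfg <<'EOF'
# bootstrap chroot for rebuilding the CentOS 7 SRPMs on armv7,
# populated from a local Fedora 19 armhfp mirror
config_opts['root'] = 'c7-armv7hl-bootstrap'
config_opts['target_arch'] = 'armv7hl'
config_opts['legal_host_arches'] = ('armv7l', 'armv7hl')
config_opts['chroot_setup_cmd'] = 'install @buildsys-build'
config_opts['dist'] = 'el7'
config_opts['yum.conf'] = """
[main]
cachedir=/var/cache/yum
keepcache=1
gpgcheck=0

[local-f19-armhfp]
name=local Fedora 19 armhfp mirror
baseurl=http://repo.example.lan/fedora/19/armhfp/os/
enabled=1
"""
EOF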
Is anyone helped by access to hardware? If so, we can set up the machines to tftp boot and give development access to them over ssh.
it might be most productive to have you just run the mock setup in a loop - if you have 20-odd machines running, the entire loop should finish fairly quickly; just iteratively keep running till you get a complete cycle with nothing building. Those build logs will then be interesting to see somewhere, and the interim ones might be good to have archived off somewhere.
what we tend to do is have every iteration run its output into its own directory, e.g. c6.99.01 might be a good target number to use for the first time, then c6.99.02 for the second, and so on.
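A minimal sketch of that kind of loop, purely illustrative (the config name matches the sample above; the SRPM and result paths are made up):

iter=c6.99.01
mkdir -p ~/results/$iter
for srpm in ~/srpms/*.src.rpm; do
    name=$(basename "$srpm" .src.rpm)
    mock -r c7-armv7hl-bootstrap --resultdir ~/results/$iter/$name --rebuild "$srpm"
done
# next cycle: bump iter to c6.99.02, point the local repo at the RPMs
# that came out of c6.99.01, and run the loop again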
if it helps, I can get some mock configs online for you to bootstrap from?
Are you more helped by setting up automated build systems? If so, how do I get started? (A deployment guide, documentation, what has to be installed, and where to start would be welcome.)
we'd need to do that once we know the code is going to build and work; till then, just mock by hand is most useful.
Are things at the point where it can already run and work? If so, where do I find binaries / images to use and test with?
there are a few build loops already run, but feel free to start again - that way you have the complete picture, locally. - KB
Will Fedora 20 work, or is it F19 or bust? ( the reason I'm asking is that we have a functional F20 image for them. )
mock configs would be excellent. I'll have to postpone getting it running for a week or so due to the aforementioned missing sysadmin-time.
Regards, D.S.
On 07/02/2014 09:07 PM, D.S. Ljungmark wrote:
Will Fedora 20 work, or is it F19 or bust? ( the reason I'm asking is that we have a functional F20 image for them. )
f19 or bust... the f20 codebase is too new for us to bootstrap from
remember you don't need to be running f19 on the machines, you just need mock to be hitting an f19 repo so that it is used inside the buildroots. I don't think there is any harm in doing so on an f20-installed machine
e.g. all our c7 x86_64 builds are run on a CentOS 6 machine
mock configs would be excellent. I'll have to postpone getting it running for a week or so due to the aforementioned missing sysadmin-time.
that gives me a bit of time as well then :)
Okay, I'll have to read up on how mock works; I wasn't certain about the difference between a mock f19 target/repo and the running OS.
Hoping I can get a limited / minimal CentOS 7 up into a bootable state for them now (well, u-boot and the kernel might have to come from elsewhere, but that's a minor thing), as I'm mostly unsatisfied with the current distributions.
Poke me if I disappear, I have a tendency to get both distracted and overworked at the same time.
( Now, it's back to building debian on them. ;)
//D.S.
On 07/02/2014 09:15 PM, D.S. Ljungmark wrote:
( Now, it's back to building debian on them. ;)
we must save this man ( or woman, as the case might be )
On 07/02/2014 11:27 PM, Karanbir Singh wrote:
On 07/02/2014 09:15 PM, D.S. Ljungmark wrote:
( Now, it's back to building debian on them.;)
we must save this man ( or woman, as the case might be )
Faute de mieux, as the French say, I'm running Linaro on my machines. For the Fedora image I could boot (after a long hassle, following Hans's instructions step by step), it would have taken way too much time to get graphics acceleration properly packaged and functional. For Linaro it was dd if=image.iso of=sdcard && profit.
On 2014-07-03 03:01, Manuel Wolfshant wrote:
Faute de mieux, as the French say, I'm running Linaro on my machines. For the Fedora image I could boot (after a long hassle, following Hans's instructions step by step), it would have taken way too much time to get graphics acceleration properly packaged and functional. For Linaro it was dd if=image.iso of=sdcard && profit.
I usually hand-pick the files I need from a distro that works and then apply them to the distro I want to run. For example, RedSleeve EL6 doesn't come with a kernel because there are too many SoCs to sensibly support - the official recommendation is to use whatever kernel/modules/firmware your device came with when using the EL6 userspace.
I guess the point I am getting to is that it's not an either/or case, you can have the best of both. For example, I use a Samsung Chromebook with RedSleeve and the standard ChromeOS kernel.
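For illustration, that hand-pick boils down to something like the following sketch (device names, mount points and the exact set of files to copy are assumptions and vary per board):

mkdir -p /mnt/vendor /mnt/el
mount /dev/mmcblk0p2 /mnt/vendor    # rootfs the board shipped with
mount /dev/sda2 /mnt/el             # the EL userspace being assembled
cp -a /mnt/vendor/boot/. /mnt/el/boot/
cp -a /mnt/vendor/lib/modules/. /mnt/el/lib/modules/
cp -a /mnt/vendor/lib/firmware/. /mnt/el/lib/firmware/
umount /mnt/vendor /mnt/el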
Gordan
F19 is the closest to EL7, so it will make your life much easier for the first pass. The more you diverge, the fiddlier the first stage gets, and the first stage is always the fiddliest. Stick with F19 if at all possible.
I'm looking at bootstrapping the first pass on F18 for soft-float, and I'm expecting it to be much less smooth.
I've been through similar with the EL6 build, and in retrospect I would have saved myself a fair amount of time if I had built the first pass on F12 rather than F13, and those were very similar releases.
Gordan
Thanks for the heads-up on that.
So, plan of action would be:
* Find / prepare a F19 bootable image.
* Install mock (git or are the packages ok?)
* Build a mock F19 starter, test-compile something traditional (bash?)
* Duplicate this environment to the various machines
* Set up NFS for the compile target
* Wrap some scripts around pssh to do parallel builds
-- Am I missing something major here? //D.S.
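A rough illustration of the pssh step in the plan above (the hostfile, user and script path are placeholders, not anything from the thread):

# builders.txt lists one board per line: builder01 ... builder20
pssh -h builders.txt -l builduser -p 20 -t 0 -i \
    '~/bin/build-next-package.sh >> ~/build.log 2>&1'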
On 2014-07-03 11:00, D.S. Ljungmark wrote:
Thanks for the heads-up on that.
So, plan of action would be:
- Find / prepare a F19 bootable image.
Technically, as Karanbir said, you don't have to run F19 on the build host, just use the F19 repository for mock builds. OTOH, for the first pass you may find it a lot faster to install F19 (install _all_ packages), and instead of mock, just use straight rpmbuild to build the first pass.
This will save you a tonne of time because the chroot won't have to be built every time (it takes time even if it's tarred and cached rather than yum installed each time).
Expect spurious failures if you do that - in EL6 I noticed there are packages that fail to build if other packages that aren't in the dependency list are installed. This is because the package's configure finds the extra packages and tries to build against them, which fails (or worse, produces a broken binary). If you remove the extra package, the build will succeed.
But for the first pass it should be OK because you are only going to use what comes out of it to build the second pass.
Then you rebuild it all again, just to make sure, and you should be good for an alpha test, and start working on genuine build failures, erroneous arch restrictions, etc. It is this stage that takes hundreds of man-hours. Everything else is mostly CPU time.
For building with multiple machines, I use a simple script on all the builders that places a lock file on uncached NFS when a package is picked for build, and if a builder sees there's a lock file there, goes on to the next package in the list. It's trivially simple and works very well. It would be nice to have something that resolves all dependencies for building and tries to build the packages in the dependency tree order, but that's mostly useful for bootstrapping from scratch, and we are cheating by bootstrapping on F19, so it isn't as big a problem.
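Not Gordan's actual script, but a minimal sketch of that lock-file approach, assuming the SRPMs, locks and logs live on a shared NFS mount at /mnt/build and already exist there (using mkdir as the atomic lock, a detail not specified in the thread):

#!/bin/bash
# run one copy of this on every builder; the shared NFS mount
# carries the package list, the locks and the build logs
SRPMS=/mnt/build/srpms
LOCKS=/mnt/build/locks
LOGS=/mnt/build/logs

for srpm in "$SRPMS"/*.src.rpm; do
    name=$(basename "$srpm" .src.rpm)
    # mkdir is atomic, so whoever creates the lock dir builds the package;
    # everyone else just moves on to the next one in the list
    mkdir "$LOCKS/$name" 2>/dev/null || continue
    rpmbuild --rebuild "$srpm" > "$LOGS/$name.log" 2>&1 \
        && touch "$LOCKS/$name/done"
done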
- Install mock (git or are the packages ok?)
See above - you can save a lot of time for the first build pass by not using mock. Install all Fedora packages, and then simply use:
rpmbuild --rebuild $package.rpm
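A hedged sketch of that first pass (whether installing literally everything goes through cleanly in one shot is not guaranteed; some conflicting packages may need excluding):

# fatten the build host with everything the F19 repo ships
yum -y --skip-broken install '*'

# then brute-force the rebuilds; binary RPMs land under ~/rpmbuild/RPMS/
rpmbuild --rebuild ~/srpms/bash-*.src.rpm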
- build a mock F19 starter, test compile something traditional (bash?)
- Duplicate this environment to the various machines
- set up nfs for compile target
- wrap some scripts around pssh to do parallel builds
-- Am I missing something major here?
That's pretty much it. I am happy to share the scripts I use. If I don't post them by the weekend ping me to remind me. I can't get to them right now because my build farm is behind a firewall I don't have a hole in.
Gordan
Excellent information, I'd love the scripts, and post-weekend sounds as if it'd fit well with my schedule.
I'll see about taking the time to document the steps as well, so we might get a wiki started on how to do this; it seems there are a few people who have an interest, and documenting at least the basics might be good.
Regards, D.S.
On 2014-07-03 11:19, D.S. Ljungmark wrote:
Excellent information, I'd love the scripts, and post-weekend sounds as if it'd fit well with my schedule.
I'll see about taking the time to document the steps as well, so we might get a wiki started on how to do this; it seems there are a few people who have an interest, and documenting at least the basics might be good.
I can't help but think that maybe the best way to work on this would be much more in the open. At the moment, Karanbir & Co. are working on their own builds, I'm working on mine, and you're planning to work on yours, and largely duplicating the work.
In the interest of sharing and efficiency, I think I'll kick off a rebuild from scratch this weekend, with the built packages going straight to a publicly accessible download point, with continuously updated repository metadata.
Gordan
On 07/03/2014 05:37 AM, Gordan Bobic wrote:
I can't help but think that maybe the best way to work on this would be much more in the open. At the moment, Karanbir & Co. are working on their own builds, I'm working on mine, and you're planning to work on yours, and largely duplicating the work.
In the interest of sharing and efficiency, I think I'll kick off a rebuild from scratch this weekend, with the built packages going straight to a publicly accessible download point, with continuously updated repository metadata.
Gordan,
Once we get the x86_64 version of CentOS 7 out the door ... hopefully in the next week ... we (CentOS) will start concentrating more on ARM32 as a group. We will set everything up to start building to buildlogs.centos.org just like we have for the x86_64 and i686 arches for c7.
At that point, we can all work and collaborate to do this together.
In fact, the goal is to have an "Alternative ARCHes" Special Interest Group so that we do not have to duplicate anything and everything goes to buildlogs.centos.org and everyone can collaborate and submit patches to the list ... and we will roll them in.
If you look at git.centos.org in the centos-git-common repo, you will see that most of the updates have come from the centos-devel mailing list from people who are not on the CentOS team ...
On 07/03/2014 05:19 AM, D.S. Ljungmark wrote:
Excellent information, I'd love the scripts, and post-weekend sounds as if it'd fit well with my schedule.
I'll see about taking the time to document the steps as well, so we might get a wiki started on how to do this; it seems there are a few people who have an interest, and documenting at least the basics might be good.
Regards, D.S.
I would be happy to set up mock for you if you want ... or even just put what I have been using for mock configs here on this list for a test build.
I personally would rather produce the RPMs via mock, as that prevents pulling in spurious links for packages because the buildroot is too fat (i.e., in RHEL, package-y.x.z does not link against package-a.b.c because it is not a BuildRequires and not installed in the mock buildroot, but if run with package-a.b.c in the buildroot, the configure process checks for and links against it).
All mock does is build a minimal clean buildroot for each package, where only the specific requirements for building are in the buildroot, so that each package gets only what it needs to build and builds are more consistent.
On 2014-07-03 11:42, Johnny Hughes wrote:
I would be happy to set up mock for you if you want ... or even just put what I have been using for mock configs here on this list for a test build.
I personally would rather produce the RPMs via mock, as that prevents pulling in spurious links for packages because the buildroot is too fat (i.e., in RHEL, package-y.x.z does not link against package-a.b.c because it is not a BuildRequires and not installed in the mock buildroot, but if run with package-a.b.c in the buildroot, the configure process checks for and links against it).
Indeed, I touched upon that in my previous post, but that doesn't really matter too much for the first build pass, as you are going to rebuild everything again anyway. And you can pick off the few build failures arising from too much junk in the build environment before the second pass via mock rebuilds.
All mock does is build a minimal clean buildroot for each package, where only the specific requirements for building are in the buildroot, so that each package gets only what it needs to build and builds are more consistent.
Sure, but the time it takes to rm -rf the build root and then untar a cached build root copy is a non-trivial fraction of the build time for a lot of the packages. It is certainly not trivial when you multiply it by around 2,000 for the number of packages you are going to need to build.
From experience, anything you can do to get past the first stage build faster is usually a good idea if hardware is limited - and on ARM it usually is. Even on something like the Arndale Octa or the new Chromebook which have 3-4GB of RAM and 8 cores, building takes a while. It's not like throwing a full distro rebuild at a 24-thread 96GB of RAM Xeon monster and being able to expect it to have completed by tomorrow morning.
On 03/07/14 12:52, Gordan Bobic wrote:
Sure, but the time it takes to rm -rf the build root and then untar a cached build root copy is a non-trivial fraction of the build time for a lot of the packages. It is certainly not trivial when you multiply it by around 2,000 for the number of packages you are going to need to build.
From experience, anything you can do to get past the first stage build faster is usually a good idea if hardware is limited - and on ARM it usually is. Even on something like the Arndale Octa or the new Chromebook which have 3-4GB of RAM and 8 cores, building takes a while. It's not like throwing a full distro rebuild at a 24-thread 96GB of RAM Xeon monster and being able to expect it to have completed by tomorrow morning.
Oh trust me on that one, I know how this is, and the boards we have aren't monsters. (but we have a fair amount of them;)
Ex-Gentoo dev here, I know -everything- about obnoxious build times and how amazingly painful they can be. I suspect this will be a lot like going back to an old Thunderbird CPU with PATA drives.
Spreading it across many different machines might help. Not sure if there's a way to hook btrfs or LVM snapshots into the build/restore process (I don't understand it enough yet to test), but those certainly won't work over network filesystems.
//D.S
On 2014-07-03 14:25, D.S. Ljungmark wrote:
Oh trust me on that one, I know how this is, and the boards we have aren't monsters. (but we have a fair amount of them;)
Ex-Gentoo dev here, I know -everything- about obnoxious build times and how amazingly painful they can be. I suspect this will be a lot like going back to an old Thunderbird CPU with PATA drives.
It's not _too_ bad. The biggest package in the distro is LibreOffice, and that takes about 24 hours to build on an Exynos A15 Chromebook. It would build in half that time, but the build process goes out of its way to fork only one thread for some reason.
Spreading it across many different machines might help. Not sure if there's a way to hook btrfs or LVM snapshots into the build/restore process (I don't understand it enough yet to test), but those certainly won't work over network filesystems.
Indeed, and not working over NFS means you can only use builders with local SATA or, at a push, USB->SATA disks. Having said that, there are packages that fail to build on NFS due to test failures.
4GB of RAM is the biggest limitation. What I tend to do is attach a decent, large SSD (LibreOffice needs nearly 40GB to build!!!) to each builder, set up tons of swap (say, 8GB per core), and run as many build threads as there are cores in the machine. That tends to yield optimal hardware saturation even if some packages insist on building single-threaded.
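Roughly what that swap setup looks like on a 4-core board with the SSD mounted at /mnt/ssd (the mount point and sizes are assumptions, not from the thread):

# 8GB of swap per core, four cores
for i in 1 2 3 4; do
    dd if=/dev/zero of=/mnt/ssd/swap$i bs=1M count=8192
    chmod 600 /mnt/ssd/swap$i
    mkswap /mnt/ssd/swap$i
    swapon /mnt/ssd/swap$i
done

# let rpmbuild use as many make jobs as there are cores
echo '%_smp_mflags -j4' >> ~/.rpmmacros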
On 03/07/14 15:33, Gordan Bobic wrote:
Indeed, and not working over NFS means you can only use builders with local SATA or, at a push, USB->SATA disks. Having said that, there are packages that fail to build on NFS due to test failures.
4GB of RAM is the biggest limitation. What I tend to do is attach a decent, large SSD (LibreOffice needs nearly 40GB to build!!!) to each builder, set up tons of swap (say, 8GB per core), and run as many build threads as there are cores in the machine. That tends to yield optimal hardware saturation even if some packages insist on building single-threaded.
That might be an issue to check; I'll probably end up swapping anyhow, as the boards aren't gloriously drowning in RAM.
Ach well, more stuff to look at and keep in mind; good knowledge to have though! (Next up, a performance test of the boards that swap over NFS vs. those that swap over USB->SATA SSD. Based on previous numbers, I guess swap over NFS will actually be faster. That's for Monday though, as are so many other things.)
//D.S.
On 2014-07-03 14:41, D.S. Ljungmark wrote:
On 03/07/14 15:33, Gordan Bobic wrote:
On 2014-07-03 14:25, D.S. Ljungmark wrote:
On 03/07/14 12:52, Gordan Bobic wrote:
On 2014-07-03 11:42, Johnny Hughes wrote:
On 07/03/2014 05:19 AM, D.S. Ljungmark wrote:
Excellent information, I'd love the scripts, and post-weekend sounds as if it'd fit well with my schedule.
I'll see about taking the time to document steps as well so we might get a wiki started on how to do this, seems as if there are a few people who have interest, and at least documenting the basics might be good.
Regards, D.S.
On 03/07/14 12:14, Gordan Bobic wrote: > On 2014-07-03 11:00, D.S. Ljungmark wrote: >> Thanks for the head's up on that. >> >> So, plan of action would be: >> * Find / prepare a F19 bootable image. > Technically, as Karanbir said, you don't have to run F19 > on the build host, just use the F19 respository for mock > builds. OTOH, for first pass you may find it a lot faster > to install F19 (install _all_ packages), and instead of > mock, use just straight rpm to build the first pass. > > This will save you a tonne of time because the chroot won't > have to be built every time (it takes time even if it's > tarred and cached rather than yum installed each time). > > Expect spurious failures if you do that - in EL6 I noticed > there are packages that fail to build if other packages > that aren't in the dependency list are installed. This > is because the package's configure finds the extra > packages and tries to build against them, which fails > (or worse, produces a broken binary). If you remove the > extra package, the build will succeed. > > But for the first pass it should be OK because you > are only going to use what comes out of it to build > the second pass. > > Then you rebuild it all again, just to make sure, > and you should be good for an alpha test, and start > working on genuine build failures, erroneous arch > restrictions, etc. It is this stage that takes > hundreds of man-hours. Everything else is mostly CPU > time. > > For building with multiple machines, I use a simple > script on all the builders that places a lock file > on uncached NFS when a package is picked for build, > and if a builder sees there's a lock file there, > goes on to the next package in the list. It's > trivially simple and works very well. It would be > nice to have something that resolves all dependencies > for building and tries to build the packages in the > dependency tree order, but that's mostly useful for > bootstrapping from scratch, and we are cheating by > bootstrapping on F19, so it isn't as big a problem. > >> * Install mock (git or are the packages ok?) > See above - you can save a lot of time for the first > build pass by not using mock. Install all Fedora > packages, and then simply use: > > rpmbuild --rebuild $package.rpm > >> * build a mock F19 starter, test compile something traditional >> (bash?) >> * Duplicate this environment to the various machines >> * set up nfs for compile target >> * wrap some scripts around pssh to do parallel builds >> >> -- Am I missing something major here? > That's pretty much it. I am happy to share the scripts > I use. If I don't post them by the weekend ping me > to remind me. I can't get to them right now because my > build farm is behind I firewall I don't have a hole on.
I would be happy to setup mock for you if you want ... or even just put what I have been using for mock configs here on this list for a test build.
I personally would rather produce the RPMs via mock as that prevents pulling in spurious links for packages because the buildroot is too fat (ie, in RHEL, package-y.x.z does not link against package-a.b.c because it is not a BuildRequire and not installed in the mock build root .. but if run with the package-a.b.c in the buildroot, the configure process checks for and links against it.
Indeed, I touched upon that in my previous post, but that doesn't really matter too much for the first build pass, as you are going to rebuild everything again anyway. And you can pick off the few build failures arising from too much junk in the build environment before the second pass via mock rebuilds.
All mock does is build a minimum clean build root for each package where only the specific requires for building are in the build root so that each package gets only what it needs to build and builds are more consistently.
Sure, but the time it takes to rm -rf the build root and then untar a cached build root copy is a non-trivial fraction of the build time for a lot of the packages. It is certainly not trivial when you multiply it by around 2,000 for the number of packages you are going to need to build.
From experience, anything you can do to get past the first stage build faster is usually a good idea if hardware is limited - and on ARM it usually is. Even on something like the Arndale Octa or the new Chromebook which have 3-4GB of RAM and 8 cores, building takes a while. It's not like throwing a full distro rebuild at a 24-thread 96GB of RAM Xeon monster and being able to expect it to have completed by tomorrow morning.
Oh trust me on that one, I know how this is, and the boards we have aren't monsters. (but we have a fair amount of them;)
Ex Gentoo dev here, I know -everything- about obnoxious build times and how amazingly painful it can be. I suspect this will be a lot like going back to an old thunderbird CPU with PATA drives.
It's not _too_ bad. The biggest package in the distro is LibreOffice, and that takes about 24 hours to build on an Exynos A15 Chromebook. It would build in half that time, but the build process goes out of it's way to only fork one thread for some reason.
Crossing it off onto many different machines might help. Not sure if there's a way to hook btrfs or lvm snapshots into the build/restore process ( I don't understand it enough yet to test ) but those certainly won't work over network filesystems.
Indeed, and not working over NFS means you can only use builders with local SATA or at a push USB->SATA disks. Having said that, there are packages that fail to build on NFS due to test failures.
4GB of RAM is the biggest limitation. What I tend to do is attach a decent, large SSD (LibreOffice needs nearly 40GB to build!!!) to each builder, set up tons of swap (say, 8GB per core), and run as many build threads as there are cores in the machine. That tends to yield optimal hardware saturation even if some packages insist on building single-threaded.
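As a concrete illustration of the "tons of swap, one build per core" setup - sizes, paths and the worker script name here are examples only, not a recommendation for any particular board:

# example: 4 cores, 8GB of swap per core, SSD mounted at /mnt/ssd
for i in 1 2 3 4; do
    dd if=/dev/zero of=/mnt/ssd/swap$i bs=1M count=8192
    chmod 600 /mnt/ssd/swap$i
    mkswap /mnt/ssd/swap$i
    swapon /mnt/ssd/swap$i
done
# then start one builder process per core, each pulling the next unclaimed
# package (build-worker.sh is hypothetical - e.g. the lock-file loop sketched earlier)
for i in $(seq "$(nproc)"); do ./build-worker.sh & done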
That might be an issue to check; I'll probably end up swapping anyhow, as the boards aren't exactly drowning in RAM.
Ach well, more stuff to look at and keep in mind - good knowledge to have, though! (Next up: a performance test of boards that swap over NFS vs. those that swap over a USB->SATA SSD. Based on previous numbers, I'd guess swap over NFS will actually be faster. That's for Monday though, like so many other things.)
Does swapping to a file on NFS even work reliably any more? I don't think it does, unless something changed recently. It certainly didn't with the kernels I used (which are, granted, quite old), and neither did swapping to any kind of network-attached block device: parts of the networking stack can end up getting swapped out, and the machine locks up because it needs the networking stack to reach the block device in order to swap them back in.
Either way, you are likely to be sufficiently CPU constrained that swapping to USB->SATA->SSD should be good enough.
Other than my posts, this list has been quiet since Jul 3rd. Is there any activity on putting together the builds that were mentioned that day?
On 07/03/2014 09:33 AM, Gordan Bobic wrote:
4GB of RAM is the biggest limitation. What I tend to do is attach a decent, large SSD (LibreOffice needs nearly 40GB to build!!!) to each builder, set up tons of swap (say, 8GB per core), and run as many build threads as there are cores in the machine. That tends to yield optimal hardware saturation even if some packages insist on building single-threaded.
I moved all of the July posts into Thunderbird so I could read them better. And it seems that it might be beyond my minimal abilities.
My Cubieboard 2 is an Allwinner A20 dual-core with 1GB of memory. I have a 16GB SD card for the OS for now. It does have a real SATA2 interface to put a notebook SATA drive on. I have a few 320GB drives I can use.
The F19 build uses the Sunxi 3.4 kernel.
I will attempt to do a 'yum update --exclude=kern*' and I do have a few things to install (eg tigervnc-server).
Not much has happened on my end. Lock setup and running, but as I said earlier, lacking time. On 30 Jul 2014 03:41, "Robert Moskowitz" rgm@htt-consult.com wrote:
Other than my posts, this list has been quiet since Jul 3rd. Is there any activity on putting together the builds that were mentioned that day?
Same here - real life and this thing called a "day job" have a knack for getting in the way of doing cool and interesting stuff like this.
On 2014-07-30 08:58, D.S. Ljungmark wrote:
Not much has happened on my end. Lock setup and running, but as I said earlier, lacking time. On 30 Jul 2014 03:41, "Robert Moskowitz" rgm@htt-consult.com wrote:
Other than my posts, this list has been quiet since Jul 3rd. Is there any activity on putting together the builds that were mentioned that day?
If I better understood the build process, I would spend the $100 for another Cubie if needed. Going to have to get another one for the server anyway. Thing is will the C2 with 1GB memory be enough, or the Ctruck with 2GB memory? I have 2 production Intel servers that I would be interested in replacing. First my DNS server, onlo.htt-consult.com. All it runs is DNS; I would like to get DNSSEC working at some point. z9n9z is my mail server. I built a C6 replacement for it, but still have not rolled it out. Minimally I would do a C7 build on the replacement hardware to check out all of the components before trying this on arm.
Finally, my Win server is ClearOS. Would be nice to bring that into the fold (at least ClearOS was closer to my target than AMAHI was). Oh, and Medon is a test/web server. So actually there are 4 systems I would like to move over to arm. At that point, I would just order 5 systems direct and save a few bucks. But I have to know it will work. Year-end is an OK target date.
On 07/30/2014 04:42 AM, Gordan Bobic wrote:
Same here - real life and this thing called a "day job" have a knack for getting in the way of doing cool and interesting stuff like this.
On 2014-07-30 08:58, D.S. Ljungmark wrote:
Not much has happened on my end. Lock setup and running, but as I said earlier, lacking time. On 30 Jul 2014 03:41, "Robert Moskowitz" rgm@htt-consult.com wrote:
Other than my posts, this list has been quiet since Jul 3rd. Is there any activity on putting together the builds that were mentioned that day?
On 2014-07-30 14:34, Robert Moskowitz wrote:
If I better understood the build process, I would spend the $100 for another Cubie if needed. Going to have to get another one for the server anyway. Thing is will the C2 with 1GB memory be enough, or the Ctruck with 2GB memory?
For what purpose/workload? If it is for building packages, when I built RedSleeve I used 512MB Sheeva/Guru/Dream Plug machines. As long as you attach plenty of swap on reasonable media (don't use USB sticks or SD cards, their random-write performance is _terrible_) it'll be fine. More RAM will help speed things up for sure but it isn't necessary.
If you are asking for some kind of a server workload, it depends on the workload. For example, redsleeve.org runs in a 2 GHz armv5tel Marvell Kirkwood with 1GB of RAM (QNAP TS-421).
For a heavier workload you might want to look at something like the Cornfed Systems' Conserver (quad core ARM, 4GB of RAM, mini ITX form factor).
I have 2 production Intel servers that I would be interested in replacing. First my DNS server, onlo.htt-consult.com. All it runs is DNS; I would like to get DNSSEC working at some point. z9n9z is my mail server. I built a C6 replacement for it, but still have not rolled it out. Minimally I would do a C7 build on the replacement hardware to check out all of the components before trying this on arm.
I wouldn't rush headlong into EL7 in production quite yet. Let the bleeding edge adopters sort out the most obvious issues at least on x86 first.
Gordan
On 07/30/2014 10:05 AM, Gordan Bobic wrote:
On 2014-07-30 14:34, Robert Moskowitz wrote:
If I better understood the build process, I would spend the $100 for another Cubie if needed. Going to have to get another one for the server anyway. Thing is will the C2 with 1GB memory be enough, or the Ctruck with 2GB memory?
For what purpose/workload? If it is for building packages, when I built RedSleeve I used 512MB Sheeva/Guru/Dream Plug machines. As long as you attach plenty of swap on reasonable media (don't use USB sticks or SD cards, their random-write performance is _terrible_) it'll be fine. More RAM will help speed things up for sure but it isn't necessary.
How do you 'attach plenty of swap'? Attaching a SATA drive is easy for me and I can format it any way I need, but how do I point swap to it?
If you are asking for some kind of a server workload, it depends on the workload. For example, redsleeve.org runs in a 2 GHz armv5tel Marvell Kirkwood with 1GB of RAM (QNAP TS-421).
Thought so. My C2 would be more than enough, except maybe the mail server. But even it is not running all out.
For a heavier workload you might want to look at something like the Cornfed Systems' Conserver (quad core ARM, 4GB of RAM, mini ITX form factor).
I am waiting to see what the Allwinner A80 will be like!
I have 2 production Intel servers that I would be interested in replacing. First my DNS server, onlo.htt-consult.com. All it runs is DNS; I would like to get DNSSEC working at some point. z9n9z is my mail server. I built a C6 replacement for it, but still have not rolled it out. Minimally I would do a C7 build on the replacement hardware to check out all of the components before trying this on arm.
I wouldn't rush headlong into EL7 in production quite yet. Let the bleeding edge adopters sort out the most obvious issues at least on x86 first.
Oh, no plan for sure! Perhaps in late November based on Holidays and conferences. I mean this month is kind of open, but more for some basic testing. And some of my servers are still i686 so I have to wait on that too.
And I would rather spend the money replacing them with armv7 boxes that save power than with x86 boxes that eat up more power!
On 2014-07-30 15:17, Robert Moskowitz wrote:
On 07/30/2014 10:05 AM, Gordan Bobic wrote:
On 2014-07-30 14:34, Robert Moskowitz wrote:
If I better understood the build process, I would spend the $100 for another Cubie if needed. Going to have to get another one for the server anyway. Thing is will the C2 with 1GB memory be enough, or the Ctruck with 2GB memory?
For what purpose/workload? If it is for building packages, when I built RedSleeve I used 512MB Sheeva/Guru/Dream Plug machines. As long as you attach plenty of swap on reasonable media (don't use USB sticks or SD cards, their random-write performance is _terrible_) it'll be fine. More RAM will help speed things up for sure but it isn't necessary.
How do you 'attach plenty of swap'? Attaching a SATA drive is easy for me and I can format it any way I need, but how do I point swap to it?
man pages for fdisk, mkswap and swapon commands should tell you what you need to know.
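For example, assuming the notebook drive shows up as /dev/sda and you give the whole disk over to swap (adjust device names and sizes to your setup):

fdisk /dev/sda        # create a single partition and set its type to 82 (Linux swap)
mkswap /dev/sda1      # write the swap signature
swapon /dev/sda1      # enable it right away
swapon -s             # confirm it is active
# to have it come up on every boot, add a line like this to /etc/fstab:
# /dev/sda1   swap   swap   defaults   0 0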
If you are asking for some kind of a server workload, it depends on the workload. For example, redsleeve.org runs in a 2 GHz armv5tel Marvell Kirkwood with 1GB of RAM (QNAP TS-421).
Thought so. My C2 would be more than enough, except maybe the mail server. But even it is not running all out.
How much mail do you go through? My mail server is an Atom N450 with 2GB of RAM and its performance has never been inadequate.
For a heavier workload you might want to look at something like the Cornfed Systems' Conserver (quad core ARM, 4GB of RAM, mini ITX form factor).
I am waiting to see what the Allwinner A80 will be like!
It's not all about the CPU - it's about the entire package. Amount of RAM and form factor are, IMO, far more important than the CPU used.
I have 2 production Intel servers that I would be interested in replacing. First my DNS server, onlo.htt-consult.com. All it runs is DNS; I would like to get DNSSEC working at some point. z9n9z is my mail server. I built a C6 replacement for it, but still have not rolled it out. Minimally I would do a C7 build on the replacement hardware to check out all of the components before trying this on arm.
I wouldn't rush headlong into EL7 in production quite yet. Let the bleeding edge adopters sort out the most obvious issues at least on x86 first.
Oh, no plan for sure! Perhaps in late November based on Holidays and conferences. I mean this month is kind of open, but more for some basic testing. And some of my servers are still i686 so I have to wait on that too.
And I would rather spend the money replacing them with armv7 boxes that save power than with x86 boxes that eat up more power!
If you're doing it for fun, that's fair enough. But if you are looking at power saving compared to x86 on a small number of private servers, you will find that unless your electricity costs are astronomical, the net saving will be pennies per day compared to, say, Atom-based x86. That's not to say that an Atom wouldn't suck up several times the power, but it uses little enough power to begin with that at, say, $0.20/kWh it'll take a long time to make a big dent on a few small servers.
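To put a rough, purely illustrative number on that: if an Atom box draws, say, 20W more at the wall than an ARM board, that's about 0.48kWh per day, or roughly $0.10/day at $0.20/kWh - in the region of $35 per server per year, so the payback period on any new hardware is long.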
What you might want to look into is combining your various servers using something like Linux VServer by containerizing different tasks. My Atom N450 servers are containerized to run many different things, e.g. mail (IMAP, SMTP, webmail), MySQL for backing those, LDAP, OpenVPN, and probably a few other things I can't think of right now. They were also running various other things (WordPress, forum software) until relatively recently, but I moved those away for other reasons (performance was never an issue).
I'm not saying don't move to ARM, I'm more saying make sure you are doing whatever you decide to do for good reasons and based on facts.
My public facing ARM server is this: http://www.altechnative.net/2014/02/23/qnap-ts-421-review-modification-and-r...
mainly because I needed plenty of disk space to host the primary storage for the distro packages. Having a neat and tidy form factor was very high on my list of priorities, so cabling sprawl caused by all components being external was out of the question. It wasn't a cheap option, but I am very pleased with the end result.
Gordan
On 07/03/2014 11:52 AM, Gordan Bobic wrote:
Sure, but the time it takes to rm -rf the build root and then untar a cached build root copy is a non-trivial fraction of the build time for a lot of the packages. It is certainly not trivial when you multiply it by around 2,000 for the number of packages you are going to need to build.
that solves one problem but creates lots of others - e.g. the builds resulting from this run won't be usable, since they will have weird and indifferent linking.
if it's just a case of creating a knowledge pool about what does and does not otherwise build, and throwing away the results, then sure - this would be marginally faster (the time to build the base mock root is about 21 seconds on the A15 node we're using). But if you intend to use the resulting content, I can't stress enough - use mock.
From experience, anything you can do to get past the first stage build faster is usually a good idea if hardware is limited - and on ARM it usually is. Even on something like the Arndale Octa or the new Chromebook which have 3-4GB of RAM and 8 cores, building
Which Chromebooks are these? I still think the server-grade ARMv7 stuff available these days is much faster and more capable.
On 07/03/2014 07:08 PM, Karanbir Singh wrote:
On 07/03/2014 11:52 AM, Gordan Bobic wrote:
Sure, but the time it takes to rm -rf the build root and then untar a cached build root copy is a non-trivial fraction of the build time for a lot of the packages. It is certainly not trivial when you multiply it by around 2,000 for the number of packages you are going to need to build.
that solves one problem but creates lots of others - e.g. the builds resulting from this run won't be usable, since they will have weird and indifferent linking.
if it's just a case of creating a knowledge pool about what does and does not otherwise build, and throwing away the results, then sure - this would be marginally faster (the time to build the base mock root is about 21 seconds on the A15 node we're using). But if you intend to use the resulting content, I can't stress enough - use mock.
I'll see how it goes. It worked reasonably well for bootstrapping stage 1 of EL6.
From experience, anything you can do to get past the first stage build faster is usually a good idea if hardware is limited - and on ARM it usually is. Even on something like the Arndale Octa or the new Chromebook which have 3-4GB of RAM and 8 cores, building
Which Chromebooks are these? I still think the server-grade ARMv7 stuff available these days is much faster and more capable.
The chromebooks I speak of are these: http://www.samsung.com/us/computer/chrome-os-devices/XE503C32-K01US
The server-grade ARMv7 machines like the Boston Viridis are quite awesome, but last I checked the cost/performance ratio of one of those, even fully populated, is a large multiple worse than the new Chromebook or Arndale Octa. I've _very_ seriously considered getting a Viridis machine, but just haven't been able to justify the cost per unit of performance.
Gordan