The Fedora Cloud SIG has been talking about using the GNOME OSTree to deliver a potential platform. I think this is fantastic and leads in quite nicely to the conversations we've all been having about a minimal manageable image (either for virt or cloud usage, or even baremetal).
I wonder if we can make this work for CentOS-6 now and 7 when it's around? The rpm-ostree stack seems to have a systemd dep though. Who fancies taking a dig at it?
references: ostree: https://wiki.gnome.org/action/show/Projects/OSTree
rpm-ostree: https://github.com/cgwalters/rpm-ostree ( this is what one would use to take a bunch of rpms and make them into an ostree VM, it spits out a qcow2 image )
On 04/04/2014 08:09 PM, Karanbir Singh wrote:
The Fedora Cloud SIG has been talking about using the GNOME OSTree to deliver a potential platform. I think this is fantastic and leads in quite nicely to the conversations we've all been having about a minimal manageable image (either for virt or cloud usage, or even baremetal).
I wonder if we can make this work for CentOS-6 now and 7 when it's around? The rpm-ostree stack seems to have a systemd dep though. Who fancies taking a dig at it?
references: ostree: https://wiki.gnome.org/action/show/Projects/OSTree
rpm-ostree: https://github.com/cgwalters/rpm-ostree ( this is what one would use to take a bunch of rpms and make them into an ostree VM, it spits out a qcow2 image )
Another point of interest: the packages that make this work: http://copr-fe.cloud.fedoraproject.org/coprs/walters/rpm-ostree/
On Fri, Apr 4, 2014 at 2:09 PM, Karanbir Singh mail-lists@karan.org wrote:
The Fedora Cloud SIG has been talking about using the GNOME OSTree to deliver a potential platform. I think this is fantastic and leads in quite nicely to the conversations we've all been having about a minimal manageable image (either for virt or cloud usage, or even baremetal).
I wonder if we can make this work for CentOS-6 now and 7 when it's around? The rpm-ostree stack seems to have a systemd dep though. Who fancies taking a dig at it?
references: ostree: https://wiki.gnome.org/action/show/Projects/OSTree
rpm-ostree: https://github.com/cgwalters/rpm-ostree ( this is what one would use to take a bunch of rpms and make them into an ostree VM, it spits out a qcow2 image )
I think I'm missing the point of what it does - at least for your first system. Where does the baremetal hardware detection/configuration happen?
On Fri, Apr 4, 2014 at 3:24 PM, Les Mikesell lesmikesell@gmail.com wrote:
I think I'm missing the point of what it does - at least for your first system. Where does the baremetal hardware detection/configuration happen?
Anaconda is what installs:
https://www.redhat.com/archives/anaconda-devel-list/2014-March/msg00028.html
I did the patches against an older branch of Anaconda, I'm working on porting them to rawhide.
However on the "compose server" side, the trees are not "installs". It's quite close to just "rpm2cpio"ing all of the RPMs into an unpacked root, running the %post, and committing that.
A key difference though is that ostree demands that the trees can be updated without %post on the client. This lack of %post requires some design changes in the OS content. See for example:
/usr/lib/passwd: https://sourceware.org/bugzilla/show_bug.cgi?id=16142
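For anyone curious, the compose step described above boils down to roughly the following sketch; the ./rpms pile, the repo path, and the branch name are made up for illustration, and rpm-ostree itself also runs the %post scripts in the assembled root and handles users/groups, which this skips:

    # unpack each package's payload into a scratch root (no scriptlets involved here)
    mkdir -p rootfs
    for p in rpms/*.rpm; do
        rpm2cpio "$p" | (cd rootfs && cpio -idm)
    done
    # commit the assembled tree into an OSTree repository as a new revision
    ostree --repo=/srv/repo commit --branch=centos/7/x86_64/minimal \
        --subject="Tree compose $(date +%Y-%m-%d)" rootfs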
On Fri, Apr 4, 2014 at 2:59 PM, Colin Walters walters@verbum.org wrote:
On Fri, Apr 4, 2014 at 3:24 PM, Les Mikesell lesmikesell@gmail.com wrote:
I think I'm missing the point of what it does - at least for your first system. Where does the baremetal hardware detection/configuration happen?
Anaconda is what installs:
https://www.redhat.com/archives/anaconda-devel-list/2014-March/msg00028.html
I did the patches against an older branch of Anaconda, I'm working on porting them to rawhide.
However on the "compose server" side, the trees are not "installs". It's quite close to just "rpm2cpio"ing all of the RPMs into an unpacked root, running the %post, and committing that.
A key difference though is that ostree demands that the trees can be updated without %post on the client. This lack of %post requires some design changes in the OS content. See for example:
I've always thought there should be a simple way to replicate a mostly-configured machine down to the installed package versions so one person could set up a machine that works well for a particular task and 'publish' the configuration such that any number of other people could have an equally well-configured setup just for the asking - and be able to follow the updates after they have been installed/blessed on the master. This looks close, but not quite... I think there should be a list of repositories that maintain the packages in all the referenced versions, and the main part would just be a version-controlled package list and maybe some diffs/patches to configurations.
On Fri, Apr 4, 2014 at 7:02 PM, Les Mikesell lesmikesell@gmail.com wrote:
- and be able to follow the updates after they have been
installed/blessed on the master.
Right.
This looks close, but not quite... I think there should be a list of repositories that maintain the packages in all the referenced versions,
I'm not quite sure what you're suggesting here. There are two kinds of repositories: yum and ostree. When you say "maintain the packages", are you talking about "treecompose", i.e. committing a set of RPM packages to an OSTree repository?
and the main part would just be a version-controlled package list and maybe some diffs/patches to configurations.
Right, that's what https://github.com/cgwalters/fedora-atomic/blob/master/products.json boils down to - it's a list of trees, each of which are composed of a set of packages.
https://github.com/cgwalters/fedora-atomic/commit/cb3b741616b5d11a00950e8f61...
is a sample commit that added rpm-ostree to the trees shipped on each client.
Another example is that if I decided tmux was awesome and everyone should have it, I could push:
diff --git a/products.json b/products.json
index ed99ccd..41133e9 100644
--- a/products.json
+++ b/products.json
@@ -23,7 +23,8 @@
     "packages": ["kernel", "rpm-ostree", "generic-release",
                  "lvm2", "btrfs-progs", "e2fsprogs", "xfsprogs",
-                 "rpm-ostree-public-gpg-key", "gnupg2"],
+                 "rpm-ostree-public-gpg-key", "gnupg2",
+                 "tmux"],
     "postprocess": ["remove-root-password"],
Conversely, I could remove it later, and it would disappear from the trees, and then vanish when client machines ran "rpm-ostree upgrade". This is a huge difference from traditional package management per client, where packages normally linger on unless explicitly removed.
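On the client side, that whole cycle is only a couple of commands - a rough sketch, assuming the machine already tracks a remote ref (the exact invocations are illustrative):

    rpm-ostree upgrade      # pull and deploy the latest commit of the tree this machine follows
    ostree admin status     # show the new deployment alongside the previous one
    systemctl reboot        # the new tree only takes effect on the next boot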
On Fri, Apr 4, 2014 at 6:28 PM, Colin Walters walters@verbum.org wrote:
- and be able to follow the updates after they have been
installed/blessed on the master.
Right.
This looks close, but not quite... I think there should be a list of repositories that maintain the packages in all the referenced versions,
I'm not quite sure what you're suggesting here. There are two kinds of repositories: yum and ostree. When you say "maintain the packages", are you talking about "treecompose", i.e. committing a set of RPM packages to an OSTree repository?
No, I don't see the point. RPM packages are already a reasonable distribution format. Why not use them as is? The missing part is just the inability of yum to reproduce a known working set of package versions on its own.
and the main part would just be a version-controlled package list and maybe some diffs/patches to configurations.
Right, that's what https://github.com/cgwalters/fedora-atomic/blob/master/products.json boils down to - it's a list of trees, each of which are composed of a set of packages.
https://github.com/cgwalters/fedora-atomic/commit/cb3b741616b5d11a00950e8f61...
is a sample commit that added rpm-ostree to the trees shipped on each client.
Another example is that if I decided tmux was awesome and everyone should have it, I could push:
diff --git a/products.json b/products.json
index ed99ccd..41133e9 100644
--- a/products.json
+++ b/products.json
@@ -23,7 +23,8 @@
     "packages": ["kernel", "rpm-ostree", "generic-release",
                  "lvm2", "btrfs-progs", "e2fsprogs", "xfsprogs",
-                 "rpm-ostree-public-gpg-key", "gnupg2"],
+                 "rpm-ostree-public-gpg-key", "gnupg2",
+                 "tmux"],
     "postprocess": ["remove-root-password"],
Conversely, I could remove it later, and it would disappear from the trees, and then vanish when client machines ran "rpm-ostree upgrade". This is a huge difference from traditional package management per client, where packages normally linger on unless explicitly removed.
That's not at all what I had in mind. I'd like to see something where you just install a suitable set of packages on your own machine and run some program that publishes the list of package/versions you have installed. And the matching client takes that list and feeds it to yum. With repos that retain the packages as long as any published list references the package version - preferably with DNS round-robin instead of mirrorlists, so local proxy caches work sanely. Season with something to diff/patch local modifications into configs and you should have reproducible machines.
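A back-of-the-envelope sketch of that idea with nothing but stock rpm/yum (the file name is made up for illustration; the missing piece, as noted, is repos that keep every referenced version around):

    # on the "master": publish the exact installed set, versions included
    rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | sort > blessed-packages.txt

    # on a client: ask yum for precisely those builds (only works while the repos still carry them)
    yum -y install $(cat blessed-packages.txt)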
If yum/rpm aren't good enough to maintain machines they should be fixed, not forked. You just need something to get to the point where yum works and the ability to track versions and deletions.
On Fri, Apr 4, 2014 at 11:42 PM, Les Mikesell lesmikesell@gmail.com wrote:
No, I don't see the point. RPM packages are already a reasonable distribution format. Why not use them as is?
For one thing, you pay the cost of dependency resolution, downloading the entire repo metadata (in the case of Fedora, it's 18MB) per client.
It doesn't make sense to have each node in a cluster of 1000 HPC nodes that you want to be identical run through a satisfiability solver. Or down to embedded devices - think cash registers.
Furthermore, the "pure replication" OSTree model enforces that the trees are *identical*. It's fully atomic - if you pull the power, you either have the old system or the new system. Achieving this on top of the traditional package model requires significant re-engineering around how the installation and things like %post work.
This doesn't mean that the traditional yum/apt-get model is bad - but I think the rpm-ostree model also makes sense for some use cases.
It's going to get a lot more interesting once rpm-ostree supports package installation on top; then you're close to the best of both worlds. (This requires the significant re-engineering mentioned above.)
That's not at all what I had in mind. I'd like to see something where you just install a suitable set of packages on your own machine and run some program that publishes the list of package/versions you have installed. And the matching client takes that list and feeds it to yum. With repos that retain the packages as long as any published list references the package version - preferably with DNS round-robin instead of mirrorlists, so local proxy caches work sanely. Season with something to diff/patch local modifications into configs and you should have reproducible machines.
I see. Hmm...but can you track multiple lists? Probably not, you'd get into a conflict mess.
Repositories of course are already package sets - things like comps groups (or metapackages) are the "list" aspect.
There's also Software Collections and Docker which allow parallel installation.
I understand your idea, but it feels like the above technologies overlap with it substantially.
If yum/rpm aren't good enough to maintain machines they should be fixed, not forked.
I wouldn't call ostree itself a fork of anything.
You just need something to get to the point where yum works and the ability to track versions and deletions.
I'd say it works today! It has a known quantity of strengths and weaknesses. "yum history" does track versions and deletions. The hard part is addressing the weaknesses without breaking it =)
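For reference, the stock yum commands in question look like this (the transaction ID is just an example):

    yum history list        # recent transactions and their IDs
    yum history info 42     # packages installed/updated/erased in transaction 42
    yum history undo 42     # reverse that transaction, if the old packages are still available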
Now, rpm-ostree when it supports package installation on top will definitely compete with traditional package tools like yum, that's true. But the trajectory in Fedora was already towards dnf which uses new libraries like libsolv, hawkey, and librepo, that rpm-ostree will use as well.
On Sat, Apr 5, 2014 at 8:29 AM, Colin Walters walters@verbum.org wrote:
No, I don't see the point. RPM packages are already a reasonable distribution format. Why not use them as is?
For one thing, you pay the cost of dependency resolution, downloading the entire repo metadata (in the case of Fedora, it's 18MB) per client.
OK, I don't mind my machines doing some work. But if you didn't, couldn't you push that part to a server with traditional caching of query results? And given that the point is to duplicate an existing system the dependencies would already be known/solved anyway.
It doesn't make sense to have each node in a cluster of 1000 HPC nodes that you want to be identical run through a satisfiability solver. Or down to embedded devices - think cash registers.
Maybe... I can't think of a machine that can do useful work that couldn't compute its own dependencies, though.
Furthermore, the "pure replication" OSTree model enforces that the trees are *identical*. It's fully atomic - if you pull the power, you either have the old system or the new system. Achieving this on top of the traditional package model requires significant re-engineering around how the installation and things like %post work.
If you need that, you are probably already block-replicating. But how do you deal with the necessary differences in that case? There is always some ugly overlap in software/system/local configuration that is hard to abstract away.
That's not at all what I had in mind. I'd like to see something where you just install a suitable set of packages on your own machine and run some program that publishes the list of package/versions you have installed. And the matching client takes that list and feeds it to yum. With repos that retain the packages as long as any published list references the package version - preferably with DNS round-robin instead of mirrorlists, so local proxy caches work sanely. Season with something to diff/patch local modifications into configs and you should have reproducible machines.
I see. Hmm...but can you track multiple lists? Probably not, you'd get into a conflict mess.
If there is no need to learn a new and complex configuration tool, anyone could produce the lists - so there would almost certainly be one created for most useful configurations.
Repositories of course are already package sets - things like comps groups (or metapackages) are the "list" aspect.
There's also Software Collections and Docker which allow parallel installation.
I understand your idea, but it feels like the above technologies overlap with it substantially.
And yet none of them give you the ability to build/test one system, then let someone else duplicate it easily and track its tested updates. Which is what pretty much everyone needs to do if they have multiple machines, and it would give people with no administration experience the ability to have machines with software managed by experts, almost for free.
If yum/rpm aren't good enough to maintain machines they should be fixed, not forked.
I wouldn't call ostree itself a fork of anything.
Well, if it doesn't use the same command structure and repositories to install software I call it a fork. Unless it is going to become the only way to install software.
You just need something to get to the point where yum works and the ability to track versions and deletions.
I'd say it works today! It has a known quantity of strengths and weaknesses. "yum history" does track versions and deletions. The hard part is addressing the weaknesses without breaking it =)
It doesn't work for me - now.
Now, rpm-ostree when it supports package installation on top will definitely compete with traditional package tools like yum, that's true. But the trajectory in Fedora was already towards dnf which uses new libraries like libsolv, hawkey, and librepo, that rpm-ostree will use as well.
Yes, Fedora always seems to start with the idea that everything that anyone already knows how to use is wrong and needs to be replaced. But I don't see the point of having a new abstraction that you have to learn in order to build a system and then reproduce it - which is essentially just repeating the same thing. I'd like to see something that lets you build/test organically with traditional commands, then publish the configuration in a way that can be re-used on top of existing distribution infrastructure. In fact I've always thought that that should have been the goal of package management from the time computers became cheap enough that someone might own two of them. And it should be particularly obvious to people working with version control at the source level that you would want the same ability to duplicate and track someone else's revision exactly for the binaries too. But instead, there are many arcane abstractions that try to make it possible but always involve learning different concepts and building different infrastructure to make it work, and crazy things like making your own mirror of a repository just to control the versions available to yum - for every state you'd want to reproduce. Stuff that no one would do to make one simple, working copy of someone else's known-good system.
On Sat, Apr 5, 2014 at 12:28 PM, Les Mikesell lesmikesell@gmail.com wrote:
If you need that, you are probably already block-replicating.
Right; block replication has its advantages. There's a lot of projects out there for this. I know of a Fedora/Red Hat project around this a long time ago: https://fedoraproject.org/wiki/StatelessLinux
The more modern version is the ChromiumOS updater, with its A/B partition scheme. It's widely deployed in Google Chromebooks.
But OSTree is much more flexible than traditional block replication because:
1) It doesn't require doubling disk usage of the OS (the hardlinks deduplicate)
2) Even more crucially, it allows you the freedom to use whatever filesystem and block storage you want on each client.
For example, you can install the "fedora-atomic/rawhide/x86_64/buildmaster/base/core" tree on one machine using plain ext4 with a single big /, and another on XFS-on-LVM, perhaps with thin provisioning if you like.
You can be confident that any thinp snapshots or whatever that you make won't be interfered with by OSTree.
But how do you deal with the necessary differences in that case? There is always some ugly overlap in software/system/local configuration that is hard to abstract away.
This is another key point - in OSTree you have a writable /etc and /var. In particular the fact that a 3 way merge is applied to /etc helps make the system "feel like" Unix, in contrast to more basic block-level systems which are unaware of the semantics of /etc.
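To see what that merge is working with, ostree can list local changes to /etc relative to the deployed tree's defaults, e.g.:

    # files added (A), modified (M), or deleted (D) in /etc compared to the tree's packaged defaults
    ostree admin config-diff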
And yet none of them give you the ability to build/test one system, then let someone else duplicate it easily and track its tested updates. Which is what pretty much everyone needs to do if they have multiple machines, and it would give people with no administration experience the ability to have machines with software managed by experts, almost for free.
The more I think about your suggestion the more it feels like the "staging" model, which is clearly widely used inside individual organizations.
Anyways, there's clearly a lot of ways to build, deploy, and manage systems =) I think the rpm-ostree model is a unique middle ground between traditional packages on the client and image-based updates - not for everyone, but some here might be interested, and I'm happy to hear any other feedback!
On Sun, Apr 6, 2014 at 12:40 PM, Colin Walters walters@verbum.org wrote:
But OSTree is much more flexible than traditional block replication because:
Is it flexible enough to scale 'down'? That is, is it something you would likely use to install your 2nd system if you only had 2 and had never used it before? Or your first system as a copy of something someone else already has running?
But how do you deal with the necessary differences in that case? There is always some ugly overlap in software/system/local configuration that is hard to abstract away.
This is another key point - in OSTree you have a writable /etc and /var. In particular the fact that a 3 way merge is applied to /etc helps make the system "feel like" Unix, in contrast to more basic block-level systems which are unaware of the semantics of /etc.
Does something manage the necessary differences during an install? And can it tell a hardware or local difference that doesn't change in an update from a software configuration item that would be software-version specific? Or do these still leave lots of odd special cases, especially if you aren't using VMs?
And yet none of them give you the ability to build/test one system, then let someone else duplicate it easily and track its tested updates. Which is what pretty much everyone needs to do if they have multiple machines, and it would give people with no administration experience the ability to have machines with software managed by experts, almost for free.
The more I think about your suggestion the more it feels like the "staging" model, which is clearly widely used inside individual organizations.
Yes, everyone with more than a few machines needs that - and large organizations need it so badly they build their own in spite of the cost and difficulty. Which is why I think it is so odd that the stock tools aren't designed to do repeatable updates in the first place. Why is it so difficult to tell yum to just ignore anything added to the repositories after the staging system update was done and reproduce that same update to a production system after your tests are complete?
Anyways, there's clearly a lot of ways to build, deploy, and manage systems =) I think the rpm-ostree model is a unique middle ground between traditional packages on the client and image-based updates - not for everyone, but some here might be interested, and I'm happy to hear any other feedback!
It is very unlike the unix philosophy to need a lot of ways to do something. Why isn't there one tool that does it right in the first place - and can work appropriately for 2 systems the same as 2,000? And more annoyingly, why does every variation of the simple idea of installing and tracking versioned updates need different and complex infrastructure for the package repository and invent a new configuration and command language that is incompatible with anything used before or the source management systems that do essentially the same things?
On 6 April 2014 12:35, Les Mikesell lesmikesell@gmail.com wrote:
On Sun, Apr 6, 2014 at 12:40 PM, Colin Walters walters@verbum.org wrote:
But OSTree is much more flexible than traditional block replication because:
It is very unlike the unix philosophy to need a lot of ways to do something. Why isn't there one tool that does it right in the first place - and can work appropriately for 2 systems the same as 2,000?
What Unix philosophy or religion? The whole 'one tool does one thing' is not what Unix has ever been... it is a rose-tinted view that forgets all the arguments over whether awk+sed+sh or sh+sed+awk or special program G was the best way to solve a problem. Which gets down to the fundamental issue: humans are not lock-step creatures. They will think differently and see different ways to solve a problem, and the universe is complex enough that several of those ways work - so much so that people get frustrated with each other because they would each like just one way to do things (their way).
On Sun, Apr 6, 2014 at 3:01 PM, Stephen John Smoogen smooge@gmail.com wrote:
But OSTree is much more flexible than traditional block replication because:
It is very unlike the unix philosophy to need a lot of ways to do something. Why isn't there one tool that does it right in the first place - and can work appropriately for 2 systems the same as 2,000?
What Unix philosophy or religion?
Ken Thompson's.
The whole 'one tool does one thing' is not what Unix has ever been... it is a rose-tinted view that forgets all the arguments over whether awk+sed+sh or sh+sed+awk or special program G was the best way to solve a problem. Which gets down to the fundamental issue: humans are not lock-step creatures. They will think differently and see different ways to solve a problem, and the universe is complex enough that several of those ways work - so much so that people get frustrated with each other because they would each like just one way to do things (their way).
That's a good description of what has kept unix out of the mainstream for decades. I can understand having all the different incompatible flavors when the driving force was the attempt to lock users of the commercial versions in with their own different minor extensions, but it doesn't make any sense in the free/reusable software world. It's not that I want something that works 'my' way, it is that I want something that I can learn once - or train staff to do once, not monthly. And I want it to work the same way for 2 machines as for thousands, and without rebuilding a bunch of supporting infrastructure every time someone has a different idea. Is that too much to ask? Why does every change have to make you throw out your existing knowledge and infrastructure instead of just fixing the small remaining problems?
On 6 April 2014 17:45, Les Mikesell lesmikesell@gmail.com wrote:
On Sun, Apr 6, 2014 at 3:01 PM, Stephen John Smoogen smooge@gmail.com wrote:
But OSTree is much more flexible than traditional block replication because:
It is very unlike the unix philosophy to need a lot of ways to do something. Why isn't there one tool that does it right in the first place - and can work appropriately for 2 systems the same as 2,000?
What Unix philosophy or religion?
Ken Thompson's.
That is the one liner elevator pitch. It gets to be much more nuanced when you get to the details and the complexity of where things go. Plus it was written when text was universal and the main thing that people dealt with. Today text is a deep down thing that gets dealt with but not what people actually interact with.
The whole 'one tool does one thing' is not what Unix has ever been... it is a rose-tinted view that forgets all the arguments over whether awk+sed+sh or sh+sed+awk or special program G was the best way to solve a problem. Which gets down to the fundamental issue: humans are not lock-step creatures. They will think differently and see different ways to solve a problem, and the universe is complex enough that several of those ways work - so much so that people get frustrated with each other because they would each like just one way to do things (their way).
That's a good description of what has kept unix out of the mainstream for decades. I can understand having all the different incompatible flavors when the driving force was the attempt to lock users of the commercial versions in with their own different minor extensions, but it doesn't make any sense in the free/reusable software world. It's not that I want something that works 'my' way, it is that I want something that I can learn once - or train staff to do once, not monthly. And I want it to work the same way for 2 machines as for thousands, and without rebuilding a bunch of supporting infrastructure every time someone has a different idea. Is that too much to ask? Why does every change have to make you throw out your existing knowledge and infrastructure instead of just fixing the small remaining problems?
Rule 3 of his philosophy:
Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.
Rule 4 of the Unix philosophy:
Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them.
And then
When in doubt use brute force.
These are the core issues that we all run into and that cause us to froth and foam about change. We had a lovely 30 years where the following factors buffered us from those rules:
1) The number of programmers who were solving these problems was in the hundreds, versus the millions there are today. Out of those hundreds you had camps, which meant the central architects could probably be counted on one hand. That meant that groupthink would allow for 'consensus' on how to solve problems and keep the number of different solutions to a minimum.
2) Due to the cost of leased lines and mailing tapes, it would take years to filter out the churn that the above rules hit. Thus your 2-4 Unix boxes could go for years without anything but software patches and might not see the next architecture change to the OS until the next hardware rollout. [Unless you were adventurous and could have the Vax down for a week as you worked on the phone with some tech on why cc didn't produce the kernel you wanted.]
3) Your site was pretty isolated and would reinforce group think via purchasing. You might be a Vax shop or a Sun shop or a HP shop but in most cases the majority of the boxes would be one architecture and one vendor would supply the OS. You might have other systems but they would be small fry compared to the main architecture.. and normally you would tool to make the one-offs look like the main vendor via symlinks etc.
4) The ability to buy systems was hard because they were extremely expensive.. counting in inflation we are talking millions of dollars of investment for 1-2 systems while today we can have hundreds of boxes for that amount. You bought 1 box and made it do everything you needed.
However, none of these exist anymore. Systems are cheaper, which means we can have tons of boxes dedicated to one task, which means you can write your tools for it very well even though they won't work on other stuff that well. The internet and other changes mean that you are not isolated: you can see all the different ways to solve a problem, without an outside filter saying that if you go with vendor/architecture X you will solve all your problems X's way versus Y's way. The internet also makes it easier for the churn to be pushed out. While this stuff might once have sat in a lab for years and only appeared when the vendor decided to push a new hardware system out, you can see it now. And finally, we now have much less groupthink than we had back then, because the barrier to thinking differently is much lower. [Doesn't mean it isn't there, but it is a lot less.] Thus instead of 5-10 brute force solutions to a problem you have 100's or 1000's.
It isn't nice or pretty and there are days when I feel like a dinosaur wishing the damn meteor would hurry up and get here.. but it also is just how things have changed and I just need to figure out ways to filter stuff better to make it work for the environments I need to work it in.
What Unix philosophy or religion?
Ken Thompson's.
On Mon, 7 Apr 2014, Stephen John Smoogen wrote:
Rule 3 of his philosophy:
citation please (might be: The UNIX Programming Environment, but I don't recognize the pull quote as such) ... this looks more like: Mike Gancarz: The UNIX Philosophy (I've only a copy of the 1st ed.)
thanks
-- Russ herrold
On 7 April 2014 13:44, R P Herrold herrold@owlriver.com wrote:
What Unix philosophy or religion?
Ken Thompson's.
On Mon, 7 Apr 2014, Stephen John Smoogen wrote:
Rule 3 of his philosophy:
http://homepage.cs.uri.edu/~thenry/resources/unix_art/ch01s06.html
[McIlroy78] The Bell System Technical Journal. Bell Laboratories. M. D. McIlroy, E. N. Pinson, and B. A. Tague. "Unix Time-Sharing System: Foreword". 1978. 57 (6, part 2). p. 1902.
And I was wrong.. that was from Doug McIlroy not Ken Thompson. The only Ken Thompson quote I could find was "When in Doubt Use Brute Force."
citation please (might be: The UNIX Programming Environment, but I don't recognize the pull quote as such) ... this looks more like: Mike Gancarz: The UNIX Philosophy (I've only a copy of the 1st ed.)
thanks
-- Russ herrold
On Mon, 7 Apr 2014, Stephen John Smoogen wrote:
http://homepage.cs.uri.edu/~thenry/resources/unix_art/ch01s06.html
[McIlroy78] The Bell System Technical Journal. Bell Laboratories. M. D. McIlroy, E. N. Pinson, and B. A. Tague. "Unix Time-Sharing System: Foreword". 1978. 57 (6, part 2). p. 1902.
thank you
-- Russ
On Mon, Apr 7, 2014 at 1:41 PM, Stephen John Smoogen smooge@gmail.com wrote:
What Unix philosophy or religion?
Ken Thompson's.
That is the one liner elevator pitch. It gets to be much more nuanced when you get to the details and the complexity of where things go. Plus it was written when text was universal and the main thing that people dealt with. Today text is a deep down thing that gets dealt with but not what people actually interact with.
It's not about text. It is about elegant, reusable simplicity. How many system calls and options should there be to create a new process? How many programs should there be for that first process that is the parent of all others? Can't people who don't like those aspects of unix design find some other OS to butcher?
Rule 3 of his philosophy:
Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.
Yeah - where does that say "push this out to vast numbers of users, wait till they learn and like it, then change it in ways that will break everything they know"? Or anything like what Fedora does...?
Rule 4 of the Unix philosophy:
Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them.
Likewise - where does that say to publish the bad versions before throwing them out? Or to throw them out if they still work correctly?
When in doubt use brute force.
These are the core issues that we all run into and causes us to froth and foam about change. We had a lovely 30 years where the following factors buffered us from those rules:
No, most of that 30 years was about minor variations with patent/copyright protection to prevent reuse and attempt to lock customers in.
- The number of programmers who were solving these problems were in the
hundreds versus the millions that they are today. Out of those hundreds, you had camps which meant the central architects were probably counted on your hand. That meant that group think would allow for 'consensus' of how to solve problems and keep the number of different solutions to a minimum.
So with millions of programmers, wouldn't you expect better results if they build on each other's work instead of all of them starting over from scratch with no concern for backwards compatibility?
- Due to the cost of leased lines and mailing tapes, it would take years to
filter out the churn that the above rules hit. Thus your 2-4 Unix boxes could go for years without anything but software patches and might not see the next architecture change to the OS until the next hardware rollout. [Unless you were adventurous and could have the Vax down for a week as you worked on the phone with some tech on why cc didn't produce the kernel you wanted.]
I've never been in a situation like that. I don't think it is an accurate depiction of the time since usenet existed.
- Your site was pretty isolated and would reinforce group think via
purchasing. You might be a Vax shop or a Sun shop or a HP shop but in most cases the majority of the boxes would be one architecture and one vendor would supply the OS. You might have other systems but they would be small fry compared to the main architecture.. and normally you would tool to make the one-offs look like the main vendor via symlinks etc.
I started with AT&T boxes. And by the mid-90's, linux was a fair approximation of SysVr4 - didn't need a lot of munging.
- The ability to buy systems was hard because they were extremely
expensive.. counting in inflation we are talking millions of dollars of investment for 1-2 systems while today we can have hundreds of boxes for that amount. You bought 1 box and made it do everything you needed.
That was long, long ago. Dell sold PC's with SysV installed at roughly high-end PC prices before Win95 came out.
However, none of these exist anymore. Systems are cheaper which means we can have tons of boxes dedicated to one task which means you can write your tools for it very well but it won't work on other stuff that well.
But that's all the more reason for simple reusable designs. And maintaining backwards compatibility as the hardware churns.
The internet and other changes make it that you are not isolated and you can see all the different ways to solve a problem without an outside filter that if you go with vendor/architecture X you will solve all your problems with X's way versus Y's way.
Nobody believes that after the first time. But I suppose there are a lot of people with no experience who think everything old is bad.
The internet makes it also easier for the churn to be pushed out. While this stuff might go into a lab for years and then get pushed out when the vendor decides to push a new hardware system out you can see it now.
This does have a certain value. If you look at how bad the code was on the first RedHat release that was likely to boot on most PCs (maybe 4.0 or so), you can see how many bugs have been found and fixed as a result of so many people trying to make it work. But if everything with bugs gets thrown away instead of fixed, what was the point?
And finally we now have much less groupthink than we had back then because the barrier to think differently is much lower. [Doesn't mean it isn't there but it is a lot less.] Thus instead of 5-10 brute force solutions to a problem you have 100's or 1000's.
Ummm, venture capital? People wanting to build something different enough to sell?
It isn't nice or pretty and there are days when I feel like a dinosaur wishing the damn meteor would hurry up and get here.. but it also is just how things have changed and I just need to figure out ways to filter stuff better to make it work for the environments I need to work it in.
None of which really addresses the issue of elegant designs that scale down as well as up or help with version-tracking of packages across some number of systems.
On 7 April 2014 14:06, Les Mikesell lesmikesell@gmail.com wrote:
On Mon, Apr 7, 2014 at 1:41 PM, Stephen John Smoogen smooge@gmail.com wrote:
None of which really addresses the issue of elegant designs that scale down as well as up or help with version-tracking of packages across some number of systems.
My last post on this to the list because it isn't on-topic anymore and KB doesn't need to drop the hammer on me. My view is that elegant designs in computers all look different to each person due to the fact that their life experience, brain mapping, and mood of the day makes it so. And because of that, I will see constant churn and redesign no matter how stupid it is. I can either learn to adapt and keep going, or I can get out of computers and go into something more glacial paced like farming (which I have been very tempted to do every 2-3 emails from Lennart P.). That is it and I am done here unless there is something CentOS related.
On Mon, Apr 7, 2014 at 4:25 PM, Stephen John Smoogen smooge@gmail.com wrote:
... or I can get out of computers and go into something more glacial paced like farming (which I have been very tempted to do every 2-3 emails from Lennart P.). That is it and I am done here unless there is something CentOS related.
We aren't going to agree on anything here, but you obviously aren't following modern farming techniques where you find that not only are the practices from a few years ago obsolete but that they will probably give you cancer. At least that hasn't happened yet with software obsolescence.
On Mon, Apr 7, 2014 at 10:34 PM, Les Mikesell lesmikesell@gmail.com wrote:
On Mon, Apr 7, 2014 at 4:25 PM, Stephen John Smoogen smooge@gmail.com wrote:
... or I can get out of computers and go into something more glacial paced like farming (which I have been very tempted to do every 2-3 emails from Lennart P.). That is it and I am done here unless there is something CentOS related.
We aren't going to agree on anything here, but you obviously aren't following modern farming techniques where you find that not only are the practices from a few years ago obsolete but that they will probably give you cancer. At least that hasn't happened yet with software obsolescence.
But coming back down from the ivory tower, it looks like the main advantages put forward for ostree are:
1. Easy to switch between many different setups
2. "Atomic" updates: entire tree is changed on reboot, not piecemeal as the RPMs are installed
3. A new tree is always an exact clone of the one in the repo (unlike packages, which may diverge)
4. Doesn't need to re-calculate dependencies on every single host.
Adding in versioning or "list of packages" to yum may make some kinds of deployment easier, but to make it achieve any of the things listed above would not be so much "fixing" or "improving" yum / rpm as making it into a different beast altogether; likely many people would not want to use the new beast, and you'd have a fork anyway.
So the question is whether these are worth the cost of having Yet Another Tool.
#1 may not be a killer feature for sysadmins in particular; they are likely to figure out one setup and use it on their production systems. It would probably be very much a feature for people who *develop* these "images" however; and anything which makes it so that development
#2 is probably somewhat of an advantage; particularly if it means that you can also atomically switch back to the previous version if it turns out you've screwed something up.
#3 is also probably an advantage: particularly if you're reporting a bug and you want the developer to be able to reproduce your problem.
#4 isn't a killer advantage, but if it means smaller footprint and faster deployment, it probably is an advantage.
Whether these outweigh the disadvantage of having Yet Another Tool, I can't really tell; but the ostree idea certainly seems to have merit, and is at least worth considering.
-George
On Tue, Apr 8, 2014 at 5:20 AM, George Dunlap dunlapg@umich.edu wrote:
But coming back down from the ivory tower, it looks like the main advantages put forward for ostree are:
Is it fair to enumerate pros without mentioning any cons?
- Easy to switch between many different setups
How easy? Does this require learning another configuration language and a staging box to build the image? Do you need to keep a master for every variation? Is there a way to quantify the hard/easy evaluation, say in terms of how much training it would take before trusting someone to do it in a production setting?
- "Atomic" updates: entire tree is changed on reboot, not piecemeal
as the RPMs are installed
So, a reboot for every little change? Now we might as well throw clonezilla into the comparison (which I often use for initial rollouts, but generally not updates).
- A new tree is always an exact clone of the one in the repo (unlike
packages, which may diverge)
So, some sort of post processing might be needed for the things that can't be exact clones.
- Doesn't need to re-calculate dependencies on every single host.
Seems minor compared to the downtime of a reboot. And it might be possible to make yum accept a list of already depsolved packages if anyone really cared.
Adding in versioning or "list of packages" to yum may make some kinds of deployment easier, but to make it achieve any of the things listed above would not be so much "fixing" or "improving" yum / rpm as making it into a different beast altogether; likely many people would not want to use the new beast, and you'd have a fork anyway.
But it wouldn't need to be 'new' or break anything it already does.
#2 is probably somewhat of an advantage; particularly if it means that you can also atomically switch back to the previous version if it turns out you've screwed something up.
So add in the con of needing to maintain the disk space of the new/old copies.
#4 isn't a killer advantage, but if it means smaller footprint and faster deployment, it probably is an advantage.
My servers take forever to reboot - if you have to do it for every update it is not going to be faster in the long run. And, you'll need some sort of scheduling manager to control the rollout so not too many instances are down at once - which will probably be different than anything you already have. In a VM environment things might be different.
Hi George,
On Tue, Apr 8, 2014 at 6:20 AM, George Dunlap dunlapg@umich.edu wrote:
But coming back down from the ivory tower,
Thanks for steering us back on track =)
So the question is whether these are worth the cost of having Yet Another Tool.
Right, that is a good way to put it. There's no question that rpm-ostree is another tool. However I do want to strongly emphasize something that's implicit in the "rpm-ostree" name: I see the ostree technology as joining an ecosystem, not as entirely replacing things.
Concretely, on every rpm-ostree generated tree, "rpm -qa" works. Which if you think about it is a Big Deal - there's an incredible amount of tooling built up around RPM in this way. A simple example is: How do you report a bug? Well, Bugzilla only knows about RPMs.
And more strongly than that, the rpm-ostree server side compose tooling *enforces* that you only put RPM-based content in there.
(The OSTree side is just dumb filesystem replication, you could easily ship trees generated with a hybrid of RPM + say pip or bundler. I'd rather fix the RPM side to be capable of what people want from pip myself)
#1 may not be a killer feature for sysadmins in particular; they are likely to figure out one setup and use it on their production systems. It would probably be very much a feature for people who *develop* these "images" however; and anything which makes it so that development
Right. "ostree admin switch" is not that interesting - yet. It does demonstrate that client systems *can* have choice of software, unlike many image based upgrade systems out there. But it will be far, far more interesting when rpm-ostree supports package installation on top, then it'll act like a rebase operation.
#2 is probably somewhat of an advantage; particularly if it means that you can also atomically switch back to the previous version if it turns out you've screwed something up.
Exactly! You can - you always have two complete "trees" (kernel + /usr), which manifest as two bootloader entries. There is even now "rpm-ostree rollback": https://github.com/cgwalters/rpm-ostree/commit/441313f9ef4dca7f6e1c683dccc35...
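In practice the back-out is short - a rough sketch, since the previous deployment is still on disk:

    rpm-ostree rollback     # make the previous tree the default bootloader entry again
    systemctl reboot        # boot into it; the merged /etc and /var are untouched
    ostree admin status     # confirm which deployment is now the default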
Whether these outweigh the disadvantage of having Yet Another Tool, I can't really tell; but the ostree idea certainly seems to have merit, and is at least worth considering.
Thanks for the feedback!
If anyone has a chance to try composing trees with rpm-ostree and runs into trouble, please don't hesitate to file github issues, or you can follow up here, or mail me directly if you prefer.
I'm not going to get into the philosophical debate here, here's my totally pragmatic viewpoint:
Coming from an environment where we install hundreds of what I guess you could call appliances, running CentOS 5, I'd say at first glance OSTree would provide a stable way to handle updates in the field (when we switch to CentOS 7 later this year), where we update both the OS and our own code, as well as all the supporting cast (bind, apache, etc.) without too much risk. And anyway, if Fedora is anything to go by, CentOS 7 will boot much faster than CentOS 5, which we are using currently.
At the moment, we still use yum update, but it's not always safe, and we end up with a variety of versions in the field.
Oh, and, while I sympathize with many other sysadmins, what's one more tool, if I don't have to write it myself and it does what I don't have today? I have at least three people involved full-time in keeping our sites up to date; if I could just move one of them to other tasks, I win.
On Tue, Apr 8, 2014 at 9:10 AM, Colin Walters walters@verbum.org wrote:
Hi George,
On Tue, Apr 8, 2014 at 6:20 AM, George Dunlap dunlapg@umich.edu wrote:
But coming back down from the ivory tower,
Thanks for steering us back on track =)
So the question is whether these are worth the cost of having Yet Another Tool.
Right, that is a good way to put it. There's no question that rpm-ostree is another tool. However I do want to strongly emphasize something that's implicit in the "rpm-ostree" name: I see the ostree technology as joining an ecosystem, not as entirely replacing things.
Concretely, on every rpm-ostree generated tree, "rpm -qa" works. Which if you think about it is a Big Deal - there's an incredible amount of tooling built up around RPM in this way. A simple example is: How do you report a bug? Well, Bugzilla only knows about RPMs.
And more strongly than that, the rpm-ostree server side compose tooling *enforces* that you only put RPM-based content in there.
(The OSTree side is just dumb filesystem replication, you could easily ship trees generated with a hybrid of RPM + say pip or bundler. I'd rather fix the RPM side to be capable of what people want from pip myself)
#1 may not be a killer feature for sysadmins in particular; they are likely to figure out one setup and use it on their production systems. It would probably be very much a feature for people who *develop* these "images" however; and anything which makes it so that development
Right. "ostree admin switch" is not that interesting - yet. It does demonstrate that client systems *can* have choice of software, unlike many image based upgrade systems out there. But it will be far, far more interesting when rpm-ostree supports package installation on top, then it'll act like a rebase operation.
#2 is probably somewhat of an advantage; particularly if it means that you can also atomically switch back to the previous version if it turns out you've screwed something up.
Exactly! You can, you always have two complete "trees" (kernel + /usr), that manifest as two bootloader entries. There is even now "rpm-ostree rollback":
https://github.com/cgwalters/rpm-ostree/commit/441313f9ef4dca7f6e1c683dccc35...
Whether these outweigh the disadvantage of having Yet Another Tool, I can't really tell; but the ostree idea certainly seems to have merit, and is at least worth considering.
Thanks for the feedback!
If anyone has a chance to try composing trees with rpm-ostree and runs into trouble, please don't hesitate to file github issues, or you can follow up here, or mail me directly if you prefer.
Hi guys,
Forgive me for skipping the philosophical debate ;-)
If I understand ostree correctly, it would help us in OpenNebula (and mainly the OpenNebula users) with a couple of challenges that arise somewhat frequently. The main advantage I see it solving, or at least helping greatly with, is certification of both physical and virtual platforms.
First of all, by using ostree we can certify a specific CentOS OpenNebula deployment, and it also delivers a very flexible way of upgrading the frontend/worker nodes.
The other issue is that we've seen a growing number of people using not only single VMs but multi-tiered services composed of multiple VMs. Handling upgrades and certifying applications to work with those VMs is a bit challenging, since all the images can be installed differently or can have different versions of the rpm packages. However with ostree I can imagine how it would be a very useful tool for service developers when developing/deploying/upgrading these environments.
cheers, Jaime
On Tue, Apr 8, 2014 at 4:07 PM, Mike Schmidt mike.schmidt@intello.com wrote:
I'm not going to get into the philosophical debate here, here's my totally pragmatic viewpoint:
Coming from an environment where we install hundreds of what I guess you could call appliances, running CentOS 5, I'd say at first glance OSTree would provide a stable way to handle updates in the field (when we switch to CentOS 7 later this year), where we update both the OS and our own code, as well as all the supporting cast (bind, apache, etc.) without too much risk. And anyway, if Fedora is anything to go by, CentOS 7 will boot much faster than CentOS 5, which we are using currently.
At the moment, we still use yum update, but it's not always safe, and we end up with a variety of versions in the field.
Oh, and, while I sympathize with many other sysadmins, what's one more tool, if I don't have to write it myself and it does what I don't have today? I have at least three people involved full-time in keeping our sites up to date; if I could just move one of them to other tasks, I win.
On Tue, Apr 8, 2014 at 9:10 AM, Colin Walters walters@verbum.org wrote:
Hi George,
On Tue, Apr 8, 2014 at 6:20 AM, George Dunlap dunlapg@umich.edu wrote:
But coming back down from the ivory tower,
Thanks for steering us back on track =)
So the question is whether these are worth the cost of having Yet Another Tool.
Right, that is a good way to put it. There's no question that rpm-ostree is another tool. However I do want to strongly emphasize something that's implicit in the "rpm-ostree" name: I see the ostree technology as joining an ecosystem, not as entirely replacing things.
Concretely, on every rpm-ostree generated tree, "rpm -qa" works. Which if you think about it is a Big Deal - there's an incredible amount of tooling built up around RPM in this way. A simple example is: How do you report a bug? Well, Bugzilla only knows about RPMs.
And more strongly than that, the rpm-ostree server side compose tooling *enforces* that you only put RPM-based content in there.
(The OSTree side is just dumb filesystem replication, you could easily ship trees generated with a hybrid of RPM + say pip or bundler. I'd rather fix the RPM side to be capable of what people want from pip myself)
#1 may not be a killer feature for sysadmins in particular; they are likely to figure out one setup and use it on their production systems. It would probably be very much a feature for people who *develop* these "images" however; and anything which makes it so that development
Right. "ostree admin switch" is not that interesting - yet. It does demonstrate that client systems *can* have choice of software, unlike many image based upgrade systems out there. But it will be far, far more interesting when rpm-ostree supports package installation on top, then it'll act like a rebase operation.
#2 is probably somewhat of an advantage; particularly if it means that you can also atomically switch back to the previous version if it turns out you've screwed something up.
Exactly! You can, you always have two complete "trees" (kernel + /usr), that manifest as two bootloader entries. There is even now "rpm-ostree rollback":
https://github.com/cgwalters/rpm-ostree/commit/441313f9ef4dca7f6e1c683dccc35...
Whether these outweigh the disadvantage of having Yet Another Tool, I can't really tell; but the ostree idea certainly seems to have merit, and is at least worth considering.
Thanks for the feedback!
If anyone has a chance to try composing trees with rpm-ostree and runs into trouble, please don't hesitate to file github issues, or you can follow up here, or mail me directly if you prefer.
--
Mike SCHMIDT
CTO, Intello Technologies Inc.
mike.schmidt@intello.com
Canada: 1-888-404-6261 x320 | USA: 1-888-404-6268 x320 | Mobile: 514-409-6898
www.intello.com
On Thu, Apr 10, 2014 at 10:39 AM, Jaime Melis jmelis@opennebula.org wrote:
First of all, by using ostree we can certify a specific CentOS OpenNebula deployment,
Can you describe a bit more what you mean by certify? Does that mean for example running a validated set of packages and versions? If so then yes, OSTree allows you to say "we certify commit <sha256>" which refers to a full filesystem tree, which was generated from a set of packages.
The commits can also be GPG signed.
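As a sketch of how that fits together (the key ID, repo path, and branch here are placeholders): sign at compose time, then anyone can look up and pin the exact commit being certified:

    # compose server: sign the commit when it is created
    ostree --repo=/srv/repo commit -b centos/7/x86_64/minimal --gpg-sign=0xDEADBEEF rootfs

    # anywhere: inspect the commit checksum and metadata for the "we certify commit <sha256>" statement
    ostree --repo=/srv/repo log centos/7/x86_64/minimal
    ostree --repo=/srv/repo show <sha256>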
and it also delivers a very flexible way of upgrading the frontend/worker nodes.
Right.
The other issue is that we've seen a growing number of people using not only single VMs but multi-tiered services composed of multiple VMs. Handling upgrades and certifying applications to work with those VMs is a bit challenging, since all the images can be installed differently or can have different versions of the rpm packages.
Well, if the nodes are deployed using rpm-ostree in its current form, then they are immutable, so there's no ability for each VM to drift. Now, the question is, do you need the ability to have some different package versions per node? Then it's more complicated - you can generate *multiple* trees which share storage (both on the compose server and on clients).
An example, from http://rpm-ostree.cloud.fedoraproject.org/composeui/#/:
You can be running the "fedora-atomic/rawhide/x86_64/buildmaster/base/core" tree, and do:
ostree admin switch fedora-atomic/rawhide/x86_64/buildmaster/server/docker-io
Which does an atomic swap to that tree. This shows that OSTree is a *lot* more flexible than traditional image-based systems, where client systems normally have no choice at all.
Thanks for the further explanations Colin.
Can you describe a bit more what you mean by certify? Does that mean for example running a validated set of packages and versions? If so then yes, OSTree allows you to say "we certify commit <sha256>" which refers to a full filesystem tree, which was generated from a set of packages.
Yes, that's exactly what I meant. That's kind of a big deal, because we can just deliver, as you say, a full filesystem tree that is guaranteed to work (certified). We don't do this kind of thing ourselves (the OpenNebula team) very frequently, but it's something that will definitely benefit cloud consumers.
The commits can also be GPG signed.
Even better :)
Well, if the nodes are deployed using rpm-ostree in its current form, then they are immutable, so there's no ability for each VM to drift. Now, the question is, do you need the ability to have some different package versions per node? Then it's more complicated - you can generate *multiple* trees which share storage (both on the compose server and on clients).
An example, from http://rpm-ostree.cloud.fedoraproject.org/composeui/#/:
You can be running the "fedora-atomic/rawhide/x86_64/buildmaster/base/core" tree, and do:
ostree admin switch fedora-atomic/rawhide/x86_64/buildmaster/server/docker-io
Which does an atomic swap to that tree. This shows that OSTree is a *lot* more flexible than traditional image-based systems, where client systems normally have no choice at all.
I see, that's quite powerful. I initially meant having the exact same package versions in all the nodes, so the ostree workflow would be even simpler than that, if I understood correctly. When you have a cloud service composed of multiple roles, it is quite important that the nodes are exact replicas, in order to be able to certify an application and to guarantee that an upgrade will work. So what I actually want to prevent is VM drift.
On Thu, Apr 10, 2014 at 6:24 PM, Jaime Melis jmelis@opennebula.org wrote:
Thanks for the further explanations Colin.
Can you describe a bit more what you mean by certify? Does that mean for example running a validated set of packages and versions? If so then yes, OSTree allows you to say "we certify commit <sha256>" which refers to a full filesystem tree, which was generated from a set of packages.
Yes, that's exactly what I meant. That's kind of a big deal, because we can just deliver, as you say, a full filesystem tree that is guaranteed to work (certified).
Yep, exactly.
I see, that's quite powerful. I initially meant having the exact same package versions in all the nodes, so the ostree workflow would be even simpler than that, if I understood correctly.
Right. But I suspect you will have multiple trees, even if you want the same packageset on every client. Because it's not just about content, but also about *versions*. In reality, few operating system vendors have the luxury of just having exactly one release that everyone uses. Google ChromiumOS is fairly unique in that respect.
CentOS obviously has several major versions, and I'm sure many system builders/administrators here also support multiple parallel versions.
This is where the ability of OSTree to switch dynamically back and forth between *arbitrary* trees I think is a major advantage over a lot of image-based update systems out there.
The current fedora-atomic trees are all rawhide based, but let's fast forward a bit and say OSTree gets more integrated into Fedora release engineering, and we have:
fedora-atomic/22/x86_64/buildmaster/server/virthost
Now client systems can do:
ostree admin switch fedora-atomic/rawhide/x86_64/buildmaster/server/virthost
                                  ^^^^^^^
This retains the same semantic package set, but with newer versions. This would allow people to *try* new versions on staging servers and provide feedback; and where it gets really cool is that if something goes wrong, you still have the previous tree available to reboot into. "rpm-ostree rollback" on the client will undo the switch, and you'll be back to the original kernel and userspace versions you had, with total safety.
Note: /var is shared and not rolled back - I think people providing operating systems need to clearly communicate when daemons/services perform forwards-only data migrations, like databases.
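As a sketch of the staging workflow this enables (the branch name is the one from the example above; nothing here is specific to Fedora):

# On a staging box: jump to the newer branch and reboot into it.
ostree admin switch fedora-atomic/rawhide/x86_64/buildmaster/server/virthost
systemctl reboot
# ...run your test suite against the new tree...
# If something regresses, atomically go back to the previous kernel + /usr:
rpm-ostree rollback
systemctl reboot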
On Sun, Apr 06, 2014 at 01:35:40PM -0500, Les Mikesell wrote:
Why isn't there one tool that does it right in the first place - and can work appropriately for 2 systems the same as 2,000?
People who run 2,000 systems have a lot of religion as to how to do it. In the HPC world, there are a dozen free solutions for this kind of thing, all different.
-- greg
On Fri, Apr 4, 2014 at 3:09 PM, Karanbir Singh mail-lists@karan.org wrote:
rpm-ostree: https://github.com/cgwalters/rpm-ostree ( this is what one would use to take a bunch of rpms and make them into an ostree VM, it spits out a qcow2 image )
Yep, also it is now useful on the *client* side: you can use "rpm-ostree upgrade" which at the moment just uses libostree. It will get significantly more exciting when I support adding packages on top of a base tree.
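For reference, the client-side flow today is just this (a sketch; no package layering on top yet):

# Pull and deploy the latest commit of the branch you're already on.
rpm-ostree upgrade
# The new deployment sits next to the old one until the next boot.
ostree admin status
systemctl reboot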
Also a quick note; the qcow2 image generation predates Anaconda support - it's mostly only useful until that lands. When that happens, Anaconda is the clearly correct thing to use to generate disk images.
(For example, the rpm-ostree qcow2 code hardcodes XFS and the disk size, etc.)
Details aside:
I'm quite interested to see if anyone in the CentOS community is interested in this sort of deployment model. I presented at Devconf.cz on this topic: http://www.youtube.com/watch?v=Hy0ZEHPXJ9Q
And the feedback I got was that some of the admins could really imagine using it for server farms, but not necessarily on their desktops. Given you guys are more on the server farm side, I'd like to explore that use case more. Any feedback appreciated!
On 04/04/2014 08:09 PM, Karanbir Singh wrote:
The Fedora cloud SIG has been talking about using the GNOME OSTree to deliver a potential platform. I think this is fantastic and leads in quite nicely to the conversations we've all been having about a minimal manageable image (either for virt or cloud usage, or even baremetal).
I wonder if we can make this work for CentOS-6 now and 7 when it's around? The rpm-ostree stack seems to have a systemd dep though. Who fancies taking a dig at it?
references: ostree: https://wiki.gnome.org/action/show/Projects/OSTree
rpm-ostree: https://github.com/cgwalters/rpm-ostree ( this is what one would use to take a bunch of rpms and make them into an ostree VM, it spits out a qcow2 image )
so, this happened http://projectatomic.io/ - i guess ostree just got some serious traction on the distro level.
I'll post my scripts (that build, deliver, and can instantiate an ostree image from rhel7beta1 onto kvm hosts (or any hvm host), including AWS HVM instances) once I get to a proper internet connection.
- KB
I read about that too, today. Is there any thought of a CentOS Atomic spin? Is this an open source effort by Red Hat? Or maybe a spin more like CoreOS (https://coreos.com), which looks like a different (simplified) take on the same general idea? Both Atomic and CoreOS go even further than minimal images, since they are built to do nothing but run docker containers. I'm going to give CoreOS a try to see how it's put together; there seem to be a few good ideas there.
On Wed, Apr 16, 2014 at 11:46 PM, Karanbir Singh mail-lists@karan.org wrote:
On 04/04/2014 08:09 PM, Karanbir Singh wrote:
The Fedora cloud SIG has been talking about using the GNOME OSTree to deliver a potential platform. I think this is fantastic and leads in quite nicely to the conversations we've all been having about a minimal manageable image (either for virt or cloud usage, or even baremetal).
I wonder if we can make this work for CentOS-6 now and 7 when it's around? The rpm-ostree stack seems to have a systemd dep though. Who fancies taking a dig at it?
references: ostree: https://wiki.gnome.org/action/show/Projects/OSTree
rpm-ostree: https://github.com/cgwalters/rpm-ostree ( this is what one would use to take a bunch of rpms and make them into an ostree VM, it spits out a qcow2 image )
so, this happened http://projectatomic.io/ - i guess ostree just got some serious traction on the distro level.
I'll post my scripts (that build, deliver, and can instantiate an ostree image from rhel7beta1 onto kvm hosts (or any hvm host), including AWS HVM instances) once I get to a proper internet connection.
- KB
-- Karanbir Singh +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh GnuPG Key : http://www.karan.org/publickey.asc
On Wed, Apr 16, 2014 at 10:39 PM, Mike Schmidt mike.schmidt@intello.com wrote:
I read about that too, today. Is there any thought of a CentOS Atomic spin? Is this an open source effort by Red Hat?
Of course!
Or maybe a spin more like CoreOS (https://coreos.com), which looks like a different (simplified) take on the same general idea? Both Atomic and CoreOS go even further than minimal images, since they are built to do nothing but run docker containers. I'm going to give CoreOS a try to see how it's put together; there seem to be a few good ideas there.
It's clear the CoreOS team has some great ideas and has put a lot of thought into a new model for OS+app delivery.
But what I'd say on this is that I'd like Project Atomic to closely orbit the RPM ecosystem. For example, realistically you need content that goes into base images that gets reliable security updates. The OpenSSL scenario shows the danger of just pulling arbitrary application content.
The traditional package model has been able to deliver security updates, and we need to be careful not to throw that away - while still allowing people to have the option to run complete app images from the upstream app author directly and rely on them for security updates.
Furthermore of course on the host OS side, with rpm-ostree, you're taking *only* known RPM content into the host OS. While it's true that like Docker, the OSTree delivery vehicle is content-agnostic, you might note from the very name of rpm-ostree that the tool will closely bind together the RPM world of individual packages and the OSTree world of trees. I have some pretty exciting hybrid package/tree functionality on the roadmap, so stay tuned there =)
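To illustrate what "only known RPM content" means on the compose side, here is a hedged sketch of a treefile and compose invocation; the treefile keys and repo IDs are from memory and assume CentOS yum .repo files sit next to the treefile, so treat the details as illustrative rather than canonical:

cat > centos-atomic-host.json <<'EOF'
{
    "ref": "centos-atomic/7/x86_64/standard",
    "repos": ["centos-base", "centos-updates"],
    "packages": ["kernel", "systemd", "ostree", "docker-io", "openssh-server"]
}
EOF
# Everything that ends up in the tree comes from those yum repos:
rpm-ostree compose tree --repo=/srv/repo centos-atomic-host.json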
On Thu, Apr 17, 2014 at 12:51 PM, Colin Walters walters@verbum.org wrote:
It's clear the CoreOS team has some great ideas and has put a lot of thought into a new model for OS+app delivery.
But what I'd say on this is that I'd like Project Atomic to closely orbit the RPM ecosystem.
Wouldn't it be nicer to just make the standard package manager able to do updates repeatably? That is, so you could build or update one system, test it, then tell the package manager to install or update the same set of packages at the tested versions on one or more other systems? Without needing additional infrastructure or tooling...
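For illustration, one way to approximate this with stock tools today (a sketch only; the catch, as the rest of the thread discusses, is that the repos still have to carry those exact versions):

# On the tested reference box, record the exact NVRAs...
rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | sort > manifest.txt
# ...and install precisely those on the other systems.
yum -y install $(cat manifest.txt)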
But what I'd say on this is that I'd like Project Atomic to closely orbit the RPM ecosystem. For example, realistically you need content that goes into base images that gets reliable security updates.
Agreed 100%. This is one of the things that I did not see mentioned anywhere in CoreOS documentation - no mention of a packaging ecosystem used to manage image content. That's one of the reasons I am installing a copy on libvirt to see what the insides look like. Of course, I am also installing the F20 Atomic kvm machine.
On 04/17/2014 12:51 PM, Colin Walters wrote:
On Wed, Apr 16, 2014 at 10:39 PM, Mike Schmidt mike.schmidt@intello.com wrote:
I read about that too, today. Is there any thought of a CentOS Atomic spin? Is this an open source effort by Red Hat?
Of course!
We'll very likely be working on this in the immediate future. It relies on the 7 code base, so it will likely be proof-of-concept/demo for a bit. Once we have some time to discuss it as a project we'll see what we can put together.
Or maybe a spin more like CoreOS (https://coreos.com), which looks like a different (simplified) take on the same general idea? Both Atomic and CoreOS go even further than minimal images, since they are built to do nothing but run docker containers. I'm going to give CoreOS a try to see how it's put together; there seem to be a few good ideas there.
It's clear the CoreOS team has some great ideas and has put a lot of thought into a new model for OS+app delivery.
But what I'd say on this is that I'd like Project Atomic to closely orbit the RPM ecosystem. For example, realistically you need content that goes into base images that gets reliable security updates. The OpenSSL scenario shows the danger of just pulling arbitrary application content.
To me, this is one of the easy wins. CoreOS is a separate code base. With Atomic, you keep the OS familiarity with tools and packages, and generate images via kickstart just like a normal deployment. This way it's not as drastic a workflow shift and for the most part, it helps keep things consistent, and you can track package updates the same way.
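For example, a hypothetical kickstart fragment, assuming the Anaconda ostree support mentioned earlier in the thread lands; "ostreesetup" takes the place of the usual %packages section and points the installer at a tree (the osname, remote, URL and ref below are made up):

ostreesetup --osname=centos-atomic --remote=centos-atomic \
    --url=http://mirror.example.com/repo --ref=centos-atomic/7/x86_64/standard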
The traditional package model has been able to deliver security updates, and we need to be careful not to throw that away - while still allowing people to have the option to run complete app images from the upstream app author directly and rely on them for security updates.
Furthermore of course on the host OS side, with rpm-ostree, you're taking *only* known RPM content into the host OS. While it's true that like Docker, the OSTree delivery vehicle is content-agnostic, you might note from the very name of rpm-ostree that the tool will closely bind together the RPM world of individual packages and the OSTree world of trees. I have some pretty exciting hybrid package/tree functionality on the roadmap, so stay tuned there =)
Nice teaser. Now we're going to want more details. :-P
As a side note - the deploy tool (deployproject.org) can be used to pull together a set of packages (essentially a lightweight spin), test install and update, and deploy to one or more systems using standard tools (anaconda, yum). It can also be used to script ostree creation (or vm or ISO image creation, or ...) starting from the mini spin.
-Kay
On Thu, Apr 17, 2014 at 12:51 PM, Colin Walters walters@verbum.org wrote:
It's clear the CoreOS team has some great ideas and has put a lot of thought into a new model for OS+app delivery.
But what I'd say on this is that I'd like Project Atomic to closely orbit the RPM ecosystem.
Wouldn't it be nicer to just make the standard package manager able to do updates repeatably? That is, so you could build or update one system, test it, then tell the package manager to install or update the same set of packages at the tested versions on one or more other systems? Without needing additional infrastructure or tooling...
On Tue, Apr 22, 2014 at 11:49 AM, Kay Williams kay@deployproject.org wrote:
As a side note - the deploy tool (deployproject.org) can be used to pull together a set of packages (essentially a lightweight spin), test install and update, and deploy to one or more systems using standard tools (anaconda, yum). It can also be used to script ostree creation (or vm or ISO image creation, or ...) starting from the mini spin.
Does the deploy tool make reproducible copies? And if so, does it require a frozen copy of a repository or the specific packages to do it? That is, if you have 6 different types of servers that are updated on different schedules, would you have to maintain 6 different repository mirrors, each frozen at the time each instance was tested and kept for as long as you might want to reproduce that tested system?
Or does this even do updates at all?
You would use deploy to create an individual repository (mini spin) for each server. At that point, content would be frozen. When you wanted to update the repository for that server, you would run deploy again to integrate, test and release new packages.
On Tue, Apr 22, 2014 at 11:08 AM, Les Mikesell lesmikesell@gmail.com wrote:
On Tue, Apr 22, 2014 at 11:49 AM, Kay Williams kay@deployproject.org wrote:
As a side note - the deploy tool (deployproject.org) can be used to pull together a set of packages (essentially a lightweight spin), test install and update, and deploy to one or more systems using standard tools (anaconda, yum). It can also be used to script ostree creation (or vm or ISO image creation, or ...) starting from the mini spin.
Does the deploy tool make reproducible copies? And if so, does it require a frozen copy of a repository or the specific packages to do it? That is, if you have 6 different types of servers that are updated on different schedules, would you have to maintain 6 different repository mirrors, each frozen at the time each instance was tested and kept for as long as you might want to reproduce that tested system?
Or does this even do updates at all?
On Tue, Apr 22, 2014 at 1:54 PM, Kay Williams kay@deployproject.org wrote:
You would use deploy to create an individual repository (mini spin) for each server. At that point, content would be frozen.
So you do have to keep a frozen copy of every state that you might want to reproduce later? Instead of just tracking package versions?
When you wanted to update the repository for that server, you would run deploy again to integrate, test and release new packages.
I'm missing how this is supposed to make things easier or better than a 'yum install _big_list_of_packages_'.
We should really move further discussion to the deploy-users list, as it is a separate project. But briefly, deploy makes it easy both to create the _big_list_of_packages, and to automate testing, installation, and updates using it. We keep a repository of physical packages (rather than just a list of package names/versions/repo_locations) both because it is easier to work with and because we create some packages automatically as we go. The differences between repository versions could be optimized for disk space usage using hardlinks, and this is something we have considered and could implement relatively easily....
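As a generic illustration of that hardlink idea (not deploy's actual commands; the paths are made up):

# Snapshot a tested repo without duplicating package payloads...
cp -al /srv/repos/webfarm/2014-04-01 /srv/repos/webfarm/2014-04-22
# ...replace only the packages that actually changed, then rebuild metadata.
createrepo /srv/repos/webfarm/2014-04-22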
On Tue, Apr 22, 2014 at 12:59 PM, Les Mikesell lesmikesell@gmail.com wrote:
On Tue, Apr 22, 2014 at 1:54 PM, Kay Williams kay@deployproject.org wrote:
You would use deploy to create an individual repository (mini spin) for each server. At that point, content would be frozen.
So you do have to keep a frozen copy of every state that you might want to reproduce later? Instead of just tracking package versions?
When you wanted to update the repository for that server, you would run deploy again to integrate, test and release new packages.
I'm missing how this is supposed to make things easier or better than a 'yum install _big_list_of_packages_'.
On Tue, Apr 22, 2014 at 5:22 PM, Kay Williams kay@deployproject.org wrote:
We should really move further discussion to the deploy-users list, as it is a separate project. But briefly, deploy makes it easy both to create the _big_list_of_packages, and to automate testing, installation, and updates using it. We keep a repository of physical packages (rather than just a list of package names/versions/repo_locations) both because it is easier to work with and because we create some packages automatically as we go. The differences between repository versions could be optimized for disk space usage using hardlinks, and this is something we have considered and could implement relatively easily....
So even if you had dozens of deployments differing only by a few packages you would still require a full copy of everything in each of those states - kept for as long as you might want another deployment? I was hoping to find something that started with a more sensible premise.
Actually, it is really a minor difference whether we make full copies of packages between repositories or hard link them. Truth be told, we defaulted to hard linking in the past, but thought people might prefer full copies (even at the cost of extra disk space) to absolutely guarantee that changes in one repository don't affect another. Sounds like we were wrong for at least one person. :-)
On Tue, Apr 22, 2014 at 3:28 PM, Les Mikesell lesmikesell@gmail.com wrote:
On Tue, Apr 22, 2014 at 5:22 PM, Kay Williams kay@deployproject.org wrote:
We should really move further discussion to the deploy-users list, as it is a separate project. But briefly, deploy makes it easy both to create the _big_list_of_packages, and to automate testing, installation, and updates using it. We keep a repository of physical packages (rather than just a list of package names/versions/repo_locations) both because it is easier to work with and because we create some packages automatically as we go. The differences between repository versions could be optimized for disk space usage using hardlinks, and this is something we have considered and
could implement relatively easily....
So even if you had dozens of deployments differing only by a few packages you would still require a full copy of everything in each of those states - kept for as long as you might want another deployment? I was hoping to find something that started with a more sensible premise.
-- Les Mikesell lesmikesell@gmail.com
On Tue, Apr 22, 2014 at 6:48 PM, Kay Williams kay@deployproject.org wrote:
Actually, it is really a minor difference whether we make full copies of packages between repositories or hard link them. Truth be told, we defaulted to hard linking in the past, but thought people might prefer full copies (even at the cost of extra disk space) to absolutely guarantee that changes in one repository don't affect another. Sounds like we were wrong for at least one person. :-)
It still seems like an inordinate amount of infrastructure setup and maintenance, given that the packages are already versioned. Why can't you cache all the versions you might need together or pull them directly from the original repositories as needed? It sounds as awkward as having to save a full copy of a source tree for every edit instead of using sensible versioning.
It's pretty quick, easy, and efficient, really. Minimal infrastructure, lots of caching. How this all works is covered in the project documentation (deployproject.org/docs).
We can discuss further, but let's *do* move this offline or to one of the project mailing lists (deployproject.org/lists/listinfo).
On Tue, Apr 22, 2014 at 5:28 PM, Les Mikesell lesmikesell@gmail.com wrote:
On Tue, Apr 22, 2014 at 6:48 PM, Kay Williams kay@deployproject.org wrote:
Actually, it is really a minor difference whether we make full copies of packages between repositories or hard link them. Truth be told, we defaulted to hard linking in the past, but thought people might prefer full copies (even at the cost of extra disk space) to absolutely guarantee that changes in one repository don't affect another. Sounds like we were wrong for at least one person. :-)
It still seems like an inordinate amount of infrastructure setup and maintenance, given that the packages are already versioned. Why can't you cache all the versions you might need together or pull them directly from the original repositories as needed? It sounds as awkward as having to save a full copy of a source tree for every edit instead of using sensible versioning.
On Wed, Apr 23, 2014 at 11:46 AM, Kay Williams kay@deployproject.org wrote:
It's pretty quick, easy, and efficient, really. Minimal infrastructure, lots of caching. How this all works is covered in the project documentation (deployproject.org/docs).
We can discuss further, but let's *do* move this offline or to one of the project mailing lists (deployproject.org/lists/listinfo).
I'm not sure there is anything to discuss. What I'm really looking for is a project that doesn't start with the idea that keeping a bazillion snapshot copies of already-versioned packages is easy and efficient. Or that you will only ever have one image in production. Are there any such things that can use the inherent package versioning to do reproducible installs and updates?