Greetings CentOS-devel,
We here at Inktank (and the Ceph project) are quite interested in the direction that CentOS is headed and would love to propose a 'Storage SIG' to help ensure that something so fundamental gets the proper attention.
There are definitely aspects that will overlap with Core, Cloud, and Virtualization, but we think there are plenty of places where storage should have its own discourse. We are already building CentOS packages for Ceph and would like to encourage other storage offerings to do the same.
We are definitely in alignment with the requirements and want to ensure that:
1) the discourse remains primarily CentOS related, or at least tangentially applicable.
2) there is plenty of feedback and control into, and out of, the core CentOS community.
3) all communications take place in public (either IRC or mailing lists).
4) all code is compatible with a FOSS license (in our case, LGPL v2 w/ fragmented copyright).
5) all documentation is compatible with the CentOS wiki license.
6) the storage SIG would be very mindful of CentOS direction and coordinate effectively.
7) we have the appropriate buy-in from the CentOS devteam.
For a (very rough) outline of form and function I was thinking of the following. However, this will obviously grow and mature with feedback.
Mission: To ensure that CentOS is at the forefront of next-generation storage technology and has a wide range of mature options for both traditional and distributed systems.

Technical Goals
* Assist with packaging and testing of storage technologies for CentOS repositories
* Document paths to deployment and use for various storage technologies
* Hold an open discourse around storage technology as it applies to CentOS
* Work closely with providers of storage technology to ensure tight integration with CentOS

Community Goals
* Provide an easy-to-consume menu of storage and deployment/orchestration options
* Act as a resource for those exploring next-generation storage technology
* Ensure that all storage technologies are addressed equally, without favoritism
* Work with users to publish use cases and real-world examples for review
I welcome any and all feedback and look forward to working with the CentOS community. Thanks.
Best Regards,
Patrick McGarry
Director, Community || Inktank
http://ceph.com || http://inktank.com
@scuttlemonkey || @ceph || @inktank
Hi Patrick,
On 01/21/2014 04:22 PM, Patrick McGarry wrote:
I welcome any and all feedback and look forward to working with the CentOS community. Thanks.
Thank you for this well-presented SIG proposal; I think there is certainly scope for us to do something on the storage side as well.
Let me get back to you by the weekend and flesh out the points you raised.
Hey Karanbir et al,
Just wanted to check on this and see if anyone had thoughts on the formation of a storage SIG? I'm also usually idling on freenode (and in #centos) as 'scuttlemonkey' if someone would like to discuss this further. Thanks.
Best Regards,
Patrick McGarry
Director, Community || Inktank
http://ceph.com || http://inktank.com
@scuttlemonkey || @ceph || @inktank
On Wed, Jan 22, 2014 at 12:03 PM, Karanbir Singh mail-lists@karan.org wrote:
Hi Patrick,
On 01/21/2014 04:22 PM, Patrick McGarry wrote:
I welcome any and all feedback and look forward to working with the CentOS community. Thanks.
Thank you for this well-presented SIG proposal; I think there is certainly scope for us to do something on the storage side as well.
Let me get back to you by the weekend and flesh out the points you raised.
--
Karanbir Singh
+44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
GnuPG Key : http://www.karan.org/publickey.asc
Hi Patrick,
On 02/03/2014 02:31 PM, Patrick McGarry wrote:
Just wanted to check on this and see if anyone had thoughts on the formation of a storage SIG? I'm also usually idling on freenode (and in #centos) as 'scuttlemonkey' if someone would like to discuss this further. Thanks.
What we need to do is get the proposal done, put it in front of the board for approval, and then start working to import content.
I know that Ceph easily separates the cluster end from the client side; it might then be easier to consider one part of the equation before the other?
At FOSDEM I did manage to have a series of conversations with people around the idea of shared code across SIGs (the sort of thing we might need to look into for the qemu config changes needed to make Ceph work on the client side), and it's going to be a harder problem to solve.
How are you placed to consider the storage and client sides of Ceph as independent components?
Finally, I presume you are stepping up to be the Storage SIG representative :)
- KB
The bulk of the Ceph packaging is server-side only. The main component that creates the client dependencies is the librbd package, which can be held in other repos alongside specific versions of Xen, Qemu, KVM, etc. With the RHEL-OSP/RHEV3.3-based code for qemu, there is explicit dynamic loading of the library too, so it doesn't need to be present at build time either; that might give more flexibility around the cloud repo code.
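(As an illustration of the dynamic-loading point, and not something from the packaging work itself: a host can probe for the librbd client library at run time instead of linking against it at build time. A minimal Python sketch, assuming the usual EL soname librbd.so.1 and librbd's C call rbd_version():)

    import ctypes
    import ctypes.util

    def rbd_client_available():
        """Return the librbd version tuple if the client library loads, else None."""
        name = ctypes.util.find_library("rbd") or "librbd.so.1"  # assumed soname
        try:
            librbd = ctypes.CDLL(name)
        except OSError:
            return None  # no client library installed on this host
        major, minor, extra = ctypes.c_int(), ctypes.c_int(), ctypes.c_int()
        # librbd's C API: void rbd_version(int *major, int *minor, int *extra)
        librbd.rbd_version(ctypes.byref(major), ctypes.byref(minor), ctypes.byref(extra))
        return (major.value, minor.value, extra.value)

    if __name__ == "__main__":
        print("librbd:", rbd_client_available())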
Neil
On Tue, Feb 4, 2014 at 1:00 AM, Karanbir Singh mail-lists@karan.org wrote:
Hi Patrick,
On 02/03/2014 02:31 PM, Patrick McGarry wrote:
Just wanted to check on this and see if anyone had thoughts on the formation of a storage SIG? I'm also usually idling on freenode (and in #centos) as 'scuttlemonkey' if someone would like to discuss this further. Thanks.
What we need to do is get the proposal done, put it in front of the board for approval, and then start working to import content.
I know that Ceph easily separates the cluster end from the client side; it might then be easier to consider one part of the equation before the other?
At FOSDEM I did manage to have a series of conversations with people around the idea of shared code across SIGs (the sort of thing we might need to look into for the qemu config changes needed to make Ceph work on the client side), and it's going to be a harder problem to solve.
How are you placed to consider the storage and client sides of Ceph as independent components?
Finally, I presume you are stepping up to be the Storage SIG representative :)
- KB
--
Karanbir Singh
+44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
GnuPG Key : http://www.karan.org/publickey.asc
hi Neil,
On 02/04/2014 05:55 PM, Neil Levine wrote:
The bulk of the Ceph packaging is server-side only. The main component that creates the client dependencies is the librbd package, which can be held in other repos alongside specific versions of Xen, Qemu, KVM, etc. With the RHEL-OSP/RHEV3.3-based code for qemu, there is explicit dynamic loading of the library too, so it doesn't need to be present at build time either; that might give more flexibility around the cloud repo code.
Ah, I didn't know we could ship qemu modules out like this; it might change the whole problem space (i.e. who owns and manages this qemu).
The details are buried in this advisory (search for Ceph):
http://rhn.redhat.com/errata/RHSA-2013-1754.html
My understanding is that this is not in RHEL (the product), but it is available in the RHEL platform code that RHEV and RHEL-OSP build on.
Neil
On Tue, Feb 4, 2014 at 9:59 AM, Karanbir Singh mail-lists@karan.org wrote:
hi Neil,
On 02/04/2014 05:55 PM, Neil Levine wrote:
The bulk of the Ceph packaging is server-side only. The main component that creates the client dependencies is the librbd package, which can be held in other repos alongside specific versions of Xen, Qemu, KVM, etc. With the RHEL-OSP/RHEV3.3-based code for qemu, there is explicit dynamic loading of the library too, so it doesn't need to be present at build time either; that might give more flexibility around the cloud repo code.
Ah, I didn't know we could ship qemu modules out like this; it might change the whole problem space (i.e. who owns and manages this qemu).
--
Karanbir Singh
+44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
GnuPG Key : http://www.karan.org/publickey.asc
Hey KB,
It looks like Neil beat me to most of the meaty discussion, but I am happy to be the sacrificial lamb to hold the Storage SIG reins for now. Is there anything I could do in preparation for the proposal stuff that might make the road a bit smoother?
I see that most of the work comes after devteam member involvement (new list, wiki section, etc.), and I'm happy to start populating those as soon as it makes sense. I just didn't know if there were specific proposal docs or other sundries that might help. Happy to run with whatever makes sense. Thanks.
Best Regards,
Patrick McGarry
Director, Community || Inktank
http://ceph.com || http://inktank.com
@scuttlemonkey || @ceph || @inktank
On Tue, Feb 4, 2014 at 4:00 AM, Karanbir Singh mail-lists@karan.org wrote:
Hi Patrick,
On 02/03/2014 02:31 PM, Patrick McGarry wrote:
Just wanted to check on this and see if anyone had thoughts on the formation of a storage SIG? I'm also usually idling on freenode (and in #centos) as 'scuttlemonkey' if someone would like to discuss this further. Thanks.
What we need to do is get the proposal done, put it in front of the board for approval, and then start working to import content.
I know that Ceph easily separates the cluster end from the client side; it might then be easier to consider one part of the equation before the other?
At FOSDEM I did manage to have a series of conversations with people around the idea of shared code across SIGs (the sort of thing we might need to look into for the qemu config changes needed to make Ceph work on the client side), and it's going to be a harder problem to solve.
How are you placed to consider the storage and client sides of Ceph as independent components?
Finally, I presume you are stepping up to be the Storage SIG representative :)
- KB
--
Karanbir Singh
+44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
GnuPG Key : http://www.karan.org/publickey.asc
Hi All,
I learned about the Storage SIG from the email thread below, and IMO it is a great initiative. It would help the community use storage software like Ceph and GlusterFS directly, without having to maintain it themselves.
There is a lot of interest in the GlusterFS community in getting involved with this initiative and making GlusterFS part of the Storage SIG.
We would love to work with CentOS to make it successful. The initial mail from Patrick was very clear about the objective of this initiative and was very helpful. Thanks, Patrick.
http://grokbase.com/t/centos/centos-devel/141nvt6rde/storage-sig-proposal
However, we need help from you on what the next steps for us should be. Suggestions, comments?
Thanks, Lala