hi,
Yesterday a few of us met for a face-to-face walk-through of the CentOS Container Pipeline we've been talking about. The aim was to re-scope the upstream projects we can lean on, share code with, and help - and then find the glue pieces that could bring this code base together.
The larger picture effectively boils down to:
- Find the components needed to track code git repos (either in git.centos.org or elsewhere).
- Find the components needed to build that code, containerise it, push it through a test process, and then deliver the containers either locally into a CentOS transitional container registry, to a CDN-like wider CentOS registry, or, if the user so desires, to a third-party registry (provided a good process can be found to handle the credentials needed).
- We'd want to use this pipeline internally for CentOS Linux components (e.g. a LAMP container from CentOS Linux 7) and for SIG components via CBS output (e.g. the SCLo SIG folks shipping containers for their content), as well as open it up via a trivial UI to anyone in the community who'd like to come and consume this pipeline.
The key piece we didn't have clarity on was the orchestration and glue that could bind the various software components together. Vaclav Pavlin brought up that we might be able to use OpenShift templates to get the job runs done - if the input for the templates could be derived from the cccp-index, either via the JJB work already done or by writing a new filter into JJB, we might be able to deliver a fairly scalable solution without needing to own any piece of the overall code.
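As a purely illustrative sketch (the real cccp-index format may well differ), each index entry would only need to carry enough metadata to fill in a template's parameters for one job, along these lines:

# Hypothetical cccp-index entry; every field name here is illustrative only.
projects:
  - app-id: centos                 # namespace for the resulting image
    job-id: httpd                  # job / image name
    git-url: https://git.centos.org/example/httpd-container.git   # placeholder repo
    git-branch: master
    target-file: Dockerfile        # file to build from
    desired-tag: latest            # tag to publish on success
    notify-email: maintainer@example.org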
Additionally, there is fair interest from the Fedora team (i.e. Adam!), who are working on the same problem in their space, mostly consuming the identical code stack, within and for their own infra, constraints and aims.
Over the coming days we are going to try to work through PoCs, get some infra set up, and trial-run some of the user stories we want to execute on.
Finally, a quick shout-out to everyone for coming together for a pretty much last-minute, ad-hoc meeting - Fabian Arrotin, Brian Stinson, Christoph Goern, Vaclav Pavlin, Aaron Weitekamp, Tomas Tomecek, Adam Miller, Dusty Mabe, Honza Horak and Radek Vokal, as well as Tim Waugh, Bama Charan and Mohammed Zeeshan for dialing into the meeting.
And to everyone else: want to build containers with us? Come talk to us - we'd love to include as many user stories as possible to get good scope before we start implementing bits.
regards
On Sun, Feb 7, 2016 at 3:22 AM, Karanbir Singh kbsingh@centos.org wrote:
What do you mean by "containers"? When I hear that word, everything from Docker and Rocket images to virtual machine images, and more, comes to mind.
Troy
On 02/07/2016 08:29 AM, Troy Dawson wrote:
What do you mean by "containers"? When I hear that word, everything from Docker and Rocket images to virtual machine images, and more, comes to mind.
Well, I think that depends on the group who wants the container.
For now, I would think we would orchestrate Docker containers that work with Atomic, and containers that work in OpenShift Origin, as starting points.
But I would also think that SIGs can support other container types as well, if they choose to.
Very fluid at this point, I think.
Hi,
Vaclav presented the build pipeline very nicely, and this would take a lot of the burden of building the code and checking code standards and test cases off the developer.
I would like to add a few points on this.
On Tue, Feb 9, 2016 at 8:08 PM, Vaclav Pavlin vpavlin@redhat.com wrote:
Hi all,
As KB wrote, I brought up the idea of using OpenShift as glue (i.e. a workflow controller). The result can be found here: https://github.com/vpavlin/cccp-demo-openshift
TL;DR:
The repository contains an OpenShift Template defining the workflow - build, test, delivery - and (very poorly) implements the steps through Docker images (i.e. Dockerfiles and run scripts).
The developer should only need to do a git push to their VCS, and this should trigger the build process in the pipeline.
In this TDD process, each environment (build, test, delivery) would be created as a container and destroyed once its step is over. As output, this would push an application runtime, along with the successfully built application code, to the registry.
As you mentioned, this would be tagged with the test tag along with the Jenkins build id, so that a developer or QA can trace which commit a given image was built from.
Then, for the next stages, the successfully built image would be deployed to an OpenShift instance to go through the test and delivery stage checks, along with the quality gates.
All the stages should be linked into the pipeline and should be easily reproducible, so that anyone can check or reproduce issues instantly.
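To make the tagging concrete, here is a rough sketch of how a build's output image could encode the stage and the Jenkins build id - the registry name and tag format here are only placeholders, not the actual project config:

# Illustrative OpenShift BuildConfig output section (placeholder values).
output:
  to:
    kind: DockerImage
    name: registry.example.org/myapp:test-jenkins-42   # "test" stage plus Jenkins build id in the tag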
It's easily runnable in Vagrant with the Project Atomic Developer Bundle.
If you are interested in more info, I'd suggest reading the README in the repo; I hope it summarizes things clearly.
It's a very minimal demo, but I think it suggests the path that could take us to unicorn land quite well :).
Let me know if you have any questions, suggestions or requests for guidance, in case anybody decides to take this further.
I would like to take this further; please let me know if my thought process is in line with yours, or if you have any changes or suggestions.
A few weeks back I created a similar system (without Vagrant) and it was very well received by the developers.
Thanks Bamacharan
Cheers, Vašek
On 10/02/16 07:29, Bamacharan Kundu wrote:
On Tue, Feb 9, 2016 at 8:08 PM, Vaclav Pavlin <vpavlin@redhat.com> wrote:
The developer should only need to do a git push to their VCS, and this should trigger the build process in the pipeline.
In an on-prem story that would map well, but note that we're aiming to run a hosted service with a distinct UI (even if the UI is no UI).
All the stages should be linked into the pipeline and should be easily reproducible, so that anyone can check or reproduce issues instantly.
Add another dimension there - a collection of related containers, i.e. the entire microservice, should be reproducible.
I would like to take this further; please let me know if my thought process is in line with yours, or if you have any changes or suggestions.
We need to work through what's needed to integrate with the cccp-index content, and then map that back to deliverables. I had asked Zeeshan to look at the registry side for the delivery space; I'm not sure how far he's gotten with that.
Once we have the end-to-end scaffolding in place, we should then work through a few use cases for the service side and look at the scope of development needed from that point on.
-- Karanbir Singh, Project Lead, The CentOS Project | +44-207-0999389 | http://www.centos.org/ | twitter.com/CentOS | GnuPG Key: http://www.karan.org/publickey.asc
On Wed, Feb 10, 2016 at 4:30 PM, Karanbir Singh kbsingh@centos.org wrote:
In an on-prem story that would map well, but note that we're aiming to run a hosted service with a distinct UI (even if the UI is no UI).
Yes, now I get it. My thought was to minimize the number of Dockerfiles so that users don't get confused.
Add another dimension there - a collection of related containers, i.e. the entire microservice, should be reproducible.
This means the system needs to maintain all the linking and volume sharing of the components.
We need to work through what's needed to integrate with the cccp-index content, and then map that back to deliverables. I had asked Zeeshan to look at the registry side for the delivery space; I'm not sure how far he's gotten with that.
I believe I should look at integration with the cccp-index content?
Regards Bamacharan
Hi Bamacharan,
I'd be careful with per-commit builds in the case of building from a Dockerfile, as it takes time and resources (presumably a lot of both) because we have to build in a clean environment and with --no-cache.
I am not sure what you mean by "built image would be deployed to an OpenShift instance".
My idea would be to go with the YAML files I saw in cccp-index and rtnpro's example repo as the UI for now - keep the code as small as possible. Hook it up to my example with some scripting, set up OpenShift and a registry, and try to get the whole workflow working.
As a developer, I want to add a YAML file to my repo and submit my repo URL somewhere, so that it gets rebuilt, tested and pushed to a given registry regularly (say 4 times a day for a start). I also want to be notified about new builds and test results.
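Just to make that concrete, such a per-repo file could be as small as the sketch below - every field name here is invented for illustration, not a fixed format:

# Hypothetical per-repo YAML matching the user story above; field names are made up.
job-id: my-app
target-registry: registry.example.org     # where the built image should be pushed
desired-tag: latest
test-script: ./run-tests.sh               # run inside the test stage container
build-frequency: 4-per-day                # how often to rebuild and retest
notify-email: developer@example.org       # where build and test results go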
Done:-) We can polish it later.
Makes sense?
Vašek
Hi Vaclav,
On Wed, Feb 10, 2016 at 6:53 PM, Vaclav Pavlin vpavlin@redhat.com wrote:
Hi Bamacharan,
I'd be careful with per-commit builds in the case of building from a Dockerfile, as it takes time and resources (presumably a lot of both) because we have to build in a clean environment and with --no-cache.
Why don't we take a layer-based approach? We could take the base image from the local system and then build on it, which would save a lot of time. I saw multiple Dockerfiles for build, test and delivery which are in turn the same image.
I am not sure what you mean by "built image would be deployed to an OpenShift instance".
I was talking about building the Docker container images in OpenShift for the build, test and delivery environments.
My idea would be to go with the YAML files I saw in cccp-index and rtnpro's example repo as the UI for now - keep the code as small as possible. Hook it up to my example with some scripting, set up OpenShift and a registry, and try to get the whole workflow working.
Yes, I was going through the same. I tried building the example you put up, and I am going through the cccp-index and YAML file to integrate them with your example.
As a developer, I want to add a YAML file to my repo and submit my repo URL somewhere, so that it gets rebuilt, tested and pushed to a given registry regularly (say 4 times a day for a start). I also want to be notified about new builds and test results.
Yes, this YAML will help handle all the linking and container management.
Done:-) We can polish it later.
Makes sense?
Sure, going ahead with this.
Regards Bamacharan
On Wed, Feb 10, 2016 at 3:33 PM, Bamacharan Kundu bamachrn@gmail.com wrote:
Why don't we take a layer-based approach? We could take the base image from the local system and then build on it, which would save a lot of time. I saw multiple Dockerfiles for build, test and delivery which are in turn the same image.
To un-confuse people who didn't see our IRC conversation: the Dockerfiles in the cccp-demo-openshift repo represent the containers which implement the individual steps of the workflow - not the images/containers which are tested and delivered.
To answer "Why don't we take a layer-based approach": we do - building from a Dockerfile follows the layered approach. The problem I am trying to emphasize is that we need a clean environment for every build, otherwise we could introduce inconsistency again. But I think this could be easily solved by using Atomic Reactor instead of my custom hacky script :-)
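For reference, OpenShift's Docker build strategy can ask for exactly that kind of clean build; a minimal sketch, where all names and URLs are placeholders:

apiVersion: v1
kind: BuildConfig
metadata:
  name: example-app                # placeholder name
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/app.git        # placeholder repo
  strategy:
    type: Docker
    dockerStrategy:
      noCache: true                # force a clean build, reusing no cached layers
  output:
    to:
      kind: DockerImage
      name: registry.example.org/example/app:latest  # placeholder registry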
Cheers, Vašek
Hey, any progress here?
Vašek
Hi Vašek, Yes, I have created a pipeline with a Node.js based application (https://github.com/bamachrn/cccp-demo-test) using the OpenShift build system. The OpenShift template takes the git repo URI as input, then builds and tests within containers. As output, it pushes the ready-to-run containers to the registry.
I went through Atomic Reactor for reading the index.yaml and cccp.yaml written by rtnpro and kbsingh.
I am currently working to understand the OpenShift template and build config so I can automate builds on git push or on a schedule.
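Roughly, the trigger section I am looking at is along these lines (the secret values are placeholders); OpenShift also offers a Generic webhook which can be invoked from any host that can reach the instance, e.g. by a cron or Jenkins job, for scheduled rebuilds:

# Illustrative BuildConfig triggers; secret values are placeholders.
triggers:
  - type: GitHub
    github:
      secret: replace-with-github-webhook-secret
  - type: Generic
    generic:
      secret: replace-with-generic-webhook-secret   # can be POSTed to from inside the network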
Today I am trying to trigger builds with a GitHub webhook. This is not triggering the build because my IP is on a private network. I am also waiting for ci.centos.org access to use Jenkins in the build process.
Please suggest if there is another way to achieve this.
Thanks Bamacharan
On Tue, Feb 16, 2016 at 2:15 PM, Bamacharan Kundu bamachrn@gmail.com wrote:
I am currently working to understand the OpenShift template and build config so I can automate builds on git push or on a schedule.
Feel free to shoot questions; I've spent quite some time figuring out how these work :)
Today I am trying to trigger builds with a GitHub webhook. This is not triggering the build because my IP is on a private network. I am also waiting for ci.centos.org access to use Jenkins in the build process.
Yeah, you'll need a public IP/DNS name to be able to trigger hooks; I don't think there is a workaround for this (well, apart from running your own GitLab on the same machine or something like that).
Please suggest if there is another way to achieve this.
Thanks Bamacharan
Cheers, Vašek
On 16/02/16 13:15, Bamacharan Kundu wrote:
Today I am trying to trigger builds with a GitHub webhook. This is not triggering the build because my IP is on a private network. I am also waiting for ci.centos.org access to use Jenkins in the build process.
Please suggest if there is another way to achieve this.
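( As an aside on the trigger question above: an OpenShift BuildConfig can carry both a GitHub webhook trigger and a generic one; when GitHub cannot reach the cluster, the generic hook can be POSTed to from inside the network, for example from a Jenkins job, and oc start-build works as a manual fallback. A minimal sketch, with the repo, registry and secrets as placeholders rather than the actual cccp setup: )

apiVersion: v1
kind: BuildConfig
metadata:
  name: cccp-demo-test                  # placeholder name
spec:
  source:
    type: Git
    git:
      uri: https://github.com/bamachrn/cccp-demo-test
      ref: master
  strategy:
    type: Docker
    dockerStrategy: {}                  # build straight from the repo's Dockerfile
  output:
    to:
      kind: DockerImage
      name: registry.example.com/bamachrn/cccp-demo-test:test   # placeholder registry
  triggers:
  - type: GitHub                        # needs the OpenShift API to be reachable from GitHub
    github:
      secret: replace-me
  - type: Generic                       # can be hit with curl from Jenkins / inside the network
    generic:
      secret: replace-me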
Looks like we need to setup a syncup point
Thanks Bamacharan
On Tue, Feb 16, 2016 at 6:29 PM, Vaclav Pavlin <vpavlin@redhat.com> wrote:
Hey, any progress here?

Vašek

On Wed, Feb 10, 2016 at 4:19 PM, Vaclav Pavlin <vpavlin@redhat.com> wrote:

On Wed, Feb 10, 2016 at 3:33 PM, Bamacharan Kundu <bamachrn@gmail.com> wrote:

Hi Vaclav,

On Wed, Feb 10, 2016 at 6:53 PM, Vaclav Pavlin <vpavlin@redhat.com> wrote:

Hi Bamacharan,

I'd be careful with per-commit builds in the case of building from a Dockerfile, as it takes time and resources (presumably a lot of both), because we have to build in a clean env and with --no-cache.

Why don't we take a layered approach - we can take the base image from the local system and then build on it; this will save a lot of time? I saw multiple Dockerfiles for build, test and delivery which are in turn the same image only.

To un-confuse people who didn't see our IRC convo - the Dockerfiles in the cccp-demo-openshift repo represent the containers which implement individual steps of the workflow, not the images/containers which are tested and delivered. To answer "Why don't we take a layered approach": we do, building from a Dockerfile follows the layered approach. The problem I am trying to emphasize is that we need a clean environment for every build, otherwise we could introduce inconsistency again.. But I think this could be easily solved by using Atomic Reactor instead of my custom hacky script :-)

Cheers, Vašek

I am not sure what you mean by "built image would be deployed to openshift instance".

I was saying about building the docker container images in openshift for the build, test, delivery environments.

My idea would be to go with the yaml files I saw in cccp-index and rtnpro's example repo as a UI right now - keep the code as little as possible. Hook it up to my example with some scripting, set up OpenShift and a registry and try to get the whole workflow working.

Yes, I was going through the same. I tried building the example you have put up. I am going through the cccp-index and yaml file to add to the example you have put up.

I as a developer want to add a yaml file to my repo and submit my repo url somewhere, so that it gets rebuilt, tested and pushed to a given registry regularly (like 4 times a day for a start). I also want to be notified about new build and test results.

Yes, this yaml will help to handle all the linking and container management stuff.

Done :-) We can polish it later. Makes sense?

Sure, going ahead with this.

Regards
Bamacharan

On Wed, Feb 10, 2016 at 12:47 PM, Bamacharan Kundu <bamachrn@gmail.com> wrote:

On Wed, Feb 10, 2016 at 4:30 PM, Karanbir Singh <kbsingh@centos.org> wrote:

On 10/02/16 07:29, Bamacharan Kundu wrote:

> Hi,
>
> Vaclav presented the build pipeline very nicely and this would take out a lot of tension for building the code, checking the code standards and test cases from the developer.
>
> I would like to add a few points on this.
>
> On Tue, Feb 9, 2016 at 8:08 PM, Vaclav Pavlin <vpavlin@redhat.com> wrote:
>
> Hi all,
>
> As KB wrote, I brought up the idea of using OpenShift as a glue (i.e. workflow controller). The result can be found here:
>
> https://github.com/vpavlin/cccp-demo-openshift
>
> TL;DR:
>
> The repository contains an OpenShift Template defining the workflow - build, test, delivery - and (very poorly) implements the steps through Docker images (i.e. Dockerfiles and run scripts).
> The developer should only do a git push to his VCS and this should trigger the build process in the pipeline.

in an onprem story that would map well, but note that we're aiming to run a hosted service with a distinct UI ( even if the UI is no UI )

Yes, now I got it. I had a thought to minimize the number of Dockerfiles, so that the user does not get confused.

> In this TDD process all the environments (including build, test, delivery) would be created as containers, and once the step is over the environment is destroyed. As output this generates an application runtime, along with the successfully built application code, pushed to the registry.
>
> As you mentioned, this would be tagged "test" along with the Jenkins build id, so that a developer or QA can trace which commit it was built from.
>
> Then for the next stages, the successfully built image would be deployed to an openshift instance to go through the test and delivery stage checks, along with the quality gates.
>
> All the stages should be linked to the pipeline and should be easily reproducible, so that anyone can check or regenerate the issues instantly.

add another dimension there - a collection of related containers, ie. the entire microservice, should be reproducible.

This means the system needs to maintain all the linking and volume sharing of the components.

> It's easily runnable in Vagrant with use of the Project Atomic Developer Bundle.
>
> If you are interested in more info, I'd suggest reading the readme in the repo; I hope it summarizes it clearly.
>
> It's a very minimal demo, but I think it suggests the path that could take us to the Unicorns land quite well :).
>
> Let me know in case of any questions, suggestions or requests for guidance, in case anybody decides to take this further.
>
> I would like to take this further; please let me know if my thought process is along the same lines as yours, or suggest any changes.

we need to work through whats needed to now integrate with the cccp-index content, and then map that back to deliverables. I had asked Zeeshan to look at the registry side for the delivery space; unsure how far he's gotten with that.

I believe I should look at integrating with the cccp-index content?

Regards
Bamacharan
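( To make the "developer adds a yaml file to his repo and submits the repo url" story quoted above concrete, such a file might carry little more than where the code lives, where the result should go and how often to rebuild. The field names below are illustrative guesses, not the actual cccp.yaml schema: )

# hypothetical per-repo build description; field names are illustrative,
# not the real cccp.yaml
app-id: bamachrn                        # namespace the image is delivered under
job-id: cccp-demo-test                  # short name for the image / jenkins job
git-url: https://github.com/bamachrn/cccp-demo-test
git-branch: master
dockerfile: Dockerfile                  # path inside the repo
desired-tag: latest
rebuild-frequency: 6h                   # "4 times a day for a start", as discussed above
notify-email: developer@example.com     # where build and test results should go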
Hi Vašek, KB, could we meet over BlueJeans some time? I am confused about what we are trying to achieve and why.
Thanks Bamacharan
I was informed by a few people that my how-to at https://github.com/vpavlin/cccp-demo-openshift does not work with ADB 1.7.0, so I fixed it.
Adam, there is an oc command in the ADB box... try to just copy the command snippets to the terminal and you should be good.. don't skip anything :-P
Vašek
On 10/02/16 11:47, Bamacharan Kundu wrote:
we need to work through whats needed to now integrate with the cccp-index content, and then map that back to deliverables. I had asked Zeeshan to look at registry side for delivery space, unsure how far he's gotten with that.
I believe I should look at integrating with the cccp-index content?
What might be worth doing is to take this PoC and bring it up as a service somewhere, and get Ratnadeep/Praveen to do the same for the cccp-index work they've done so far.
we might need to look at the JJB code and see what mods are needed there ( ideally abstracted from the user ).
In the meantime I will circle back with Fabian and Brian and try to bring up some infra we can use for this work, ideally in DevCloud.
Regards
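( To give a rough feel for the JJB side mentioned above: the mods would presumably boil down to a job-template whose parameters get filled from the index, so that every tracked repo becomes a Jenkins job that simply pokes the pipeline. A sketch under that assumption, with the webhook URL and all names as placeholders: )

# hypothetical jenkins-job-builder template; the parameters would be filled
# in from the index entries by whatever generates the job list
- job-template:
    name: 'cccp-{app-id}-{job-id}'
    scm:
      - git:
          url: '{git-url}'
          branches:
            - '{git-branch}'
    triggers:
      - pollscm: 'H/30 * * * *'         # fall back to polling until webhooks are wired up
    builders:
      - shell: |
          # kick the OpenShift build/test/delivery workflow for this repo;
          # the URL is a placeholder for the BuildConfig's generic webhook
          curl -s -X POST "$CCCP_GENERIC_WEBHOOK_URL"

# one block like this per tracked repo, hand written or generated from the index
- project:
    name: cccp-demo
    app-id: bamachrn
    job-id: cccp-demo-test
    git-url: https://github.com/bamachrn/cccp-demo-test
    git-branch: master
    jobs:
      - 'cccp-{app-id}-{job-id}'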
On 07/02/16 14:29, Troy Dawson wrote:
What do you mean by "containers"? When I hear that word, everything from docker and rocket images to virtual machine images, and more, comes to mind.
pretty much all of the above
Hi all,
As KB wrote, I brought up the idea of using OpenShift as a glue (i.e. workflow controller). The result can be found here:
https://github.com/vpavlin/cccp-demo-openshift
TL;DR:
The repository contains an OpenShift Template defining the workflow - build, test, delivery - and (very poorly) implements the steps through Docker images (i.e. Dockerfiles and run scripts).
It's easily runnable in Vagrant with the use of the Project Atomic Developer Bundle.
If you are interested in more info, I'd suggest reading the readme in the repo; I hope it summarizes it clearly.
It's a very minimal demo, but I think it suggests the path that could take us to the Unicorns land quite well :).
Let me know in case of any questions, suggestions or requests for guidance, in case anybody decides to take this further.
Cheers, Vašek
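( For readers who do not want to open the repo: an OpenShift template of this kind is essentially a parameterised list of objects, so build, test and delivery can each be a BuildConfig instantiated per project. The skeleton below is a simplified illustration of that idea, not a copy of the cccp-demo-openshift template: )

apiVersion: v1
kind: Template
metadata:
  name: cccp-pipeline                   # placeholder name
parameters:
- name: SOURCE_REPOSITORY_URL
  description: git repo of the project to push through the pipeline
  required: true
objects:
# stage 1: build the candidate image from the project's Dockerfile
- apiVersion: v1
  kind: BuildConfig
  metadata:
    name: build
  spec:
    source:
      type: Git
      git:
        uri: ${SOURCE_REPOSITORY_URL}
    strategy:
      type: Docker
      dockerStrategy: {}
    output:
      to:
        kind: ImageStreamTag            # assumes an ImageStream named "candidate" exists
        name: candidate:latest
# stage 2 (test) and stage 3 (delivery) would be further BuildConfigs of the
# same shape, chained off the candidate image and pushing to the registry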
We had a call earlier today to try and bring the big picture together, and it looks like the pieces are falling into place.
The public-index-to-Jenkins-job process is complete, as well as an end-to-end setup for the actual build, test and delivery part. The final stage of the process, the container registry, also seems to be coming together. The plan now is to get the pieces together, chain them into an end-to-end service running from DevCloud, and start consuming some user stories / build runs. As we migrate pieces into a production environment, this setup in the DevCloud can become our test/dev/pre-prod staging environment.
Here is what we are doing for the next target:
Vasek: to find a few Dockerfiles we can push through, ideally a few different kinds, to also include at least one Nulecule app.
Ratnadeep/Praveen: to set up the Jenkins, JJB, cccp-index process ( and provide a template ( .yml ? ) that would go into the user git repos ), and also own the trigger piece for the build process that Bama Charan is working on ( a rough sketch of the index entries this process would consume follows this list ).
Bama Charan: to set up an OpenShift instance that can build, run some tests ( even if the test at this point is 1+1=2 ), and do a delivery.
Zeeshan: to set up a docker registry for the internal push. Initially he will quickly bring up an upstream docker registry on a machine, but then replace it with pulp/crane asap. As a next step, containerise and run the pulp/crane stack from containers built in the pipeline.
KB: try and keep the conversation going, make sure infra is available, ensure the user story lines up with delivery
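( As an illustration of the index side of that Jenkins / JJB / cccp-index process, an entry per user repo might carry little more than the following; the field names are guesses for illustration, not the actual cccp-index format: )

# hypothetical index file: one entry per user git repo pulled into the pipeline
projects:
  - app-id: bamachrn                    # namespace / registry prefix for the delivered image
    job-id: cccp-demo-test              # becomes the jenkins job and image name
    git-url: https://github.com/bamachrn/cccp-demo-test
    git-branch: master
    git-path: /                         # directory holding the Dockerfile and the per-repo .yml
    desired-tag: latest
    notify-email: developer@example.com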
We aim to have this done for the week starting Mon 29th Feb.
As the PaaS SIG comes up, we will try and align ourselves with that group, and try to consume the openshift codebase curated in that group.
regards
On Wed, Feb 24, 2016 at 01:42:24AM +0000, Karanbir Singh wrote:
Vasek: to find a few Dockerfiles we can push through, ideally a few different kinds, to also include atleast one nulecule app.
I happened to run into Vasek yesterday, and one idea we had was possibly using Copr as source for this. I think it'd be pretty interesting if one could add a Dockerfile to a Copr, and then have a checkbox next to the build button which makes the build also feed into the CentOS container pipeline and have a container come out the other end. Since Copr already supports both Fedora and EL build targets, this seems like an interesting possibility for collaboration. I haven't sold this idea to the Copr devs yet, but I'm curious what you think.
On that note, what ever happened to Dopr? I thought it was supposed to cater to that.
-AdamM
On Wed, Mar 30, 2016 at 05:24:59PM -0500, Adam Miller wrote:
On that note, what ever happened to Dopr? I thought it was supposed to cater to that.
https://fedorahosted.org/council/ticket/38#comment:30
From msuchy:
"We had internal discussion and we decided that the maintenance overhead would be too big. So we canceled this one. We will introduce something similar, but not with DockerHub, but with OSBS in future."
Yes, the idea is rather to integrate COPR with whichever OSBS-based build service/pipeline is available.
Vašek
On 31/03/16 05:43, Vaclav Pavlin wrote:
Yes, the idea is to rather integrate COPR with whichever OSBS based build service/pipeline is available.
if we are able to find a trusted path between the copr builders and the container pipeline, we should be able to support consuming copr-built rpms in the pipeline as well. We'd just need to work out the mechanics around how to track the upstream copr repos in that case. No idea how we do that, but we should look into it.
regards
On 25/02/16 16:27, Matthew Miller wrote:
I happened to run into Vasek yesterday, and one idea we had was possibly using Copr as source for this. I think it'd be pretty interesting if one could add a Dockerfile to a Copr, and then have a checkbox next to the build button which makes the build also feed into the CentOS container pipeline and have a container come out the other end. Since Copr already supports both Fedora and EL build targets, this seems like an interesting possibility for collaboration. I haven't sold this idea to the Copr devs yet, but I'm curious what you think.
Not sure what role Copr might be playing here, since the pipeline does builds already.