On Wed, Feb 10, 2016 at 3:33 PM, Bamacharan Kundu bamachrn@gmail.com wrote:
Hi Vaclav,
On Wed, Feb 10, 2016 at 6:53 PM, Vaclav Pavlin vpavlin@redhat.com wrote:
Hi Bamacharan,
I'd be careful with per-commit builds in the case of building from a Dockerfile, as it takes time and resources (presumably a lot of both) because we have to build in a clean env and with --no-cache.
Why don't we take a layer-based approach? We could take the base image from the local system and then build on it, which would save a lot of time. I saw multiple Dockerfiles for build, test, and delivery which are in turn the same image.
To un-confuse people who didn't see our IRC convo - the Dockerfiles in the cccp-demo-openshift repo represent containers which implement the individual steps of the workflow - not the images/containers which are tested and delivered.
To answer "Why don't we take a layered based aproach": We do, building from Dockerfile follows the layered approach. The problem I am trying to emphasize is that we need clean environment for every build, otherwise we could introduce inconsistency again..But I think this could be easily solved by using Atomic Reactor instead of my custom hacky script:-)
Cheers, Vašek
I am not sure what you mean by "built image would be deployed to openshift instance".
I meant building the Docker container images in OpenShift for the build, test, and delivery environments.
My idea would be to go with the yaml files I saw in cccp-index and rtnpro's example repo as a UI for now - keep the code as small as possible. Hook it up to my example with some scripting, set up OpenShift and a registry, and try to get the whole workflow working.
Yes, I have been going through the same. I tried building the example you put up, and I am going through cccp-index and the yaml file to integrate them with your example.
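For my own reference, an index entry would look something along these lines - the field names here are only a guess, not the actual cccp-index schema:

    # Hypothetical index entry; the real cccp-index field names may differ.
    - id: example-app
      git-url: https://github.com/example/example-app
      git-branch: master
      dockerfile-path: Dockerfile
      notify-email: maintainer@example.com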
As a developer, I want to add a yaml file to my repo and submit my repo URL somewhere, so that it gets rebuilt, tested and pushed to a given registry regularly (like 4 times a day for a start). I also want to be notified about new builds and test results.
Yes, this yaml will help handle all the linking and container management.
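Something along these lines in the developer's repo could drive it (a sketch only - all the keys are hypothetical):

    # Hypothetical per-repo yaml driving the pipeline; keys are illustrative.
    build-context: .                      # where the Dockerfile lives
    test-script: ./run-tests.sh           # run inside the test container
    desired-tag: latest                   # tag pushed on successful delivery
    notify-email: maintainer@example.com  # build/test result notifications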
Done. :-) We can polish it later.
Makes sense?
Sure, going ahead with this.
Regards, Bamacharan
On Wed, Feb 10, 2016 at 12:47 PM, Bamacharan Kundu bamachrn@gmail.com wrote:
On Wed, Feb 10, 2016 at 4:30 PM, Karanbir Singh kbsingh@centos.org wrote:
On 10/02/16 07:29, Bamacharan Kundu wrote:
Hi,
Vaclav presented the build pipeline very nicely, and this would take a lot of the burden of building the code, checking code standards, and running test cases off the developer.
I would like to add a few points on this.
On Tue, Feb 9, 2016 at 8:08 PM, Vaclav Pavlin <vpavlin@redhat.com> wrote:
Hi all,
As KB wrote, I brought up the idea of using OpenShift as a glue (i.e. workflow controller). The result can be found here:
https://github.com/vpavlin/cccp-demo-openshift
TL;DR:
The repository contains an OpenShift Template defining the workflow - build, test, delivery - and (very poorly) implements the steps through Docker images (i.e. Dockerfiles and run scripts).
The developer should only have to git push to their VCS, and this should trigger the build process in the pipeline.
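As a rough sketch (not the actual template from the repo - names and the webhook secret are placeholders), the git push trigger maps to an OpenShift BuildConfig like this:

    apiVersion: v1
    kind: BuildConfig
    metadata:
      name: cccp-build
    spec:
      triggers:
      - type: GitHub                  # fired by a push webhook from the VCS
        github:
          secret: secret101           # placeholder webhook secret
      source:
        type: Git
        git:
          uri: https://github.com/example/example-app.git
      strategy:
        type: Docker
        dockerStrategy:
          noCache: true               # clean build, no layer cache
      output:
        to:
          kind: DockerImage
          name: registry.example.com/cccp/example-app:test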
in an on-prem story that would map well, but note that we're aiming to run a hosted service with a distinct UI (even if the UI is no UI)
Yes, now I get it. My thought was to minimize the number of Dockerfiles, so that the user does not get confused.
In this TDD process, each environment (build, test, delivery) would be created as a container, and once the step is over the environment would be destroyed. As output, this will push an application runtime, along with the successfully built application code, to the registry.
As you mentioned, this would be tagged with 'test' along with the Jenkins build id, so that a developer or QA can trace which commit it was built from.
Then, for the next stages, the successfully built image would be deployed to an OpenShift instance to go through the test and delivery stage checks, along with the quality gates.
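i.e. something like the following, with the registry name as a placeholder and BUILD_ID coming from Jenkins:

    # Tag the built image with the Jenkins build id so any image can be
    # traced back to the build (and commit) that produced it, then push it.
    docker tag cccp/example-app:latest \
        registry.example.com/cccp/example-app:test-${BUILD_ID}
    docker push registry.example.com/cccp/example-app:test-${BUILD_ID}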
all the stages should be linked into the pipeline and should be easily reproducible, so that anyone can check or reproduce issues instantly.
add another dimension there - a collection of related containers, i.e. the entire microservice, should be reproducible.
This means the system needs to maintain all the linking and volume sharing of the components.
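With plain Docker today, that would mean something like the following (container and image names are made up):

    # Bring up the microservice's containers with explicit links and
    # shared volumes, so the whole set is reproducible as one unit.
    docker run -d --name example-db postgres
    docker run -d --name example-app \
        --link example-db:db \
        --volumes-from example-db \
        cccp/example-app:latest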
It's easily runnable in Vagrant with the Project Atomic Developer Bundle.
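Assuming Vagrant and ADB are already installed, it is roughly:

    # Clone the demo and bring it up inside the ADB box.
    git clone https://github.com/vpavlin/cccp-demo-openshift
    cd cccp-demo-openshift
    vagrant up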
If you are interested in more info, I'd suggest reading the README in the repo; I hope it summarizes things clearly.
It's a very minimal demo, but I think it suggests quite well the path which could take us to Unicorn land. :)
Let me know if you have any questions, suggestions, or requests for guidance, in case anybody decides to take this further.
I would like to take this further; please let me know if my thought process is along the same lines as yours, or if you have any changes or suggestions.
we need to work through what's needed now to integrate with the cccp-index content, and then map that back to deliverables. I had asked Zeeshan to look at the registry side for the delivery space; unsure how far he's gotten with that.
I believe I should look into integrating with the cccp-index content?
Regards, Bamacharan
-- Bamacharan Kundu IRC Nick- bamachrn http://bamacharankundu.wordpress.com/
-- Developer Experience Team Brno, Czech Republic Phone: +420 739 666 824