Hello guys,
Since yesterday evening we have been experiencing failures in our jobs on ci.centos.org saying "No space left on device". It's blocking our PR checks.
Could you please take a look at what's happening there? Thank you!
Have a nice day, Katka
The Jenkins master has plenty of space left, so it could be that the slave node itself is full. Can you please share the build link?
Hi Vipul,
The slave is full. Basically too many workspaces with a fair amount of data in them:
15.5 GiB /workspace
And these are the top 10 dirs under /workspace:
547.9 MiB /devtools-openshift-jenkins-s2i-config-fabric8-push-prcheck@2
547.3 MiB /devtools-openshift-jenkins-s2i-config-fabric8-push-prcheck
464.2 MiB /devtools-kubernetes-model@2
460.9 MiB /devtools-kubernetes-model-fabric8-push-build-master
422.0 MiB /devtools-openshift-jenkins-s2i-config-fabric8-push-build-master
362.3 MiB /devtools-kubernetes-client-fabric8-push-build-master
241.8 MiB /devtools-kubernetes-client-fabric8-push-prcheck@4
236.0 MiB /devtools-kubernetes-client-fabric8-push-prcheck@6
236.0 MiB /devtools-kubernetes-client-fabric8-push-prcheck@5
206.5 MiB /devtools-contract-test-consumer-fabric8-wit-fabric8-auth
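For reference, a rough sketch of how a per-directory summary like the one above can be produced with a small Python script is below. The /workspace path and the top-10 cut-off are simply taken from the listing; this is not the exact command that was run on the slave.

    #!/usr/bin/env python3
    # Rough sketch: summarize the largest directories under a Jenkins
    # workspace root. WORKSPACE_ROOT and TOP_N are illustrative values
    # taken from the listing above.
    import os

    WORKSPACE_ROOT = "/workspace"
    TOP_N = 10

    def dir_size(path):
        # Total size in bytes of all regular files below `path`.
        total = 0
        for root, _dirs, files in os.walk(path, onerror=lambda e: None):
            for name in files:
                try:
                    total += os.lstat(os.path.join(root, name)).st_size
                except OSError:
                    pass  # file disappeared mid-scan (builds may still be running)
        return total

    sizes = [(dir_size(e.path), e.name) for e in os.scandir(WORKSPACE_ROOT)
             if e.is_dir(follow_symlinks=False)]
    for size, name in sorted(sizes, reverse=True)[:TOP_N]:
        print(f"{size / 2**20:8.1f} MiB  /{name}")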
So I would propose deleting all of those. In order to do that, I would like to request the following:
1. put the slave in maintenance mode
2. take a snapshot of the slave's disk
3. we will delete all the workspaces
4. re-enable the slave node
Can you do 1, 2 and 4, and coordinate with me to do 3?
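For steps 1 and 4, one possible way to toggle the node through the Jenkins API is sketched below, using the python-jenkins client. The master URL, credentials, and node name are placeholders rather than the actual setup, which only the CI admins have.

    #!/usr/bin/env python3
    # Rough sketch of steps 1 and 4 (temporarily offline, then back online)
    # via the python-jenkins client. URL, credentials, and node name are
    # placeholders; the real toggle is done by the CI admins.
    import jenkins

    server = jenkins.Jenkins(
        "https://ci.centos.org",   # placeholder master URL
        username="admin",          # placeholder credentials / API token
        password="api-token",
    )
    NODE = "slave04"

    # Step 1: stop new builds from being scheduled on the node.
    server.disable_node(NODE, msg="maintenance: cleaning /workspace")

    # ... steps 2 and 3 (backup and workspace cleanup) happen here ...

    # Step 4: bring the node back once the cleanup is done.
    server.enable_node(NODE)
    print(server.get_node_info(NODE)["offline"])  # expected: False

python-jenkins is only one option; the same toggle is also available from the node's page in the Jenkins UI.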
Thanks, Jaime
--
Jaime Melis
Application SRE team, Service Delivery
Red Hat
jmelis@redhat.com
On Thu, Dec 12, 2019 at 2:32 PM Jaime Melis jmelis@redhat.com wrote:
Can you do 1, 2 and 4, and coordinate with me to do 3?
Thank you, Jaime, for the help. I am looking into it. Appreciate your help.
I'm backing it up manually, so no need for step 2.
Can I ping you once the backup is done so you can put the slave in maintenance mode?
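How the manual backup is done isn't stated beyond "manually"; purely as an illustration, one way it could be done from the slave, assuming a destination filesystem that still has free space, is:

    #!/usr/bin/env python3
    # One possible way to do the manual backup (in place of the disk
    # snapshot from step 2). DEST is a placeholder and must sit on a
    # filesystem with free space, since the slave's own disk is full.
    import tarfile
    import time

    SRC = "/workspace"
    DEST = f"/mnt/backup/workspace-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"

    with tarfile.open(DEST, "w:gz") as tar:
        tar.add(SRC, arcname="workspace")  # archive everything under one top-level dir
    print(f"backup written to {DEST}")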
On Thu, Dec 12, 2019 at 2:40 PM Jaime Melis jmelis@redhat.com wrote:
Can I ping you once the backup is done so you can put the slave in maintenance mode?
yes please do :)
Hi Vipul,
The backup is done, can you please put the slave in maintenance mode?
On Thu, Dec 12, 2019 at 3:06 PM Jaime Melis jmelis@redhat.com wrote:
The backup is done, can you please put the slave in maintenance mode?
Done. Please let me know once you are done cleaning the workspace. Once again, thank you for stepping in on this.
Hi Vipul,
I ended up killing the jobs manually because they were taking very long.
Can you re-enable the slave now?
Thanks a lot for the swift support, Vipul.
Cheers, Jaime
On Thu, Dec 12, 2019 at 3:34 PM Jaime Melis jmelis@redhat.com wrote:
Can you re-enable the slave now?
Done :)
Thanks a lot for the swift support, Vipul.
I am glad this was done without much delay :)
Thank you very much, guys, for your quick response and for solving the issue!
Thanks, I see it's offline. I will wait until the running jobs finish, delete the workspaces folder, and ask you to re-enable.
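For the cleanup step itself, a rough sketch of what clearing the per-job workspace directories could look like is below; the /workspace path is a placeholder, and the actual cleanup described here was done by hand on the slave.

    #!/usr/bin/env python3
    # Rough sketch of step 3: remove the per-job workspace directories once
    # no builds are running. WORKSPACE_ROOT is a placeholder; the actual
    # cleanup on the slave was done manually.
    import os
    import shutil

    WORKSPACE_ROOT = "/workspace"

    for entry in os.scandir(WORKSPACE_ROOT):
        if entry.is_dir(follow_symlinks=False):
            print(f"removing {entry.path}")
            # ignore_errors avoids aborting on files a lingering process still holds open
            shutil.rmtree(entry.path, ignore_errors=True)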
--
Jaime Melis
Application SRE team, Service Delivery
Red Hat
jmelis@redhat.com
slave04 is full. @Katerina Foniok, I see there are a lot of things inside your workspace. Can you clear some of them (if you have access; otherwise, the person who has access to the node can)? I will work with others on clearing more of it.
--
Vipul Siddharth
He/His/Him
Fedora | CentOS CI Infrastructure Team
Red Hat
w: vipul.dev