hi folks,
does anyone have insight into signing a Vagrant box, so the origin can
be confirmed? A brief look around seems to indicate there is no such
support in Vagrant itself? Surely I must be missing something.
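The closest workaround I've found is checksum verification at download
time, which validates integrity but not origin -- a minimal sketch,
assuming a local box file and its published SHA-256:

# <sha256-of-box> is a placeholder for the published checksum
$ vagrant box add --name centos/atomic-host \
    --checksum <sha256-of-box> --checksum-type sha256 \
    CentOS-Atomic-Host-7-Vagrant-Virtualbox.box

That still leaves open how to authenticate the checksum itself.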
Regards
--
Karanbir Singh
+44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
GnuPG Key : http://www.karan.org/publickey.asc
An updated version of CentOS Atomic Host (version 7.20160224) is now
available for download[1]. CentOS Atomic Host is a lean operating system
designed to run Docker containers, built from standard CentOS 7 RPMs,
and tracking the component versions included in Red Hat Enterprise Linux
Atomic Host.
[1] https://wiki.centos.org/SpecialInterestGroup/Atomic/Download
CentOS Atomic Host is available as a VirtualBox or libvirt-formatted
Vagrant box, or as an installable ISO, a qcow2 image, or an Amazon
Machine Image.
These images are available for download at cloud.centos.org. The backing
ostree repo is published to mirror.centos.org.
CentOS Atomic Host includes these core component versions:
- kernel-3.10.0-327.10.1.el7.x86_64
- cloud-init-0.7.5-10.el7.centos.1.x86_64
- atomic-1.6-6.gitca1e384.el7.x86_64
- kubernetes-1.2.0-0.6.alpha1.git8632732.el7.x86_64
- etcd-2.2.2-5.el7.x86_64
- ostree-2016.1-2.atomic.el7.x86_64
- docker-1.8.2-10.el7.centos.x86_64
- flannel-0.5.3-9.el7.x86_64
Upgrading
If you're running a previous version of CentOS Atomic Host, you can
upgrade to the current image by running the following command:
$ sudo atomic host upgrade
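You can confirm which tree is booted before and after the upgrade (a
quick check using the same atomic CLI):

$ sudo atomic host status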
Images
Vagrant
CentOS-Atomic-Host-7-Vagrant-Libvirt.box (421 MB) and
CentOS-Atomic-Host-7-Vagrant-Virtualbox.box (435 MB) are Vagrant boxes
for the libvirt and VirtualBox providers.
The easiest way to consume these images is via the Atlas / Vagrant Cloud
setup (see https://atlas.hashicorp.com/centos/boxes/atomic-host). For
example, getting the VirtualBox instance up would involve running the
following two commands on a machine with vagrant installed:
$ vagrant init centos/atomic-host && vagrant up --provider virtualbox
ISO
The installer ISO (742 MB) can be used via regular install methods (PXE,
CD, USB image, etc.) and uses the Anaconda installer to deliver the
CentOS Atomic Host. This image allows users to control the install using
kickstarts and to define custom storage, networking and user accounts.
This is the recommended option for getting CentOS Atomic Host onto bare
metal machines, or for generating your own image sets for custom
environments.
QCOW2
The CentOS-Atomic-Host-7-GenericCloud.qcow2 (1 GB) image is suitable for
use in on-premises and local virtualized environments. We test this
image on OpenStack, AWS, and local libvirt installs. If your
virtualization platform does not provide its own cloud-init metadata
source, you can create your own NoCloud ISO image, as sketched below.
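For example, a minimal NoCloud seed can be built from two small files;
genisoimage is in the base repos, and the volume label must be "cidata"
for cloud-init to find it (the hostname and password below are just
examples):

$ printf 'instance-id: iid-local01\nlocal-hostname: atomic01\n' > meta-data
$ printf '#cloud-config\npassword: atomic\nchpasswd: { expire: False }\n' > user-data
$ genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data

Attach seed.iso to the VM as a CD-ROM and cloud-init will read it on
first boot.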
Amazon Machine Images
Region          Image ID
------          --------
us-east-1       ami-e6d5e88c
us-west-2       ami-3fb05d5f
us-west-1       ami-fd62129d
eu-west-1       ami-451ea236
eu-central-1    ami-ce8663a1
ap-southeast-1  ami-6c4c850f
ap-northeast-1  ami-c74644a9
ap-southeast-2  ami-bae8ced9
ap-northeast-2  ami-5732fc39
sa-east-1       ami-059d1f69
SHA Sums
d4e43826fc9f641272e589dfb8d979cd592809b34cdbdaee8b7abc9a09ff30d2  CentOS-Atomic-Host-7.1602-GenericCloud.qcow2
33bd4f732c2857c698bd00bc6db29ae2a4d7d0b768f0353d4e28a5c5ab1c999e  CentOS-Atomic-Host-7.1602-GenericCloud.qcow2.gz
ee9d9b4d78906ea9c33b0b87c8ad3387e997b479626e64ffedfd3f415a84cded  CentOS-Atomic-Host-7.1602-GenericCloud.qcow2.xz
39a548f95022a9ab100d64dbf3579d40c66add1bc56ca938b7dba38b73c2ea87  CentOS-Atomic-Host-7.1602-Installer.iso
2f965b2a502c3839b6be84dee5ee4e60328d9f074e1494ded58b418a309df060  CentOS-Atomic-Host-7.1602-Vagrant-Libvirt.box
bc976d197cac629fd68a6d8faf6bcfaeca8afd0020bf573ef343622a7ae1581b  CentOS-Atomic-Host-7.1602-Vagrant-Virtualbox.box
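To check a downloaded image against these sums (note the two spaces
between hash and filename):

$ echo "d4e43826fc9f641272e589dfb8d979cd592809b34cdbdaee8b7abc9a09ff30d2  CentOS-Atomic-Host-7.1602-GenericCloud.qcow2" | sha256sum -c -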
Release Cycle
The CentOS Atomic Host image follows the upstream Red Hat Enterprise
Linux Atomic Host cadence. After sources are released, they're rebuilt
and included in new images. After the images are tested by the SIG and
deemed ready, we announce them.
Getting Involved
CentOS Atomic Host is produced by the CentOS Atomic SIG, based on
upstream work from Project Atomic. If you'd like to work on testing
images, help with packaging, or contribute documentation -- join us!
The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel
channel, and you'll often find us in #atomic and/or #centos-devel if you
have questions. You can also join the atomic-devel mailing list if you'd
like to discuss the direction of Project Atomic, its components, or have
other questions.
Getting Help
If you run into any problems with the images or components, feel free to
ask on the centos-devel mailing list.
Have questions about using Atomic? See the atomic mailing list or find
us in the #atomic channel on Freenode.
Hi all,
I've been directed here by ktdreyer in the ceph-devel IRC channel. Today we learned that Ceph Hammer is no longer being built for CentOS 6, so we won't be getting 0.94.6 or newer. AFAICT, it was dropped due to complications in their build system: https://github.com/ceph/ceph-build/commit/42c7cb3f412e9fc1f525a037f07697fc8…
Would it make sense for the Storage SIG to build Hammer on CentOS 6? Technically it should be doable -- but would you need some help getting this going?
Eventually, client-only builds of Jewel on el6 would be really nice to have, but I don't yet know if that's even possible.
Cheers,
Dan van der Ster
CERN IT Storage Group
Ceph Community "Academic Liaison"
Hi. [Follow-up from https://github.com/openshift/openshift-ansible/issues/1384]
I did not RTFM; this is a fresh-eyes, I-just-want-to-download-an-image perspective...
Looking at http://cloud.centos.org/centos/7/images/, I see that -1602 is
the latest version.
- If for some reason I want to use the unversioned
CentOS-7-x86_64-GenericCloud.* files, it's hard to be sure what I'll get
(other than by downloading them => I am getting 1602).
- sha256sum.txt{,.asc} contain no hashes for the unversioned files.
The file size does suggest they're 1602.
Ideally the file listing would actually show them as "name -> target"
symlinks, and/or downloading would return an HTTP redirect to the
current version. Currently it returns the content directly; the only
identifying headers are
`Last-Modified: Tue, 23 Feb 2016 17:53:08 GMT` and
`ETag: "fcc0480-52c739f3d2900"` (for the .xz).
[Be careful with redirects: some scripts/libraries don't follow them
by default, e.g. any script using `curl` without `-L` would break :-(]
- http://cloud.centos.org/centos/7/images/sha256sum.txt{,.asc} are not
available over HTTPS. I can verify the hash, but I can't trust
the hash itself. That's what the .asc signature is for, but lazy folks
like me don't necessarily know which key to trust...
(`gpg --search-keys F4A80EB5` worked, but then `gpg --verify` says
"WARNING: This key is not certified with a trusted signature!".
No idea what that means - I'm clueless with GPG;
trusting https://cloud.centos.org would be trivial for me.)
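For reference, the full dance I'd expect a downloader to do looks
roughly like this (the key ID is the one my --search-keys attempt
found; whether to trust it is exactly my question):

$ curl -fLO http://cloud.centos.org/centos/7/images/sha256sum.txt
$ curl -fLO http://cloud.centos.org/centos/7/images/sha256sum.txt.asc
$ gpg --recv-keys F4A80EB5    # trusting this key is the open question
$ gpg --verify sha256sum.txt.asc sha256sum.txt
$ grep CentOS-7-x86_64-GenericCloud.qcow2.xz sha256sum.txt | sha256sum -c -

(Note the `-L` on curl, per the redirect caveat above.)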
Looking at https://wiki.centos.org/Download:
- It only links to the unversioned cloud images, doesn't say they're 1602
(other places on that page give the impression everything in 7 is 1511),
and doesn't list hashes.
- I don't see a link to release notes for cloud images;
https://wiki.centos.org/Manuals/ReleaseNotes/CentOS7 is for 1511
and only talks of the regular ISOs.
https://wiki.centos.org/Cloud doesn't mention any specific versions,
release notes or hashes either.
Googling "centos cloud 1602" didn't lead me to any "official" announcement.
Nothing on centos-announce this February. Is -1602 "officially" released?
(I personally don't really care, but "what changed" is the first natural
question people ask beyond "I just want the latest"...)
Hope this is useful feedback.
Those of you using the automated build functionality of docker's hub may
have been experiencing some build failures lately. It appears that
Docker has migrated to a different graph driver on the back-end, or has
otherwise limited the capabilities available during the build process.
The result is that builds with packages like httpd will fail, because
they require the cap_set_file capability as part of the install process.
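For anyone who wants to reproduce this, a two-line Dockerfile should be
enough to hit it (assuming the stock centos:7 base image; RPM applies
file capabilities to httpd's suexec during install, which is the step
that needs cap_set_file):

FROM centos:7
# installing httpd applies file capabilities, which fails on the
# restricted builders
RUN yum -y install httpd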
There have been a few bugs filed with the Docker folks for this, but I
don't know what the end result of this will be. For now, if you use the
automated build framework on dockerhub, be aware that your builds may
fail, and it's outside our control.
--
Jim Perrin
The CentOS Project | http://www.centos.org
twitter: @BitIntegrity | GPG Key: FA09AD77
Hi,
Today in the evening we suffered from a big power outage that
affected some of our infra services.
Some small public services like:
- planet.centos.org
- seven.centos.org
were down, but connectivity was restored and the services are back online.
We had to spend more time on the DevCloud setup
(https://wiki.centos.org/DevCloud): we first had to verify the
underlying block devices and the status of that gluster setup (in
Distributed/Replicated mode), and then slowly restart the hypervisor
controller and then the virtual machines. At the KVM hypervisor level,
all VMs are powered back on, but we haven't verified each VM (and so
cannot confirm their status).
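For those curious, the gluster-side health checks on a
Distributed/Replicated volume look roughly like this (the volume name
here is hypothetical):

$ sudo gluster volume status devcloud
$ sudo gluster volume heal devcloud info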
If you recently deployed some VMs in that DevCloud environment, verify
that you can still reach your testing VMs.
Sorry for the inconvenience.
On behalf of the infra team,
--
Fabian Arrotin
The CentOS Project | http://www.centos.org
gpg key: 56BEC54E | twitter: @arrfab
Hi,
The first release of Ceph Hammer by the Storage SIG is available on
CentOS 7 x86_64 for testing purposes.
It is mainly intended as a preview release, as we are still working on
the release process and associated documentation.
A quickstart guide is available in the wiki:
https://wiki.centos.org/SpecialInterestGroup/Storage/ceph-Quickstart
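In short, getting the packages boils down to something like the
following (assuming the release package is named centos-release-ceph,
in line with other SIG repos -- see the quickstart for the
authoritative steps):

$ sudo yum install centos-release-ceph   # assumed package name
$ sudo yum install ceph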
We are of course looking for feedback from potential Ceph users on
CentOS, especially from the Virtualization (oVirt), Cloud/RDO and
Atomic Host SIGs.
Our next steps are updating to 0.94.6, working on CI, and completing
the quickstart guide. Co-maintainers are welcome of course!
Cheers
François
TL;DR:
* Mitaka 3 test day, March 10, 11:
https://www.rdoproject.org/testday/mitaka/milestone3/
* On-site test day in Brno:
https://www.rdoproject.org/testday/mitaka/brno-on-site/
* Demo of deploying with TripleO:
https://www.youtube.com/watch?v=4O8KvC66eeU
Mitaka milestone 3 is scheduled for the week of February 29 - March 4
[1] and, as per usual, we're planning a test day about a week out from
that - March 10th and 11th. We've got the usual page of instructions [2]
for testing.
This time, however, we have two bonus events.
First, we're delighted that Eliska Malikova is putting together an
on-site test day at the Red Hat office in Brno, for anyone in that
general area. If you wish to attend, please register [3] so that we know
how much pizza to order. (Attendance is limited to 50, so please
register sooner rather than later.)
Second, as we attempt to get more people testing TripleO (formerly known
as RDO Manager), John Trowbridge (that's trown on IRC) will be doing a
demo of the TripleO Quickstart. This will be conducted as a YouTube live
stream [4] and will also be recorded, and available at that same
location after the fact.
So, please, come help us ensure that Mitaka is the best RDO yet. As
usual, we'll be on #rdo (on Freenode IRC), and here on rdo-list, to
field any questions. Full details are on the test day website [5] and,
as usual, we can use lots of help making the test case instructions
better, so that more people can participate.
Thanks!
--Rich
[1] http://releases.openstack.org/mitaka/schedule.html
[2] https://www.rdoproject.org/testday/mitaka/milestone3/
[3]
https://www.eventbrite.com/e/rdo-on-site-test-day-brno-tickets-5934822213
[4] https://www.youtube.com/watch?v=4O8KvC66eeU
[5] https://www.rdoproject.org/testday/mitaka/milestone3/
--
Rich Bowen - rbowen(a)redhat.com
OpenStack Community Liaison
http://rdoproject.org/
Dear All,
Following the FOSDEM discussion, I owe you an update on the koji
development for aarch64.
Thanks to Fabian, at this point we have a builder configured and we are
able to build specific targets for aarch64. I tested a few glusterfs
builds to ensure all operations work correctly, and no issues were
found.
At this time we will support only new builds; it won't be possible to
rebuild an old NVR. A future koji update may help us with this
requirement. (The "merge scratch build" feature seems absent from the
latest release; I discussed it with Mike at FOSDEM, and I'll send an
update when I have more news.)
In practice this means you will have to bump your release to get
aarch64 builds.
The last missing bit for enabling aarch64 for SIGs is to have a common
URL for all arches.
In koji, an external repository can be defined with a $arch parameter
in the URL:
http://mirror.centos.org/altarch/7/os/$arch/
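For reference, wiring that up with the koji CLI looks roughly like
this (the repo and tag names here are hypothetical):

$ koji add-external-repo -t sig-storage7-el7-build \
      centos-altarch-os 'http://mirror.centos.org/altarch/7/os/$arch/'

Note the single quotes, so the shell doesn't expand $arch; koji
substitutes it per-arch when generating build repos.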
Unfortunately, at this time we do not have a single URL that presents
all arches (the link above is missing x86_64).
After discussing with the team, the easiest way would be to add an
x86_64 link (e.g. http://mirror.centos.org/altarch/7/os/x86_64
pointing to http://mirror.centos.org/centos-7/7/os/x86_64/) for the
internal koji mirror, as it would solve all our issues.
The pro is that this solution will allow us to reconfigure koji once
and stay compatible with all future arches.
The con is that mock configs generated by koji won't work for external
users.
Let me know if you have a better idea or some concerns.
As soon as we agree on a solution, I'll be able to enable it for the
Storage SIG glusterfs targets, as Niels De Vos "agreed" to be our first
user :)
--
Thomas 'alphacc' Oulevey
In order to get an in-house 32-bit ppc application running on CentOS 7, I've been rebuilding a number of SRPMs from 7.2.1511 on vault.centos.org. I'm down to three packages (glibc, gcc, and mesa) needing rebuilds to resolve "Protected multilib" errors when installing the yum group that defines my application's "platform".
Each of these three is failing to rebuild in different ways, so I'm looking for some pointers as to what I'm doing wrong.
For glibc, the failure is that rpmbuild finds a number of installed but unpackaged files in the buildroot under /usr/lib/debug. I've tried setting a debuginfo-related macro in the mock.cfg, but I don't see that setting in the glibc mock.cfg on buildlogs.centos.org, so I think that might be a red herring.
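For the record, the debuginfo-related knobs in mock.cfg look like this
(I can't tell which setting the buildlogs.centos.org config uses, so
treat these as illustrations only):

# whether the chroot generates -debuginfo subpackages at all
config_opts['macros']['%_enable_debug_packages'] = '1'
# or, to suppress debuginfo handling entirely:
# config_opts['macros']['%debug_package'] = '%{nil}'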
For gcc, mock fails to set up the chroot because it can't find a package to provide /usr/lib64/libc.so (which is called out explicitly in a BuildRequires), even though the mock.cfg includes a repo containing glibc-devel.ppc64 and glibc.ppc64.
For mesa, the configure test for DRI3PROTO fails with "dri3proto >= 1.0 not found". The F19 ppc backing repo I'm using has packages for Mesa 9.x, whereas 7.2.1511 is at Mesa 10.x. I'm guessing that some sort of manual intervention is needed to bootstrap across the major-version bump, but I haven't been able to work out the sequence.
I realize that ppc RPMs are in the cards, but I'm trying to do as much risk reduction as I can due to an internal deadline. Any hints as to what I'm missing on these three packages would be greatly appreciated.
Thanks,
-Bryson