A reminder: The CERN Dojo will be held in Meyrin, Switzerland, on
October 19th, and registration closes soon, as we have to issue security
badges for all attendees.
Attendance is free, but you must register to get in the front door.
We'll have a full day of deep-dive CentOS content, in the midst of one
of the most famous research facilities in the world. You don't want to
miss it!
http://cern.ch/centos
See you at CERN!
--
Rich Bowen - rbowen(a)redhat.com
@CentOSProject // @rbowen
859 351 9166
Next weekend, CentOS will be sponsoring Ohio LinuxFest, and will have a
table there. If you expect to be at OLF, and can spare an hour, or even
a half hour, to sit at the table and answer questions, it would be a
great help to me.
Please have a look at the schedule - https://ohiolinux.org/schedule/ -
and let me know what time(s) you might be able to spend at the table.
For your convenience, I've made a list of available times here:
https://docs.google.com/spreadsheets/d/1TqTqaLoswMYzyfvQCnzjkVazlQkP87CCKBS…
Thanks, and I hope to see some of you in Columbus.
--Rich
--
Rich Bowen - rbowen(a)redhat.com
@CentOSProject // @rbowen
859 351 9166
Hi,
For the last few months, we have been working on re-architecting the
CentOS Community Container Pipeline Service to make
https://registry.centos.org a stable and reliable source for CentOS
based container images.
We are now done with the changes for better throughput and stability,
and will be deploying the new service to production over the coming
weekend (28th Sep 2018 to 1st Oct 2018). During this time the service
will be in maintenance mode: we will not be building new images for
registry.centos.org, and existing images will not be updated based on
RPM or git source updates.
However, during this migration registry.centos.org will remain
available to pull images from. We are trying to make sure the downtime
for the CentOS Community Container Pipeline service is minimal.
Service: https://registry.centos.org
Maintenance window: Service will not have downtime
Impact: Users will be able to pull images from https://registry.centos.org
Service: CentOS Container Community Pipeline Service
Maintenance window: 28th Sep 2018 to 1st Oct 2018 (we are working to
keep it minimal)
Impact: PRs to https://github.com/centos/container-index will not be
merged and no image updates will be pushed to registry.centos.org.
Sorry for the inconvenience; we will keep you posted.
Regards
Bama Charan Kundu
The Call for Presentations for the FOSDEM Dojo -
https://wiki.centos.org/Events/Dojo/Brussels2019 - closes next weekend.
And, so far, it's looking pretty grim, with only THREE submissions.
If you anticipate being at FOSDEM, and would like to present at our
annual FOSDEM Dojo on topics relevant to CentOS or Fedora, please
consider submitting a talk: https://goo.gl/forms/XkXbC2AZBgKvfDNF2
Please see https://wiki.centos.org/Events/Dojo/Brussels2018 for some
idea of what kind of talks we had last year. But we're also asking the
Fedora community to submit talks this year.
--Rich
--
Rich Bowen - rbowen(a)redhat.com
@CentOSProject // @rbowen
859 351 9166
Hello everyone,
we switched from using qemu-img from CentOS 7.x to qemu-img-ev from the
Virt SIG several months ago, to allow us to generate Hyper-V images.
My attempts to build Vagrant images are failing the automated tests
since the beginning of July, apparently due to corrupt filesystems. I
see XFS metadata corruption in the CentOS 7 images when using
qemu-img-ev, as well as ext4 superblock corruption in the CentOS 6
images.[1]
The distro installer runs just once and the resulting disk image is
converted by Image Factory to different formats, depending on the
virtualization target. Since the VirtualBox images are working as
expected, while the libvirt images don't even boot due to filesystem
corruption, I assume that the installation produces a valid image, but
that the conversion of the disk images for libvirt-kvm falls prey to
bugs in qemu-img-ev and the stock qemu-img. If anyone has the
possibility to test with VMware or Hyper-V, please write me off-list.
I noticed that qemu-img-ev is at version 2.1.2, while Debian Stable has
version 2.8 and Fedora 28 has version 2.11. Maybe such bugs, if real,
were already fixed upstream - would it be possible to use a newer
version than what qemu-img-ev provides? We reverted to using the EL7
qemu-img, but this still produces broken libvirt images and we'd have to
drop the Hyper-V images as well.
Any help or suggestions are appreciated.
Best regards,
Laurențiu
[1] https://people.centos.org/bstinson/vagrant/c6-vagrant.png
I've begun to draft the October newsletter on blog.centos.org. If your
SIG would like to report anything to the broader CentOS community,
please do let me know what you'd like to say.
Or, if you'd like to edit it yourself, please go right ahead and make
your edits directly in my draft.
If there's other content you'd like to contribute to the newsletter,
please do let me know. The centos-promo list is the best place to
discuss your newsletter ideas.
The deadline for content is end of day, October 1st.
If you've not edited anything in the blog before: Blog logins are tied
to your CentOS account. After you've logged into the blog editing
interface once - https://blog.centos.org/wp-admin - just let me know,
and I can add you to the Editors group.
Thanks!
--
Rich Bowen - rbowen(a)redhat.com
@CentOSProject // @rbowen
859 351 9166
Hello,
Trevor Vardeman <Vorrtex>, Adam Kimball <baha>, and I <mjturek>
have been looking into TripleO container builds for ppc64le. This led to
finding some missing dependencies. The current one we're struggling with
is sensu.
It seems like all of the dependencies for running sensu are published in
ppc64le-opstools [0]. Additionally, the sensu package itself is noarch.
Is there anything we could do to get this package published?
Thanks!
Mike Turek <mjturek>
[0] http://mirror.centos.org/altarch/7/opstools/ppc64le/sensu/
<paste>
Recently I had to update the existing code running behind
mirrorlist.centos.org (the service that returns you a list of validated
mirrors for yum, see the /etc/yum.repos.d/CentOS*.repo file) as it was
still using the Maxmind GeoIP Legacy country database. As you probably
know, Maxmind announced that they're discontinuing the Legacy DB, so
that was one reason to update the code. Switching to GeoLite2, with the
python2-geoip2 package, was really easy to do, so that change was made
and pushed last month.
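As a rough illustration of what a GeoLite2 country lookup looks like with the geoip2 Python API (a minimal sketch, not the actual mirrorlist code; the database path and helper name are assumptions):

```python
def client_country(reader, ip):
    """Return the ISO country code for ip using a GeoIP2 country
    reader, or None when the address is not in the database."""
    try:
        return reader.country(ip).country.iso_code
    except Exception:  # geoip2.errors.AddressNotFoundError in real use
        return None

# Real use would look like this (database path is an assumption):
# import geoip2.database
# with geoip2.database.Reader('/usr/share/GeoIP/GeoLite2-Country.mmdb') as r:
#     print(client_country(r, '192.0.2.1'))
```

The country code is then just a key into the table of validated mirrors per country.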
That's also when I discussed with Anssi (if you don't know him, he
keeps the CentOS external mirrors DB up to date, including through the
centos-mirror list) about making that change not only there, but across
the whole chain (so on our "mirror crawler" node, and also for the
isoredirect.centos.org service). Random chats like these are good
because suddenly you don't just want to "fix" one thing, but also take
the time to enhance it and add new features.
The previous code already supported both IPv4 and IPv6, but it was
consuming different data sources (as external mirrors were validated
differently for IPv4 vs IPv6 connectivity). So the first thing was to
rewrite/combine the new code in the "mirror crawler" process for
dual-stack tests, and also reflect that change on the frontend (aka
mirrorlist.centos.org) nodes.
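A dual-stack test like the one described could be sketched roughly as follows (an illustration only, not the actual crawler code; helper names, port, and timeout are assumptions):

```python
import socket

def reachable(host, family, port=80, timeout=5):
    """Try to open a TCP connection to host over one address family
    (socket.AF_INET for IPv4, socket.AF_INET6 for IPv6)."""
    try:
        infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no address of this family at all
    for fam, typ, proto, _canon, sockaddr in infos:
        sock = socket.socket(fam, typ, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(sockaddr)
            return True
        except OSError:
            continue
        finally:
            sock.close()
    return False

def dual_stack_status(host, port=80):
    """One combined IPv4/IPv6 result per mirror, instead of two
    separate data sources."""
    return {"ipv4": reachable(host, socket.AF_INET, port),
            "ipv6": reachable(host, socket.AF_INET6, port)}
```

The point of combining both checks in one pass is that the frontend only has to consume a single result per mirror.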
While we were working on this, Anssi proposed to not just adapt the
isoredirect.centos.org code, but to convert it to the same python
format as the mirrorlist.centos.org code, which he did.
The last big change we added is the following: only some
repositories/architectures were checked/validated in the past, but not
the other ones (so nothing from the SIGs and nothing from AltArch,
meaning no mirrorlist support for i386/armhfp/aarch64/ppc64/ppc64le).
While that wasn't a real problem back when we launched the SIGs concept
and later added the other architectures (AltArch), we have suddenly
started suffering from some side effects:
* More and more users pulling RPM content from mirror.centos.org
(mainly through SIGs - which is a good indicator that those are
successful, and a good "problem" to solve)
* We are currently losing some nodes in that mirror.centos.org network
(it's still entirely based on free dedicated servers donated to the project)
To address the first point, offloading more content to the 600+
external mirrors we have right now would be really good, as those nodes
have better connectivity than we do, with more presence around the
globe too, so slowly pointing SIGs and AltArch at those external
mirrors will help.
The other good point is that, as we switched to the GeoLite2 City DB,
we get more granularity: for example, instead of "just" returning you a
list of 10 validated mirrors for the USA (if your request was
identified as coming from that country, of course), you now get a list
of validated mirrors in your state/region instead. For big countries
with a lot of mirrors, that means we also better distribute the load
amongst all of them, which is a big win for everybody - users and
mirror admins alike.
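The selection logic this enables can be sketched like this (hypothetical data and function name, not the production code): prefer mirrors validated for the client's state/region, then the country-wide list, then everything.

```python
# Hypothetical validated-mirror table: (country, subdivision) -> URLs.
# A None subdivision holds the country-wide list.
MIRRORS = {
    ("US", "TX"): ["http://tx1.example.org/centos/"],
    ("US", "NY"): ["http://ny1.example.org/centos/",
                   "http://ny2.example.org/centos/"],
    ("US", None): ["http://us1.example.org/centos/"],
    ("DE", None): ["http://de1.example.org/centos/"],
}

def pick_mirrors(country, subdivision, limit=10):
    """Prefer mirrors in the client's state/region, then the whole
    country, then fall back to every validated mirror."""
    for key in ((country, subdivision), (country, None)):
        if MIRRORS.get(key):
            return MIRRORS[key][:limit]
    # Global fallback: everything we validated.
    return [m for urls in MIRRORS.values() for m in urls][:limit]
```

A Texas client gets the TX list, a client from a US state with no local mirrors falls back to the country-wide list, and everyone else gets the global fallback.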
For people interested in the code, you'll see that we just run several
instances of the python code behind Apache running with
mod_proxy_balancer. That means that if we ever need to increase the
number of "instances", it's easy to do, but so far it's running great
with 5 instances per node (and we have 4 nodes behind
mirrorlist.centos.org). Worth noting that on average, each of those
nodes gets 36+ million requests per week for the mirrorlist service (so
144+ million in total per week).
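For reference, a minimal mod_proxy_balancer setup along those lines could look like this (an illustrative sketch with assumed local ports, not our actual config):

```apache
# Five local mirrorlist instances behind one balancer pool;
# ports and paths are assumptions for illustration only.
<Proxy "balancer://mirrorlist">
    BalancerMember "http://127.0.0.1:8001"
    BalancerMember "http://127.0.0.1:8002"
    BalancerMember "http://127.0.0.1:8003"
    BalancerMember "http://127.0.0.1:8004"
    BalancerMember "http://127.0.0.1:8005"
</Proxy>
ProxyPass        "/" "balancer://mirrorlist/"
ProxyPassReverse "/" "balancer://mirrorlist/"
```

Scaling up is then just a matter of starting another instance and adding one more BalancerMember line.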
So in (very) short summary:
* mirrorlist.centos.org code now supports SIGs/AltArch repositories
(we'll sync with SIGs to update their .repo files to use mirrorlist=
instead of baseurl= soon)
* we have better accuracy for large countries, so we redirect you to a
"closer" validated mirror
</paste>
So that means that the following combinations are now possible through
mirrorlist:
Testing base os for aarch64:
curl 'http://mirrorlist.centos.org/?release=7&arch=aarch64&repo=os'
Testing the RDO Rocky release for ppc64le:
curl 'http://mirrorlist.centos.org/?release=7&arch=ppc64le&repo=cloud-openstack-r…'
And so on... :)
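The service replies with plain text, one mirror baseurl per line, so consuming it from a script is easy; a small sketch (the helper name and example mirror hostnames are made up):

```python
def parse_mirrorlist(body):
    """Split a mirrorlist response (plain text, one mirror baseurl
    per line) into a Python list, ignoring blank lines."""
    return [line.strip() for line in body.splitlines() if line.strip()]

# With the live service you would fetch the body first, e.g.:
# import urllib.request
# body = urllib.request.urlopen(
#     'http://mirrorlist.centos.org/?release=7&arch=aarch64&repo=os'
# ).read().decode()

sample = ("http://mirror1.example.org/7/os/aarch64/\n"
          "\n"
          "http://mirror2.example.org/7/os/aarch64/\n")
```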
There is no "pressure" to update your -release pkg to switch from
baseurl=mirror.centos.org to mirrorlist=mirrorlist.centos.org, but I
just wanted to let you know that it's now "live" so you can start
thinking about that change - why not with an updated pkg pushed to
-testing (aka buildlogs.centos.org) first, and then move to that later?
Cheers,
--
Fabian Arrotin
The CentOS Project | https://www.centos.org
gpg key: 56BEC54E | twitter: @arrfab
If you're having trouble with the formatting, this release announcement
is available online at
https://blogs.rdoproject.org/2018/09/rdo-rocky-released/
---
The RDO community is pleased to announce the general availability of the
RDO build for OpenStack Rocky for RPM-based distributions, CentOS Linux and
Red Hat Enterprise Linux. RDO is suitable for building private, public, and
hybrid clouds. Rocky is the 18th release from the OpenStack project, which
is the work of more than 1400 contributors from around the world
<http://stackalytics.com/>.
The release is already available on the CentOS mirror network at
http://mirror.centos.org/centos/7/cloud/x86_64/openstack-rocky/
The RDO community project curates, packages, builds, tests and maintains a
complete OpenStack component set for RHEL and CentOS Linux and is a member
of the CentOS Cloud Infrastructure SIG
<https://wiki.centos.org/SpecialInterestGroup/Cloud>. The Cloud
Infrastructure SIG focuses on delivering a great user experience for CentOS
Linux users looking to build and maintain their own on-premise, public or
hybrid clouds.
All work on RDO and on the downstream release, Red Hat OpenStack Platform
<https://www.redhat.com/en/technologies/linux-platforms/openstack-platform>,
is 100% open source, with all code changes going upstream first.
New and Improved
Interesting things in the Rocky release include:
- New neutron ML2 driver networking-ansible
<https://networking-ansible.readthedocs.io/en/latest/> has been included
in RDO. This module abstracts management and interaction with switching
hardware to Ansible Networking.
- Swift3 has been moved into the swift package as the “s3api” middleware.
Other improvements include:
- Metalsmith <https://metalsmith.readthedocs.io/en/latest/> is now
included in RDO. This is a simple tool to provision bare metal machines
using ironic, glance and neutron.
Contributors
During the Rocky cycle, we saw the following new contributors:
- Bob Fournier
- Bogdan Dobrelya
- Carlos Camacho
- Carlos Goncalves
- Cédric Jeanneret
- Charles Short
- Dan Smith
- Dustin Schoenbrun
- Florian Fuchs
- Goutham Pacha Ravi
- Ilya Etingof
- Konrad Mosoń
- Luka Peschke
- mandreou
- Nate Johnston
- Sandhya Dasu
- Sergii Golovatiuk
- Tobias Urdin
- Tony Breeds
- Victoria Martinez de la Cruz
- Yaakov Selkowitz
Welcome to all of you and Thank You So Much for participating!
But we wouldn’t want to overlook anyone. A super massive Thank You to all
SIXTY-NINE contributors who participated in producing this release. This
list includes commits to rdo-packages and rdo-infra repositories:
- Ade Lee
- Alan Bishop
- Alan Pevec
- Alex Schultz
- Alfredo Moralejo
- Bob Fournier
- Bogdan Dobrelya
- Brad P. Crochet
- Carlos Camacho
- Carlos Goncalves
- Cédric Jeanneret
- Chandan Kumar
- Charles Short
- Christian Schwede
- Daniel Alvarez
- Daniel Mellado
- Dansmith
- Dmitry Tantsur
- Dougal Matthews
- Dustin Schoenbrun
- Emilien Macchi
- Eric Harney
- Florian Fuchs
- Goutham Pacha Ravi
- Haikel Guemar
- Honza Pokorny
- Ilya Etingof
- James Slagle
- Jason Joyce
- Javier Peña
- Jistr
- Jlibosva
- Jon Schlueter
- Juan Antonio Osorio Robles
- karthik s
- Kashyap Chamarthy
- Kevin Tibi
- Konrad Mosoń
- Lon
- Luigi Toscano
- Luka Peschke
- marios
- Martin André
- Matthew Booth
- Matthias Runge
- Mehdi Abaakouk
- Nate Johnston
- Nmagnezi
- Oliver Walsh
- Pete Zaitcev
- Pradeep Kilambi
- rabi
- Radomir Dopieralski
- Ricardo Noriega
- Sandhya Dasu
- Sergii Golovatiuk
- shrjoshi
- Steve Baker
- Thierry Vignaud
- Tobias Urdin
- Tom Barron
- Tony Breeds
- Tristan de Cacqueray
- Victoria Martinez de la Cruz
- Yaakov Selkowitz
- yatin
The Next Release Cycle
At the end of one release, focus shifts immediately to the next: Stein
<https://wiki.openstack.org/wiki/Release_Naming/S_Proposals>, which has
a slightly longer release cycle due to the PTG/Summit co-location next
year <https://ttx.re/future-of-ptg.html>, with an estimated GA the week
of 08-12 April 2019. The full schedule is available at
https://releases.openstack.org/stein/schedule.html.
Twice during each release cycle, RDO hosts official Test Days shortly after
the first and third milestones; therefore, the upcoming test days are 01-02
November 2018 for Milestone One and 14-15 March 2019 for Milestone Three.
Get Started
There are three ways to get started with RDO.
To spin up a proof of concept cloud, quickly, and on limited hardware, try
an All-In-One Packstack installation
<https://www.rdoproject.org/install/packstack/>. You can run RDO on a
single node to get a feel for how it works.
For a production deployment of RDO, use the TripleO Quickstart
<https://www.rdoproject.org/tripleo/> and you’ll be running a production
cloud in short order.
Finally, for those that don’t have any hardware or physical resources,
there’s the OpenStack Global Passport Program
<https://www.openstack.org/passport>. This is a collaborative effort
between OpenStack public cloud providers to let you experience the freedom,
performance and interoperability of open source infrastructure. You can
quickly and easily gain access to OpenStack infrastructure via trial
programs from participating OpenStack public cloud providers around the
world.
Get Help
The RDO Project participates in a Q&A service at https://ask.openstack.org.
We also have our users(a)lists.rdoproject.org mailing list
<https://lists.rdoproject.org/mailman/listinfo/users> for RDO-specific
users and operators. For more developer-oriented content we recommend joining
the dev(a)lists.rdoproject.org mailing list
<https://lists.rdoproject.org/mailman/listinfo/dev>. Remember to post a
brief introduction about yourself and your RDO story. The mailing lists
archives are all available at https://mail.rdoproject.org
<https://lists.rdoproject.org/mailman/listinfo>. You can also find
extensive documentation on RDOproject.org <https://www.rdoproject.org/>.
The #rdo channel on Freenode IRC is also an excellent place to find and
give help.
We also welcome comments and requests on the CentOS mailing lists
<https://lists.centos.org/mailman/listinfo> and the CentOS and TripleO IRC
channels (#centos, #centos-devel, and #tripleo on irc.freenode.net),
however we have a more focused audience within the RDO venues.
Get Involved
To get involved in the OpenStack RPM packaging effort, check out the RDO
contribute pages <https://www.rdoproject.org/contribute/>, peruse the CentOS
Cloud SIG page <https://wiki.centos.org/SpecialInterestGroup/Cloud>, and
inhale the RDO packaging documentation
<https://www.rdoproject.org/documentation/rdo-packaging/>.
Join us in #rdo on the Freenode IRC network and follow us on Twitter
@RDOCommunity <http://twitter.com/rdocommunity/>. You can also find us on
Facebook <https://facebook.com/rdocommunity>, Google+
<https://plus.google.com/communities/110409030763231732154> and YouTube
<https://www.youtube.com/channel/UCWYIPZ4lm4P3_pzZ9Hx9awg>.