On 1/11/23 02:09, Simon Matter wrote:
>> I plan to upgrade an existing C7 computer which currently has one
>> 256 GB SSD to use mdadm software RAID1 after adding two 4 TB M.2 SSDs,
>> the rest of the system remaining the same. The system also has one
>> additional internal and one external hard disk, but these should not
>> be touched. The system will continue to run C7.

.... trimming

>> - I do not see any benefit to breaking up the LVM2/LUKS partition
>> containing /root, /swap and /home into more than one RAID1 partition,
>> or am I wrong? If the SSD fails, the entire SSD would fail and break
>> the system, hence I might as well keep it as one single RAID1
>> partition, or?
>
> What I usually do is this: "cut" the large disk into several pieces of
> equal size and create individual RAID1 arrays. Then add them as LVM PVs
> to one large VG. The advantage is that with an error on one disk, you
> won't lose redundancy on the whole RAID mirror but only on a partial
> segment. You can even lose another segment to an error on the other
> disk and still have redundancy if the errors are in different parts.
>
> That said, it's a bit more work to set up, but it has helped me several
> times over the decades.

Ah, now I begin to get it. Separate partitions RAIDed. (I've put a rough
sketch of what I think that looks like at the end of this mail.)

>> - Is the next step after the RAID1 partitioning above then to do a
>> minimal install of C7, followed by using Clonezilla to restore the
>> LVM2/LUKS partition?
>>
>> - Any advice on using Clonezilla? Or the external partitioning tool?
>>
>> - Finally, since these new SSDs are huge, perhaps I should take the
>> opportunity to increase the space for both /root and /swap?
>>
>> - /root is 50 GB - should I increase it to e.g. 100 GB?
>>
>> - The system currently has 32 GB of memory, but I will likely upgrade
>> it to 64 GB (or even 128 GB); perhaps I should at this time already
>> increase the /swap space to 64 GB/128 GB?
>
> I'm also interested here to learn what others are doing in
> higher-memory situations. I have some systems with half a TB of memory
> and have never configured more than 16 GB of swap. It has usually
> worked well, and when a system started to use swap heavily, there was
> something really wrong in an application and it had to be fixed there.
> Additionally, we've tuned the kernel VM settings so that the system
> doesn't want to swap too much, because swapping was always slow anyway,
> even on fast U.2 NVMe SSD storage.

Perhaps you have not dealt with Firefox? :) On my Fedora 35 notebook, it
slowly gobbles memory and I have to quit it after some number of days and
restart it. Right now I only have 16 GB of memory, 16 GB of physical
swap, and 8 GB of zram swap. I'm building an F37 system now and will see
how that works; I doubt there is any improved behavior with Firefox.

> Regards,
> Simon
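
P.S. Here's my rough sketch of the segmented RAID1 + LVM layout Simon
describes, as far as I understand it. This is only an illustration, not
his exact procedure: the device names (/dev/nvme0n1, /dev/nvme1n1), the
array names and the sizes are made up, and it assumes both disks have
already been partitioned into matching, equally sized partitions.

  # Pair matching partitions from the two disks into separate RAID1 arrays,
  # so a bad spot on one disk only degrades the array backed by that segment.
  mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/nvme0n1p1 /dev/nvme1n1p1
  mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/nvme0n1p2 /dev/nvme1n1p2
  mdadm --create /dev/md12 --level=1 --raid-devices=2 /dev/nvme0n1p3 /dev/nvme1n1p3

  # Turn each array into an LVM PV and pool them all into one big VG.
  pvcreate /dev/md10 /dev/md11 /dev/md12
  vgcreate vg_data /dev/md10 /dev/md11 /dev/md12

  # Logical volumes are then carved out of the VG as usual, for example:
  lvcreate -L 100G -n root vg_data

If one segment of one disk develops an error, only the corresponding
/dev/md1x array loses redundancy; the other segments stay fully mirrored.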
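
P.P.S. On Simon's remark about tuning the kernel VM settings so the
machine doesn't swap too eagerly: I'm guessing at the specifics, but the
usual knob is vm.swappiness, set via sysctl, something like:

  # Prefer reclaiming page cache over swapping out anonymous memory.
  cat > /etc/sysctl.d/90-swappiness.conf <<'EOF'
  vm.swappiness = 10
  EOF
  sysctl --system    # reload all sysctl config files without rebooting

The value 10 is only an example; Simon may well use different settings.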
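
And since I mentioned the 8 GB zram swap on my Fedora notebook: on recent
Fedora that is handled by zram-generator, configured through
/etc/systemd/zram-generator.conf. Roughly like this (the 8 GB figure is
just my own choice, not a recommendation):

  cat > /etc/systemd/zram-generator.conf <<'EOF'
  [zram0]
  # Size of the compressed in-RAM swap device, in MB (8192 -> ~8 GB).
  zram-size = 8192
  compression-algorithm = zstd
  EOF
  # Picked up by the generator on the next boot (or after a daemon-reload).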