Linux References

Various Tips

RAID 1 - replacing a disk and installing GRUB on a RH6 box

I had RAID 1 (a mirror) installed on all partitions of sda and sdb, with the /boot directory on sda1/sdb1. From 2013 to 2017 the box used two ST2000DM001 (2TB) disks. In 2017 one disk failed at a power cycle - it had to be disconnected to let the BIOS proceed. After removing the disk the computer booted right away. The replacement was an ST2000DM006. In May 2018 the other ST2000DM001 failed after a power cycle: the BIOS was alive, but the box would not boot. Looking later at the logs, I found that the disk had accumulated too many errors and had been automatically removed from the RAID array a few days before the boot failure. The newer ST2000DM006 disk was alive but had no valid MBR. To install the MBR I booted from a live openSUSE 13 stick, which uses GRUB (not GRUB2), as RH6 does. Installing the MBR on sdb:
#grub
grub> root (hd1,0)
grub> setup (hd1)
grub> quit
  
Then I installed a replacement disk - another ST2000DM006 - as sdb. The box booted from sda. The partition table was copied over (it had to be forced because of cylinder boundary issues):
#sfdisk -d /dev/sda | sfdisk --force /dev/sdb
  
There is no need to format the partitions (ext4 in this case) - the re-sync restores the filesystems. For every RAID partition I used a command like:
#mdadm --manage /dev/md0 --add /dev/sdb1
  
This adds the new partition to the array and triggers a re-sync. Before adding the next partition I waited until the previous re-sync job was done.
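
The re-sync progress can be watched from /proc/mdstat, or queried per array with mdadm (standard commands, nothing specific to this box):
#watch -n5 cat /proc/mdstat              # live view of all arrays
#mdadm --detail /dev/md0 | grep Rebuild  # shows "Rebuild Status : NN% complete" while syncing
  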

The next step was to install GRUB on sdb:

#grub
grub> find /grub/stage1
find /grub/stage1
 (hd0,0)
 (hd1,0)
grub> device (hd0) /dev/sdb
device (hd0) /dev/sdb
grub> root (hd0,0)
root (hd0,0)
 Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd0)
setup (hd0)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd0)"...  27 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd0) (hd0)1+27 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
grub>  find /grub/stage1
 find /grub/stage1
 (hd0,0)
 (hd1,0)
grub> quit
quit

  
The MBR record was:
# dd if=/dev/sdb bs=512 count=1 | xxd
1+0 records in
1+0 records out
512 bytes (512 B) copied, 3.6087e-05 s, 14.2 MB/s
0000000: eb48 9000 0000 0000 0000 0000 0000 0000  .H..............
0000010: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000020: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000030: 0000 0000 0000 0000 0000 0000 0000 0302  ................
0000040: ff00 0020 0100 0000 0002 fa90 90f6 c280  ... ............
0000050: 7502 b280 ea59 7c00 0031 c08e d88e d0bc  u....Y|..1......
0000060: 0020 fba0 407c 3cff 7402 88c2 52f6 c280  . ..@|<.t...R...
0000070: 7454 b441 bbaa 55cd 135a 5272 4981 fb55  tT.A..U..ZRrI..U
0000080: aa75 43a0 417c 84c0 7505 83e1 0174 3766  .uC.A|..u....t7f
0000090: 8b4c 10be 057c c644 ff01 668b 1e44 7cc7  .L...|.D..f..D|.
00000a0: 0410 00c7 4402 0100 6689 5c08 c744 0600  ....D...f.\..D..
00000b0: 7066 31c0 8944 0466 8944 0cb4 42cd 1372  pf1..D.f.D..B..r
00000c0: 05bb 0070 eb7d b408 cd13 730a f6c2 800f  ...p.}....s.....
00000d0: 84f0 00e9 8d00 be05 7cc6 44ff 0066 31c0  ........|.D..f1.
00000e0: 88f0 4066 8944 0431 d288 cac1 e202 88e8  ..@f.D.1........
00000f0: 88f4 4089 4408 31c0 88d0 c0e8 0266 8904  ..@.D.1......f..
0000100: 66a1 447c 6631 d266 f734 8854 0a66 31d2  f.D|f1.f.4.T.f1.
0000110: 66f7 7404 8854 0b89 440c 3b44 087d 3c8a  f.t..T..D.;D.}<.
0000120: 540d c0e2 068a 4c0a fec1 08d1 8a6c 0c5a  T.....L......l.Z
0000130: 8a74 0bbb 0070 8ec3 31db b801 02cd 1372  .t...p..1......r
0000140: 2a8c c38e 0648 7c60 1eb9 0001 8edb 31f6  *....H|`......1.
0000150: 31ff fcf3 a51f 61ff 2642 7cbe 7f7d e840  1.....a.&B|..}.@
0000160: 00eb 0ebe 847d e838 00eb 06be 8e7d e830  .....}.8.....}.0
0000170: 00be 937d e82a 00eb fe47 5255 4220 0047  ...}.*...GRUB .G
0000180: 656f 6d00 4861 7264 2044 6973 6b00 5265  eom.Hard Disk.Re
0000190: 6164 0020 4572 726f 7200 bb01 00b4 0ecd  ad. Error.......
00001a0: 10ac 3c00 75f4 c300 0000 0000 0000 0000  ..<.u...........
00001b0: 0000 0000 0000 0000 0000 0000 0000 8020  ............... 
00001c0: 2100 fd0e 50fe 0008 0000 0000 7d00 000e  !...P.......}...
00001d0: 51fe fdfe ffff 0008 7d00 0000 093d 00fe  Q.......}....=..
00001e0: ffff fdfe ffff 0008 863d 0000 093d 00fe  .........=...=..
00001f0: ffff 05fe ffff 0008 8f7a b080 516e 55aa  .........z..QnU.
  
The RAID arrays are:
# cat /proc/mdstat 
Personalities : [raid1] 
md3 : active raid1 sdb5[3] sda5[2]
      204798908 blocks super 1.1 [2/2] [UU]
      bitmap: 1/2 pages [4KB], 65536KB chunk

md5 : active raid1 sdb7[3] sda7[2]
      20478908 blocks super 1.1 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md1 : active raid1 sdb2[3] sda2[2]
      511998844 blocks super 1.1 [2/2] [UU]
      bitmap: 0/4 pages [0KB], 65536KB chunk

md2 : active raid1 sdb3[3] sda3[2]
      511998844 blocks super 1.1 [2/2] [UU]
      bitmap: 0/4 pages [0KB], 65536KB chunk

md7 : active raid1 sdb9[3] sda9[2]
      20478908 blocks super 1.1 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md6 : active raid1 sdb8[3] sda8[2]
      20478908 blocks super 1.1 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md0 : active raid1 sdb1[3] sda1[2]
      4095988 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md11 : active raid1 sdb13[3] sda13[2]
      573151100 blocks super 1.1 [2/2] [UU]
      bitmap: 0/5 pages [0KB], 65536KB chunk

md8 : active raid1 sdb10[3] sda10[2]
      20478908 blocks super 1.1 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md4 : active raid1 sdb6[3] sda6[2]
      40958908 blocks super 1.1 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md9 : active raid1 sdb11[3] sda11[2]
      12286908 blocks super 1.1 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md10 : active raid1 sdb12[3] sda12[2]
      12286908 blocks super 1.1 [2/2] [UU]

# df -T
Filesystem          Type      1K-blocks          Used    Available Use% Mounted on
/dev/md9            ext4       11962868        597872     10750652   6% /
tmpfs               tmpfs       1938456        292712      1645744  16% /dev/shm
/dev/md0            ext4        3966132        105464      3655872   3% /boot
/dev/md1            ext4      503832676     418832344     59400392  88% /data2a
/dev/md2            ext4      503832676      32393700    445839036   7% /data2b
/dev/md11           ext4      564026548      44734484    490634512   9% /data2c
/dev/md3            ext4      201453468      57012820    134200704  30% /home
/dev/md8            ext4       20026168       3798120     15204104  20% /opt
/dev/md6            ext4       20026168         69228     18932996   1% /tmp
/dev/md4            ext4       40184116       6137276     31998896  17% /usr
/dev/md7            ext4       20026168         47872     18954352   1% /usr/local
/dev/md5            ext4       20026168       3255152     15747072  18% /var

  

Other useful commands:

#lsblk                         # block devices, partitions, and mount points
#blkid                         # filesystem types and UUIDs
#ls -alF /dev/disk/by-uuid/    # map UUIDs to device names
#smartctl -a /dev/sda          # full SMART report for a disk
  

These ST disks report plenty of errors from the start. Here are the reports from the old (sda) and new (sdb) disks after one day of operation:

#smartctl -a /dev/sda
.................
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   117   099   006    Pre-fail  Always       -       120032616
  3 Spin_Up_Time            0x0003   096   096   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       21
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   079   060   030    Pre-fail  Always       -       91615351
  9 Power_On_Hours          0x0032   093   093   000    Old_age   Always       -       6974
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       21
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   064   053   045    Old_age   Always       -       36 (Min/Max 33/41)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       9
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       558
194 Temperature_Celsius     0x0022   036   047   000    Old_age   Always       -       36 (0 21 0 0 0)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       190945656052507
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       7832152235
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       173695300526
.................
#smartctl -a /dev/sdb
.................
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   106   100   006    Pre-fail  Always       -       11886704
  3 Spin_Up_Time            0x0003   100   100   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       1
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   100   253   030    Pre-fail  Always       -       185949
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       23
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       1
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
189 High_Fly_Writes         0x003a   099   099   000    Old_age   Always       -       1
190 Airflow_Temperature_Cel 0x0022   061   056   045    Old_age   Always       -       39 (Min/Max 24/44)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       1
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       28
194 Temperature_Celsius     0x0022   039   044   000    Old_age   Always       -       39 (0 24 0 0 0)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       69844758167568
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       3910734586
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       634481
.................
  
For comparison, three smaller Seagate Barracuda ST3250820AS disks (250 GB) have served in another box in RAID 5 for 9 years - twice as long as the 2 TB Barracuda disks. They report many errors but have not failed so far:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   118   091   006    Pre-fail  Always       -       186999469
  3 Spin_Up_Time            0x0003   094   094   070    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       61
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   087   060   030    Pre-fail  Always       -       610293231
  9 Power_On_Hours          0x0032   009   009   000    Old_age   Always       -       80031
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       63
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   058   049   045    Old_age   Always       -       42 (Min/Max 23/47)
194 Temperature_Celsius     0x0022   042   051   000    Old_age   Always       -       42 (0 18 0 0 0)
195 Hardware_ECC_Recovered  0x001a   065   052   000    Old_age   Always       -       202893386
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0000   100   253   000    Old_age   Offline      -       0
202 Data_Address_Mark_Errs  0x0032   100   253   000    Old_age   Always       -       0
  

Add a sudo-enabled user on RH7

Adding a user "supu":
#useradd -m -c "His name" supu    # create the account and its home directory
#passwd supu                      # set the password
#usermod -aG wheel supu           # members of wheel may use sudo on RH7
  
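
To verify the sudo access (assuming the default RHEL7 sudoers file, where the wheel group is enabled):
#su - supu
$sudo whoami    # should print "root" after supu's password is accepted
  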

Installation of RHEL7 from a JLab CD on a desktop

The desktop has two 3TB disks. I wanted to use them as RAID1 (a mirror) and to have several partitions: /, /boot, /home, etc. The installation turned out to be a lot more fail-prone and tedious than it used to be (with RH6 or earlier RH at JLab). This was the first time I had used >2TB disks, which require the GPT partitioning scheme and a small (2MB) "bios_grub" partition in order to boot with grub. The story of the attempts and lessons learned:
  1. I used the installation tool to partition the disks. Only one "bios_grub" partition could be created (on one of the two disks) - not what I wanted, but I continued. The RAID1 partitions on both disks were created. The installation proceeded to install the packages and, after a few hours, reported that the boot records were not set properly and the system would not boot, offering "stop" or "continue". I selected "stop". In fact, the system booted after that, but obviously some installation scripts had not run - the system did not know how to "yum update". I gave up.
  2. Same as 1), but early in the installation I went to a console (Ctrl-Alt-F2) and made the "bios_grub" partitions on both disks (a non-interactive way to create such a partition is sketched after this list). It came to the same warning about the boot record; I selected "continue". Rebooting failed at an early stage and I gave up on it.
  3. I selected the default partitioning scheme (no RAID). It made huge "xfs" partitions. Rebooting failed, stuck on "Starting Show Plymouth Boot Screen". After some investigation I gave up.
  4. At an early stage of the installation I went to the console and, using gdisk, partitioned /dev/sda to my liking, but without the RAID type. In the installation partitioning menu I assigned the existing partitions to the mount points and selected the ext4 file system. It rebooted OK. A minor issue: when setting up a user account in the installation menu I specified a particular uid; as a result, the uid in /etc/passwd and the owner of the /home directory ended up different. The next step was to make a RAID1 system by hand.
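
For reference, the "bios_grub" partition mentioned in 2) can also be created non-interactively with sgdisk instead of an interactive gdisk session (a sketch - ef02 is the BIOS boot partition typecode; the start sector and the 2MB size are only an example):
#sgdisk -n 1:2048:+2M -t 1:ef02 /dev/sda    # new partition 1, ~2MB, type "BIOS boot"
  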

Converting to RAID1

There are many instructions on the WWW; I mostly followed this one. My resulting partitioning/mounting schemes are:
[root@genl2 gen]# parted /dev/sda print
Model: ATA ST3000DM008-2DM1 (scsi)
Disk /dev/sda: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: pmbr_boot

Number  Start   End     Size    File system     Name        Flags
 1      1049kB  2097kB  1049kB                              bios_grub
 2      2097kB  8592MB  8590MB  ext4                        raid
 3      8592MB  760GB   752GB   ext4                        raid
 4      760GB   1512GB  752GB   ext4                        raid
 5      1512GB  1898GB  387GB   ext4                        raid
 6      1898GB  2036GB  137GB   ext4                        raid
 7      2036GB  2105GB  68.7GB  ext4                        raid
 8      2105GB  2173GB  68.7GB  linux-swap(v1)              raid
 9      2173GB  2208GB  34.4GB  ext4            Linux RAID  raid
10      2208GB  2994GB  786GB                   Linux RAID  raid

[root@genl2 sgen]# parted /dev/sdb print
Model: ATA ST3000DM008-2DM1 (scsi)
Disk /dev/sdb: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: pmbr_boot

Number  Start   End     Size    File system  Name        Flags
 1      1049kB  2097kB  1049kB                           bios_grub
 2      2097kB  8592MB  8590MB                           raid
 3      8592MB  760GB   752GB                            raid
 4      760GB   1512GB  752GB                            raid
 5      1512GB  1898GB  387GB                            raid
 6      1898GB  2036GB  137GB                            raid
 7      2036GB  2105GB  68.7GB                           raid
 8      2105GB  2173GB  68.7GB                           raid
 9      2173GB  2208GB  34.4GB               Linux RAID  raid
10      2208GB  2994GB  786GB                Linux RAID  raid

[root@genl2 sgen]# cat /proc/mdstat 
Personalities : [raid1] 
md10 : active raid1 sda10[2] sdb10[1]
      767425536 blocks super 1.2 [2/2] [UU]
      bitmap: 0/6 pages [0KB], 65536KB chunk

md9 : active raid1 sda9[2] sdb9[1]
      33520640 blocks super 1.2 [2/2] [UU]
      
md6 : active raid1 sdb6[1] sda6[2]
      134085632 blocks super 1.2 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md4 : active raid1 sdb4[1] sda4[2]
      733871104 blocks super 1.2 [2/2] [UU]
      bitmap: 0/6 pages [0KB], 65536KB chunk

md3 : active raid1 sdb3[1] sda3[2]
      733871104 blocks super 1.2 [2/2] [UU]
      bitmap: 1/6 pages [4KB], 65536KB chunk

md5 : active raid1 sdb5[1] sda5[2]
      377355264 blocks super 1.2 [2/2] [UU]
      bitmap: 0/3 pages [0KB], 65536KB chunk

md8 : active raid1 sda8[2] sdb8[1]
      67042304 blocks super 1.2 [2/2] [UU]
      
md7 : active raid1 sda7[2] sdb7[1]
      67042304 blocks super 1.2 [2/2] [UU]
      
md2 : active raid1 sda2[2] sdb2[1]
      8379392 blocks super 1.2 [2/2] [UU]

[root@genl2 sgen]# df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/md6       131850252 7362284 117767304   6% /
devtmpfs         4040516       0   4040516   0% /dev
tmpfs            4057644      12   4057632   1% /dev/shm
tmpfs            4057644    9716   4047928   1% /run
tmpfs            4057644       0   4057644   0% /sys/fs/cgroup
/dev/md7        65858092 2081280  60408316   4% /var
/dev/md5       371301840   76744 352340952   1% /home
/dev/md9        32862992  773668  30396908   3% /opt
/dev/md2         8116664  274268   7407044   4% /boot
/dev/md4       722222080   73752 685438392   1% /data2b
/dev/md3       722222080   73752 685438392   1% /data2a
/dev/md10      755249888   73752 716788476   1% /data2c
tmpfs             811532      28    811504   1% /run/user/1001

  
  1. Copy the partition scheme of /dev/sda to /dev/sdb
    #sgdisk /dev/sda -R /dev/sdb      # copy the partition table
    #sgdisk -G /dev/sdb               # set different UUID
    #blockdev --rereadpt -v /dev/sdb  # reread the partition table by the kernel
      
  2. Using gdisk /dev/sdb, toggle the partition types to Linux RAID (apart from the "bios_grub" partition); a non-interactive sgdisk equivalent is sketched after these steps.
  3. Create the RAID1 arrays on /dev/sdb (degraded - the "missing" keyword leaves a slot for sda to be added later) and make the filesystems:
    #for i in 2 3 4 5 6 7 8 9 10; do  mdadm --create /dev/md$i --level=1 --raid-disks=2 missing /dev/sdb$i ; done
    #for i in 2 3 4 5 6 7 9 10; do  mkfs.ext4 /dev/md$i ; done
    #mkswap /dev/md8
      
  4. Copy the data existing at this stage from the running system (on /dev/sda) to the new arrays:
    #for i in 2 5 6 7 9; do  mkdir /mnt/md$i ; done
    #for i in 2 5 6 7 9; do  mount /dev/md$i /mnt/md$i ; done
    #cd /boot
    #cp -ax . /mnt/md2
    #cd /home
    #cp -ax . /mnt/md5
    #cd /
    #cp -ax . /mnt/md6
    #cd /var
    #cp -ax . /mnt/md7
    #cd /opt
    #cp -ax . /mnt/md9
    #for i in 2 5 6 7 9; do  umount /mnt/md$i ; done
    #for i in 2 5 6 7 9; do  rmdir /mnt/md$i ; done
      
  5. Mount the full structure on /mnt, chroot into it, and prepare it for booting:
    #mount /dev/md6 /mnt
    #mount /dev/md2 /mnt/boot
    #mount /dev/md3 /mnt/data2a
    #mount /dev/md4 /mnt/data2b
    #mount /dev/md5 /mnt/home
    #mount /dev/md7 /mnt/var
    #mount /dev/md9 /mnt/opt
    #mount /dev/md10 /mnt/data2c
    #mount --bind /proc /mnt/proc
    #mount --bind /dev  /mnt/dev
    #mount --bind /sys  /mnt/sys
    #mount --bind /run  /mnt/run
    #touch /mnt/.autorelabel
    #blkid | grep /dev/md  # Use these UUIDs to edit the /etc/fstab - see later
    #ls -l /dev/disk/by-id/md-uuid* # note the UUID for the root filesystem - it may be needed for grub.cfg
    #chroot /mnt
    #emacs /etc/fstab # use the UUIDs from the previous command
    #mdadm --detail --scan > /etc/mdadm.conf 
    #mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.old
    #dracut --mdadmconf --force /boot/initramfs-$(uname -r).img $(uname -r)
    #emacs /etc/default/grub #
       #GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"
       GRUB_CMDLINE_LINUX="crashkernel=auto rd.auto rd.auto=1 rhgb quiet"
       GRUB_PRELOAD_MODULES="mdraid1x"
    #cp -a /boot/grub2/grub.cfg /boot/grub2/grub.cfg_old
    #grub2-mkconfig -o /boot/grub2/grub.cfg
    #emacs /boot/grub2/grub.cfg # check if the UUID for root= is correct
    #grub2-install /dev/sdb
      
  6. Reboot from sdb. Add /dev/sda to the RAID:
    #sgdisk /dev/sdb -R /dev/sda
    #sgdisk -G /dev/sda
    #blockdev --rereadpt -v /dev/sda
    #for i in 2 3 4 5 6 7 8 9 10; do  mdadm --manage /dev/md$i --add /dev/sda$i ; done
    #watch -n1 "cat /proc/mdstat"
    #cp -a /boot/grub2/grub.cfg /boot/grub2/grub.cfg_old_1
    #grub2-mkconfig -o /boot/grub2/grub.cfg
    #grub2-install /dev/sda
    #grub2-install /dev/sdb
      
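
Two additional sketches using only standard tools. The partition type toggling of step 2 can also be done non-interactively with sgdisk (fd00 is the Linux RAID typecode; partition 2 is just an example):
#sgdisk -t 2:fd00 /dev/sdb
  
A quick check that both disks carry GRUB boot code after grub2-install - the same idea as the dd/xxd inspection of the MBR on the RH6 box above:
#for d in /dev/sda /dev/sdb; do dd if=$d bs=512 count=1 2>/dev/null | strings | grep -q GRUB && echo "$d: GRUB boot code present"; done
  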

Installation of wine

The purpose is to run the SuperFish (Poisson) package from LANL. The available SuperFish (version 7) 32-bit installation executable was built for Windows XP or earlier.

Installation of wine on Fedora 30

On Fedora 30 both x86_64 and i686 wine packages are available. First I installed the 64-bit one, the newest version 4.14, selecting Windows 7 in winecfg. It started running
[gen@genl2 ~]$ wine Downloads/PoissonSuperfish_7.19.exe                 
  
and opened a few windows. On one of the windows it got stuck, freezing X. I had to log in remotely; killing the Xwayland process restarted X. I removed the x86_64 version, installed the i686 version, downgraded it to version 4.5, and selected Windows XP in winecfg. It froze in Xwayland again. Then, in the winecfg configuration menu, I clicked on Graphics and unchecked two boxes: "Allow the window manager to decorate the windows" and "Allow the window manager to control the windows". After that the problematic window went full screen and the installation was successful.

Installation of wine on RHEL 7.7

The RHEL7 distribution does not provide a 32-bit version of wine. I installed the x86_64 version. It did not work for SuperFish:
[gen@genl2 ~]$ wine Downloads/PoissonSuperfish_7.19.exe                 
wine: Bad EXE format for Z:\home\gen\Downloads\PoissonSuperfish_7.19.exe.
  
After searching the WWW and trying a few things it became clear that one needs an i686 (32-bit) wine - either a ready-made build or one you compile yourself.
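
The bitness of such an installer can be confirmed with the standard file utility (a 32-bit Windows binary is reported as "PE32 executable ... Intel 80386"):
$file Downloads/PoissonSuperfish_7.19.exe
  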

Richard Grainger has compiled wine.i686 and maintains a "wine32" repository for RHEL7/CentOS 7 - see the announcement. His instructions are:

yum install https://harbottle.gitlab.io/wine32/7/i386/wine32-release.rpm
yum install wine.i686
  
It did not work for me on the first try, because at that moment the latest versions of two packages in the standard RHEL7 repositories - vulkan and spirv-tools-libs - were incompatible. I downgraded the vulkan version by hand:
[root@genl2]# yum install https://harbottle.gitlab.io/wine32/7/i386/wine32-release.rpm
[root@genl2]# yum install spirv-tools-libs vulkan-1.1.73.0
[root@genl2]# yum install spirv-tools-libs.i686 vulkan-1.1.73.0-1.el7.i686
[root@genl2]# yum list all vulkan spirv-tools-libs
Loaded plugins: enabled_repos_upload, langpacks, package_upload, product-id, search-disabled-repos, subscription-manager
Installed Packages
spirv-tools-libs.i686                                           2019.1-1.el7                                           @wine32                          
spirv-tools-libs.x86_64                                         2019.1-1.el7                                           @epel                            
vulkan.i686                                                     1.1.73.0-1.el7                                         @rhel-7-workstation-optional-rpms
vulkan.x86_64                                                   1.1.73.0-1.el7                                         @rhel-7-workstation-optional-rpms
Available Packages
vulkan.i686                                                     1.1.97.0-1.el7                                         rhel-7-workstation-optional-rpms 
vulkan.x86_64                                                   1.1.97.0-1.el7                                         rhel-7-workstation-optional-rpms 
.....
[root@genl2]# yum install wine.i686
  
After that I created the standard .wine directory and installed SuperFish:
[gen@genl2 ~]$ winecfg 
[gen@genl2 ~]$ wine Downloads/PoissonSuperfish_7.19.exe 
  
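
If a fresh 32-bit prefix is ever needed explicitly (for example on a box where a 64-bit wine is also installed), the architecture can be forced with wine's standard environment variables; the prefix path here is only an example:
$WINEARCH=win32 WINEPREFIX=$HOME/.wine32 winecfg
  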

Useful commands

Obsolete tips

Archive of obsolete tips