So this is one of those things that happen after a long work week. I intended to re-format a pen-drive but accidentally re-formatted a boot sector on my HDD. Right now I am running a back-up of everything in the user space of this computer.
All may not be doom and gloom, as I have two HDDs in this machine and the boot sectors are RAIDed for redundancy. All in all, my partitioning scheme looks like this:
/dev/sda
part        filesys     mount    size       comments
/dev/sda1   ext4                 0,2 GiB    boot, part of RAID1 md0
/dev/sda2   linux-swap           1,0 GiB    swap, part of RAID0 md1
/dev/sda3   extended             499 GiB
/dev/sda5   btrfs       /        9,77 GiB   system, btrfs RAID(1/0)
/dev/sda6   btrfs       /home    400 GiB    user space, btrfs RAID(0/0)
/dev/sda7   ext4        /extra   55,6 GiB   VMs etc, part of RAID0 md2

/dev/sdb
part        filesys     mount    size       comments
/dev/sdb1   ext4                 0,2 GiB    boot, part of RAID1 md0
/dev/sdb2   linux-swap           1,0 GiB    swap, part of RAID0 md1
/dev/sdb3   extended             499 GiB
/dev/sdb5   btrfs       /        9,77 GiB   system, btrfs RAID(1/0)
/dev/sdb6   btrfs       /home    400 GiB    user space, btrfs RAID(0/0)
/dev/sdb7   ext4        /extra   55,6 GiB   VMs etc, part of RAID0 md2
Am I right in thinking that the RAID set-up may save me from a complete re-install? If yes, I would appreciate a step-by-step guide to recovery (here, or a link to one).
TIA,
/Martin (time to go to bed)
Last edited by Martin (2019-09-07 09:16:37)
"Problems worthy of attack
prove their worth by hitting back."
Piet Hein
Offline
I'll take a stab at it, as this is an area I have experience in breaking.
I think you mean boot partition instead of boot sector. Is that correct?
The goal of redundant RAID is that if one drive dies, the other (in your case you have two drives) still contains a viable copy. Necessarily, what gets written to one drive gets written to the other, including a format.
A lot depends on where the GRUB folder, kernel images, initrd.img, etc. were/are stored. I'm guessing /dev/sda1.
Try Rescatux, and/or Super Grub2 Disk, if only to try and boot into your Linux system and restore from there. Good luck.
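A quick, read-only way to see what (if anything) is still on that array is to mount it somewhere temporary and look; /mnt/md0 below is just an example mount point:
sudo mkdir -p /mnt/md0
sudo mount -o ro /dev/md0 /mnt/md0
ls -l /mnt/md0    # an intact /boot would show vmlinuz-*, initrd.img-*, config-*, System.map-* and a grub/ directory
sudo umount /mnt/md0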
8bit
We don't need no steekin' tag line.
Last edited by deleted0 (2019-09-07 13:44:18)
Hmm, I am starting to suspect, strongly suspect, that what I abused yesterday was not a disk partition but rather md0 itself. RAID1 does not help me then, I guess.
I haven't touched anything yet, but mdadm indicates md0 is still a working RAID1 set-up. One reason for worry is that it shows up as a non-mounted 'disk' in Thunar. It never did that before!
Also, I just ran an update in Synaptic and there is a bunch of warning messages I don't think I have seen in the past:
cryptsetup: WARNING: failed to detect canonical device of /dev/sda5
cryptsetup: WARNING: failed to detect canonical device of /dev/sdb5
cryptsetup: WARNING: could not determine root device from /etc/fstab
So it fails to find my root partition(s).
For now I will try not to switch off or re-boot, and will continue researching in the hope that I will not have to re-install Helium from scratch.
/Martin
"Problems worthy of attack
prove their worth by hitting back."
Piet Hein
Offline
Can you go back in your bash history (or whatever shell you use) and paste the command you used to "format", for us to look at?
8bit
"Now, now my good man, this is no time for making enemies."
- Voltaire (1694-1778) on his deathbed in response to a priest asking that he renounce Satan.
I used GParted...
It is a pity Lithium is not ready for prime time as it seems I created the perfect excuse for moving on from Helium.
/Martin
Last edited by Martin (2019-09-07 21:16:49)
"Problems worthy of attack
prove their worth by hitting back."
Piet Hein
Offline
If the setup is correct and it is RAIDed for redundancy (as you said), you should be fine and it will survive the reboot. If it is RAIDed for speed then you will not be fine.
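For what it's worth, a read-only way to see which level each md array actually runs at:
cat /proc/mdstat    # lists every mdX device with its raid level (raid0/raid1) and its member partitions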
Offline
This looks like a good process if I have indeed wiped md0 and need to re-install Grub (or have I misunderstood how these things work???). Since I have not shut down or re-booted my computer (posting from it now) I assume I could go to step 8 and do:
grub-install --root-directory=/ /dev/sda
without the need for a Live-CD?
Right??
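For reference, a rough sketch of what re-populating /boot from the running system could look like, assuming a BIOS/MBR setup (which the extended partitions suggest) and that md0 really is the /boot array; treat it as an outline rather than a recipe. Note that --root-directory=/ simply means the GRUB files land in /boot/grub, so md0 would need to be mounted on /boot first or they end up on the btrfs root instead:
sudo mount /dev/md0 /boot                                   # put the (now empty) boot array back on /boot
sudo apt-get install --reinstall linux-image-$(uname -r)    # re-creates the kernel, config and initrd files in /boot
sudo grub-install /dev/sda                                  # reinstall GRUB to the MBR of the first disk
sudo grub-install /dev/sdb                                  # ...and the second, so either disk can boot
sudo update-grub                                            # regenerate /boot/grub/grub.cfg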
/Martin
Last edited by Martin (2019-09-07 22:00:17)
"Problems worthy of attack
prove their worth by hitting back."
Piet Hein
Offline
Suggestion:
run GParted, grab a screenshot and post results.
8bit
"I'll moider da bum."
- Heavyweight boxer Tony Galento, when asked what he thought of William Shakespeare
This is what GParted looks like when studying md0 (now back to ext4 from a temporary fat32 life):
I notice there is an "Attempt Data Rescue" instance in the "Device" menu. Worth trying?
/Martin
"Problems worthy of attack
prove their worth by hitting back."
Piet Hein
Offline
^ Not as helpful as hoped for. So little data on /dev/md0, 11.83 MiB, this is / and I would expect a lot more.
This looks like normal space reserved by the file system more than actual data.
(now back to ext4 from a temporary fat32 life)
an update in Synaptic
And too many moving parts. This is a whole other conversation; how did you do the conversion, etc.
The first rule of "file system errors" is to do as little as possible.
I notice there is an "Attempt Data Rescue" instance in the "Device" menu. Worth trying?
I think that's more for simple errors (restoring a deleted partition type error) than anything more serious.
So many questions, and with no access to the machine it's hard to say...
cryptsetup: WARNING: could not determine root device from /etc/fstab
Does the UUID in fstab match the UUID of the actual partition?
And so forth...
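One quick way (among many) to put the two side by side:
grep -v '^#' /etc/fstab    # the UUIDs the system expects at boot
sudo blkid                 # the UUIDs the devices actually carry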
Hey Devs, we have a motivated alpha tester here!
8bit
I have no special talent. I am only passionately curious. - Albert Einstein
Last edited by deleted0 (2019-09-08 11:40:47)
I notice there is an "Attempt Data Rescue" instance in the "Device" menu. Worth trying?
/Martin
Your partition scheme in the first post, is that the output from a command (if so, which one), or a list of how it once was?
Do you remember if you set up raid0 or raid1?
The swap should not be mirrored (raid1); it should be raid0.
The /home looks suspicious with "RAID(0/0)".
The output from:
# lsblk -f
and
# mdadm --detail /dev/md*
would give information enough to proceed.
Last edited by rbh (2019-09-08 12:16:13)
// Regards rbh
Please read before requesting help: "Guide to getting help", "Introduction to the Bunsenlabs Lithium Desktop" and other help topics under "Help & Resources" on the BunsenLabs menu
Offline
md0 (now back to ext4 from a temporary fat32 life):
I missed this before. Did you convert fat32 > ext4 now, after your misfortune?
Why did you convert it? The size and partition type hint that it is a UEFI partition. A UEFI partition must be FAT to support booting.
If you have the time to spare, you can try to fix your misfortune; otherwise it might be better to reinstall.
You can install Helium and upgrade to buster/lithium-dev.
Last edited by rbh (2019-09-08 12:15:12)
// Regards rbh
Please read before requesting help: "Guide to getting help", "Introduction to the Bunsenlabs Lithium Desktop" and other help topics under "Help & Resources" on the BunsenLabs menu
Offline
^
Wheeew. Finally someone who sounds like they know what they're talking about.
Martin wrote: I notice there is an "Attempt Data Rescue" instance in the "Device" menu. Worth trying?
/Martin
Your partition scheme in the first post, is that the output from a command (if so, which one), or a list of how it once was?
Do you remember if you set up raid0 or raid1?
The swap should not be mirrored (raid1); it should be raid0.
The /home looks suspicious with "RAID(0/0)".
The table in the first post shows what it all looked like after building the system in December last year.
The output from:
# lsblk -f
and
# mdadm --detail /dev/md*
would give information enough to proceed.
martin@he2:~$ lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 linux_raid_member he2:0 f8d3a77f-8a8d-c09e-2535-1387b1a0d42d
│ └─md0 ext4 4666049c-8f1e-4f0d-89bc-10d8e14a5a6c
├─sda2 linux_raid_member he2:1 4bcd4789-cb0b-b8f2-ba5a-47f71e3e0cd8
│ └─md1 swap 89c3dd1a-ed27-4b77-8c29-a2cc9e599a96 [SWAP]
├─sda5 btrfs d112bfbd-53ca-4d8e-a8a9-f7a4b5d15f52 /
├─sda6 btrfs 84136c02-e53e-43f6-b5d2-85999816add0 /home
└─sda7 linux_raid_member he2:2 b1343f78-6b8d-1f99-211f-073d323d1683
└─md2 ext4 f0210d74-89ad-46c8-ba11-c91b9245bedd /extra
sdb
├─sdb1 linux_raid_member he2:0 f8d3a77f-8a8d-c09e-2535-1387b1a0d42d
│ └─md0 ext4 4666049c-8f1e-4f0d-89bc-10d8e14a5a6c
├─sdb2 linux_raid_member he2:1 4bcd4789-cb0b-b8f2-ba5a-47f71e3e0cd8
│ └─md1 swap 89c3dd1a-ed27-4b77-8c29-a2cc9e599a96 [SWAP]
├─sdb5 btrfs d112bfbd-53ca-4d8e-a8a9-f7a4b5d15f52
├─sdb6 btrfs 84136c02-e53e-43f6-b5d2-85999816add0
└─sdb7 linux_raid_member he2:2 b1343f78-6b8d-1f99-211f-073d323d1683
└─md2 ext4 f0210d74-89ad-46c8-ba11-c91b9245bedd /extra
sr0
and
martin@he2:~$ sudo mdadm --detail /dev/md*
[sudo] lösenord för martin:
mdadm: /dev/md does not appear to be an md device
/dev/md0:
Version : 1.2
Creation Time : Sat Dec 8 11:38:44 2018
Raid Level : raid1
Array Size : 204608 (199.81 MiB 209.52 MB)
Used Dev Size : 204608 (199.81 MiB 209.52 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sun Sep 8 10:00:44 2019
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : he2:0 (local to host he2)
UUID : f8d3a77f:8a8dc09e:25351387:b1a0d42d
Events : 1084
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
/dev/md1:
Version : 1.2
Creation Time : Sat Dec 8 11:39:15 2018
Raid Level : raid0
Array Size : 2045952 (1998.00 MiB 2095.05 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sat Dec 8 11:39:15 2018
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Name : he2:1 (local to host he2)
UUID : 4bcd4789:cb0bb8f2:ba5a47f7:1e3e0cd8
Events : 0
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
/dev/md2:
Version : 1.2
Creation Time : Sat Dec 8 11:39:47 2018
Raid Level : raid0
Array Size : 114081792 (108.80 GiB 116.82 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sat Dec 8 11:39:47 2018
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Name : he2:2 (local to host he2)
UUID : b1343f78:6b8d1f99:211f073d:323d1683
Events : 0
Number Major Minor RaidDevice State
0 8 7 0 active sync /dev/sda7
1 8 23 1 active sync /dev/sdb7
/Martin
"Problems worthy of attack
prove their worth by hitting back."
Piet Hein
Offline
Martin wrote: md0 (now back to ext4 from a temporary fat32 life):
I missed this before. Did you convert fat32 > ext4 now, after your misfortune?
Why did you convert it? The size and partition type hint that it is a UEFI partition. A UEFI partition must be FAT to support booting.
What I intended to do late on Friday was to re-format a pen-drive. Hence fat32. Too late it struck me as rather odd that said pen-drive was ext4 just before re-formatting. The rest is history... Don't do these kinds of things late on a Friday after a looong work week...
If you have the time to spare, you can try to fix your misfortune; otherwise it might be better to reinstall.
You can install Helium and upgrade to buster/lithium-dev.
I hoped to find a way back involving less work than re-installing Helium. This is my primary machine so going for Lithium alpha or even beta is not really what I would like to do. That is for virtual machines or my tertiary machine.
/Martin
"Problems worthy of attack
prove their worth by hitting back."
Piet Hein
Offline
Does the UUID in fstab match the UUID of the actual partition?
And so forth...
Just checked this and found a disagreement. I edited fstab and md0 promptly disappeared from Thunar. Checking with GParted I now find it has the mount point /boot (missing in the screenshot above). lsblk does not see this mount point, though:
martin@he2:~$ lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 linux_raid_mem he2:0 f8d3a77f-8a8d-c09e-2535-1387b1a0d42d
│ └─md0 ext4 4666049c-8f1e-4f0d-89bc-10d8e14a5a6c
├─sda2 linux_raid_mem he2:1 4bcd4789-cb0b-b8f2-ba5a-47f71e3e0cd8
│ └─md1 swap 89c3dd1a-ed27-4b77-8c29-a2cc9e599a96 [SWAP]
├─sda5 btrfs d112bfbd-53ca-4d8e-a8a9-f7a4b5d15f52 /
├─sda6 btrfs 84136c02-e53e-43f6-b5d2-85999816add0 /home
└─sda7 linux_raid_mem he2:2 b1343f78-6b8d-1f99-211f-073d323d1683
└─md2 ext4 f0210d74-89ad-46c8-ba11-c91b9245bedd /extra
sdb
├─sdb1 linux_raid_mem he2:0 f8d3a77f-8a8d-c09e-2535-1387b1a0d42d
│ └─md0 ext4 4666049c-8f1e-4f0d-89bc-10d8e14a5a6c
├─sdb2 linux_raid_mem he2:1 4bcd4789-cb0b-b8f2-ba5a-47f71e3e0cd8
│ └─md1 swap 89c3dd1a-ed27-4b77-8c29-a2cc9e599a96 [SWAP]
├─sdb5 btrfs d112bfbd-53ca-4d8e-a8a9-f7a4b5d15f52
├─sdb6 btrfs 84136c02-e53e-43f6-b5d2-85999816add0
└─sdb7 linux_raid_mem he2:2 b1343f78-6b8d-1f99-211f-073d323d1683
└─md2 ext4 f0210d74-89ad-46c8-ba11-c91b9245bedd /extra
sdc vfat SANSA CLIPP 10C8-FCE4 /media/martin/SANSA CL
sr0
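Assuming the edited fstab line now points at md0's filesystem UUID (4666049c-...), one way to check whether it actually mounts, without rebooting:
sudo mount -a      # mounts everything listed in fstab that is not already mounted
findmnt /boot      # should show /dev/md0 on /boot if the entry is correct
lsblk -f /dev/md0  # the MOUNTPOINT column should now show /boot as well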
/Martin
Last edited by Martin (2019-09-08 20:56:00)
"Problems worthy of attack
prove their worth by hitting back."
Piet Hein
Offline
Let's sort out all the contradictory information:
You have formatted md0 (sda1-sdb1, raid1), /boot.
In December it was not mounted either...
/boot will be populated again when you reinstall grub.
(200 MB is small with today's disks. If you reinstall, set at least 500 MB aside for /boot. Btw, I prefer ext2 on /boot: no need for a journaling FS there...)
md0 is healthy.
md1 swap (sda2-sdb2, raid0) - healthy
mdX (sda5-sdb5) / - missing
Only sda5 mounted on /
What does your /etc/mdadm/mdadm.conf say?
mdX has to be reassembled.
mdXX (sda6-sdb6) /home - missing
Only sda6 mounted on /home
mdXX has to be reassembled.
md2 (sda7-sdb7, raid0!) /extra - healthy
With raid0, all information is lost if one device in the raid group is lost. Good for performance, bad for redundancy... You are aware that you have mixed raid9 and raid1?
I would make sure the partitions on sda5 and sda6 are healthy before reassembling.
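Some non-destructive checks that might help before touching anything (assuming mdadm and btrfs-progs are installed); note that in the lsblk output sda5/sdb5 and sda6/sdb6 appear as plain btrfs devices sharing a UUID, so they may be btrfs multi-device filesystems rather than md arrays:
cat /etc/mdadm/mdadm.conf     # the arrays the system expects at boot
sudo mdadm --examine --scan   # the arrays the on-disk superblocks actually describe
sudo btrfs filesystem show    # lists btrfs filesystems and flags any missing member device
sudo btrfs device stats /     # per-device error counters for the root filesystem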
First back up essential data, then reboot on a rescue CD.
If you are not familiar with RAID management, it probably means more work than reinstalling BL Helium...
Installing BL Helium and upgrading Debian to Buster is safe.
You can choose to stay with Helium until the official release of Lithium. But Lithium-dev is not unstable...
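For the record, the Debian side of such an upgrade is roughly the usual stretch-to-buster routine (assuming the Debian entries live in /etc/apt/sources.list); the BunsenLabs repository lines would need the matching change as well, details omitted here, so this is only an outline:
sudo sed -i 's/stretch/buster/g' /etc/apt/sources.list   # point the Debian entries at buster
sudo apt update
sudo apt full-upgrade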
Last edited by rbh (2019-09-09 10:06:23)
// Regards rbh
Please read before requesting help: "Guide to getting help", "Introduction to the Bunsenlabs Lithium Desktop" and other help topics under "Help & Resources" on the BunsenLabs menu
Offline
You are aware that you have mixed raid9 and raid1?
Emphasis mine.
Following your good advice with interest. Is that a typo? Did you mean raid0?
Thanks for helping with Martin's misfortune.
8bit
Last edited by deleted0 (2019-09-09 12:19:39)
rbh wrote: You are aware that you have mixed raid9 and raid1?
Is that a typo? Did you mean raid0?
Yes! Fat fingers and old, tired eyes. Did not catch that typo...
// Regards rbh
Please read before requesting help: "Guide to getting help", "Introduction to the Bunsenlabs Lithium Desktop" and other help topics under "Help & Resources" on the BunsenLabs menu
Offline