Today's update on Manjaro Testing brings a new zfs 0.x package. The underlying import command fails with an error. If I move the existing cache file out of the way, the manual import creates a new cache file, but boot still shows the same error: cache file corrupt or invalid.
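For anyone hitting the same boot message, one common way to regenerate the cache file and rebuild the initramfs on Manjaro looks roughly like this (the pool name `rpool` is only an example, and your initramfs tool may differ):

```sh
# Re-create /etc/zfs/zpool.cache for an already-imported pool
# ("rpool" is an example pool name).
zpool set cachefile=/etc/zfs/zpool.cache rpool

# Rebuild the initramfs so the new cache file is embedded in it
# (mkinitcpio is the Manjaro/Arch tool; -P rebuilds all presets).
mkinitcpio -P
```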
So why is that not working at boot? I did create a new initramfs, and the kernel update regenerated it anyway.

Update: I edited zfs-import-cache.service. Does this sound like a reasonable approach, given that we're moving the default zpool.cache handling anyway?
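For context, a hedged sketch of one way such an edit could look — a drop-in override that makes zfs-import-cache.service scan stable device links instead of trusting the cache file (the paths and the scan directory are assumptions; the actual edit in question may have been different):

```sh
# Hypothetical drop-in override for zfs-import-cache.service.
# It replaces the cache-file based import with a device scan,
# similar to what zfs-import-scan.service does.
mkdir -p /etc/systemd/system/zfs-import-cache.service.d
cat > /etc/systemd/system/zfs-import-cache.service.d/override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/sbin/zpool import -aN -d /dev/disk/by-id
EOF
systemctl daemon-reload
```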
If I may ask: other platforms have kernel arguments to control which pool should be imported as the root pool, with other pools imported later from startup scripts. Could that be done here, or is there some other reason for the cache file? Personally, I consider using an initramfs an ugly hack already; in an ideal situation it should not be required.
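For comparison, this is roughly what such a kernel argument looks like with the existing ZoL initramfs/dracut hooks (the pool and dataset names are examples):

```sh
# /etc/default/grub: select the root pool/dataset on the kernel
# command line instead of relying on a cache file in the initramfs.
# "rpool/ROOT/manjaro" is an example dataset name.
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/manjaro"
```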
This makes it feel like you are adding another hack on top of the first hack. Isn't this why we have the "failmode" pool property?
I have mine set to 'continue' for root pools. Wouldn't you also need the zpool.cache file for that? How do you arrange to import the pools only once the devices are online?
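For reference, a minimal sketch of setting and checking that property (the pool name is an example):

```sh
# failmode controls what the pool does on catastrophic I/O failure:
# "wait" (the default), "continue", or "panic".
zpool set failmode=continue rpool

# Verify the current setting.
zpool get failmode rpool
```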
If you do go with a zpool.cache replacement, identify the pools by GUID. That way, if the device path changes, you will still import the correct pool, which is especially important when every system has a pool called "rpool" or whatever.
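A quick illustration of looking up a pool's GUID and importing by it rather than by name (the GUID shown is made up):

```sh
# The GUID is a read-only pool property.
zpool get -H -o value guid rpool

# Import by numeric GUID while scanning stable by-id links; this avoids
# grabbing the wrong pool when several systems use the same pool name.
zpool import -d /dev/disk/by-id -N 1234567890123456789
```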
That's correct. Here's the kind of sample file I have in mind; a rough sketch follows below. It makes more sense than our current default. It should also know about file-backed zpools, and it should auto-import the listed pools on boot. I also think such a file should be read both at boot and subsequently in userspace, as part of the standard init process after the initramfs has exited and we're in the live system.
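A purely hypothetical sketch of what such a human-readable pool list could look like — the file name, columns, and options here are illustrative assumptions, not an existing OpenZFS format:

```sh
# /etc/zfs/zpool.list  (hypothetical; not an actual OpenZFS file)
# name      guid                     extra import options
rpool       1234567890123456789      -N
tank        9876543210987654321      -d /dev/disk/by-id
images      5555555555555555555      -d /var/lib/zfs/images   # file-backed pool
```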
One additional problem with the current zpool.cache behavior shows up in clusters: when a failed node comes back online, it sees from zpool.cache that it previously had the pool imported and imports it again, even though another node has since taken it over. Just my 2c.

Alphalead: A storage pool that is in the cachefile shouldn't be imported if it was not exported cleanly and was last accessed from a different machine.
This should normally be the case when a failed cluster node comes back online, because another node will have taken over the zpool by then. That was what I thought too, but the behavior is definitely there. I had a testing cluster I was playing with, and after I brought one of the nodes back from being killed, it auto-imported the pool that the other node had already taken over.
I know it's not a timing issue, because I have failed nodes set to power off, and I had made sure failover had completed before restarting the dead node.
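On the cluster point above: newer OpenZFS releases have a multihost (MMP) pool property that makes `zpool import` refuse a pool that looks active on another host. A hedged sketch of enabling it (the pool name is an example):

```sh
# Give each node a stable, unique hostid (writes /etc/hostid).
zgenhostid

# With multihost=on, an import on the returning node is refused while
# the pool is still actively written by the other node (unless forced).
zpool set multihost=on tank
```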
A basic version of this was merged some time ago. That said, I'd like to leave this issue open, since it has some good discussion of the underlying issue. I'll just change the subject to something more appropriate.