Hi Honza,
just tried it:
Before:
# /opt/ovs-agent-latest/utils/repos.py -l
[ * ] b2638dcd-fa49-4646-a7da-83e91e7e26c4 => /dev/hdb1
# du /var/ovs/mount/B2638DCDFA494646A7DA83E91E7E26C4/running_pool
2316896 /var/ovs/mount/B2638DCDFA494646A7DA83E91E7E26C4/running_pool/30_OEL1
Then:
# /opt/ovs-agent-latest/utils/cleanup.py
This is a cleanup script for ovs-agent.
It will try to do the following:
*) stop o2cb heartbeat
*) offline o2cb
*) remove o2cb configuration file
*) umount ovs-agent storage repositories
*) cleanup ovs-agent local database
Would you like to continue? [y/N] y
Cleanup done.
No repos listed anymore:
# /opt/ovs-agent-latest/utils/repos.py -l
Create New:
# /opt/ovs-agent-latest/utils/repos.py -n /dev/hdb1
[ NEW ] b2638dcd-fa49-4646-a7da-83e91e7e26c4 => /dev/hdb1
# /opt/ovs-agent-latest/utils/repos.py -r b2638dcd-fa49-4646-a7da-83e91e7e26c4
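If you want to script this, the UUID can be pulled out of the repos.py -l listing instead of copying it by hand. A minimal sketch, assuming the "[ * ] UUID => device" layout shown above (the same field position works for "[ NEW ]" lines):

```shell
# One line of repos.py -l output, copied from the listing above
line='[ * ] b2638dcd-fa49-4646-a7da-83e91e7e26c4 => /dev/hdb1'
# The UUID is the fourth whitespace-separated field in the
# "[ * ] UUID => device" layout
uuid=$(echo "$line" | awk '{print $4}')
echo "$uuid"
```

You could then feed $uuid straight into repos.py -r.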
Not mounted yet:
# df -k
/dev/hda2 4466156 930456 3305168 22% /
/dev/hda1 101086 45803 50064 48% /boot
tmpfs 296536 0 296536 0% /dev/shm
Initializing:
# /opt/ovs-agent-latest/utils/repos.py -i
Mounted!
# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda2 4466156 930468 3305156 22% /
/dev/hda1 101086 45803 50064 48% /boot
tmpfs 296536 0 296536 0% /dev/shm
/dev/hdb1 33551720 6040232 27511488 19% /var/ovs/mount/B2638DCDFA494646A7DA83E91E7E26C4
And the data is still there:
# du /var/ovs/mount/B2638DCDFA494646A7DA83E91E7E26C4/running_pool
2316896 /var/ovs/mount/B2638DCDFA494646A7DA83E91E7E26C4/running_pool/30_OEL1
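Judging from the df and du output above, the mount directory name is simply the repo UUID uppercased with the dashes stripped. A small sketch of that mapping (an observation from this transcript, not documented behaviour):

```shell
# Repo UUID as reported by repos.py -l (from the listing above)
uuid='b2638dcd-fa49-4646-a7da-83e91e7e26c4'
# The mount directory appears to be the UUID uppercased with
# the dashes removed
dir=$(echo "$uuid" | tr -d '-' | tr 'a-z' 'A-Z')
echo "/var/ovs/mount/$dir"
```

This makes it easy to locate running_pool for a given repo without parsing df.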
However, as noted above, the /OVS link to the /var/ovs/mount point will only be created when you register the server with OVM Manager.
All additional data will be left on the device.