Here’s how to list all your Bare Metal Solution (BMS) volumes within a project.
gcloud bms volumes list --project PROJECT_ID --region us-central1
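To inspect a single volume in more detail, a describe call along these lines should work (VOLUME_NAME is a placeholder for one of the names returned by the list command):

# Hypothetical example: show the details of one BMS volume
gcloud bms volumes describe VOLUME_NAME --project PROJECT_ID --region us-central1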
In the GCP console, you can see the VM’s device names. In this case, they are boot and persistent-disk-1.
Name         Device name
server-boot  boot
server-data  persistent-disk-1
To display the volume names inside the OS, run this command.
$ ls -l /dev/disk/by-id
total 0
lrwxrwxrwx. 1 root root  9 Aug  2 01:12 google-boot -> ../../sda
lrwxrwxrwx. 1 root root 10 Aug  2 01:12 google-boot-part1 -> ../../sda1
lrwxrwxrwx. 1 root root  9 Aug 18 18:48 google-persistent-disk-1 -> ../../sdb
lrwxrwxrwx. 1 root root  9 Aug  2 01:12 scsi-0Google_PersistentDisk_boot -> ../../sda
lrwxrwxrwx. 1 root root 10 Aug  2 01:12 scsi-0Google_PersistentDisk_boot-part1 -> ../../sda1
lrwxrwxrwx. 1 root root  9 Aug 18 18:48 scsi-0Google_PersistentDisk_persistent-disk-1 -> ../../sdb
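If you just want the kernel device behind one of these symlinks, readlink resolves it; a small convenience, assuming the by-id names shown above:

# Resolve the symlink to the underlying block device (prints /dev/sdb here)
readlink -f /dev/disk/by-id/google-persistent-disk-1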
As you can see, boot and persistent-disk-1 are displayed along with their device names, /dev/sda and /dev/sdb.
Here’s how to set up an XFS volume.
# Check whether the device already contains a filesystem ("data" means it's empty)
file -s /dev/nvme2n1
# Create an XFS filesystem on the device
mkfs -t xfs /dev/nvme2n1
Mount to /data.
mkdir /data
mount /dev/nvme2n1 /data
Add an entry to /etc/fstab so the volume is mounted at boot.
vim /etc/fstab

# Add a line like this, using the UUID reported by blkid:
UUID=xxxxxxxxxxxxxxxxxxxxxx /data xfs defaults 0 0
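Rather than typing the UUID by hand, you can script the fstab entry; a minimal sketch, assuming the XFS volume from the previous step:

# Look up the UUID of the new filesystem
UUID=$(blkid -s UUID -o value /dev/nvme2n1)
# Append the fstab entry, then verify it mounts cleanly before rebooting
echo "UUID=${UUID} /data xfs defaults 0 0" >> /etc/fstab
mount -a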
AWS introduced Nitro-based instances, which are modular and meant for high-performance, high-availability, and high-security systems. Nitro building blocks provide direct access to high-speed local storage over a PCI interface and transparently encrypt all data using dedicated hardware. Nitro also provides hardware-level isolation between storage devices and EC2 instances, so even bare metal instances can benefit from local NVMe storage. The following are Nitro-based instances: A1, C5, C5d, C5n, I3en, M5, M5a, M5ad, M5d, p3dn.24xlarge, R5, R5a, R5ad, R5d, T3, T3a, and z1d. Bare metal: c5.metal, c5n.metal, i3.metal, i3en.metal, m5.metal, m5d.metal, r5.metal, r5d.metal, u-6tb1.metal, u-9tb1.metal, u-12tb1.metal, and z1d.metal.
Although volumes on Nitro-based instances look like regular devices (/dev/xvda) in the AWS console, inside the operating system they appear completely different (/dev/nvme6n1).
In the AWS console, the storage devices look like this.
/dev/sda1
/dev/xvdb
/dev/xvdc
/dev/xvdd
/dev/xvde
/dev/xvdh
/dev/xvdf
/dev/xvdi
/dev/xvdg
/dev/xvdj
In the operating system, invoking df -h results in this.
Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme0n1p2   30G  7.0G   24G  24% /
/dev/nvme4n1     50G   20G   31G  40% /vol1
/dev/nvme1n1     10G  753M  9.3G   8% /vol2
/dev/nvme8n1    500G   67G  433G  14% /backups
/dev/nvme2n1    400G   12G  388G   3% /vol3
/dev/nvme6n1    150G  150G  755M 100% /vol4
/dev/nvme7n1     10G   33M   10G   1% /vol5
/dev/nvme5n1     10G  553M  9.5G   6% /vol6
/dev/nvme9n1    100G   91G   10G  91% /vol7
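lsblk paints a similar picture and also shows devices that aren’t mounted yet:

# List every block device with its size and mount point
lsblk -o NAME,SIZE,MOUNTPOINT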
The big question is: how can you tell which NVMe device corresponds to which volume? You’ll need the nvme program to map out the volumes. Install nvme-cli first.
yum install nvme-cli
Then run the command below.
# run nvme
sudo nvme id-ctrl -v /dev/nvme6n1 | grep xv

# the result
0000: 2f 64 65 76 2f 78 76 64 66 20 20 20 20 20 20 20 "/dev/xvdf..."
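To map every NVMe device at once, you can loop over them; a rough sketch that greps the device-name string out of the vendor-specific data, assuming the same id-ctrl output format as above:

# Print "NVMe device -> console device name" for each NVMe disk
for dev in /dev/nvme[0-9]n1; do
  name=$(sudo nvme id-ctrl -v "$dev" | grep -m1 -o '/dev/[a-z0-9]*')
  echo "$dev -> $name"
done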
Here are the steps to encrypt an unencrypted volume.
AWS CLI
# CREATE A SNAPSHOT OF THE UNENCRYPTED VOLUME
aws ec2 create-snapshot \
    --volume-id vol-1234567890abcdef0 \
    --description "This is my snapshot"

# COPY THE SNAPSHOT, ENABLING ENCRYPTION ON THE COPY
aws ec2 copy-snapshot \
    --source-region us-west-2 --source-snapshot-id snap-066877671789bd71b \
    --region us-east-1 --encrypted --description "This is my copied snapshot."

# CREATE A VOLUME FROM THE ENCRYPTED COPY
# (use the new snapshot ID returned by copy-snapshot)
aws ec2 create-volume \
    --region us-east-1 --availability-zone us-east-1a \
    --snapshot-id snap-066877671789bd71b --volume-type io1 --iops 1000

# STOP THE INSTANCE
aws ec2 stop-instances --instance-ids i-1234567890abcdef0

# DETACH THE OLD VOLUME
aws ec2 detach-volume --volume-id vol-1234567890abcdef0

# ATTACH THE NEW VOLUME
# (use the new volume ID returned by create-volume)
aws ec2 attach-volume --volume-id vol-1234567890abcdef0 \
    --instance-id i-01474ef662b89480 --device /dev/sdf

# START THE INSTANCE
aws ec2 start-instances --instance-ids i-1234567890abcdef0
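Snapshot and copy operations are asynchronous, so in a script you’d typically pause between steps; a small sketch using the CLI’s built-in waiters, with the same placeholder IDs as above:

# Wait until the snapshot copy has finished before creating the volume
aws ec2 wait snapshot-completed --snapshot-ids snap-066877671789bd71b --region us-east-1
# Wait until the instance has stopped before detaching the old volume
aws ec2 wait instance-stopped --instance-ids i-1234567890abcdef0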
Here’s how to display everything you need to know about block devices, e.g. the UUID.
blkid
# or
blkid /dev/xvda
You can also run file -s (you need to be root).
file -s /dev/xvda1
If you have a ton of snapshots, they tend to be difficult to find in the AWS console. Although AWS has an optimized snapshot view, it’s still in beta, and filtering by volume ID is still a problem. It can be faster to use the AWS CLI to display snapshot information. Here’s an example of how to display the snapshots for a given volume ID.
aws ec2 describe-snapshots \
    --filters Name=volume-id,Values=vol-xxxxxxxxxxxxxx \
    --query "Snapshots[*].{ID:SnapshotId,Time:StartTime,Progress:Progress}" \
    --profile default
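A JMESPath sort makes it easy to grab just the most recent snapshot; a variant of the query above, assuming the same placeholder volume ID:

# Show only the newest snapshot for the volume
aws ec2 describe-snapshots \
    --filters Name=volume-id,Values=vol-xxxxxxxxxxxxxx \
    --query "sort_by(Snapshots, &StartTime)[-1].{ID:SnapshotId,Time:StartTime}" \
    --profile default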
The result is something like this.
{ "Progress": "99%", "ID": "snap-xxxxxxxxxxxxxxx", "Time": "2018-12-25T10:19:51.385Z" }, { "Progress": "99%", "ID": "snap-xxxxxxxxxxxxxxx", "Time": "2018-12-24T09:35:09.357Z" } |
{ "Progress": "99%", "ID": "snap-xxxxxxxxxxxxxxx", "Time": "2018-12-25T10:19:51.385Z" }, { "Progress": "99%", "ID": "snap-xxxxxxxxxxxxxxx", "Time": "2018-12-24T09:35:09.357Z" }