I was having trouble logging in using the Google Cloud SDK’s gcloud compute ssh on a Mac Terminal.
Here’s the fix.
gcloud compute ssh USERNAME@SERVER --zone ZONE --project PROJECTID --internal-ip 2>&1
There was an issue with output being redirected to another shell; appending 2>&1 works around it.
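For context on why 2>&1 helps: like most CLIs, gcloud writes its diagnostics to stderr, which bypasses a pipe unless it is merged into stdout. A generic illustration, with a hypothetical path:

```shell
# stderr normally bypasses a pipe and goes straight to the terminal:
ls /no/such/path | grep -c "No such"        # grep sees nothing on stdin

# with 2>&1, stderr is merged into stdout, so the pipe receives it:
ls /no/such/path 2>&1 | grep -c "No such"   # grep now sees the error text
```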
by Ulysses
Occasionally I was getting this random error when running Terraform.
╷
│ Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.
│
│ Please see https://registry.terraform.io/providers/hashicorp/aws
│ for more information about providing credentials.
│
│ Error: RequestError: send request failed
│ caused by: Post "https://sts.amazonaws.com/": read tcp xx.xx.xx.xx:59422->xx.xx.xx.xx:443: read: connection reset by peer
│
│   with provider["registry.terraform.io/hashicorp/aws"],
│   on main.tf line 10, in provider "aws":
│   10: provider "aws" {
Here’s the fix. Place this in your ~/.bash_profile.
export AWS_SDK_LOAD_CONFIG=1
This forces the AWS SDK used by Terraform to load both the config and credentials files.
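For context, this is the split the variable bridges — a sketch of the two files, with hypothetical values:

```ini
# ~/.aws/credentials — access keys
[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config — region, role, and other profile settings
[default]
region = us-east-1
```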
by Ulysses
Use the saml2aws CLI as an alternative to manual SAML-to-AWS-STS key conversion.
Install on Mac.
brew install saml2aws
saml2aws --version
Configure it, providing your identity provider details when prompted.
saml2aws configure
It will create a ~/.saml2aws config file. Set the session duration to 8 hours.
aws_session_duration = 28800
Login.
saml2aws login
After authentication and/or MFA, your ~/.aws/credentials will be updated.
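To confirm the refreshed credentials work, you could call STS through the AWS CLI (this assumes the AWS CLI is installed and reads the profile saml2aws wrote):

```shell
# prints the account, user ID, and assumed-role ARN for the active credentials
aws sts get-caller-identity
```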
by Ulysses
Here’s the command to make your Bash prompt display only the current directory.
export PROMPT_DIRTRIM=1
Place the command in your ~/.bashrc to make it permanent.
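For example — PROMPT_DIRTRIM trims the \w and \W escapes in a Bash prompt to the last N path components (directory names here are arbitrary):

```shell
# with PS1='\w\$ ' and the working directory /usr/share/doc:
export PROMPT_DIRTRIM=1   # prompt shows .../doc$
export PROMPT_DIRTRIM=2   # prompt shows .../share/doc$
unset PROMPT_DIRTRIM      # prompt shows the full /usr/share/doc$ again
```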
by Ulysses
Here are the commands you can use within SFTP.
Login.
sftp username@servername
Commands available from the local client.
lls lcd lmkdir lpwd lumask
Commands available on the remote server.
cd chmod chown exit get help ln ls mkdir put pwd rename rm rmdir version !
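A short sample session tying these together (host and file names are hypothetical):

```shell
$ sftp deploy@files.example.com
sftp> lpwd                 # print the local working directory
sftp> cd /var/uploads      # change directory on the remote server
sftp> put report.csv       # upload a local file
sftp> get archive.tar.gz   # download a remote file
sftp> !date                # run a command in the local shell
sftp> exit
```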
by Ulysses
How to extend an LVM logical volume.
Extend the physical drive.
gcloud compute disks resize data-drive --size=40GB
SSH to a server and extend the physical volume.
$ pvresize /dev/sdb
  Physical volume "/dev/sdb" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized

$ pvs
  PV       VG  Fmt  Attr PSize   PFree
  /dev/sdb vg  lvm2 a--  <40.00g 10.00g

$ vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  vg    1   1   0 wz--n- <40.00g 10.00g

$ lvs
  LV   VG  Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  data vg  -wi-ao---- <30.00g
Extend the logical volume.
$ lvresize -l +100%FREE /dev/vg/data
  Size of logical volume vg/data changed from <30.00 GiB (7679 extents) to <40.00 GiB (10239 extents).
  Logical volume vg/data successfully resized.
Check the volume size.
$ lsblk
NAME      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda         8:0    0   20G  0 disk
├─sda1      8:1    0  200M  0 part /boot/efi
└─sda2      8:2    0 19.8G  0 part /
sdb         8:16   0   40G  0 disk
└─vg-data 253:0    0   40G  0 lvm  /mnt
The file system is still at 30GB.
$ df -Th
Filesystem          Type      Size  Used Avail Use% Mounted on
devtmpfs            devtmpfs  385M     0  385M   0% /dev
tmpfs               tmpfs     403M     0  403M   0% /dev/shm
tmpfs               tmpfs     403M  5.5M  398M   2% /run
tmpfs               tmpfs     403M     0  403M   0% /sys/fs/cgroup
/dev/sda2           xfs        20G  2.9G   17G  15% /
/dev/sda1           vfat      200M  5.8M  195M   3% /boot/efi
/dev/mapper/vg-data xfs        30G  247M   30G   1% /mnt
tmpfs               tmpfs      81M     0   81M   0% /run/user/1001
Extend the file system.
$ xfs_growfs /dev/vg/data
meta-data=/dev/mapper/vg-data    isize=512    agcount=12, agsize=655360 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=7863296, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 7863296 to 10484736
Let’s check again.
$ df -Th
Filesystem          Type      Size  Used Avail Use% Mounted on
devtmpfs            devtmpfs  385M     0  385M   0% /dev
tmpfs               tmpfs     403M     0  403M   0% /dev/shm
tmpfs               tmpfs     403M  5.5M  398M   2% /run
tmpfs               tmpfs     403M     0  403M   0% /sys/fs/cgroup
/dev/sda2           xfs        20G  2.9G   17G  15% /
/dev/sda1           vfat      200M  5.8M  195M   3% /boot/efi
/dev/mapper/vg-data xfs        40G  319M   40G   1% /mnt
tmpfs               tmpfs      81M     0   81M   0% /run/user/1001
The file system is now at 40GB.
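The whole procedure condenses to four commands — a recap, assuming the disk, VG, and LV names used in this post (data-drive, /dev/sdb, vg, data) and an XFS file system:

```shell
gcloud compute disks resize data-drive --size=40GB   # 1. grow the cloud disk
pvresize /dev/sdb                                    # 2. grow the LVM physical volume
lvresize -l +100%FREE /dev/vg/data                   # 3. grow the logical volume
xfs_growfs /dev/vg/data                              # 4. grow the XFS file system
```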
by Ulysses
How to set up Logical Volume Manager (LVM) on a VM.
Install LVM.
yum install lvm2
Check the disks available. We are going to run LVM on /dev/sdb.
$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   20G  0 disk
├─sda1   8:1    0  200M  0 part /boot/efi
└─sda2   8:2    0 19.8G  0 part /
sdb      8:16   0   20G  0 disk
Create a physical volume on /dev/sdb.
$ pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.

$ pvs
  PV       VG  Fmt  Attr PSize   PFree
  /dev/sdb vg  lvm2 a--  <20.00g 0
Create a volume group called vg.
$ vgcreate vg /dev/sdb
  Volume group "vg" successfully created

$ vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  vg    1   0   0 wz--n- <20.00g <20.00g
Create a 10GB logical volume called data.
$ lvcreate -L 10G -n data vg
  Logical volume "data" created.

$ lvs
  LV   VG  Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  data vg  -wi-a----- 10.00g
Format the logical volume and mount it.
$ mkfs.xfs /dev/vg/data
meta-data=/dev/vg/data           isize=512    agcount=4, agsize=655360 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done.

$ mount /dev/vg/data /mnt
Check your logical volume. It says 10GB.
$ df -Th
Filesystem          Type      Size  Used Avail Use% Mounted on
devtmpfs            devtmpfs  385M     0  385M   0% /dev
tmpfs               tmpfs     403M     0  403M   0% /dev/shm
tmpfs               tmpfs     403M  5.5M  398M   2% /run
tmpfs               tmpfs     403M     0  403M   0% /sys/fs/cgroup
/dev/sda2           xfs        20G  2.9G   17G  15% /
/dev/sda1           vfat      200M  5.8M  195M   3% /boot/efi
tmpfs               tmpfs      81M     0   81M   0% /run/user/1000
/dev/mapper/vg-data xfs        10G  104M  9.9G   2% /mnt
Let’s now extend the logical volume to 20GB.
$ lvextend -l +100%FREE /dev/vg/data
  Size of logical volume vg/data changed from 10.00 GiB (2560 extents) to <20.00 GiB (5119 extents).
  Logical volume vg/data successfully resized.
Although lsblk shows 20GB, the file system still shows 10GB.
$ lsblk
NAME      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda         8:0    0   20G  0 disk
├─sda1      8:1    0  200M  0 part /boot/efi
└─sda2      8:2    0 19.8G  0 part /
sdb         8:16   0   20G  0 disk
└─vg-data 253:0    0   20G  0 lvm  /mnt

$ df -Th
Filesystem          Type      Size  Used Avail Use% Mounted on
devtmpfs            devtmpfs  385M     0  385M   0% /dev
tmpfs               tmpfs     403M     0  403M   0% /dev/shm
tmpfs               tmpfs     403M  5.5M  398M   2% /run
tmpfs               tmpfs     403M     0  403M   0% /sys/fs/cgroup
/dev/sda2           xfs        20G  2.9G   17G  15% /
/dev/sda1           vfat      200M  5.8M  195M   3% /boot/efi
tmpfs               tmpfs      81M     0   81M   0% /run/user/1000
/dev/mapper/vg-data xfs        10G  104M  9.9G   2% /mnt
We need to grow the file system.
$ xfs_growfs /dev/vg/data
meta-data=/dev/mapper/vg-data    isize=512    agcount=4, agsize=655360 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 2621440 to 5241856
Let’s check again.
$ df -Th
Filesystem          Type      Size  Used Avail Use% Mounted on
devtmpfs            devtmpfs  385M     0  385M   0% /dev
tmpfs               tmpfs     403M     0  403M   0% /dev/shm
tmpfs               tmpfs     403M  5.5M  398M   2% /run
tmpfs               tmpfs     403M     0  403M   0% /sys/fs/cgroup
/dev/sda2           xfs        20G  2.9G   17G  15% /
/dev/sda1           vfat      200M  5.8M  195M   3% /boot/efi
tmpfs               tmpfs      81M     0   81M   0% /run/user/1000
/dev/mapper/vg-data xfs        20G  176M   20G   1% /mnt
It now says 20GB.
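One follow-up worth noting: the mount above won’t survive a reboot. To make it persistent, you could add an /etc/fstab entry (a sketch, using the vg-data device-mapper name and /mnt mount point from this post):

```text
/dev/mapper/vg-data  /mnt  xfs  defaults  0 0
```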
by Ulysses
If you’re getting an “HTTP Error 404 – Not Found” when trying to run yum update, it could be a corrupted cache. Clear the cache on the system by running the following.
yum clean all
rm -rf /var/cache/yum/*
Run yum update again. The errors should be gone.