With the latest “Transit VPC” feature in the VMware Cloud on AWS (VMC) 1.12 release, you can now inject static routes into the VMware-managed Transit Gateway (VTGW) to forward SDDC egress traffic to a third-party firewall appliance for security inspection. The firewall appliance is deployed in a Security/Transit VPC to provide transit routing and policy enforcement between SDDCs, workload VPCs, the on-premises data center and the Internet.
I’ve always wanted a lightweight VM template for running in nested vSphere lab environments, or sometimes for demonstrating live cloud migrations such as vMotion to VMware Cloud on AWS. Recently I managed to achieve this by using the Tiny Core Linux distribution, and it ticked all of my requirements:
ultra lightweight – the VM runs stable with only 1 vCPU, 256MB RAM and 64MB hard disk!
common Linux tools installed – such as curl, wget, OpenSSH etc.
a lightweight http server serving a static site for running networking or load-balancing tests
In this post I will walk you through the process of creating a Tiny Core based Linux VM template that meets all of the above requirements. To begin, download the Tiny Core ISO from here. (For reference, I’m using the CorePlus-v11.1 release, as I was getting some weird issues with OpenSSH on the latest v12.0 release.)
Below are the settings I’ve used for my VM template:
VM hardware version 11 – compatible with ESXi 6.0 and later
Guest OS = Linux \ Other 3.x Linux (32-bit)
Memory = 256MB (this is the lowest I could go for getting a stable machine)
Hard Disk = 64MB – change drive type to IDE and set the virtual device node to IDE0:0
CDROM – change the virtual device node to IDE1:0
SCSI controller – remove this as it’s not required (the disk and CD-ROM are both on IDE)
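For reference, the key settings above translate into a .vmx file along these lines. This is a hand-written sketch, not the post's actual file — the display name, VMDK and ISO file names are placeholders:

```
.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "11"
displayName = "tinycore-template"
guestOS = "other3xlinux"
numvcpus = "1"
memSize = "256"
ide0:0.present = "TRUE"
ide0:0.fileName = "tinycore-template.vmdk"
ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-image"
ide1:0.fileName = "CorePlus-11.1.iso"
ethernet0.present = "TRUE"
ethernet0.virtualDev = "e1000"
```

Here `other3xlinux` is the guest OS identifier for “Other 3.x Linux (32-bit)”, and hardware version 11 keeps the template compatible with ESXi 6.0 and later.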
Also, you should use the minimal settings below for installing the Tiny Core OS. For detailed installation instructions, you can follow the step-by-step guide here:
Once the OS has been installed and you are in the shell, create the script below to configure static IP settings for eth0 (and disable DHCP if required).
tc@box:~$ cat /opt/interfaces.sh
#!/bin/sh
# If you are booting Tiny Core from very fast storage such as an SSD / NVMe drive and getting an
# "ifconfig: SIOCSIFADDR: No such Device" or "route: SIOCADDRT: Network is unreachable"
# error during system boot, keep this sleep statement; otherwise you can remove it -
sleep 5
# kill the DHCP client for eth0
if [ -f /var/run/udhcpc.eth0.pid ]; then
  kill `cat /var/run/udhcpc.eth0.pid`
fi
# configure interface eth0
ifconfig eth0 192.168.0.1 netmask 255.255.255.0 broadcast 192.168.0.255 up
route add default gw 192.168.0.254
echo nameserver 192.168.0.254 >> /etc/resolv.conf
tc@box:~$ sudo chmod 777 /opt/interfaces.sh
You may also want to reset the password for the default user “tc” (this can be used later for SSH access), and reset the root password as well:
tc@box:~$ passwd
Changing password for tc
tc@box:~$ sudo su
root@box:/home/tc# passwd
Changing password for root
Now install all the required packages and extensions, and your onboot package list should look like below:
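For reference, Tiny Core extensions are installed with tce-load, and anything listed in onboot.lst is reloaded at boot. A sketch of what this could look like — the exact extension list and the tce directory path are assumptions based on the requirements above, so adjust for your install:

```shell
tc@box:~$ tce-load -wi openssh open-vm-tools curl wget
tc@box:~$ cat /mnt/hda1/tce/onboot.lst
openssh.tcz
open-vm-tools.tcz
curl.tcz
wget.tcz
```

The `-w` flag downloads the extension and `-i` installs it; installing via the interactive “Apps” tool achieves the same result.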
Next, we’ll need to save all the settings and make them persistent across reboots. In particular, we’ll need to add open-vm-tools and openssh to the startup script (bootlocal.sh) — otherwise none of these services will be started after a reboot.
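A sketch of what that startup script could look like — the init-script paths follow the usual Tiny Core extension layout but are assumptions, and filetool.sh -b backs up the paths listed in /opt/.filetool.lst so changes survive a reboot:

```shell
tc@box:~$ cat /opt/bootlocal.sh
#!/bin/sh
# put other system startup commands here
/opt/interfaces.sh &
/usr/local/etc/init.d/openssh start
/usr/local/etc/init.d/open-vm-tools start
/opt/httpd.sh &
tc@box:~$ filetool.sh -b
```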
Now you can go ahead and safely reboot the VM, and once it comes back online you should be able to SSH into it. Also, the open-vm-tools service should start automatically, and you should see the correct IP address and VMware Tools version reported in vCenter.
In addition, you should be able to see a static page like the one below by browsing to the VM address — the script (httpd.sh) should report back the VM’s IP address, which can be handy for running LB-related tests.
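The httpd.sh script itself isn’t shown in this post; a minimal sketch using the busybox httpd applet could look like the below (the web root path and page markup are my own placeholders):

```shell
#!/bin/sh
# grab this VM's eth0 address so the page can report it
IP=$(ifconfig eth0 | awk '/inet addr/ {sub("addr:","",$2); print $2}')
# build a one-line static page under a simple web root
mkdir -p /home/tc/www
echo "<html><body><h1>Tiny Core VM - ${IP}</h1></body></html>" > /home/tc/www/index.html
# serve it on port 80 with the busybox httpd applet
busybox httpd -p 80 -h /home/tc/www
```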
Recently I tried out the Terraform NSX-T Provider and it worked like a charm. In this post, I will demonstrate a simple example of how to leverage Terraform to provision a basic NSX tenant network environment, which includes the following:
create a Tier-1 router
create (linked) routed ports on the new T1 router and the existing upstream T0 router
link the T1 router to the upstream T0 router
create three logical switches with three logical ports
create three downlink LIFs (with subnets/gateway defined) on the T1 router, and link each of them to the logical switch ports accordingly
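As a rough sketch, the resource set above maps onto the NSX-T provider’s Manager-mode resources like this — display names, the transport zone and the LIF subnet are placeholders, and only one of the three switch/port/LIF trios is shown:

```hcl
data "nsxt_transport_zone" "overlay_tz" {
  display_name = "TZ-Overlay" # placeholder: existing overlay transport zone
}

data "nsxt_logical_tier0_router" "t0" {
  display_name = "T0-Router" # placeholder: existing upstream T0
}

# Tier-1 router, advertising its connected (LIF) subnets up to the T0
resource "nsxt_logical_tier1_router" "t1" {
  display_name                = "tenant1-t1"
  enable_router_advertisement = true
  advertise_connected_routes  = true
}

# linked router ports on the T0 and T1, joined to form the T0-T1 link
resource "nsxt_logical_router_link_port_on_tier0" "t0_to_t1" {
  logical_router_id = data.nsxt_logical_tier0_router.t0.id
}

resource "nsxt_logical_router_link_port_on_tier1" "t1_to_t0" {
  logical_router_id             = nsxt_logical_tier1_router.t1.id
  linked_logical_router_port_id = nsxt_logical_router_link_port_on_tier0.t0_to_t1.id
}

# logical switch plus a logical port, linked to a T1 downlink LIF
resource "nsxt_logical_switch" "ls1" {
  display_name      = "tenant1-ls1"
  transport_zone_id = data.nsxt_transport_zone.overlay_tz.id
  admin_state       = "UP"
}

resource "nsxt_logical_port" "lp1" {
  logical_switch_id = nsxt_logical_switch.ls1.id
  admin_state       = "UP"
}

resource "nsxt_logical_router_downlink_port" "lif1" {
  logical_router_id             = nsxt_logical_tier1_router.t1.id
  linked_logical_switch_port_id = nsxt_logical_port.lp1.id
  ip_address                    = "10.1.1.1/24" # placeholder LIF subnet
}
```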
Once the tenant environment is provisioned by Terraform, the 3x tenant subnets will be automatically published to the T0 router and propagated to the rest of the network (if BGP is enabled), and we should be able to reach the individual LIF addresses. Below is a sample topology deployed in my lab — here I’m using pre-provisioned static routes between the T0 and the upstream network for simplicity.
Software Versions Used & Verified
Terraform – v0.12.25
NSX-T Provider – v3.0.1 (auto downloaded by Terraform)
NSX-T Data Center – v3.0.2 (build 0.0.16887200)
Sample Terraform Script
You can find the sample Terraform script at my Git repo here — remember to update the variables based on your own environment.
Run the Terraform script and this should take less than a minute to complete.
We can verify via the NSX Manager UI that the required NSX components were built successfully — note: you’ll need to switch to “Manager mode” to see the newly created elements (T1 router, logical switches etc.), as Terraform interacts with the NSX management plane (via the MP-API) directly.
In addition, we can also confirm the 3x tenant subnets are published from T1 to T0 by SSHing into the active edge node. Make sure you connect to the correct VRF table for the T0 service router (SR) in order to see the full route table — here we can see the 3x /24 subnets are indeed advertised from T1 to T0 as directly connected (t1c) routes.
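On the edge node CLI that check looks roughly like the below — the VRF number and subnets are placeholders, and the output is trimmed to the relevant lines:

```
edge01> get logical-routers        (note the VRF number of the T0 SR)
edge01> vrf 1
edge01(tier0_sr)> get route
t1c> 10.1.1.0/24 [3/0]
t1c> 10.1.2.0/24 [3/0]
t1c> 10.1.3.0/24 [3/0]
```

The `t1c` flag marks routes learned from a Tier-1 router as directly connected, which is what we expect for the three LIF subnets.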
As expected, I can reach each of the three LIFs on the T1 router from the lab terminal VM.
This will be a quick blog to demonstrate how to enable the (embedded) Harbor Image Registry in vSphere 7 with Kubernetes. Harbor was originally developed by VMware as an enterprise-grade private container registry. It was then donated to the CNCF in 2018 and recently became a CNCF graduated project.
For this demo, we’ll activate the embedded Harbor registry within the vSphere 7 Kubernetes environment, and integrate it with the Supervisor Cluster for container management and deployment.
Enabling the embedded Harbor Registry in vSphere 7 with Kubernetes
To begin, go to your vSphere 7 “Workload Cluster —> Namespaces —> Image Registry”, and then click “Enable Harbor”.
Make sure to select the vSAN storage policy to provide persistent storage as required for the Harbor installation.
The process will take a few minutes, and you should see 7x vSphere Pods after Harbor is installed and enabled. Take a note of the Harbor URL — this is an external address of the K8s load balancer that is created by NSX-T.
Push Container Images to Harbor Registry
First, let’s log into the Harbor UI and take a quick look. Since this is embedded within vSphere, it supports the SSO login 🙂
Harbor will automatically create a project for every vSphere namespace we have created. In my case, there are two projects “dev01” and “guestbook” created, which are mapped to the two namespaces in my vSphere workload cluster.
Click the “dev01” project, and then “repository” — as expected it is currently empty, and we’ll be pushing container images to this repository for a quick test. However, before we can do that we’ll need to download and import the certificate to our client machine for certificate-based authentication. Click “Registry Certificate” to download the ca.crt file.
Next, on the local client create a new directory under /etc/docker/certs.d/ using the same name as the registry FQDN (URL).
[root@pacific-ops01 ~]# cd /etc/docker/certs.d/
[root@pacific-ops01 certs.d]# mkdir 192.168.100.133
[root@pacific-ops01 certs.d]# cd 192.168.100.133/
[root@pacific-ops01 192.168.100.133]# vim ca.crt
Now, let’s get a test (nginx) image, tag it, and try to push it to the dev01 repository.
[root@pacific-ops01 ~]# docker login 192.168.100.133 --username email@example.com
[root@pacific-ops01 ~]# docker pull nginx
Using default tag: latest
Trying to pull repository docker.io/library/nginx ...
latest: Pulling from docker.io/library/nginx
bf5952930446: Pull complete
cb9a6de05e5a: Pull complete
9513ea0afb93: Pull complete
b49ea07d2e93: Pull complete
a5e4a503d449: Pull complete
Status: Downloaded newer image for docker.io/nginx:latest
[root@pacific-ops01 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/nginx latest 4bb46517cac3 3 days ago 133 MB
[root@pacific-ops01 ~]# docker tag docker.io/nginx 192.168.100.133/dev01/nginx
[root@pacific-ops01 ~]# docker push 192.168.100.133/dev01/nginx
The push refers to a repository [192.168.100.133/dev01/nginx]
latest: digest: sha256:179412c42fe3336e7cdc253ad4a2e03d32f50e3037a860cf5edbeb1aaddb915c size: 1362
It works, perfect! Now refresh the repository and we can see the new nginx image we just pushed through.
Deploy Kubernetes Pods to Supervisor Cluster from the Harbor Registry
Let’s run a quick test to deploy a Pod using the nginx image from our Harbor Registry. First, log into the Supervisor Cluster and switch to the “dev01” namespace/context.
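The nginx-demo.yaml manifest isn’t shown in this post; a minimal version that pulls the image we pushed into the dev01 project could look like this (the Pod and container names match the events further down; the rest is a plain Pod spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  namespace: dev01
spec:
  containers:
  - name: nginx-demo
    # image pulled from the embedded Harbor registry, dev01 project
    image: 192.168.100.133/dev01/nginx:latest
    ports:
    - containerPort: 80
```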
[root@pacific-ops01 ~]# kubectl apply -f nginx-demo.yaml
Monitor the events and soon we can see the Pod is deployed successfully from the image fetched from the Harbor repository.
[root@pacific-ops01 ~]# kubectl get events -n dev01
LAST SEEN TYPE REASON OBJECT MESSAGE
48s Normal Status image/nginx-4f70b77c704ff28acdf14ce0405bc1811e8ee077-v0 pacific-esxi-3: Image status changed to Resolving
40s Normal Resolve image/nginx-4f70b77c704ff28acdf14ce0405bc1811e8ee077-v0 pacific-esxi-3: Image resolved to ChainID sha256:80b21afd8140706d5fe3b7106ae6147e192e6490b402bf2dd2df5df6dac13db8
40s Normal Bind image/nginx-4f70b77c704ff28acdf14ce0405bc1811e8ee077-v0 Imagedisk 80b21afd8140706d5fe3b7106ae6147e192e6490b402bf2dd2df5df6dac13db8-v0 successfully bound
32s Normal Status image/nginx-4f70b77c704ff28acdf14ce0405bc1811e8ee077-v0 Image status changed to Fetching
14s Normal Status image/nginx-4f70b77c704ff28acdf14ce0405bc1811e8ee077-v0 Image status changed to Ready
7s Normal SuccessfulRealizeNSXResource pod/nginx-demo Successfully realized NSX resource for Pod
<unknown> Normal Scheduled pod/nginx-demo Successfully assigned dev01/nginx-demo to pacific-esxi-1
50s Normal Image pod/nginx-demo Image nginx-4f70b77c704ff28acdf14ce0405bc1811e8ee077-v0 bound successfully
39s Normal Pulling pod/nginx-demo Waiting for Image dev01/nginx-4f70b77c704ff28acdf14ce0405bc1811e8ee077-v0
14s Normal Pulled pod/nginx-demo Image dev01/nginx-4f70b77c704ff28acdf14ce0405bc1811e8ee077-v0 is ready
7s Normal SuccessfulMountVolume pod/nginx-demo Successfully mounted volume default-token-bqxc2
7s Normal Created pod/nginx-demo Created container nginx-demo
7s Normal Started pod/nginx-demo Started container nginx-demo
[root@pacific-ops01 ~]# kubectl get pods -n dev01
NAME READY STATUS RESTARTS AGE
nginx-demo 1/1 Running 0 60s
Use kubectl describe pod to confirm the nginx Pod is indeed running on the image pulled from the Harbor registry.
This blog provides a guide to help you deploy the Contour ingress controller onto a Tanzu Kubernetes Grid (TKG) cluster. Contour is an open source Kubernetes ingress controller that exposes HTTP/HTTPS routes for internal services so they are reachable from outside the cluster. Like many other ingress controllers, Contour can provide advanced L7 URL/URI based routing and load balancing, as well as SSL/TLS termination capabilities.
Contour was originally developed by Heptio (VMware) and has recently been handed over to the CNCF as an incubating project. Contour consists of a control plane that is provisioned via a K8s deployment, and an Envoy-based data plane running as a DaemonSet on every cluster worker node.
Download the Tanzu Kubernetes Grid 1.1 extension manifests from here.
For this lab, we’ll install the Contour ingress controller onto a TKG cluster, and we’ll then deploy a sample app (supplied within the manifest) for testing the Ingress services. The overall service topology will look like this:
Install the Contour Ingress Controller
To begin, unzip the TKG extension manifest (I’m using v1.1.0).
[root@pacific-ops01 ~]# tar -xzf tkg-extensions-manifests-v1.1.0-vmware.1.tar.gz
Log into your TKG cluster and make sure you are in the correct context.
Next, install the Cert-Manager (for Contour Ingress) onto the TKG cluster.
Before we can install Contour and Envoy, we’ll need to make a small change to the Envoy service config (02-service-envoy.yaml). As illustrated in the service topology, we will deploy a LoadBalancer in front of the ingress controller. So we’ll update the Envoy service type from NodePort (default) to LoadBalancer.
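In 02-service-envoy.yaml that is a one-word change — roughly as below, with only the relevant fields shown and the port numbers assumed from the standard Contour/Envoy layout:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: envoy
  namespace: tanzu-system-ingress
spec:
  type: LoadBalancer   # changed from NodePort
  selector:
    app: envoy
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443
```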
Now deploy Contour and Envoy onto the cluster.
We can see a Contour deployment, and an Envoy DaemonSet of 3x Pods (we have 3 worker nodes), have been deployed under the tanzu-system-ingress namespace. Also, take a note of the external IP (192.168.100.130) of the Envoy LoadBalancer service, as this will be used by our Ingress services.
Deploy a Sample App for testing Ingress Services
Deploy the sample app from within the manifest; this will create:
one new namespace called “test-ingress”
one deployment of the “helloweb” app, with a Replicaset of 3x Pods
two separate services called “s1” & “s2” — Note: both services are actually pointing to the same 3x Pods (as they are using the same Pod selector)
Verify the Pods are up and running
[root@pacific-ops01 ~]# kubectl get pods -n test-ingress
NAME READY STATUS RESTARTS AGE
helloweb-7cd97b9cb8-qjwtk 1/1 Running 0 50s
helloweb-7cd97b9cb8-r9s8g 1/1 Running 0 51s
helloweb-7cd97b9cb8-swztl 1/1 Running 0 51s
and both services (s1 & s2) are deployed as expected.
[root@pacific-ops01 ~]# kubectl get svc -n test-ingress
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
s1 ClusterIP 10.40.183.104 <none> 80/TCP 1m
s2 ClusterIP 10.40.129.12 <none> 80/TCP 1m
We can’t get to these services yet as they are internal K8s services (ClusterIP) only. We’ll need to deploy an Ingress object so that Contour can expose these services and route traffic to them from outside the cluster. The good news is that there’s already an Ingress config template provided in the manifest. I’ve made the following changes to the template as per my lab environment (my lab domain is vxlan.co). Note the hostname (URL) and the paths (URIs), as we’ll be using these to access the two services.
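My edited version of the https-ingress template looks roughly like this — the TLS secret name is a placeholder, and the /s1 and /s2 paths are assumptions following the sample app's service names:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: https-ingress
  namespace: test-ingress
spec:
  tls:
  - hosts:
    - ingress.vxlan.co
    secretName: https-secret        # placeholder TLS secret name
  rules:
  - host: ingress.vxlan.co
    http:
      paths:
      - path: /s1                   # URI /s1 routed to service s1
        backend:
          serviceName: s1
          servicePort: 80
      - path: /s2                   # URI /s2 routed to service s2
        backend:
          serviceName: s2
          servicePort: 80
```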
Deploy the Ingress object.
[root@pacific-ops01 ~]# cd tkg-extensions-v1.1.0/ingress/contour/examples/https-ingress
[root@pacific-ops01 https-ingress]# kubectl apply -f .
Verify the Ingress service is running as expected
[root@pacific-ops01 https-ingress]# kubectl get ingress -n test-ingress
NAME HOSTS ADDRESS PORTS AGE
https-ingress ingress.vxlan.co 80, 443 2m
Create a DNS record with the ingress hostname by pointing to the Envoy load balancer external IP.
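With the DNS record in place, a quick test from the lab terminal would look like the below — `-k` skips certificate verification since my lab certificate is self-signed:

```shell
curl -k https://ingress.vxlan.co/s1
curl -k https://ingress.vxlan.co/s2
```

Both URIs should return a response from the helloweb Pods, with Contour routing each path to its respective service via the Envoy load balancer IP.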