Wednesday 1 December 2021

K8s Dashboard via Terraform

 Hi all, this post is just to share my code and experience on deploying the Kubernetes dashboard via Terraform.


 

Some components involved in this are:

1. Terraform 

2. AKS (Azure Kubernetes Service)

3. k2tf 

If you have tested AKS, it does provide its own view of the cluster information together with all the pods that have been deployed. However, for me there is a view missing: the load each pod receives is not shown in the portal. It may work if AKS sends its logs to Log Analytics and the AKS workbook is configured.

In this case, Log Analytics is not present because this cluster is just for development. There are guides provided in the Microsoft docs (since I am using AKS) and in the Kubernetes docs themselves; however, those guides deploy the dashboard via kubectl, which is fine. But since I did the AKS deployment via Terraform, why not put the dashboard deployment together with it so the dashboard appears once the cluster is deployed.

Here are the official guides:

1. Manage an Azure Kubernetes Service cluster with the web dashboard - Azure Kubernetes Service | Microsoft Docs 

2. Deploy and Access the Kubernetes Dashboard | Kubernetes 

So here is what I did: download the deployment YAML for the k8s dashboard and convert it to Terraform (.yaml to .tf). For that, I discovered a useful conversion tool called k2tf.


So here is the code after being converted to Terraform:

provider "kubernetes" {
    #load_config_file       = "false"
    host                   =  var.host
    client_certificate     =  var.client_certificate
    client_key             =  var.client_key
    cluster_ca_certificate =  var.cluster_ca_certificate
}
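
The variables above can be wired straight from the AKS resource when this code is called as a module. This is just a hedged sketch, assuming the cluster resource is named azurerm_kubernetes_cluster.aks and the dashboard code lives in a ./k8s-dashboard folder (both names are my assumptions, not from the original code, and matching variable blocks would need to be declared in the module):

module "k8s_dashboard" {
  source                 = "./k8s-dashboard"
  host                   = azurerm_kubernetes_cluster.aks.kube_config[0].host
  # the kube_config certificates and key are base64 encoded, so decode them before passing in
  client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].cluster_ca_certificate)
}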


resource "null_resource" "main" {
  provisioner "local-exec" {
    command = "az aks disable-addons -g ${var.aks-rg} -n ${var.aks-name} -a kube-dashboard"
  }
}
resource "kubernetes_namespace" "kubernetes_dashboard" {
  metadata {
    name = "kubernetes-dashboard"
  }
}

resource "kubernetes_service_account" "kubernetes_dashboard" {
  metadata {
    name      = "kubernetes-dashboard"
    namespace = "kubernetes-dashboard"

    labels = {
      k8s-app = "kubernetes-dashboard"
    }
  }
}

resource "kubernetes_service" "kubernetes_dashboard" {
  metadata {
    name      = "kubernetes-dashboard"
    namespace = "kubernetes-dashboard"

    labels = {
      k8s-app = "kubernetes-dashboard"
    }
  }

  spec {
    port {
      port        = 443
      target_port = "8443"
    }

    selector = {
      k8s-app = "kubernetes-dashboard"
    }
  }
}

resource "kubernetes_secret" "kubernetes_dashboard_certs" {
  metadata {
    name      = "kubernetes-dashboard-certs"
    namespace = "kubernetes-dashboard"

    labels = {
      k8s-app = "kubernetes-dashboard"
    }
  }

  type = "Opaque"
}

resource "kubernetes_secret" "kubernetes_dashboard_csrf" {
  metadata {
    name      = "kubernetes-dashboard-csrf"
    namespace = "kubernetes-dashboard"

    labels = {
      k8s-app = "kubernetes-dashboard"
    }
  }

  type = "Opaque"
}

resource "kubernetes_secret" "kubernetes_dashboard_key_holder" {
  metadata {
    name      = "kubernetes-dashboard-key-holder"
    namespace = "kubernetes-dashboard"

    labels = {
      k8s-app = "kubernetes-dashboard"
    }
  }

  type = "Opaque"
}

resource "kubernetes_config_map" "kubernetes_dashboard_settings" {
  metadata {
    name      = "kubernetes-dashboard-settings"
    namespace = "kubernetes-dashboard"

    labels = {
      k8s-app = "kubernetes-dashboard"
    }
  }
}

resource "kubernetes_role" "kubernetes_dashboard" {
  metadata {
    name      = "kubernetes-dashboard"
    namespace = "kubernetes-dashboard"

    labels = {
      k8s-app = "kubernetes-dashboard"
    }
  }

  rule {
    verbs          = ["get", "update", "delete"]
    api_groups     = [""]
    resources      = ["secrets"]
    resource_names = ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
  }

  rule {
    verbs          = ["get", "update"]
    api_groups     = [""]
    resources      = ["configmaps"]
    resource_names = ["kubernetes-dashboard-settings"]
  }

  rule {
    verbs          = ["proxy"]
    api_groups     = [""]
    resources      = ["services"]
    resource_names = ["heapster", "dashboard-metrics-scraper"]
  }

  rule {
    verbs          = ["get"]
    api_groups     = [""]
    resources      = ["services/proxy"]
    resource_names = ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
  }
}

resource "kubernetes_cluster_role" "kubernetes_dashboard" {
  metadata {
    name = "kubernetes-dashboard"

    labels = {
      k8s-app = "kubernetes-dashboard"
    }
  }

  rule {
    verbs      = ["get", "list", "watch"]
    api_groups = ["metrics.k8s.io"]
    resources  = ["pods", "nodes"]
  }
}

resource "kubernetes_role_binding" "kubernetes_dashboard" {
  metadata {
    name      = "kubernetes-dashboard"
    namespace = "kubernetes-dashboard"

    labels = {
      k8s-app = "kubernetes-dashboard"
    }
  }

  subject {
    kind      = "ServiceAccount"
    name      = "kubernetes-dashboard"
    namespace = "kubernetes-dashboard"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Role"
    name      = "kubernetes-dashboard"
  }
}

resource "kubernetes_cluster_role_binding" "kubernetes_dashboard" {
  metadata {
    name = "kubernetes-dashboard"
  }

  subject {
    kind      = "ServiceAccount"
    name      = "kubernetes-dashboard"
    namespace = "kubernetes-dashboard"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "kubernetes-dashboard"
  }
}

resource "kubernetes_deployment" "kubernetes_dashboard" {
  metadata {
    name      = "kubernetes-dashboard"
    namespace = "kubernetes-dashboard"

    labels = {
      k8s-app = "kubernetes-dashboard"
    }
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        k8s-app = "kubernetes-dashboard"
      }
    }

    template {
      metadata {
        labels = {
          k8s-app = "kubernetes-dashboard"
        }
      }

      spec {
        volume {
          name = "kubernetes-dashboard-certs"

          secret {
            secret_name = "kubernetes-dashboard-certs"
          }
        }

        volume {
          name = "tmp-volume"
          empty_dir {}
        }

        container {
          name  = "kubernetes-dashboard"
          image = "kubernetesui/dashboard:v2.4.0"
          args  = ["--auto-generate-certificates", "--namespace=kubernetes-dashboard"]

          port {
            container_port = 8443
            protocol       = "TCP"
          }

          volume_mount {
            name       = "kubernetes-dashboard-certs"
            mount_path = "/certs"
          }

          volume_mount {
            name       = "tmp-volume"
            mount_path = "/tmp"
          }

          liveness_probe {
            http_get {
              path   = "/"
              port   = "8443"
              scheme = "HTTPS"
            }

            initial_delay_seconds = 30
            timeout_seconds       = 30
          }

          image_pull_policy = "Always"

          security_context {
            run_as_user               = 1001
            run_as_group              = 2001
            read_only_root_filesystem = true
          }
        }

        node_selector = {
          "kubernetes.io/os" = "linux"
        }

        service_account_name = "kubernetes-dashboard"

        toleration {
          key    = "node-role.kubernetes.io/master"
          effect = "NoSchedule"
        }
      }
    }

    revision_history_limit = 10
  }
}

resource "kubernetes_service" "dashboard_metrics_scraper" {
  metadata {
    name      = "dashboard-metrics-scraper"
    namespace = "kubernetes-dashboard"

    labels = {
      k8s-app = "dashboard-metrics-scraper"
    }
  }

  spec {
    port {
      port        = 8000
      target_port = "8000"
    }

    selector = {
      k8s-app = "dashboard-metrics-scraper"
    }
  }
}

resource "kubernetes_deployment" "dashboard_metrics_scraper" {
  metadata {
    name      = "dashboard-metrics-scraper"
    namespace = "kubernetes-dashboard"

    labels = {
      k8s-app = "dashboard-metrics-scraper"
    }
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        k8s-app = "dashboard-metrics-scraper"
      }
    }

    template {
      metadata {
        labels = {
          k8s-app = "dashboard-metrics-scraper"
        }
      }

      spec {
        volume {
          name = "tmp-volume"
          empty_dir {}
        }

        container {
          name  = "dashboard-metrics-scraper"
          image = "kubernetesui/metrics-scraper:v1.0.7"

          port {
            container_port = 8000
            protocol       = "TCP"
          }

          volume_mount {
            name       = "tmp-volume"
            mount_path = "/tmp"
          }

          liveness_probe {
            http_get {
              path   = "/"
              port   = "8000"
              scheme = "HTTP"
            }

            initial_delay_seconds = 30
            timeout_seconds       = 30
          }

          security_context {
            run_as_user               = 1001
            run_as_group              = 2001
            read_only_root_filesystem = true
          }
        }

        node_selector = {
          "kubernetes.io/os" = "linux"
        }

        service_account_name = "kubernetes-dashboard"

        toleration {
          key    = "node-role.kubernetes.io/master"
          effect = "NoSchedule"
        }
      }
    }

    revision_history_limit = 10
  }
}

# resource "null_resource" "proxy" {
#   provisioner "local-exec" {
#     command = "kubectl proxy"
#   }
# }

# resource "null_resource" "web" {
#   provisioner "local-exec" {
#     command = "start-process http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/pod?namespace=default"
#   }
# }

So, if you are keen to do what I did, you may copy the code and run it together with your Terraform for the AKS deployment.

Details on k2tf are here:

1. sl1pm4t/k2tf: Kubernetes YAML to Terraform HCL converter (github.com) 

2. And how i discover it - My Experience in Converting Yaml Files into Terraform Scripts: Challenges and Common Mistakes | by Hiranya Perera | Medium 

Until the next post, stay safe and happy testing.

Saturday 20 November 2021

Do this on your AGIC

 Hi all, 

I would like to share some findings if you are deploying the Application Gateway Ingress Controller, or AGIC in short.


Its behavior is that the defaultaddresspool and the backend address pool that you specify in your Terraform code keep replacing each other. Here is the pool specified in Terraform:

backend_address_pool {
  name  = "${var.agname}-beap"
  fqdns = [
    "dummy"
  ]
}

So let's say your var.agname is AGIC: the defaultaddresspool and the AGIC-beap backend pool will keep replacing each other every time you run terraform apply.

After searching, I found a workaround where a lifecycle block is added to ignore changes to the blocks listed inside it.

lifecycle {
  ignore_changes = [
    backend_address_pool,
    backend_http_settings,
    frontend_port,
    http_listener,
    probe,
    redirect_configuration,
    request_routing_rule,
    ssl_certificate,
    tags,
    url_path_map,
  ]
}

Source: Stack Overflow

Some other workarounds I tested before getting to this were editing the gateway manually each time and setting the AGIC reconcile option.

After everything was added accordingly and I tried to deploy three samples with AGIC, it all ran smoothly.



So why not reshare the finding from Stack Overflow and what I have done here for your reading. A sample to try this is available on my Terraform GitHub.




Thursday 11 November 2021

Adding an S Makes It Work

 Hi all, 

This is just a quick share on an issue I faced this morning. While creating an NSG rule in Terraform, I got an error mentioning that the parameter should be a string, but I had already put it in string format.


After a while searching, I found this GitHub issue - [HELP WANTED] NSG - Multiple Ports in One Rule · Issue #4518 · Azure/azure-quickstart-templates · GitHub

So to make it work, just add an S to it: range becomes ranges (for a list of ports, destination_port_range becomes destination_port_ranges), as in the sketch below.
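
For illustration, a minimal sketch of an NSG rule carrying multiple ports in Terraform looks like this; the rule values, resource group, and NSG names below are made up for the example, not taken from my actual error:

resource "azurerm_network_security_rule" "allow_web" {
  name                        = "allow-web"
  priority                    = 200
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_ranges     = ["80", "443"] # the plural attribute takes a list of ports
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = "PROD-RG"
  network_security_group_name = "NSG-APP"
}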


Yup, that is all that is needed.

Thanks for reading and enjoy the rest of your day 


Monday 11 October 2021

Win-Kex with Windows Terminal

As WSL2 was released a while back, many people got excited about it, and some Linux distributions do support a GUI mode. So I will be writing a bit about a preference of mine that I just solved while using WSL2 with Windows Terminal.

Requirements

1. WSL2 is enabled - click here to enable and read about it

2. Windows Terminal  - click here to get it 

3. Install a GUI on your Linux distro - click here


So here is my case: I have the Kali Linux distro downloaded and running. Apart from that, I also installed Win-KeX to experience the GUI. However, it goes to full screen and takes over my whole screen as the result of launching it.


While searching for a solution, I found a parameter that needed to be added to make it work like RDP.




resulting in this:

 


So it does meet my preference, but how do I make it work if I launch it via Windows Terminal? My answer to that is to add a specific profile with the command in it.

Here is my version:

{
    "commandline": "wsl -d kali-linux kex --wtstart esm",
    //wsl -d kali-linux kex --wtstart -s
    "guid": "{55ca431a-3a87-5fb3-83cd-11ececc031d2}",
    "hidden": false,
    "name": "Win-KeX"
}


Then, another profile will be listed for you to use 


That is all for my sharing this time. Here is some reading material that helped me come up with this:

1. Win-KeX ESM | Kali Linux Documentation (PS: I found this right after setting up my profile in Terminal)

2. How to install Win-Kex (Kali Linux on Windows 10) with WSL - Hack Forums

3. Kali in WSL + WiN KeX 

4. Setup Kali Linux in Windows 10 WSL2 Setup Kali Linux in WSL2 (techtutsonline.com) 

Check out my previous writing on Windows Terminal here. Until then, thanks for reading and stay safe.

Saturday 18 September 2021

Azure Windows VM not Activated !!!

Hi, this article will be more like a review of what considerations need to be included in planning, especially for some services in Azure that need a connection to Azure backend services.



As for this case, the quick background is that all outbound traffic is redirected through Azure Firewall. After a few months of running, it turned out the Windows VM status showed as not activated on the desktop.

A quick check to verify the issue is to run psping against the Azure KMS IP or FQDN,


 

or ping the IP if the DNS name cannot be resolved - 23.102.135.246 on the same port 1688,

or issue Test-NetConnection kms.core.windows.net -Port 1688 in PowerShell.

For this case, I added a firewall rule under the network rules to allow this subnet to reach those IPs. Here is the result after the rule was applied.
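
If the firewall is managed with Terraform, a rule like that could be sketched as below; the firewall and resource group references and the source subnet range are assumptions for illustration, while the KMS IPs are the ones mentioned in this post:

resource "azurerm_firewall_network_rule_collection" "kms" {
  name                = "allow-windows-activation"
  azure_firewall_name = azurerm_firewall.hub.name
  resource_group_name = azurerm_resource_group.hub.name
  priority            = 200
  action              = "Allow"

  rule {
    name                  = "kms-1688"
    source_addresses      = ["10.0.1.0/24"] # the VM subnet behind the firewall
    destination_ports     = ["1688"]
    destination_addresses = ["23.102.135.246", "20.118.99.224", "40.83.235.53"]
    protocols             = ["TCP"]
  }
}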


The connection is successful now, and one last step is to instruct Windows to activate via this command:

"1..12 | ForEach-Object { Invoke-Expression "$env:windir\system32\cscript.exe $env:windir\system32\slmgr.vbs /ato" ; start-sleep 5 }"

So this gives a thought on how the infrastructure should be designed properly in whichever public cloud service you use. If this kind of blocking happens without proper planning, it will block more features from being usable, like Log Analytics for metrics and maybe Azure Update Management for tracking and performing updates.

As of now, this "Windows is not activated" scenario can happen for a few reasons:

1. The VM is behind a Standard internal (private) load balancer, which is secured by default.

2. Outbound traffic goes via Azure Firewall or an NVA, but the necessary (I would say crucial) rules are not implemented.

The latest update from Microsoft: "The first DNS name of the KMS server for the Azure Global cloud is azkms.core.windows.net with two IP addresses: 20.118.99.224 and 40.83.235.53. The second DNS name of the KMS server for the Azure Global cloud is kms.core.windows.net with an IP address of 23.102.135.246"

More detailed solutions can be found in the Microsoft documentation and a few other places:

1. Troubleshoot Windows virtual machine activation problems in Azure - Virtual Machines | Microsoft Docs

2. Azure Windows Server license not activated - Stack Overflow


That is all for now; have a nice day ahead and stay safe.



Sunday 29 August 2021

Enable Boot Diagnostic Via Terraform Part 2

This post is more of an update, as I recently found a better way on GitHub (link here) to enable boot diagnostics on an Azure VM.

This is the code segment that I used previously:


boot_diagnostics {
  enabled     = true
  storage_uri = "https://${azurerm_storage_account.hub-core-vmdiag.name}.blob.core.windows.net"
}

So I did my own experiment to test it, and it turns out it is easier to implement. Here is the updated code:

1. azurerm_virtual_machine resource block
 
 boot_diagnostics {
      enabled = true
      storage_uri = azurerm_storage_account.hub-core-vmdiag.primary_blob_endpoint
    }

2. azurerm_windows_virtual_machine resource block

 boot_diagnostics { 
      storage_account_uri = azurerm_storage_account.hub-core-vmdiag.primary_blob_endpoint
    }
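
For reference, the hub-core-vmdiag storage account referenced in both blocks could be declared roughly like this; the account name and resource group references are assumptions, and the name has to be globally unique:

resource "azurerm_storage_account" "hub-core-vmdiag" {
  name                     = "hubcorevmdiag"
  resource_group_name      = azurerm_resource_group.hub.name
  location                 = azurerm_resource_group.hub.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}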

That is all. Thanks for reading and stay safe.

Saturday 28 August 2021

Importing Existing Azure Vnet into Terraform

Previously I posted a way to import a resource group into Terraform, so today is a continuation of that process: it is time to import a virtual network into Terraform.

This will be a bit challenging as it has some dependencies on other resources like subnets and NSGs.

So let's begin. You would normally start by creating an empty block for your VNet, but for this scenario I suggest importing the NSG first as it is bound to a subnet.


resource "azurerm_network_security_group" "nsg-app" {
 
}

Then run terraform import for this NSG:

 terraform import azurerm_network_security_group.nsg-app xx/xxx/xxx/NSG-APP

After that, use terraform show to check what information is needed in the NSG block to match the deployment.


For my case, I just added the name and a few important pieces of information, without all the additional rules created in the NSG, roughly like the sketch below.
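
As a rough sketch (the values below are placeholders, not my actual environment), the filled-in block ends up looking something like this:

resource "azurerm_network_security_group" "nsg-app" {
  name                = "NSG-APP"
  location            = "southeastasia"
  resource_group_name = "PROD-RG"
}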


This step needs to be repeated for all NSGs created before touching the virtual network, and can be skipped if no NSG was created or attached to a subnet.

Continuing from that, you may start importing the virtual network with the same steps and continue with the subnets. I was planning to use one resource block to address both the VNet and the subnets, like here:


But it seems like creating a separate block for each subnet is easier, as less information is needed in the subnet block.
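
A hedged sketch of what those separate blocks could look like after import; the names, address space, and prefixes are placeholders:

resource "azurerm_virtual_network" "vnet01" {
  name                = "VNET01"
  location            = "southeastasia"
  resource_group_name = "PROD-RG"
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "app" {
  name                 = "SUBNET-APP"
  resource_group_name  = "PROD-RG"
  virtual_network_name = azurerm_virtual_network.vnet01.name
  address_prefixes     = ["10.0.1.0/24"]
}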




Repeat for all the subnets available and check if more changes are needed with terraform plan. For me, this was enough to import everything.

Thanks for reading and stay safe. 

Saturday 21 August 2021

Import Azure Existing Resource Group to Terraform

As part of using Terraform to manage the architecture, there are times when the environment has existing resources, or a resource was deployed via the portal instead of Terraform.

So, in order to keep all the control in Terraform, the resource needs to be imported into Terraform so it can be managed from there.

In this case, an Azure resource group will be imported into Terraform. As we are all aware, all resources in Azure need to be located in a resource group.


1. Start by creating an empty block for the resource:

resource "azurerm_resource_group" "prod-rg" {
    
}

"prod-rg" is just a block name, can be name with any name but i prefer to tally with the resource group name created on Azure. 

2. Get the RG resource ID from the Azure portal.


3. The import command needs two parameters: the resource address and the resource ID.

For this case - terraform import azurerm_resource_group.prod-rg /xxx/xxx/resourcegroup/PROD-RG

4. After the import is complete, some info needs to be added into the resource group block.

4.1 You may run terraform show to see what needs to be added.

  

 Not everything needs to be added.

5. Edit the RG block as follows:

resource "azurerm_resource_group" "prod-rg" {
    name = "PROD-RG"
    location = "southeastasia"
}

6. Run terraform plan to check if any more information needs to be added, but as of now, those two pieces of information are enough.


That is all I have for now. Feel free to leave feedback in the comments. Happy Terraforming and stay safe.

Friday 13 August 2021

Windows VM Stuck after Restart

Hello all, 

This time around, I would like to share a fix I did because a Windows VM was not accessible after applying the July update. When the restart was performed, the VM was not responding to RDP and CPU usage sat at 0.02% in the Azure portal for a few hours. It turned out some error happened while booting up the VM, and you can see the VM screen like the screenshot below if boot diagnostics is enabled.


The method of solving this kind of behavior is either restoring from backup or continuing to work on the affected VM. I will explain more about the solution provided by Microsoft support.

As usual, the solution needs a temporary VM as a fixer, and the steps are as follows:

1. Create a disk snapshot of the affected VM; name it VM01-snapshot (or ss for short).

2. Create a managed disk using VM01-snapshot; name it VM01-OSdisk-01.

3. Create a temporary VM with Hyper-V enabled; I named it HyperV.

4. Attach VM01-OSdisk-01 as a data disk to the HyperV VM.

5. RDP to the HyperV VM and open Command Prompt.

6. Run "dism /image:G:\ /cleanup-image /revertpendingactions" ; change letter G according to  os disk of VM01

7. Once the process is completed, go to Disk Management and take the VM01 OS disk offline.

8. Create a VM in Hyper-V; name it TestVM, choose Gen1 because most Azure VMs are Gen1, and during the disk selection, choose to attach the disk later.

9. After TestVM is created, right-click and go to Settings. On the IDE controller, click Add Hard Disk and choose the physical drive. This works because Hyper-V supports pass-through disks in a normal Hyper-V deployment, which means it will use the physical disk instead of a virtual hard disk.

    


10. Set a good number for CPU and memory and try to boot the VM.

11. Once the VM boots successfully, TestVM can be powered off; remove the disk from the TestVM settings and remove it from the HyperV VM's data disks in the Azure portal.

12. Perform the "Swap OS disk" operation on the source VM in the Azure portal.

Hope this helps somebody out there. Leave feedback in the comments and stay safe.


Friday 28 May 2021

Resolving Azure PostgreSQL FQDN Part 2

 Hello all, just a quick update on using Flexible Server for PostgreSQL in Azure.

This is an updated version of the previous post here - Part 1



The objective of this post is to allow connection to PostgreSQL with the latest update applied to this service. Below is the update; full details are here - PostgreSQL Release Note


Here is the overview of the deployment; by the end of this, a connection to PostgreSQL can be made from on-premises and from a server in Azure.






Components involved:

1. VPN Gateway; connecting on-premises with the Azure environment

2. VNet peering; if there is a spoke network involved

3. Private DNS Zone 

4. PostgreSQL Flexible Server (VNet integration)

5. Azure VM running AD DNS 


Step 1. Create a private DNS zone with your desired name; I would suggest something like psqldns.xxxxx

Step 2. Configure the virtual network link with the private DNS zone.


Step 3. Create the PostgreSQL server and use VNet integration; during this creation, the DNS zone option will appear.


*This is the result once the deployment is completed.
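
This post walks through the portal, but if you prefer to keep it in Terraform like the rest of this blog, a minimal sketch of steps 1 and 2 (private DNS zone plus virtual network link) could look like this; the zone name and resource references are assumptions for illustration:

resource "azurerm_private_dns_zone" "psql" {
  name                = "psqldns.example.com"
  resource_group_name = azurerm_resource_group.db.name
}

resource "azurerm_private_dns_zone_virtual_network_link" "psql" {
  name                  = "psql-vnet-link"
  resource_group_name   = azurerm_resource_group.db.name
  private_dns_zone_name = azurerm_private_dns_zone.psql.name
  virtual_network_id    = azurerm_virtual_network.hub.id
}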


Step 4. Configure VNet peering if your Azure architecture is deployed as hub and spoke.


Step 5. Configure a site-to-site VPN from on-premises to Azure.

Step 6. Create conditional forwarders in AD.

         6.1 Forward requests for postgresql.database.azure.com from on-premises to the AD DNS running on the Azure VM.

          




       6.2 Forward requests for postgresql.database.azure.com from the AD DNS running on the Azure VM to Azure DNS.


Step 7. Test the connectivity from any on-premises server.

  


Alright, that is all from me this time.

Special thanks to MS support (Bruno Maia & Mohammed Abuhamdieh) and my colleague Husna, who worked with me on solving this case so that I can share this solution with all readers.

Do comment if there are any questions.





Sunday 25 April 2021

Resolving Azure PostgreSQL FQDN

 Hello everyone and happy Sunday. Today I would like to share a finding from my troubleshooting with MS support on resolving the Azure PostgreSQL FQDN from an on-premises network. By the way, the PostgreSQL I am referring to is Azure Database for PostgreSQL flexible server. To make it even more complicated, I have deployed it in private mode (VNet integration).



The problem occurs for any server that is not in the same virtual network as PostgreSQL, or is on-premises: the DB connection string cannot use the PostgreSQL FQDN because the name cannot be resolved. To solve that, there needs to be some DNS conditional forwarding and a DNS server running in the same virtual network as the Azure PostgreSQL server.



Components involved:
1. Existing Active Directory  (172.16.0.4)
2. Existing Azure PostgreSQL (enfrasql)
3. New DNS server (10.0.0.68)

Step 1. Create a conditional forwarder in the existing Active Directory with the record as below.



Step 2. Go to the new DNS server and create the record as below.


By now, a ping test against the PostgreSQL FQDN from either hub-vnet or vnet01 will resolve accordingly.



In summary, what this does is forward any request containing postgresql.database.azure.com to 10.0.0.68, which forwards it again to the Azure DNS IP (168.63.129.16) in order to resolve the FQDN. Do note that Azure PostgreSQL flexible server is still in preview, and it may see improvements in resolving the FQDN in the future.

A good link that I referred to is here:

1. Name resolution for resources in Azure virtual networks | Microsoft Docs

