Upgrade Log Insight using Ansible and Azure DevOps

Intro

Some time ago, I demonstrated how Terraform can be used in a vSphere environment to deploy .OVA files, such as one or more Log Insight nodes. Now it is time to upgrade our Log Insight node(s), and this time we will use Ansible to do the job.

In my daily life, Ansible combined with Azure DevOps is the tool of choice for all types of work. This post can be seen as a demonstration of how to perform an upgrade of Log Insight with the mentioned tools.

Disclaimer

  • This is a proof of concept; the code shown is a minimum viable product, and no effort has been made to protect accounts and passwords. The code can serve as a basis for a production environment, but will need to be modified to meet your organization's requirements.
  • I am also aware that you can still manually upgrade Log Insight or use tools like the vRealize Suite Lifecycle Manager.
  • Code for creating snapshots is not included, nor are other non-essentials.

Preparation

The first step is to install three Log Insight nodes, version 8.4.1. After deployment of the first node, the option “Start new Deployment” was chosen and a basic installation was done, including a virtual IP address; this node is now the primary node. After this, the node was powered down and a snapshot was taken. The other two nodes were powered down immediately after deployment and snapshots were taken of them as well. These nodes are later used as additional nodes to form a 3-node cluster.
Somewhat anticipating the topics to come, I found out pretty quickly that a web server is required to make the upgrade packages (.PAK files) available during an upgrade. In my case, I solved that by installing a simple web server, such as Miniweb, on an existing Windows host.
Installation amounts to unzipping the downloaded file, creating a new folder under “htdocs” and copying the Log Insight upgrade package (.PAK) files to this new folder.
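If you prefer a non-Windows option, any static file server will do. Below is a minimal stand-in using Python's built-in HTTP server; the folder layout is an assumption, pick your own:

```shell
# Create a folder for the upgrade packages and serve it over HTTP on port 8000
mkdir -p "$HOME/pak-repo/vmware"
# cp VMware-vRealize-Log-Insight-*.pak "$HOME/pak-repo/vmware/"   # copy the .PAK files here
cd "$HOME/pak-repo"
python3 -m http.server 8000 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1
# Smoke test: the folder listing should return HTTP 200
curl -s -o /dev/null -w '%{http_code}\n' "http://localhost:8000/vmware/"
kill "$SERVER_PID"
```

The pakUrl used later in the scripts and playbook then points at this folder.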
Now we can start with the first question, how to upgrade Log Insight with Ansible?

Upgrade using PowerShell

Let’s go back to a manual upgrade of Log Insight.
The first action is to log in, with an account with sufficient privileges, on the primary node (in the case of a cluster), not using the virtual IP address or the related FQDN. In this demo we will only use the local admin account.

After a successful login, the upgrade is started by uploading the selected upgrade package. After the upload completes, the actual upgrade is started by accepting the EULA. In the case of a cluster, after the primary node, the other nodes are upgraded one after the other (a rolling upgrade).

Luckily, the Log Insight API supports upgrading, and it is also a two-step process.
POST /api/v1/upgrades starts the upgrade by uploading the .PAK file. In the JSON body, the request specifies the location of the upgrade package in a format like:
“pakUrl”: “http://<myhost>/downloads/VMware-vRealize-Log-Insight-8.8.0-19675011.pak”.

To continue after completion of the upload, the second operation starts the actual upgrade:
PUT /api/v1/upgrades/{version}/eula, where {version} is the new version, formatted like “8.8.0-19675011”. The JSON body of this request looks like this:
“accepted”: true
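Put together, the two API calls can be sketched with curl; the host, session token and version below are illustrative, and -k skips certificate validation, as the scripts further on also do:

```shell
# Step 1: upload the .PAK file (served by the web server) to the primary node
curl -k -X POST "https://192.168.100.111:9543/api/v1/upgrades" \
  -H "Authorization: Bearer <sessionId>" \
  -H "Content-Type: application/json" \
  -d '{"pakUrl": "http://<myhost>/downloads/VMware-vRealize-Log-Insight-8.8.0-19675011.pak"}'

# Step 2: accept the EULA, which starts the actual upgrade
curl -k -X PUT "https://192.168.100.111:9543/api/v1/upgrades/8.8.0-19675011/eula" \
  -H "Authorization: Bearer <sessionId>" \
  -H "Content-Type: application/json" \
  -d '{"accepted": true}'
```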

Some time ago, I wrote about the Log Insight REST API and provided some PowerShell examples. So before switching to Ansible, and to get some hands-on experience, let’s first see if we can upgrade Log Insight using PowerShell. Below is the full code, which can also be downloaded here as vLI-API-Update.ps1. (The code will need some modification for your situation!)

# LogInsight
# Upgrade Log Insight, using REST API, demo code

###############################################################
# Setting Variables
###############################################################
$vLIServer = "192.168.100.111"     # Primary node

# $vLIProvider is local or ActiveDirectory
$vLIProvider = "Local"
# $vLIProvider = "ActiveDirectory"

# Prompting for credentials, collect $vLIUser and $vLIPassword
#$Credentials = Get-Credential -Credential $null
#$vLIUser = $Credentials.UserName
#$Credentials.Password | ConvertFrom-SecureString
#$vLIPassword = $Credentials.GetNetworkCredential().password

# The easy way, DO NOT use outside lab!
$vLIUser     = 'admin'
$vLIPassword = 'VMware1!'


################################################
# Upgrade to version
################################################
$version = "8.6.0-18703301"
#$version = "8.6.2-19092412"
#$version = "8.8.0-19675011"


################################################
# Adding certificate exception to prevent API errors
################################################
add-type @"
    using System.Net;
    using System.Security.Cryptography.X509Certificates;
    public class TrustAllCertsPolicy : ICertificatePolicy {
        public bool CheckValidationResult(
            ServicePoint srvPoint, X509Certificate certificate,
            WebRequest request, int certificateProblem) {
            return true;
        }
    }
"@
[System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy
[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]'Ssl3,Tls,Tls11,Tls12'


################################################
# Authenticate to Log Insight and create Bearer with SessionID needed for Authorization
################################################
$vLIBaseAuthURL = "https://" + $vLIServer + ":9543/api/v1/sessions"
$vLIBaseURL = "https://" + $vLIServer + ":9543/api/v1/"

$Type = "application/json"

# Creating JSON for Auth Body
$vLIAuthJSON =
"{
  ""username"": ""$vLIUser"",
  ""password"": ""$vLIPassword"",
  ""provider"": ""$vLIProvider""
}"
# Authenticating with API
Try
{
    $vLISessionResponse = Invoke-RestMethod -Method POST -Uri $vLIBaseAuthURL -Body $vLIAuthJSON -ContentType $Type
}
Catch
{
    $_.Exception.ToString()
    $error[0] | Format-List -Force
}
$vLISessionHeader = @{"Authorization"="Bearer "+$vLISessionResponse.SessionId}
Write-Host -ForegroundColor White '---'


################################################
# GET Log Insight Version
################################################
$URL = $vLIBaseURL+"version"

Try
{
    $JSON = Invoke-RestMethod -Method GET -Uri $URL -Headers $vLISessionHeader -ContentType $Type
    $LIVersion = $JSON.Version
}
Catch
{
    $_.Exception.ToString()
    $error[0] | Format-List -Force
}
Write-Host -ForegroundColor Cyan 'Version'
$LIVersion
Write-Host -ForegroundColor White '---'


################################################
# Log Insight Upgrades - Upload pakfile
################################################
$URL = $vLIBaseURL+"upgrades"

$JSONBody =
"{
  ""pakUrl"": ""http://192.168.100.90:8000/vmware/VMware-vRealize-Log-Insight-$version.pak""
}"

Try
{
    $JSON = Invoke-RestMethod -Method POST -Uri $URL -Headers $vLISessionHeader -Body $JSONBody -ContentType $Type
}
Catch
{
    $_.Exception.ToString()
    $error[0] | Format-List -Force
}
Write-Host -ForegroundColor Cyan '.PAK file uploaded'


################################################
# Log Insight Upgrades - Start upgrade
################################################
$URL = $vLIBaseURL+"upgrades/$version/eula"

$JSONBody =
"{
  ""accepted"": true
}"

Try
{
    $JSON = Invoke-RestMethod -Method PUT -Uri $URL -Headers $vLISessionHeader -Body $JSONBody -ContentType $Type
}
Catch
{
    $_.Exception.ToString()
    $error[0] | Format-List -Force
}

Write-Host -ForegroundColor Cyan 'Upgrade Started. Wait until node reconnects.'

#EOF

The first section of the script handles authentication and authorization.
The next section, as a sanity check, retrieves the current version of Log Insight.
The following section uploads the .PAK file.
The final section accepts the EULA and starts the upgrade.

An additional PowerShell script, named “vLI-API-UpdateChecks.ps1”, has been added to the repository; it has been very helpful during development. The script shows detailed information about the upgrade process and can be run before, during and after the upgrade. Here too, set the variable for the new Log Insight version.

Fig.1 – Sample output during upgrade from version 8.4.1 to 8.6.0.

The PowerShell script turned out to work well: the script starts the upgrade, after some time the node reboots, and once the node is available again the upgrade is completed. This whole process takes about 10 minutes. In a cluster, the other nodes are then upgraded one by one. Now it is time to perform the same action with the help of Ansible.

Ansible

To develop the Ansible playbook, I used a Linux (Ubuntu) workstation equipped with Python 3.8 and Ansible 2.12, access to my vSphere environment, and my limited knowledge of Ansible playbooks.
The crucial command in the PowerShell script is Invoke-RestMethod. Its counterpart is the well-documented Ansible uri module, which is installed by default.

The playbook starts with the definition of some variables, such as the primary Log Insight node and the version of the upgrade (variable “new_app_version”, which can be deduced from the name of the upgrade package, for example: VMware-vRealize-Log-Insight-8.8.0-19675011.pak).
The playbook consists of the following tasks:

1. Log Insight node up?: Checks whether the login page of the primary node returns HTTP status code 200. The playbook will fail if another status code is received.

2. Authenticate to Log Insight: This task handles the authentication. After successful authentication, the result is saved to create the session header in the next task.

3. Set session_header: A session header is created and saved as a variable for later use.

4. Get + Show current Log Insight version: Get the current version of Log Insight. The result is saved for use in the Block construct (see next).

5. Block: Perform the upgrade. A block in Ansible can be used to create a logical group of tasks. Adding a when statement causes the ‘when’ condition to be evaluated before Ansible runs the tasks inside the block. For the PowerShell-minded, you can compare it with an if-then construct.
In this playbook, if the current Log Insight version is equal to the variable “new_app_version”, the block is skipped; otherwise the upgrade is performed.
The following tasks 5a – 5g are included in the block.

5a. Upload PAK file {{ new_app_version }}: uploads the .PAK file.
I will come back later to why “status_code” is NOT used here!
Note: the location of the .PAK file is still hard-coded in this example and must be changed if you want to use this playbook!

5b. Wait until PAK file {{ new_app_version }} is uploaded: As it was not always clear when the .PAK file had been successfully uploaded, an extra check was added. The result of the request api/v1/upgrades/{{ new_app_version }} provides detailed information about the progress of the upgrade (use the script vLI-API-UpdateChecks.ps1 mentioned before).
After uploading the .PAK file, “clusterStatus” has the value “Pending”, so we can test for that. The “until, retries, delay” construct retries every 5 seconds, for a maximum of one hour, until the condition is met.

5c. Start Upgrade to version {{ new_app_version }}: accepts the EULA and starts the upgrade.

5d. Wait until reboot of node {{ host_name }}: this task waits until the node reboots, by checking the login page and waiting for an HTTP status equal to “-1”.

5e. Wait until node {{ host_name }} is ready: after the reboot, this task waits until the login page is available again.

5f. Authenticate to Log Insight after Upgrade: same as task 2.

5g. Get + Show Log Insight version after Upgrade: after a successful upgrade, the Log Insight version must be equal to {{ new_app_version }}. If not, the playbook will fail.
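As a side note to task 5a: the hard-coded .PAK location can be lifted into a variable, so only one place needs changing per environment. A minimal sketch; the base URL is illustrative and must match your own web server:

```yaml
vars:
  pak_base_url: "http://192.168.100.90:8000/vmware"   # assumption: your .PAK web server

# ... and in the upload task:
body:
  pakUrl: "{{ pak_base_url }}/VMware-vRealize-Log-Insight-{{ new_app_version }}.pak"
```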

The playbook below:

---
- hosts: localhost
  become: false
  gather_facts: false
  vars:
    username: admin
    password: VMware1!
    provider: Local
    new_app_version: "8.6.0-18703301"
#    new_app_version: "8.6.2-19092412"
#    new_app_version: "8.8.0-19675011"
    host_name: "192.168.100.111"
  tasks:

    - name: Log Insight node up?
      uri:
        url: https://{{ host_name }}/login
        follow_redirects: none
        method: GET
        validate_certs: no
        status_code: 200
      register: _result
      failed_when: _result.status != 200

    - name: Authenticate to Log Insight
      uri:
        url: https://{{ host_name }}:9543/api/v1/sessions
        method: POST
        body_format: json
        body:
          username: "{{ username }}"
          password: "{{ password }}"
          provider: "{{ provider }}"
        validate_certs: no
        status_code: 200
      register: session_response
      failed_when: session_response.status != 200

    - name: Set session_header
      set_fact:
        session_header: "Bearer {{ session_response.json.sessionId }}"

    - name: Get current Log Insight version
      uri:
        url: https://{{ host_name }}:9543/api/v1/version
        method: GET
        headers:
          authorization: "{{ session_header }}"
        body_format: json
        validate_certs: no
        status_code: 200
      register: current_app_version

    - name: Show current Log Insight version
      debug: var=current_app_version.json.version

    - name: Perform Upgrade
      block:
        - name: Upload PAK file {{ new_app_version }}.
          uri:
            url: https://{{ host_name }}:9543/api/v1/upgrades
            method: POST
            headers:
              authorization: "{{ session_header }}"
            body_format: json
            body:
              pakUrl: http://192.168.100.90:8000/vmware/VMware-vRealize-Log-Insight-{{ new_app_version }}.pak
            validate_certs: no
#            status_code: -1

        - name: Wait until PAK file {{ new_app_version }} is uploaded
          uri:
            url: https://{{ host_name }}:9543/api/v1/upgrades/{{ new_app_version }}
            method: GET
            headers:
              authorization: "{{ session_header }}"
            body_format: json
            validate_certs: no
            status_code: 200
          register: _result
          until: _result.json.status.clusterStatus == "Pending"
          retries: 720 # 1 hour
          delay: 5

        - name: Start Upgrade to version {{ new_app_version }}
          uri:
            url: https://{{ host_name }}:9543/api/v1/upgrades/{{ new_app_version }}/eula
            method: PUT
            headers:
              authorization: "{{ session_header }}"
            body_format: json
            body:
              accepted: true
            validate_certs: no
            status_code: 200

        - name: Wait until reboot of node {{ host_name }}
          uri:
            url: https://{{ host_name }}/login
            follow_redirects: none
            method: GET
            validate_certs: no
            status_code: -1
          register: _result
          until: _result.status == -1
          retries: 720
          delay: 5

        - name: Wait until node {{ host_name }} is ready
          uri:
            url: https://{{ host_name }}/login
            follow_redirects: none
            method: GET
            validate_certs: no
            status_code: 200
          register: _result
          until: _result.status == 200
          retries: 720
          delay: 5

        - name: Authenticate to Log Insight after Upgrade
          uri:
            url: https://{{ host_name }}:9543/api/v1/sessions
            method: POST
            body_format: json
            body:
              username: "{{ username }}"
              password: "{{ password }}"
              provider: "{{ provider }}"
            validate_certs: no
            status_code: 200
          register: session_response

        - name: Get Log Insight version after Upgrade
          vars:
            session_header: Bearer {{ session_response.json.sessionId }}
          uri:
            url: https://{{ host_name }}:9543/api/v1/version
            method: GET
            headers:
              authorization: "{{ session_header }}"
            body_format: json
            validate_certs: no
            status_code: 200
          register: _result
          failed_when: _result.json.version != new_app_version

        - name: Show Log Insight version after Upgrade
          debug: var=_result.json.version
      when: current_app_version.json.version != new_app_version

#EOF

The playbook can now be tested.

Fig.2 – Sample output during upgrade from version 8.6.2 to 8.8.0.

Azure DevOps

Finally, as already announced in the introduction, a concise description of how to invoke Ansible from an Azure DevOps pipeline and perform the Log Insight upgrade. Concise, because a more complete description of Azure DevOps could easily fill several posts. For me, the Pluralsight training “Integrating Ansible with Azure DevOps” was very useful in gaining some understanding.

You can sign up for Azure DevOps for free with either a Microsoft or a GitHub account. There is no charge for using Azure DevOps for this demonstration. The following steps will be performed:

1. Install Ansible Extension in Azure DevOps

2. Prepare a local machine for use as an Azure Pipeline Agent

3. Deploy the code in Azure DevOps

4. Create and run the Pipeline

5. Review the results

1. Install Ansible Extension in Azure DevOps

In Azure DevOps, open Organization Settings and then Extensions in the General section. From there, choose Browse marketplace. In the search box, type “Ansible”; two items will appear. Choose the extension provided by Microsoft and proceed with the installation in the organization.

Fig. 3 – Result after installation of Ansible

2. Prepare a local machine for use as an Azure Pipeline Agent

Pipeline agents are needed to build and deploy code using Azure Pipelines. An agent is a service that runs the jobs defined in the pipeline. Azure DevOps agents can be Microsoft-hosted or self-hosted. Microsoft-hosted agents run Windows or Linux images that are re-imaged after each run; they are provided by Microsoft and can be used instantly.

Self-hosted agents are set up and managed by you. A self-hosted agent gives you more control, for example to install extra software. We will continue with a self-hosted agent.
First, an Ubuntu 20.04 LTS server was deployed in the vSphere environment; additionally, Python and Ansible were installed. Microsoft’s instructions for creating a Linux agent are pretty clear and can be found here.
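For reference, the agent setup boils down to a few commands. This is only a sketch; the download URL and version change over time, so copy the exact commands from Azure DevOps under Agent Pools, New agent:

```shell
mkdir myagent && cd myagent
# The tarball name/version comes from the "New agent" dialog in Azure DevOps
tar zxvf ~/Downloads/vsts-agent-linux-x64-2.x.x.tar.gz
./config.sh              # prompts for the organization URL, a PAT and the agent pool (e.g. "Ubuntu")
sudo ./svc.sh install    # optional: install the agent as a service
sudo ./svc.sh start      # start the agent; alternatively run it interactively with ./run.sh
```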

Agents are not managed individually but must always be placed in a so-called Agent Pool. The agents in an Agent Pool are of the same type, because a pipeline is always linked to an Agent Pool and not to an individual agent. For this reason, a new Agent Pool named “Ubuntu” was created, which we will come across again later. If everything went well, our new self-hosted Linux agent can be seen in Organization Settings, Pipelines, Agent Pools, with a status of “Online”.

Fig. 4 – Agent Pool Ubuntu with Self hosted Agent

3. Deploy the code in Azure DevOps

For this demonstration, a new Project named “Ansible” has been created in Azure DevOps. The next step is to get the code into an Azure DevOps Repo.
As the code is already available on GitHub, the easiest way is:

1. In the Project go to Repos.

2. From the top-menu, choose Import repository.

Fig. 5 – Import Repository

Select “Git” as the repository type, provide the name of the repository to be imported and a name for the new Repo in Azure DevOps.

Fig. 6

4. Create and run the Pipeline

The code to create the Azure Pipeline is in the Azure Repo, in a file called “azure-pipelines.yml”, see below. Azure DevOps supports two ways to create pipelines. The classical way uses a GUI editor, which is very useful while learning and understanding the pipeline concept; the “disadvantage” is that you need a separate “build” pipeline and a “release” pipeline.

The newer approach is based on creating pipelines in YAML. This way you can create a single, integrated build and release pipeline.
Note: you can convert elements from “classical” pipelines to YAML code.

The pipeline code is shown below, some explanation:

Line 3: trigger, the pipeline will automatically run after each update of the code.
Line 6: pool, refers to the Agent Pool that will run the code.
Line 10: the first task copies the code from the Repo to the “Artifacts” space.
Line 16: the second task publishes the artifact.
Line 20: the final task runs the Ansible playbook.

# Ansible YAML pipeline

trigger:
- main

pool:
  name: Ubuntu

steps:
- task: CopyFiles@2
  displayName: 'Copy Playbooks to Artifacts'
  inputs:
    SourceFolder: playbooks
    TargetFolder: '$(build.artifactstagingdirectory)/Playbooks'

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: drop'


- task: Ansible@0
  displayName: 'Run playbook'
  inputs:
    playbookRootRemoteMachine: '$(System.DefaultWorkingDirectory)/playbooks'
    playbookPathLinkedArtifactOnRemoteMachine: playbook.yml
    playbookPathOnAgentMachine: '$(System.DefaultWorkingDirectory)/playbooks/playbook.yml'
    inventoriesRemoteMachine: hostList
    inventoryHostListRemoteMachine: 192.168.100.202
    failOnStdErr: false

To create the Pipeline, in the Project, go to Pipelines.
Select New Pipeline.
At “Where is your code?”, select: Azure Repos Git.
Select the Repo you have just created, e.g. “loginsight-ansible-demo”.
On the next screen “Review your pipeline YAML”, the pipeline code is shown.

Fig. 7 – Review Pipeline

From here you can start the new pipeline, choose Run.

Fig. 8 – Pipeline is running

Click on Job, to see the progress.

Fig. 9 – Run playbook, upgrade from version 8.4.1 to 8.6.0.

After some time, the job is finished.

Fig. 10 – Job completed and successfully upgraded to version 8.6.0

5. Review the results

If all goes well, you will see green ticks appear during the execution of the jobs. An error during execution is marked with a red icon. Azure DevOps provides quite extensive logging, which can help you resolve an issue. Another source of information is the self-hosted agent itself. The agent is installed in a folder called myagent; in a subfolder named _work, you can find more information as well as the files copied to the agent.

During the development of this demonstration, I have encountered the error message seen below:

TASK [Upload PAK file 8.6.0-18703301.] *****************************************
fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 30, "msg": "Status code was -1 and not [200]: Connection failure: The read operation timed out", "redirected": false, "status": -1, "url": "https://192.168.100.111:9543/api/v1/upgrades"}

PLAY RECAP *********************************************************************
localhost                  : ok=5    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   

##[error]Command ansible-playbook /home/ubuntu/myagent/_work/6/s/playbooks/playbook.yml  exited with code 2.
Finishing: Run playbook

During the execution of the playbook in Azure DevOps, an error occurred during the upload of the .PAK file to the node. This error only showed up during execution of the pipeline; running the same playbook from my laptop, or even directly on the self-hosted Linux agent (yes, you can do that as well!), did not reproduce the error.

For every task where the Ansible uri module is used, I also added the parameter status_code to check the results. The default status code is 200, while the error message states that code -1, which means connection failure, was received. And indeed, after changing the playbook to “status_code: -1”, the pipeline ran without any issues. However, manually running the changed playbook then failed, as expected. For that reason, the statement was removed from the upload task, with no visible impact on the playbook.
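One alternative worth mentioning, which I have not tested in this setup: the uri module accepts a list of status codes, so both outcomes could be treated as success in a single task. A sketch:

```yaml
- name: Upload PAK file {{ new_app_version }}
  uri:
    url: https://{{ host_name }}:9543/api/v1/upgrades
    method: POST
    headers:
      authorization: "{{ session_header }}"
    body_format: json
    body:
      pakUrl: http://192.168.100.90:8000/vmware/VMware-vRealize-Log-Insight-{{ new_app_version }}.pak
    validate_certs: no
    status_code: [200, -1]   # accept both a normal response and a connection failure
```

Since the next task already polls api/v1/upgrades until the upload reaches the “Pending” state, a genuinely failed upload would still be caught there.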

Conclusion

In this demonstration you could read how to perform an upgrade of Log Insight with a PowerShell script, with Ansible, or even through a pipeline. I hope it was instructive. As always, thank you for reading.
