If you’ve ever deployed a VM in vSphere and caught yourself thinking, “There must be a faster way to do this,” you’re definitely not alone. After clicking through the same wizard more times than I’d like to admit, I decided it was time to build something cleaner, smarter, and fully automated.
In this series, I’ll walk through how I built a multi-OS provisioning and configuration framework using Ansible and VMware—flexible enough to handle Linux servers, Windows Server, and even Windows clients.
We’ll cover the full workflow:
– structuring a clean and scalable Ansible repository
– provisioning VMs from vSphere templates
– bootstrapping access with SSH keys or WinRM
– applying OS-specific configuration logic (Debian, RedHat, Windows)
– integrating the workflow with manual triggers or CI/CD pipelines
The goal is simple: save time, reduce repetitive work, and deploy consistent, ready-to-use systems with a single command.
So grab your favorite drink, open the terminal, and let’s get started.
Project Overview
Before diving into templates, roles, and automation magic, it’s important to understand what this project is trying to accomplish. The goal is simple: build a unified automation framework that can provision and configure multiple operating systems on VMware—cleanly, consistently, and without the usual copy-paste chaos.
At its core, the framework follows three guiding principles:
1. Separation of concerns
Provisioning, configuration, OS logic, and environment-specific data each live in their own clearly defined place. This keeps the repository clean and prevents the “spaghetti Ansible” effect we’ve all seen at least once.
2. OS-agnostic high-level workflow
Whether you’re deploying a Debian-based server, a RedHat box, a Windows Server instance, or even a Windows client, the overall flow stays the same. What changes is only the OS-specific logic underneath.
3. Environment-based inventories
The structure remains identical across lab, test, and production. Only the data—like hostnames, credentials, templates, or datastores—changes per environment.
From a high-level perspective, the automation workflow looks like this:
- Provision a VM from a vSphere template
  A clean, pre-configured golden image (Linux or Windows) is cloned and customized automatically.
- Apply static network configuration
  Each VM receives a predefined static IP address, subnet, gateway, and DNS configuration based on its host_vars definition. The same data is later used by Ansible to connect to the system for post-provisioning tasks.
- Bootstrap remote access
  - For Linux: install the public SSH key for the automation user
  - For Windows: enable and configure WinRM
- Apply OS-specific configuration
  Package installation, baseline settings, services, domain join (for Windows), or any custom logic you want to enforce.
- Extend with application-level automation
  Optional steps such as deploying agents, monitoring exporters, middleware, or full application stacks.
- Integrate with CI/CD
  The entire workflow can run manually or be fully automated through GitLab CI, Jenkins, or any other pipeline.
The end result is a flexible and scalable provisioning framework that eliminates repetitive tasks, enforces consistency, and makes spinning up new systems feel like a quick, predictable operation rather than an endless sequence of manual steps.
With the overview in place, it’s time to walk through the repository structure and start building the framework piece by piece.
Creating the Base Repository Layout
A clean and predictable repository layout is essential for any automation project—especially when dealing with multiple operating systems and several environments. The structure below keeps provisioning, configuration, variables, and OS-specific logic separated in a way that scales well as the framework grows.
Here is the high-level directory layout used in this project:
infra-ansible/
├── ansible.cfg
├── collections/
│   └── requirements.yml
├── inventories/
│   ├── lab/
│   │   ├── inventory.yml
│   │   └── group_vars/
│   │       ├── all/
│   │       │   └── vault.yml
│   │       ├── linux_debian.yml
│   │       ├── linux_redhat.yml
│   │       ├── win_servers.yml
│   │       ├── win_clients.yml
│   │       └── vcenter.yml
│   ├── test/
│   └── prod/
├── playbooks/
│   ├── provision/
│   │   ├── vmware_linux_debian.yml
│   │   └── vmware_windows_server.yml
│   ├── configure/
│   │   ├── linux_base.yml
│   │   └── windows_base.yml
│   ├── patch/
│   └── stacks/
├── roles/
│   ├── os_linux_base/
│   ├── os_windows_base/
│   ├── vmware_guest_provision/
│   └── monitoring/
├── vars/
│   ├── vm_sizes.yml
│   └── templates.yml
└── docs/
    ├── architecture.md
    └── runbooks.md
The sections below explain how each part is created and why it exists.
Step 1: Create the Repository Folder
mkdir infra-ansible
cd infra-ansible
This will be our root directory for the entire automation framework.
Step 2: Create the Core Folder Structure
Inside the repository, create all the necessary directories:
mkdir -p collections
mkdir -p inventories/lab
mkdir -p playbooks/{provision,configure,patch,stacks}
mkdir -p roles
mkdir -p vars
mkdir -p docs
Each directory has a specific purpose:
- collections/ – holds external Ansible collections
- inventories/ – separate inventories for each environment (lab/test/prod)
- playbooks/ – provisioning, configuration, patching, and stack-level automation
- roles/ – reusable logic (OS-specific, provisioning, monitoring, etc.)
- vars/ – shared variables such as template names
- docs/ – architecture notes and operational runbooks
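If you prefer a single command, the same skeleton (plus the group_vars directory used by the inventory steps later on) can be created in one go. This is a convenience sketch, assuming a bash-compatible shell for the brace expansion:

```shell
# One-shot equivalent of the commands above; also pre-creates the
# group_vars directory that the inventory steps rely on later
mkdir -p infra-ansible/{collections,roles,vars,docs} \
         infra-ansible/inventories/lab/group_vars \
         infra-ansible/playbooks/{provision,configure,patch,stacks}
```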
Step 3: Create ansible.cfg
This file ensures that Ansible knows where to find roles, collections, and the default inventory.
Create it at the repository root:
ansible.cfg
[defaults]
inventory = inventories/lab/inventory.yml
roles_path = roles
collections_path = collections
host_key_checking = False
retry_files_enabled = False
deprecation_warnings = False
Step 4: Define Required Collections
The automation relies on a few Ansible collections, such as:
- community.vmware provides the VMware modules we use to clone VMs, customize guests, and run commands inside the VM through VMware Tools.
- ansible.windows contains the core Windows modules such as win_ping, win_file, win_optional_feature, and win_reboot.
- community.windows adds additional Windows-related functionality from the community ecosystem.
- microsoft.ad gives us the microsoft.ad.membership module, which we will use later to join Windows machines to Active Directory in a clean, supported way.
Create the requirements file:
collections/requirements.yml
collections:
  - name: community.vmware
  - name: ansible.windows
  - name: community.windows
  - name: microsoft.ad
At this stage we are only defining which collections the framework depends on. The actual installation of these collections will be done later, from within a dedicated Python virtual environment, to avoid version conflicts with the system-wide Ansible installation.
Step 5: Create the Inventory Structure
We begin with a lab environment and a clean OS-oriented grouping.
Create:
inventories/lab/inventory.yml
# Root inventory structure
all:
  children:
    # Debian-based Linux servers (Debian, Ubuntu)
    linux_debian:
      hosts: {}
    # RedHat-based Linux servers (RHEL, Rocky, AlmaLinux, CentOS)
    linux_redhat:
      hosts: {}
    # Windows Server machines
    win_servers:
      hosts: {}
    # Windows Client machines (Windows 10/11)
    win_clients:
      hosts: {}
    # Group containing all Linux systems (Debian + RedHat)
    linux:
      children:
        linux_debian: {}
        linux_redhat: {}
    # Group containing all Windows systems (Servers + Clients)
    windows:
      children:
        win_servers: {}
        win_clients: {}
    # vCenter endpoint
    vcenter:
      hosts: {}
This layout allows us to target:
- Debian-only
- RedHat-only
- All Linux
- Only Windows Server
- All Windows
- vCenter
without duplicating logic.
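For example, a single play can address every Linux system regardless of family, because the parent group expands to its children automatically. A minimal sketch (the play name is illustrative):

```yaml
# Hypothetical play header: "linux" expands to both
# linux_debian and linux_redhat via the children mapping
- name: Baseline all Linux systems
  hosts: linux
  become: true
  tasks:
    - name: Check connectivity
      ansible.builtin.ping:
```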
Step 6: Create group_vars for Each OS Family
Each OS type gets its own variables file.
Debian Linux
inventories/lab/group_vars/linux_debian.yml
# Settings specific to Debian-based Linux systems (Debian, Ubuntu)
ansible_connection: ssh
ansible_user: "ansible"
ansible_become: true
ansible_become_method: sudo
linux_package_manager: "apt"
linux_common_packages:
  - curl
  - vim
  - htop
RedHat Linux
inventories/lab/group_vars/linux_redhat.yml
# Settings specific to RedHat-based Linux systems (RHEL, Rocky, AlmaLinux, CentOS)
ansible_connection: ssh
ansible_user: "ansible"
ansible_become: true
ansible_become_method: sudo
linux_package_manager: "dnf"
linux_common_packages:
  - curl
  - vim-enhanced
  - htop
Windows Server
inventories/lab/group_vars/win_servers.yml
# Placeholder for future Windows Server group settings
#
# This file is intentionally empty for now.
# When multiple Windows Server hosts are introduced,
# shared configuration (baseline policies, defaults,
# package lists, WinRM options, role mappings, etc.)
# can be added here instead of duplicating them in host_vars.
Windows Clients
inventories/lab/group_vars/win_clients.yml
# Placeholder for future Windows Client group settings
#
# At this stage, all configuration for Windows 10 is
# host-specific, so this file remains empty.
#
# As the environment expands (multiple workstations, shared
# hardening rules, security policies, software baselines,
# or centralized parameters), this file will become the
# natural location for settings that apply to all Windows
# client VMs.
Note: Why Keep These Files If They’re Empty?
Because a mature automation environment evolves.
Today you may have a single Windows client and no Windows servers — but later you might add:
- 20 Windows workstations
- several Windows Server VMs
- common security settings
- shared WinRM defaults
- policies applicable to all Windows machines
When that moment comes, you will not want to refactor your repository.
The structure will already be in place.
Keeping group_vars empty-but-ready is the cleanest way to support future expansion.
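When shared Windows settings do arrive, this is roughly what win_servers.yml might grow into. The values below are purely illustrative, but the variable names are the standard Ansible WinRM connection variables:

```yaml
# Hypothetical future contents of win_servers.yml (values are examples only)
ansible_connection: winrm
ansible_port: 5986
ansible_winrm_transport: ntlm
ansible_winrm_server_cert_validation: ignore
```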
vCenter Connection Configuration
inventories/lab/group_vars/vcenter.yml
# vCenter API connection settings
ansible_connection: local
vcenter_hostname: "vcsa.racklab.local"
vcenter_username: "ansible@vsphere.local"
vcenter_password: "{{ vault_vcenter_password }}"
vcenter_validate_certs: false
A few important notes:
- The password is securely stored in Ansible Vault and never in plain text.
- The ansible_connection: local setting ensures that provisioning tasks run on the Ansible control node instead of a remote host.
This separation keeps the configuration clean, avoids duplication, and allows each VM to define its own provisioning metadata in a separate location.
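To make the flow concrete, here is a sketch of how a later provisioning task consumes these variables. The task is trimmed to the essentials (the full clone options come with the provisioning role), and the template variable is an assumption based on the mapping introduced later:

```yaml
# Hypothetical task: the vCenter group_vars feed the
# community.vmware.vmware_guest connection parameters
- name: Clone a VM from a template
  community.vmware.vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: "{{ vcenter_validate_certs }}"
    name: "{{ inventory_hostname }}"
    template: "{{ linux_debian_template }}"
    state: poweredon
  delegate_to: localhost
```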
Step 7: Create the Linux Base Role
We create one role that works for both Debian and RedHat families.
defaults
roles/os_linux_base/defaults/main.yml
# Default variables for Linux base configuration
linux_base_packages_extra: []
main entry point
roles/os_linux_base/tasks/main.yml
# Main entry point for os_linux_base role
- name: Include Debian-family tasks
  include_tasks: debian.yml
  when: ansible_os_family == "Debian"

- name: Include RedHat-family tasks
  include_tasks: redhat.yml
  when: ansible_os_family == "RedHat"
Debian logic
roles/os_linux_base/tasks/debian.yml
# Base configuration for Debian-based systems
- name: Update APT cache
  apt:
    update_cache: true
    cache_valid_time: 3600

- name: Install common packages on Debian systems
  apt:
    name: "{{ linux_common_packages + linux_base_packages_extra }}"
    state: present
RedHat logic
roles/os_linux_base/tasks/redhat.yml
# Base configuration for RedHat-based systems
- name: Ensure DNF metadata is up to date
  dnf:
    update_cache: true

- name: Install common packages on RedHat systems
  dnf:
    name: "{{ linux_common_packages + linux_base_packages_extra }}"
    state: present
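A play can then apply the role to both families at once and extend the package list as needed. A minimal sketch (the play name and extra packages are illustrative; fact gathering must stay enabled, which is the default, so ansible_os_family is available for the include logic):

```yaml
# Hypothetical play applying the Linux baseline to all Linux hosts
- name: Apply Linux baseline
  hosts: linux
  become: true
  gather_facts: true
  roles:
    - role: os_linux_base
      vars:
        linux_base_packages_extra:
          - git
          - tmux
```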
Step 8: Create Central Template Mapping
Each operating system family (Debian, RedHat, Windows Server, Windows Client) uses its own golden image stored in vSphere. To avoid hardcoding template names inside the playbooks, the framework stores them in a central variable file.
Create the following file:
vars/templates.yml
# Template names used for OS provisioning
linux_debian_template: "ubuntu-22-base"
linux_redhat_template: "rockylinux-9-base"
win_server_template: "win2022-golden"
win_client_template: "win10-template-no-sysprep"
Why store template names separately?
- Playbooks remain clean and OS-agnostic
- Changing a template requires editing only one file
- Different environments (lab/test/prod) can override templates if needed
- Provisioning logic becomes reusable across multiple systems
During provisioning, the appropriate template is selected by referencing these variables instead of embedding string literals.
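A provisioning play can load this mapping via vars_files and reference the variables directly. Sketched below; the play header is illustrative, and the relative path assumes the playbook lives under playbooks/provision/:

```yaml
# Hypothetical excerpt: load the central template mapping and
# select the Debian golden image by variable, not by string literal
- name: Provision a Debian VM
  hosts: vcenter
  vars_files:
    - ../../vars/templates.yml
  tasks:
    - name: Clone from the Debian golden image
      community.vmware.vmware_guest:
        template: "{{ linux_debian_template }}"
        # ...remaining clone parameters omitted for brevity
```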
Using a Python virtual environment for Ansible and installing collections
On most Linux distributions, Ansible packages are available directly from the system repositories. While this is convenient, it often leads to version mismatches between ansible-core and external collections such as community.vmware. To keep this framework stable and reproducible, all Ansible commands are executed from a dedicated Python virtual environment.
This has a few important advantages:
- we control the exact ansible-core version used by the project
- we can install additional Python libraries (pyvmomi, requests, etc.) without touching the system Python
- Ansible collections are installed in an isolated environment, avoiding conflicts with system-wide installations
The setup is straightforward. From the repository root:
cd ~/infra-ansible
python3 -m venv .venv
source .venv/bin/activate
Next, upgrade pip and install the required Python packages inside the virtual environment:
pip install --upgrade pip
pip install ansible-core pyvmomi requests
With Ansible installed in the virtualenv, you can now install the collections defined earlier in collections/requirements.yml:
ansible-galaxy collection install -r collections/requirements.yml
From this point on, every Ansible command related to this project should be executed from the virtual environment:
source .venv/bin/activate
cd ~/infra-ansible
ansible-playbook ...
If you skip the virtual environment and run Ansible from the system packages, you may run into compatibility issues between ansible-core and the community.vmware collection, leading to errors such as missing support modules or StrictVersion import problems.
Using Ansible Vault for Secure Credentials
When you start automating VMware and Windows deployments, there’s one thing you absolutely don’t want floating around in plain text: passwords.
vCenter passwords, Windows local admin passwords, domain join credentials, service accounts… All perfect candidates for leaking into Git history and ruining your day.
That’s where Ansible Vault comes in.
Vault lets you store sensitive variables in encrypted files and keep your repository clean, safe, and shareable. Everything looks like YAML, but the content is encrypted. Ansible decrypts it automatically at runtime when you pass the vault password.
What We Store in Vault
In this project, the Vault holds:
- the vCenter administrator password (vault_vcenter_password)
- the Windows local administrator password, used for WinRM bootstrap and later for user creation (vault_win_admin_password)
Basically: anything that makes a security auditor raise an eyebrow.
Where We Store the Vault and Why
All sensitive values live in:
inventories/lab/group_vars/all/vault.yml
Putting the Vault inside group_vars/all/ has a big advantage:
Ansible loads it automatically for every playbook and every host.
This means:
- we don't need vars_files: everywhere
- ad-hoc commands like ansible HOST -m win_ping also get access to the encrypted values
- credentials stay in one central, predictable location
This layout keeps things clean, avoids accidental leaks in inventory files, and makes the entire project easier to reason about — exactly what you want when working with VMware automation, WinRM bootstrap, and multi-OS provisioning.
Creating the Vault File
We initialize the vault like this:
ansible-vault create inventories/lab/group_vars/all/vault.yml
Ansible asks for a vault password (you’ll use the same one later with --ask-vault-pass).
Inside the file, we add our encrypted variables:
vault_vcenter_password: "YourVCPasswordHere"
vault_win_admin_password: "YourWindowsPasswordHere"
Once saved, the file on disk is fully encrypted and unreadable.
If we ever need to edit the values:
ansible-vault edit inventories/lab/group_vars/all/vault.yml
If you want to view it (read-only):
ansible-vault view inventories/lab/group_vars/all/vault.yml
Using Vault Variables in Playbooks
We never store plaintext passwords in playbooks or inventory.
Instead, we reference the encrypted values like any other variable:
vcenter_password: "{{ vault_vcenter_password }}"
ansible_password: "{{ vault_win_admin_password }}"
Ansible decrypts everything on the fly when you run:
ansible-playbook ... --ask-vault-pass
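For unattended runs (for example in CI), the interactive prompt can be replaced by a vault password file. A sketch, assuming the file is kept out of Git (add it to .gitignore) and readable only by the automation user:

```shell
# Write the vault password to a local file ("my-vault-password" is a placeholder)
printf '%s\n' 'my-vault-password' > vault_pass.txt
chmod 600 vault_pass.txt
# Then run playbooks non-interactively with:
#   ansible-playbook ... --vault-password-file vault_pass.txt
```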
Wrap-Up
In this first part, we focused entirely on building the foundation of our automation framework: a clean repository layout, structured inventories, OS-specific group variables, secure vCenter settings, and a central mapping for all base templates. With these pieces in place, the project is now ready for actual provisioning work.
In the next post, we’ll take the first real step: deploying a Windows 10 virtual machine from a vSphere template using the structure we created here. We’ll define the VM’s metadata, assign a static IP address, set the correct network adapter, and build the full provisioning playbook.
Now that the framework is prepared, we can finally start putting it to use. Stay tuned for Part 2: Provisioning a Windows 10 Client from a vSphere Template.
