OAuth, tokens, and PowerShell

Google, Microsoft, Amazon, Box, Twitter, Trello, Facebook… what do they all have in common? OAuth authentication workflows.

Take your pick of languages to find samples for authenticating against all of these endpoints, and you will still have to pick between SDKs, NuGet packages, and library after library to pull it all together. Sure, those options are great for application developers. But I’m not a developer. I’m a system administrator. An automation engineer. I have no interest in loading assemblies into core infrastructure that changes day to day…

Enter PowerShell, with Invoke-WebRequest and Invoke-RestMethod. With these two commands as the base, and a bit of ingenuity, you can make all the calls needed to authenticate yourself and start working with a site’s API endpoints. The added benefit of doing it this way? You can use the same code on PowerShell 6 on Linux or Windows.

1. Figuring out the authentication flow.

OAuth 2.0 authentication flows are configured ahead of time by the vendor you are connecting to. Systems will vary, but the flow you end up with is generally answered by a few questions. Are you authenticating as a user every time you need to access a service? What about automating server-to-server work? Do you want to prompt users for consent once, assume consent from SSO-referred connections, or require consent every single time you request authentication? OAuth supports just about any combination of these, but isn’t necessarily configured for every combination to be consumed. Also, most user-based authentication flows support the use of refresh tokens. These special tokens can be used to authenticate in a never-ending loop without proving the requester is still valid. The idea is that you already went through the authentication, authorization, and validation process once; there is no need to do it again, since only the authorized account holder would have received the refresh token.

Are you authenticating as a service account or an automation system? OAuth 2.0 is also set up to support authentication by signing a request with a private key. Vendors vary: Google will provide a .p12 file, while Box requires you to create your own key pair and upload the public key. Either way, you use this private key to digitally sign a specifically formatted request to get a token, and it can all be done with no user interaction.

2. The gotchas of doing OAuth tokens

In a user-based authentication flow, at some point you will need to make a request in a web browser. That works great if you are on Linux and have access to the selenium-driver, but in a Windows world it can get tricky. Invoke-WebRequest gets most of the way there, but just not far enough in complex vendor environments. Basic auth and form auth frequently don’t work well here either. As mentioned previously about refresh tokens, though, it is possible to do this web browser process once, gather a refresh token, and then carry on for as long as you keep your refresh token uncompromised.
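As a taste of why that works so nicely: once you hold a refresh token, getting a fresh access token is just a single POST to the token endpoint. Here is a minimal sketch; the endpoint URL and client values are placeholders for whatever your vendor issued.

# Redeem a refresh token for a new access token -- endpoint and client values are placeholders
$body = @{
    client_id     = $clientId
    client_secret = $clientSecret
    refresh_token = $refreshToken
    grant_type    = 'refresh_token'
}
$response    = Invoke-RestMethod -Uri 'https://example.com/oauth2/token' -Method Post -Body $body
$accessToken = $response.access_token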

Getting an access token via a JSON Web Token (JWT) request alone is more complicated, but it is the general process for doing a service-to-service OAuth request. Google it, and you will get lots of explanations of all the bits and pieces. You’ll also get very few explanations of how to actually generate one.
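So, as a rough sketch (not the UMN modules’ exact code), here is one way to assemble and sign a JWT in PowerShell. The issuer, scope, audience, key path, and key password below are placeholders you would swap for your vendor’s values.

# A minimal sketch of building a service-account JWT; values marked "placeholder" are assumptions
function ConvertTo-Base64Url ([byte[]]$Bytes) {
    # Standard Base64, then make it URL-safe and strip padding per RFC 7515
    [Convert]::ToBase64String($Bytes).Replace('+','-').Replace('/','_').TrimEnd('=')
}
$now = [DateTimeOffset]::UtcNow.ToUnixTimeSeconds()
$header = @{ alg = 'RS256'; typ = 'JWT' } | ConvertTo-Json -Compress
$claims = @{
    iss   = 'my-service-account@example.com'           # placeholder issuer
    scope = 'https://www.example.com/auth/some.scope'   # placeholder scope
    aud   = 'https://example.com/oauth2/token'          # placeholder token endpoint
    iat   = $now
    exp   = $now + 3600
} | ConvertTo-Json -Compress
$unsigned = (ConvertTo-Base64Url ([Text.Encoding]::UTF8.GetBytes($header))) + '.' +
            (ConvertTo-Base64Url ([Text.Encoding]::UTF8.GetBytes($claims)))
# Sign header.claims with the private key from a .p12/.pfx (path and password are placeholders)
$cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList 'C:\keys\service.p12', 'notasecret', 'Exportable'
$rsa  = [System.Security.Cryptography.X509Certificates.RSACertificateExtensions]::GetRSAPrivateKey($cert)
$sig  = $rsa.SignData([Text.Encoding]::UTF8.GetBytes($unsigned),
        [System.Security.Cryptography.HashAlgorithmName]::SHA256,
        [System.Security.Cryptography.RSASignaturePadding]::Pkcs1)
$jwt = $unsigned + '.' + (ConvertTo-Base64Url $sig)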

3. Code some stuff – go go PowerShell

In the UMN-Google, UMN-Azure, and UMN-Trello repos at https://github.com/umn-microsoft-automation you will find functions that do the heavy lifting of getting access to various API endpoints.

In any of these cases there is a general process flow:

  1. Gather who is requesting access to what
  2. Take that information and go to a claims endpoint to verify authentication. This is generally done in a web browser. These PowerShell functions are set up to do an IE pop-up to let a user log in for verification.
  3. Take the claim received, if verified, and go to the token endpoint to exchange it for an access token and possibly a refresh token.

A. function ConvertTo-Base64URL
This is a core component that encodes JSON data into the Base64URL-encoded strings needed when using certificates to sign a JWT request.

B. function Get-xxOAuthTokenUser (where xx = G for Google, or Azure)
This function assumes that you have done the work ahead of time to create a Google project or Azure application endpoint. Mostly, you just authenticate in a web browser to get an authorization code that is exchanged later for your tokens.

C. function Get-xxOAuthTokenService (where xx = G for Google, or Azure)
This function uses a JWT request signed with a private key (Google) or secret key (Azure) to get an access token. Service-to-service flows can go directly to the token endpoint with a properly formulated JWT request.
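As a rough sketch of that last hop (not the modules’ exact code; the endpoint is a placeholder, and the JWT-bearer grant type shown is what Google uses while other vendors may differ), exchanging the signed JWT for an access token looks something like this:

# Exchange a signed JWT ($jwt) for an access token -- service-to-service flow
$body = @{
    grant_type = 'urn:ietf:params:oauth:grant-type:jwt-bearer'
    assertion  = $jwt
}
$accessToken = (Invoke-RestMethod -Uri 'https://example.com/oauth2/token' -Method Post -Body $body).access_token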

Turning Ping On/Off With An SCCM Configuration Item


The best way I’ve found to disable ping using a configuration item script is the HNetCfg.FwMgr COM object. Our fleet is Windows 7+ and Server 2003 (ick)+, so the solution needs to be more robust than the built-in firewall cmdlets that arrived with Server 2012.

The scripts are quite simple but we rely heavily on the following core code:

# Get the firewall manager COM object
$Firewall = New-Object -ComObject HNetCfg.FwMgr

# Get the domain policy (0)
$Policy = $Firewall.LocalPolicy.GetProfileByType(0)

# Get the settings for ping
$IcmpSettings = $Policy.IcmpSettings

This code gets the firewall manager COM object, grabs the domain firewall policy, and then pulls the ICMP settings for that policy. The COM object exists on older versions of Windows as well, so the approach is flexible across the fleet.

Now, there is really only one setting we care about in the ICMP settings: AllowInboundEchoRequest.

If it’s $true then ping is turned on; if it’s $false then ping is disabled.

Thus, the discovery script is fairly simple: all we have to do is call .ToString()
on $IcmpSettings.AllowInboundEchoRequest. If we want to enable or disable
ping, all we have to do is add an = $true or = $false to that statement.

Below are the full scripts I used.

Discovery

$Firewall = New-Object -ComObject HNetCfg.FwMgr

$Policy = $Firewall.LocalPolicy.GetProfileByType(0)

$IcmpSettings = $Policy.IcmpSettings

# Output the current state so SCCM can evaluate compliance
$IcmpSettings.AllowInboundEchoRequest.ToString()


Remediation – Turn Ping On

$Firewall = New-Object -ComObject HNetCfg.FwMgr

$Policy = $Firewall.LocalPolicy.GetProfileByType(0)

$IcmpSettings = $Policy.IcmpSettings

$IcmpSettings.AllowInboundEchoRequest = $true

Remediation – Turn Ping Off

$Firewall = New-Object -ComObject HNetCfg.FwMgr

$Policy = $Firewall.LocalPolicy.GetProfileByType(0)

$IcmpSettings = $Policy.IcmpSettings

$IcmpSettings.AllowInboundEchoRequest = $false

Commit to GitHub Using the API from PowerShell

Continuing my search for doing automation in PowerShell with as few dependencies as possible, I turned to committing code to GitHub via the API.  I found various posts that outlined the basic process but not very good examples of getting it done, and certainly none in PowerShell.  So I extended the PowerShell module I had already been working on, UMN-GitHub, to add support.  The short version is: just run the function Update-GitHubRepo.

The reference for master is "refs/heads/master", however I would recommend committing to a test branch at a minimum and doing a pull request after that.  GitHub has plenty of content on best practices around all of that, which is outside the scope of this post. (Use Get-GitHubRepoRef to get a list of refs, then choose the one you want.)

Let’s dig in a bit to what the function actually does.

Run $headers = New-GitHubHeader [token or credential switch] first to get the header you will need.

# Get reference to head of ref and record Sha
$reference = Get-GitHubRepoRef -headers $headers -Repo $Repo -Org $Org -server $server -ref $ref
$sha = $reference.object.sha
# get commit for that ref and store Sha
$commit = Get-GitHubCommit -headers $headers -Repo $Repo -Org $Org -server $server -sha $sha
$treeSha = $commit.tree.sha
# Create blob
$blob = New-GitHubBlob -headers $headers -Repo $Repo -Org $Org -server $server -filePath $filePath
# create new Tree
$tree = New-GitHubTree -headers $headers -Repo $Repo -Org $Org -server $server -path $path -blobSha $blob.sha -baseTree $treeSha -mode 100644 -type 'blob'
# create new commit
$newCommit = New-GitHubCommit -headers $headers -Repo $Repo -Org $Org -server $server -message $message -tree $tree.sha -parents @($sha)
# update head to point at new commit
Set-GitHubCommit -headers $headers -Repo $Repo -Org $Org -server $server -ref $ref -sha $newCommit.sha

One thing to note: if you open up New-GitHubBlob, you’ll notice I converted the file contents to Base64.  Since the contents need to be sent as text via JSON data, there wasn’t a good way to define a boundary and say “this is the file contents” without it blowing up.  It’s not like you can attach a file.  So converting it to Base64 solved a lot of problems.
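To give a rough idea of what that looks like (a sketch of the same idea, not the module’s exact code; $headers, $server, $Org, $Repo, and $filePath are carried over from the snippet above, and the api/v3 prefix assumes GitHub Enterprise, use api.github.com for github.com):

# Read the file as bytes, Base64 it, and POST it to the create-blob endpoint
$bytes = [System.IO.File]::ReadAllBytes($filePath)
$body  = @{ content = [Convert]::ToBase64String($bytes); encoding = 'base64' } | ConvertTo-Json
$blob  = Invoke-RestMethod -Uri "https://$server/api/v3/repos/$Org/$Repo/git/blobs" -Method Post -Headers $headers -Body $body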

This first version of New-GitHubTree is not very advanced and only takes in one file.  Future versions may take in multiple files.

PowerShell and the VMware 6.5 REST API

VMware has released a REST API as an alternative to PowerCLI and their other SDKs.  One of the best ways to peek at the API is via the API Explorer.  In a browser, open the web page on your vCenter server/appliance: https://<vcenter-FQDN>/apiexplorer/

I found plenty of references to using the API in other languages/formats, but as usual PowerShell examples were harder to come by.  Also, in general I found the examples to be overly simplified and VMware’s explanation of what the data should look like to be lacking.  So after a fair amount of trial and error I put together a number of basic functions and put them in a module.  They are public on GitHub.

The module is not (as of yet, anyway) a complete covering of everything.  It covers the basics around creating and removing VMs, tagging, and of course authenticating.  The number of options around things like creating a VM is enormous, so I did use some defaults.  That being said, if you look at the code in the functions along with the API Explorer you should have a good basis for making changes so it does what you want.  This applies to all the functions.  As with most REST APIs, once you figure out the basic structure of the various methods (GET, POST, DELETE, PUT, PATCH) and how to construct the body (in this case it’s just JSON), expanding to consume any part of the API becomes much easier.
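For a feel of that structure, here is a hedged sketch (not the module’s exact functions; $vCenter and $cred are assumed to be supplied by you) of authenticating and then listing VMs:

# Authenticate with Basic auth to get a session token, then pass it on subsequent calls
$pair    = "$($cred.UserName):$($cred.GetNetworkCredential().Password)"
$basic   = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($pair))
$session = Invoke-RestMethod -Uri "https://$vCenter/rest/com/vmware/cis/session" -Method Post -Headers @{ Authorization = "Basic $basic" }
$headers = @{ 'vmware-api-session-id' = $session.value }
# List VMs -- the same header works for the other /rest/vcenter endpoints
(Invoke-RestMethod -Uri "https://$vCenter/rest/vcenter/vm" -Method Get -Headers $headers).value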


Emergency! We need that patched!

Infrastructure is never a perfect world when it comes to Microsoft patch management. Is your WSUS service healthy? Is it integrated to SCCM as a software update point? Are you free of errors, but still not sure? Did that security patch really go out?

Your security monitoring, SCOM, or log analytics tool might tell you otherwise; and what are you left to do?

Do you believe the SCCM deployment reports, or your monitoring that says a critical patch is missing?

No matter the what or the why – sometimes there comes a point where you just need to brute-force install a patch. And you need it, like, NOW.

You’ve got options if you’re prepared with DSC, but this is production, and we need them patched now – reboot later! Business requirements… alas.

Hopefully you can pull a dynamic list of systems from SCOM, SCCM, VMware, Hyper-V, AD, or wherever… and just pipe it through. Obviously, if you need to reboot now as well, that is easy enough to modify in this little one-off.

1. Get a list of systems you need to apply the patch to.
2. Extract the .cab of the KB you downloaded from Microsoft.
3. This assumes you have remote PowerShell / admin access.

$listOfComputers | ForEach-Object {
    $session = New-PSSession -ComputerName $_
    # Copy the extracted .cab to where dism expects it below
    Copy-Item 'path to patchKB.cab' -Destination 'c:\windows\temp\patchKB.cab' -ToSession $session

    Invoke-Command -Session $session -ScriptBlock {
        ## Check if KB is already installed
        $KB = Get-HotFix | Where-Object { $_.HotFixID -eq 'KB#####' }
        if (!$KB) {
            ## Just in case you need to verify the OS version ##
            $os = (Get-WmiObject -Class Win32_OperatingSystem).Version

            ## dism... I know -- allows for remote execution using no new processes, or EULA to the KB ##
            dism.exe /online /add-package /PackagePath:c:\windows\temp\patchKB.cab /norestart
        }
        else { Write-Host 'KB previously installed' }
    }
    # Clean up the remote session
    Remove-PSSession $session
}

DIY Continuous Integration with GitHub Webhooks, Azure, and Docker Containers Part 1

There are plenty of tools out there focused on CI (Continuous Integration), but not all of them play nice with Microsoft, and some are simply inaccessible ($$, IT management, etc.).  This is one approach I’ve used based on the tools available to me.  Maybe some part of it will be of value to you.

The main components are GitHub, Azure runbooks, and Docker.  The overall process is this: you make a commit in GitHub, then GitHub sends a bunch of data over to Azure to be processed.  Azure spins up a Docker container and dumps your code onto the container.  I chose Azure for two reasons.  The first is that the cost to run the runbooks is very small: you get 500 minutes free, and after that it’s still only $0.002/minute.  The second reason is that all the work has been done for you by Azure.  You don’t have to set anything up or manage another server.  We all have better things to do.

I’m actually going to start with the Azure runbook, because without that you can’t really create the webhook.  Create a PowerShell runbook and edit it.  Copy and paste the following for the first few lines.

param ([object]$WebHookData)
$WebhookBody = $WebhookData.RequestBody
$Inputs = ConvertFrom-JSON $WebhookBody
#$Inputs  # uncomment for debugging; it will print to the output section of the job in Azure
$authorEmail = $Inputs.head_commit.author.email
$ref = (($Inputs.ref).Split('/'))[-1]
$user = $Inputs.head_commit.committer.name
$message = $Inputs.head_commit.message
$message += "`n"
$files = $Inputs.head_commit.modified
$files = $files + $Inputs.head_commit.added
"Files changed/added $files"
$repo = $Inputs.repository.name
$org = $Inputs.repository.owner.name

There is plenty of additional information in $WebhookData.RequestBody, but this list grabs some of the key elements you need: what changed, by whom, and what org/repo the changes were made in.  It is also a good starting point.  From here we need to add a listener, aka a webhook.  Go ahead and publish what you have so far.  Back on the main page for the runbook, under “RESOURCES”, is a link to Webhooks.  Click on that, then click to add a webhook.  Create a new webhook and give it a name.  This is key: YOU MUST COPY THE URL NOW.  Once you create it, you can NOT go back and see the URL again.  Paste it someplace handy so you don’t lose it.  Almost done.  Click OK and then click “Modify run settings”.  The default is to run on Azure.  That is a fine alternative, and you can build automation around Azure VMs and apply most of what follows.  However, since I have access to on-prem equipment, it’s cheaper to run everything locally.  For that you will need to set up a Hybrid Runbook Worker (that process is outside the scope of this post).  Don’t worry, you can finish creating the webhook now, leaving it on Azure, and come back later to switch over to your Hybrid Worker.

Now on to GitHub.  To set up a repository webhook on GitHub, head over to the Settings page of your repository and click on Webhooks & services.  After that, click on Add webhook.  Take the URL you copied earlier and paste it into “Payload URL”.  You can leave the rest as defaults, but make sure Content type is application/json.  Click on “Add webhook”.

Now you’re ready to start some basic testing.  If you don’t already have a dev/test branch for your repo, go ahead and create one.  Make a commit to your test branch.  Head back over to the Azure runbook.  This time under “RESOURCES” click on “Jobs”.   You should see a completed job (possibly In Queue or Running if you are quick).  Click on the Job.

Now you can click on the Output button and you should see some useful information.  If you go back to your runbook editor and start removing comments you will get even more information.  Remove the comment from $Inputs to see everything.  Use this part to get familiar with the data coming from github to determine what you consider to be valuable.

Part II will cover some decision making and spinning up the Container.

Install the Chef Client on a Windows Container and Connect to the Chef Server

With a traditional Windows machine, the usual “knife bootstrap windows winrm …” approach to bootstrapping works fine.  It is not so straightforward for a container, however.  When you first spin up a container (assume a blank Server Core image) you have two methods to interact with it: direct PowerShell from the container host, or the docker client, neither of which works with “knife” (as far as I know).  You also don’t know the admin password.  There are different methods of solving this problem; this is just one example that I have found to be simple and easy.  I am assuming you already have a Chef server set up and have a minimal amount of familiarity with it.

Step one: Download the validation key into a file named “validation.pem” and store it in an empty folder.  Then create a file ‘first-boot.json’ in that same folder.
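A minimal version of its contents looks something like this (the recipe name below is just a placeholder for whatever you want applied on first converge):

{
  "run_list": [ "recipe[example_cookbook]" ]
}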


Of course you can add additional parameters to this file as you see fit.

You’ll also need to know your ‘chef_server_url’ and ‘validation_client_name’ (both of which you can get from “Generate knife Config” in the Chef server).

Step two:  Spin up a container (more details on this here)

$ps = "Invoke-WebRequest -Uri 'https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/master/windows-server-container-tools/Wait-Service/Wait-Service.ps1' -OutFile 'c:\Wait-Service.ps1';c:\Wait-Service.ps1 -ServiceName WinRm -AllowServiceRestart"
($cid = docker run -d microsoft/windowsservercore powershell.exe -executionpolicy bypass $ps)

Step three: Install the Chef client and connect it to the Chef server.  There are three lines below to update with your specific info.

Invoke-Command -ContainerId $cid -RunAsAdministrator -ScriptBlock{Invoke-WebRequest -uri "https://omnitruck.chef.io/install.ps1" -OutFile c:\install.ps1;c:\install.ps1;Install}
$cPath = 'c:\'
$local = '' # update this with the path to the folder from step one (validation.pem and first-boot.json)
docker cp -L $local $cid`:$cPath
$chefURL = '' # update with your chef server URL
$validationClientName = '' # update this with your info
Invoke-Command -ContainerId $cid -RunAsAdministrator -ScriptBlock{
# Build client.rb on the container, then run chef-client with the first-boot run list
@"
chef_server_url '$using:chefURL'
validation_client_name '$using:validationClientName'
file_cache_path 'c:/chef/cache'
file_backup_path 'c:/chef/backup'
cache_options ({:path => 'c:/chef/cache/checksums', :skip_expires => true})
node_name '$env:COMPUTERNAME'
log_level :info
log_location STDOUT
"@ | Out-File 'c:\chef\client.rb' -Encoding utf8 -Force
chef-client -c c:/chef/client.rb -j c:/chef/first-boot.json
}

That's it.  Run the following if you want to inspect the client.rb that was created, for troubleshooting:

$name = (((docker ps --no-trunc -a| Select-String $cid).ToString()).Normalize()).Split(" ")[-1]
$ps = "get-content c:\chef\client.rb"
docker exec $($name) powershell.exe -executionpolicy bypass $ps