As part of a recent piece of work deploying a Modern Workplace solution for a customer, I was asked how we could deploy their corporate fonts to all machines so that corporate branding was maintained across the documents produced in the organisation. There are several scripts online that allow you to deploy an individual font (Deploying an embedded file (FONT) in a Powershell script through Intune MDM), however I had two font families with a total of 15 fonts that needed to be deployed. Encoding these files into Base64 would exceed the size limit of the PowerShell scripts that the Intune Management Extension can execute, so I had to look for an alternative.
The logical solution was to build an “application” that deploys the fonts using the Win32 app functionality in Intune and then push it as Required to the Intune managed computers. I would need several components for this:
An installer to copy the fonts into the right target location on the devices (whilst you can copy them to C:\Windows\Fonts, there is a bit more configuration needed to complete the install)
An uninstaller to remove the fonts if they are no longer required (this is more for completeness and future clean-up than a hard requirement for the customer)
The fonts in a logical format so they can be iteratively installed.
The install and uninstall process was quickly solved when I found this blog on Adding and Removing Fonts with Windows PowerShell. The script installs either a single font or all fonts in a particular folder. I made a tweak so that it works recursively over my install folder, allowing me to package all the fonts in a single application, by changing the relevant lines in the source script from this:
elseif ((Test-Path $path -PathType Container) -eq $true)
{
    $bErrorOccured = $false
    foreach ($file in (Get-Childitem $path)) {
        if ($hashFontFileTypes.ContainsKey($file.Extension)) {
            $retVal = Add-SingleFont (Join-Path $path $file.Name)
            if ($retVal -ne 0) {
                $bErrorOccured = $true
            }
        }
        else {
            "`'$(Join-Path $path $file.Name)`' not a valid font file type"
            ""
        }
    }
    If ($bErrorOccured -eq $true) {
        exit 1
    }
    else {
        exit 0
    }
}
to this:
elseif ((Test-Path $path -PathType Container) -eq $true) {
    $bErrorOccured = $false
    Write-Host $path
    foreach ($file in (Get-Childitem $path -Recurse)) {
        if ($hashFontFileTypes.ContainsKey($file.Extension)) {
            $retVal = Add-SingleFont $($file.Fullname)
            if ($retVal -ne 0) {
                $bErrorOccured = $true
                Write-Output "Install of $($file.name) Failed"
            }
            else { Write-Output "Install of $($file.name) Successful" }
        }
        else {
            "`'$(Join-Path $path $file.Name)`' not a valid font file type"
            ""
        }
    }
    If ($bErrorOccured -eq $true) { exit 1 }
    else { exit 0 }
}
To make the install process easier I have created some single-line scripts that take all the files in the Fonts subfolder and call the Add-Font or Remove-Font script against them, as sketched below.
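As an illustration, the install wrapper can be a single line that points the font script at a Fonts folder sitting next to it, with a matching line for removal. This is only a sketch: it assumes the Add-Font.ps1 and Remove-Font.ps1 scripts from the blog above sit alongside the wrapper and accept a path parameter (they work on the $path variable shown in the snippets).

# Install.ps1 - point the Add-Font script at the Fonts subfolder packaged alongside it
& "$PSScriptRoot\Add-Font.ps1" -Path "$PSScriptRoot\Fonts"

# Uninstall.ps1 - the equivalent single line for removal
& "$PSScriptRoot\Remove-Font.ps1" -Path "$PSScriptRoot\Fonts"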
Putting this all together I used the Intune Win32 app packager to create a file that I can load into Intune to deploy to my users.
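For reference, the wrapping command with the Win32 app packager (the Win32 Content Prep Tool, IntuneWinAppUtil.exe) looks something like the line below; the folder paths are placeholders for wherever you keep the source files and want the .intunewin output.

# -c = source folder, -s = setup file, -o = output folder for the generated .intunewin package
.\IntuneWinAppUtil.exe -c "C:\Source\FontDeployment" -s "Install.ps1" -o "C:\Output"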
Want to use this yourself? You can download the script components from the following link. All you need to do is add the font files into the Fonts subfolder, run the wrapping process and then upload to Intune. font-deployment
Hope you find this useful to further customise your Modern Workplace deployments.
In Michael Niehaus’ recent blog on Configuring Windows 10 defaults via Windows Autopilot using an MSI he talked about the ability to set the Time Zone of the device based on a variable in the Config.xml file. One of the comments on the blog asked whether it would be possible to set the time zone based on where the device was at the time of setup rather than based on an attribute in the file.
Whilst Windows 10 has a feature to detect the time zone automatically, I have found that on occasion this doesn’t work out of the box and the time zone remains set to a region that isn’t accurate for where the device is. This should be relatively simple to automate if we can geo-locate the public IP address and match those coordinates to a valid time zone. For this I am using two web services:
https://ipstack.com – this will map the public IP address to a geo-location including coordinates; you get access to 10,000 queries a month for free
Bing Maps API – this will return the correct time zone in a valid Windows format that can be used in the script to set the time zone on the device; there is a free tier that also gives you 10,000 queries per month without charge
The Script
The script will execute the following in sequence:
Attempt to get the coordinates of the IP address you are using via the IPStack API
If successful, attempt to find the time zone from the Bing Maps API
Compare the value of the current time zone on the machine with the response from the Bing Maps API
Change the time zone on the machine
The only variables you should need to change in the script are the two lines that contain your API keys, helpfully called ipStackAPIKey and bingMapsAPIKey.
You can now choose either to save the file as a script to run once in Intune, or to incorporate this into Michael’s MSI. If you wanted the script to run every time the machine starts up, you could adapt the logon script from my recent post on Mapping legacy file shares for Azure AD joined devices.
<#
SCRIPT: Set-TimeZoneGeoIP.ps1
DESCRIPTION: Based on the public egress IP obtain geoLocation details and match to a time zone
Once obtained set the Time Zone on the destination computer
#>
<#
SETUP ATTRIBUTES
#>
$logFile = "$env:ProgramData\Intune-PowerShell-Logs\TimeZoneAPI-" + $(Get-Date).ToFileTimeUtc() + ".log"
Start-Transcript -Path $LogFile -Append
$ipStackAPIKey = "##########" #used to get geoCoordinates of the public IP. get the API key from https://ipstack.com
$bingMapsAPIKey = "##########" #Used to get the Windows TimeZone value of the location coordinates. Get the API key from https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bingmaps.mapapis
<#
Attempt to get Lat and Long for current IP
#>
Write-Output "Attempting to get coordinates from egress IP address"
try {
$geoIP = Invoke-RestMethod -Uri "http://api.ipstack.com/check?access_key=$($ipStackAPIKey)" -ErrorAction SilentlyContinue -ErrorVariable $ErrorGeoIP
}
Catch {
Write-Output "Error obtaining coordinates or public IP address, script will exit"
Exit
}
Write-Output "Detected that $($geoIP.ip) is located in $($geoIP.country_name) at $($geoIP.latitude),$($geoIP.longitude)"
Write-Output "Attempting to find Corresponding Time Zone"
try {
$timeZone = Invoke-RestMethod -Uri "https://dev.virtualearth.net/REST/v1/timezone/$($geoIP.latitude),$($geoIP.longitude)?key=$($bingMapsAPIKey)" -ErrorAction Stop -ErrorVariable $ErrortimeZone
}
catch {
Write-Output "Error obtaining Timezone from Bing Maps API. Script will exit"
Exit
}
$correctTimeZone = $TimeZone.resourceSets.resources.timeZone.windowsTimeZoneId
$currentTimeZone = $(Get-TimeZone).id
Write-Output "Detected Correct time zone as $($correctTimeZone), current time zone is set to $($currentTimeZone)"
if ($correctTimeZone -eq $currentTimeZone) {
Write-Output "Current time zone value is correct"
}
else {
Write-Output "Attempting to set timezone to $($correctTimeZone)"
Set-TimeZone -Id $correctTimeZone -ErrorAction SilentlyContinue -ErrorVariable $ErrorSetTimeZone
Write-Output "Set Time Zone to: $($(Get-TimeZone).id)"
}
Stop-Transcript
The script will output to a folder in %PROGRAMDATA% called Intune-PowerShell-Logs.
Hopefully you will find this useful to configure time zone information on your Modern Workplace machines.
More and more of my customers are moving their devices from a traditional IT model to a Modern Desktop build directly in Azure AD, managing devices via Microsoft Intune rather than Group Policy or System Center Configuration Manager. The move to this modern approach of delivering IT services usually sits alongside moving the organisation’s unstructured file data to OneDrive and SharePoint Online, which is the logical place to store this data rather than leaving it sat on a file server in an office or datacentre.
What if, however, you still have a large volume of data that remains on your on-premises file servers? Users will still require access to these shares, but there is no native way of connecting to file shares within the Intune console. This is the challenge I have had for a customer in recent weeks, and I have developed a couple of PowerShell scripts that can be run to map drives when a user logs in, supporting both dedicated and shared devices.
The Challenge
Looking back at a legacy IT approach, drive mappings were done through either Group Policy Preferences or login scripts such as batch or KIX. Both processes follow a similar method:
User signs into a device
GPP or login script runs containing a list of drive mappings aligned to the security groups of users who should have access
If the user signing into the device is in the relevant groups the drive letter is mapped to the shared location
This method has worked for years and IT admins maintain one or the other process to give users access to corporate data. If we now look forward to the modern managed IT environment, there are a few issues when working with the legacy file servers:
There is no native construct in Intune that maps UNC file paths for users
Whilst you can run a PowerShell script containing a New-PSDrive cmdlet, Intune will only execute it once on the device and never again.
You may think that the second point isn’t an issue: simply create a couple of scripts to map the network drives to the file shares and they will run once and remain mapped. But what if the devices are shared and multiple users need to sign into the computer, or you need to amend the drive mappings? We needed a solution that could map drives at user sign-in and be easy to change as the organisation moves away from file servers.
The Solution
As with most things, I started by looking at what was already on the Internet and quickly came across these blogs from Nicola Suter and Jos Lieben, but neither quite did what I needed for my customer (they have >100 different network drives), so I set about writing scripts that would deliver what the customer needed.
My requirements for the new drive mapping script were as follows:
Work natively with Azure AD joined devices
Support users on dedicated or shared workstations
Process the drive mappings sequentially as a traditional GPP or login script would
Let’s start with the actual drive mapping script itself.
Drive mapping script
For the drive mapping script to work, it needs to run silently and also enumerate the groups that the user has access to. Sounds easy, but PowerShell and Azure AD don’t natively have a way of matching these. I settled on making use of Microsoft’s Graph API listMemberOf function, as this can be called to pull the groups that a user is a member of into a variable that the drive mapping can work with. The function requires a minimum permission of Directory.Read.All, which needs to be granted through an App Registration in Azure AD. Step forward my next piece of web help in the form of Lee Ford’s blog on using Graph API with PowerShell.
Configure Azure AD
First sign into the Azure Portal and navigate to Azure AD and Application Registrations (Preview) to create a new App Registration, then give the app a name.
When it’s created you will be shown the new app details. Make sure that you note down the Directory ID and the Application (client) ID as you will need these in the script.
As well as these ID values, you also need a Redirect URI that is referenced in the script. Click on Add a Redirect URI, choose the item in the screenshot below, then click Save.
Now that the app is registered, we need to add permissions to read data from the Graph API. Click on the API Permissions heading to grant the required Directory.Read.All delegated permission.
By default, each user making a connection to the API would be required to consent to the permissions change. To make this seamless, we can use our administrative account to grant this consent on behalf of all users in the organisation.
That completes the setup of the Azure AD application that will be used to access the Graph API; we can now focus on the PowerShell script that will map the drives.
The Drive Mapping Script
The Drive mapping script is made up of several parts:
Configuration section where you setup the Application Registration and drive mappings that will be run for each user
Connection to the Graph API
Enumeration of group membership for the user
Iterating through all drive maps and mapping those that the user is a member of.
I will share the script in full further down this post but have included the key snippets in each location.
First we define the variables for the app registration we created earlier.
$clientId = "73b7bec7-738#-####-####-############" #This is the Client ID for your Application Registration in Azure AD
$tenantId = "3b7b2097-f13#-####-####-############" # This is the Tenant ID of your Azure AD Directory
$redirectUri = "https://login.microsoftonline.com/common/oauth2/nativeclient" # This is the Return URL for your Application Registration in Azure AD
$dnsDomainName = "skunklab.co.uk" #This is the internal name of your AD Forest
Once these are in place we set up our array of drive mappings. In this section there are four attributes that can be defined:
includeSecurityGroup – this is the group of users who should have the drive mapped
excludeSecurityGroup – this is a group of users who shouldn’t have the drive mapped (this is optional)
driveLetter – this is the alphabetical letter that will be used as part of the drive mapping
UNCPath – this is the reference to the file share that should be mapped to the drive letter.
The code in the script looks like this:
$Drivemappings = @( #Create a line below for each drive mapping that needs to be created.
@{"includeSecurityGroup" = "FOLDERPERM_FULL-ACCESS" ; "excludeSecurityGroup" = "" ; "driveLetter" = "T" ; "UNCPath" = "\\skunklab.co.uk\dfs\Shared"},
@{"includeSecurityGroup" = "FOLDERPERM_ALL-STAFF" ; "excludeSecurityGroup" = "FOLDERPERM_FULL-ACCESS" ; "driveLetter" = "T" ; "UNCPath" = "\\skunklab.co.uk\dfs\shared\SHARED ACCESS"}
)
You can add as many lines in the $Drivemappings variable as you have groups that need mapping, just make sure that the final line doesn’t have a comma at the end of the line.
Next we create the connection to the Graph API. I used the code from Lee’s blog earlier and it worked first time:
# Add required assemblies
Add-Type -AssemblyName System.Web, PresentationFramework, PresentationCore
# Scope - Needs to include all permissions required, separated with a space
$scope = "User.Read.All Group.Read.All" # This is just an example set of permissions
# Random State - state is included in response, if you want to verify response is valid
$state = Get-Random
# Encode scope to fit inside query string
$scopeEncoded = [System.Web.HttpUtility]::UrlEncode($scope)
# Redirect URI (encode it to fit inside query string)
$redirectUriEncoded = [System.Web.HttpUtility]::UrlEncode($redirectUri)
# Construct URI
$uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/authorize?client_id=$clientId&response_type=code&redirect_uri=$redirectUriEncoded&response_mode=query&scope=$scopeEncoded&state=$state"
# Create Window for User Sign-In
$windowProperty = @{
Width = 500
Height = 700
}
$signInWindow = New-Object System.Windows.Window -Property $windowProperty
# Create WebBrowser for Window
$browserProperty = @{
Width = 480
Height = 680
}
$signInBrowser = New-Object System.Windows.Controls.WebBrowser -Property $browserProperty
# Navigate Browser to sign-in page
$signInBrowser.navigate($uri)
# Create a condition to check after each page load
$pageLoaded = {
# Once a URL contains "code=*", close the Window
if ($signInBrowser.Source -match "code=[^&]*") {
# With the form closed and complete with the code, parse the query string
$urlQueryString = [System.Uri]($signInBrowser.Source).Query
$script:urlQueryValues = [System.Web.HttpUtility]::ParseQueryString($urlQueryString)
$signInWindow.Close()
}
}
# Add condition to document completed
$signInBrowser.Add_LoadCompleted($pageLoaded)
# Show Window
$signInWindow.AddChild($signInBrowser)
$signInWindow.ShowDialog()
# Extract code from query string
$authCode = $script:urlQueryValues.GetValues(($script:urlQueryValues.keys | Where-Object { $_ -eq "code" }))
if ($authCode) {
# With Auth Code, start getting token
# Construct URI
$uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
# Construct Body
$body = @{
client_id = $clientId
scope = $scope
code = $authCode[0]
redirect_uri = $redirectUri
grant_type = "authorization_code"
}
# Get OAuth 2.0 Token
$tokenRequest = Invoke-WebRequest -Method Post -Uri $uri -ContentType "application/x-www-form-urlencoded" -Body $body
# Access Token
$token = ($tokenRequest.Content | ConvertFrom-Json).access_token
}
else {
Write-Error "Unable to obtain Auth Code!"
}
Now we need to use the token we generated and query the Graph API to get the list of groups that our user is a member of:
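The relevant snippet (it appears in the full script further down) calls the memberOf endpoint with the bearer token and collects the display names of the user's groups into an array:

$uri = "https://graph.microsoft.com/v1.0/me/memberOf"
$method = "GET"
# Run the Graph API query using the token obtained above
$query = Invoke-WebRequest -Method $method -Uri $uri -ContentType "application/json" -Headers @{Authorization = "Bearer $token"} -ErrorAction Stop
$output = ConvertFrom-Json $query.Content
# Build an array of group display names to compare against the drive mappings
$usergroups = @()
foreach ($group in $output.value) {
    $usergroups += $group.displayName
}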
Finally we need to check that the user can see the domain (there’s no point executing the script if they are out of the office) and, for each group the user is a member of, map the drives using the New-PSDrive cmdlet:
$connected=$false
$retries=0
$maxRetries=3
Write-Output "Starting script..."
do {
if (Resolve-DnsName $dnsDomainName -ErrorAction SilentlyContinue) {
$connected=$true
}
else {
$retries++
Write-Warning "Cannot resolve: $dnsDomainName, assuming no connection to fileserver"
Start-Sleep -Seconds 3
if ($retries -eq $maxRetries){
Throw "Exceeded maximum numbers of retries ($maxRetries) to resolve dns name ($dnsDomainName)"
}
}
}
while( -not ($Connected))
Write-Output $usergroups
$drivemappings.GetEnumerator()| ForEach-Object {
Write-Output $PSItem.UNCPath
if(($usergroups.contains($PSItem.includeSecurityGroup)) -and ($usergroups.contains($PSItem.excludeSecurityGroup) -eq $false)) {
Write-Output "Attempting to map $($Psitem.DriveLetter) to $($PSItem.UNCPath)"
New-PSDrive -PSProvider FileSystem -Name $PSItem.DriveLetter -Root $PSItem.UNCPath -Persist -Scope global
}
}
That’s it, the script when executed will run as the user and map the drives. We now need to host this script somewhere that can be referenced from any device with an Internet connection.
Uploading the script to Azure
We will host the drive mapping script in a blob store in Azure. Sign into your Azure Portal, click on Storage Accounts and create a new one with the following settings:
Once created we need to add a Container that will store the script
and finally we upload the script to the container
Once uploaded we need to get the URL for the script so we can use this in the Intune script later.
The Intune Script
Now that we have our drive mapping script and it’s uploaded to the Azure blob, we need a way of calling it every time a user signs into the computer. This script will:
Be run from the Intune Management Extension as the SYSTEM account
Create a new Scheduled Task that will execute a hidden PowerShell window at logon which will download and run the previous script
The only variables we need to change in this script are the URL to the drive mapping script and the name of the scheduled task that is created. The whole script looks like this:
<#
DESCRIPTION: Create a Scheduled Task to run at User Login that executes
a PowerShell script stored in an Azure blob storage account
AUTHOR: Matt White (matthewwhite@itlab.com)
DATE: 2019-04-06
USAGE: Edit the values in the first section with respect to your link to the script
Add in the name of the scheduled task that you want to be called
Upload the script to Intune to execute as a system context script
#>
<#
DO NOT EDIT THIS SECTION
#>
$scriptName = ([System.IO.Path]::GetFileNameWithoutExtension($(Split-Path $script:MyInvocation.MyCommand.Path -Leaf)))
$logFile = "$env:ProgramData\Intune-PowerShell-Logs\$scriptName-" + $(Get-Date).ToFileTimeUtc() + ".log"
Start-Transcript -Path $LogFile -Append
<#
END SECTION
#>
<#
Setup Script Variables
#>
$scriptLocation = "https://#############.blob.core.windows.net/pub-intune-scripts/DriveMapping.ps1" #enter the path to your script StorageAccounts->"account"->Blobs->"container"->"script"->URL
$taskName = "Map Network Drives" #enter the name for your scheduled task
<#
END SECTION
#>
<#
Setup the Scheduled Task
#>
$schedTaskCommand = "Invoke-Expression ((New-Object Net.WebClient).DownloadString($([char]39)$($scriptLocation)$([char]39)))"
$schedTaskArgs= "-ExecutionPolicy Bypass -windowstyle hidden -command $($schedTaskCommand)"
$schedTaskExists = Get-ScheduledTask -TaskName $taskName -ErrorAction SilentlyContinue
If (($schedTaskExists)-and (Get-ScheduledTask -TaskName $taskName).Actions.arguments -eq $schedTaskArgs){
Write-Output "Task Exists and names match"
}
Else {
if($schedTaskExists) {
Write-Output "OldTask: $((Get-ScheduledTask -TaskName $taskName).Actions.arguments)"
Write-Output "NewTask: $($schedTaskCommand)"
Write-Output "Deleting Scheduled Task"
Unregister-ScheduledTask -TaskName $taskName -Confirm:$false -ErrorAction SilentlyContinue
}
Write-Output "Creating Schdeuled Task"
$schedTaskAction = New-ScheduledTaskAction -Execute 'Powershell.exe' -Argument $schedTaskArgs
$schedTaskTrigger = New-ScheduledTaskTrigger -AtLogon
$schedTaskSettings = New-ScheduledTaskSettingsSet -AllowStartIfOnBatteries -DontStopIfGoingOnBatteries -Compatibility Win8
$schedTaskPrincipal = New-ScheduledTaskPrincipal -GroupId S-1-5-32-545
$schedTask = New-ScheduledTask -Action $schedTaskAction -Settings $schedTaskSettings -Trigger $schedTaskTrigger -Principal $schedTaskPrincipal -ErrorVariable $NewSchedTaskError
Register-ScheduledTask -InputObject $schedTask -TaskName $taskName -ErrorVariable $RegSchedTaskError
}
Stop-Transcript
This now needs to be added to Intune so that it can be executed on the devices. Navigate to Intune, Device Configuration, PowerShell scripts and add a new script.
Once the file is uploaded, click on Configure to check how the script should be run
Once complete click Save and the script will be uploaded.
Finally we need to assign the script to users or devices. In my example all my computers are deployed via Autopilot so I assign the script to my Autopilot security groups which contain all the computer accounts.
The end result
When the Intune script runs on the endpoint it will check if the scheduled task exists and whether the script it will execute matches what was in any previous configuration. If there is no task, it is created and if there are changes, the old task is deleted and a new task is created.
When a user signs in they will see a popup window as the auth token is generated and then, if they are connected to the corporate network, their network drives will be mapped.
If you need to change the drives that a user has access to (either as you migrate to a more appropriate cloud service or you change the servers that host the data), simply amend the script in the blob store and the new drives will be mapped at logon.
The Intune script can be re-used for any other code that you want to run at user logon, simply reference the link to the script in the blob store and the name of the scheduled task you wish to use.
The scripts in full
Drive Mapping Script
<#
DESCRIPTION: Iterate through a list of drive mappings, match the groups to AzureAD groups
Where they match connect to the UNC path
AUTHOR: Matt White (matthewwhite@itlab.com)
DATE: 2019-04-06
USAGE: Edit the values in the first section with your Application Registration details, domain name and drive mappings
Upload the script to your Azure blob storage container so that the logon scheduled task can download and run it
#>
<#
DO NOT EDIT THIS SECTION
#>
$scriptName = ([System.IO.Path]::GetFileNameWithoutExtension($(Split-Path $script:MyInvocation.MyCommand.Path -Leaf)))
$logFile = "$env:ProgramData\Intune-PowerShell-Logs\$scriptName-" + $(Get-Date).ToFileTimeUtc() + ".log"
Start-Transcript -Path $LogFile -Append
<#
END SECTION
#>
$clientId = "73b7bec7-####-####-####-############" #This is the Client ID for your Application Registration in Azure AD
$tenantId = "3b7b2097-####-####-####-############" # This is the Tenant ID of your Azure AD Directory
$redirectUri = "https://login.microsoftonline.com/common/oauth2/nativeclient" # This is the Return URL for your Application Registration in Azure AD
$dnsDomainName = "skunklab.co.uk" #This is the internal name of your AD Forest
$Drivemappings = @( #Create a line below for each drive mapping that needs to be created.
@{"includeSecurityGroup" = "FOLDERPERM_FULL-ACCESS" ; "excludeSecurityGroup" = "" ; "driveLetter" = "T" ; "UNCPath" = "\\skunklab.co.uk\dfs\shared"},
@{"includeSecurityGroup" = "FOLDERPERM_ALL-STAFF" ; "excludeSecurityGroup" = "FOLDERPERM_FULL-ACCESS" ; "driveLetter" = "T" ; "UNCPath" = "\\skunklab.co.uk\dfs\shared\sharedaccess"}
)
# Add required assemblies
Add-Type -AssemblyName System.Web, PresentationFramework, PresentationCore
# Scope - Needs to include all permissions required, separated with a space
$scope = "User.Read.All Group.Read.All" # This is just an example set of permissions
# Random State - state is included in response, if you want to verify response is valid
$state = Get-Random
# Encode scope to fit inside query string
$scopeEncoded = [System.Web.HttpUtility]::UrlEncode($scope)
# Redirect URI (encode it to fit inside query string)
$redirectUriEncoded = [System.Web.HttpUtility]::UrlEncode($redirectUri)
# Construct URI
$uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/authorize?client_id=$clientId&response_type=code&redirect_uri=$redirectUriEncoded&response_mode=query&scope=$scopeEncoded&state=$state"
# Create Window for User Sign-In
$windowProperty = @{
Width = 500
Height = 700
}
$signInWindow = New-Object System.Windows.Window -Property $windowProperty
# Create WebBrowser for Window
$browserProperty = @{
Width = 480
Height = 680
}
$signInBrowser = New-Object System.Windows.Controls.WebBrowser -Property $browserProperty
# Navigate Browser to sign-in page
$signInBrowser.navigate($uri)
# Create a condition to check after each page load
$pageLoaded = {
# Once a URL contains "code=*", close the Window
if ($signInBrowser.Source -match "code=[^&]*") {
# With the form closed and complete with the code, parse the query string
$urlQueryString = [System.Uri]($signInBrowser.Source).Query
$script:urlQueryValues = [System.Web.HttpUtility]::ParseQueryString($urlQueryString)
$signInWindow.Close()
}
}
# Add condition to document completed
$signInBrowser.Add_LoadCompleted($pageLoaded)
# Show Window
$signInWindow.AddChild($signInBrowser)
$signInWindow.ShowDialog()
# Extract code from query string
$authCode = $script:urlQueryValues.GetValues(($script:urlQueryValues.keys | Where-Object { $_ -eq "code" }))
if ($authCode) {
# With Auth Code, start getting token
# Construct URI
$uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
# Construct Body
$body = @{
client_id = $clientId
scope = $scope
code = $authCode[0]
redirect_uri = $redirectUri
grant_type = "authorization_code"
}
# Get OAuth 2.0 Token
$tokenRequest = Invoke-WebRequest -Method Post -Uri $uri -ContentType "application/x-www-form-urlencoded" -Body $body
# Access Token
$token = ($tokenRequest.Content | ConvertFrom-Json).access_token
}
else {
Write-Error "Unable to obtain Auth Code!"
}
####
# Run Graph API Query to get group membership
####
$uri = "https://graph.microsoft.com/v1.0/me/memberOf"
$method = "GET"
# Run Graph API query
$query = Invoke-WebRequest -Method $method -Uri $uri -ContentType "application/json" -Headers @{Authorization = "Bearer $token"} -ErrorAction Stop
$output = ConvertFrom-Json $query.Content
$usergroups = @()
foreach ($group in $output.value) {
$usergroups += $group.displayName
}
# Loop the Drive Mappings and check group membership
$connected=$false
$retries=0
$maxRetries=3
Write-Output "Starting script..."
do {
if (Resolve-DnsName $dnsDomainName -ErrorAction SilentlyContinue) {
$connected=$true
}
else {
$retries++
Write-Warning "Cannot resolve: $dnsDomainName, assuming no connection to fileserver"
Start-Sleep -Seconds 3
if ($retries -eq $maxRetries){
Throw "Exceeded maximum numbers of retries ($maxRetries) to resolve dns name ($dnsDomainName)"
}
}
}
while( -not ($Connected))
Write-Output $usergroups
$drivemappings.GetEnumerator()| ForEach-Object {
Write-Output $PSItem.UNCPath
if(($usergroups.contains($PSItem.includeSecurityGroup)) -and ($usergroups.contains($PSItem.excludeSecurityGroup) -eq $false)) {
Write-Output "Attempting to map $($Psitem.DriveLetter) to $($PSItem.UNCPath)"
New-PSDrive -PSProvider FileSystem -Name $PSItem.DriveLetter -Root $PSItem.UNCPath -Persist -Scope global
}
}
Stop-Transcript
Intune Scheduled Task Script
<#
DESCRIPTION: Create a Scheduled Task to run at User Login that executes
a PowerShell script stored in an Azure blob storage account
AUTHOR: Matt White (matthewwhite@itlab.com)
DATE: 2019-04-06
USAGE: Edit the values in the first section with respect to your link to the script
Add in the name of the scheduled task that you want to be called
Upload the script to Intune to execute as a system context script
#>
<#
DO NOT EDIT THIS SECTION
#>
$scriptName = ([System.IO.Path]::GetFileNameWithoutExtension($(Split-Path $script:MyInvocation.MyCommand.Path -Leaf)))
$logFile = "$env:ProgramData\Intune-PowerShell-Logs\$scriptName-" + $(Get-Date).ToFileTimeUtc() + ".log"
Start-Transcript -Path $LogFile -Append
<#
END SECTION
#>
<#
Setup Script Variables
#>
$scriptLocation = "https://###########.blob.core.windows.net/pub-intune-scripts/DriveMapping.ps1" #enter the path to your script StorageAccounts->"account"->Blobs->"container"->"script"->URL
$taskName = "Map Network Drives" #enter the name for your scheduled task
<#
END SECTION
#>
<#
Setup the Scheduled Task
#>
$schedTaskCommand = "Invoke-Expression ((New-Object Net.WebClient).DownloadString($([char]39)$($scriptLocation)$([char]39)))"
$schedTaskArgs= "-ExecutionPolicy Bypass -windowstyle hidden -command $($schedTaskCommand)"
$schedTaskExists = Get-ScheduledTask -TaskName $taskName -ErrorAction SilentlyContinue
If (($schedTaskExists)-and (Get-ScheduledTask -TaskName $taskName).Actions.arguments -eq $schedTaskArgs){
Write-Output "Task Exists and names match"
}
Else {
if($schedTaskExists) {
Write-Output "OldTask: $((Get-ScheduledTask -TaskName $taskName).Actions.arguments)"
Write-Output "NewTask: $($schedTaskCommand)"
Write-Output "Deleting Scheduled Task"
Unregister-ScheduledTask -TaskName $taskName -Confirm:$false -ErrorAction SilentlyContinue
}
Write-Output "Creating Schdeuled Task"
$schedTaskAction = New-ScheduledTaskAction -Execute 'Powershell.exe' -Argument $schedTaskArgs
$schedTaskTrigger = New-ScheduledTaskTrigger -AtLogon
$schedTaskSettings = New-ScheduledTaskSettingsSet -AllowStartIfOnBatteries -DontStopIfGoingOnBatteries -Compatibility Win8
$schedTaskPrincipal = New-ScheduledTaskPrincipal -GroupId S-1-5-32-545
$schedTask = New-ScheduledTask -Action $schedTaskAction -Settings $schedTaskSettings -Trigger $schedTaskTrigger -Principal $schedTaskPrincipal -ErrorVariable $NewSchedTaskError
Register-ScheduledTask -InputObject $schedTask -TaskName $taskName -ErrorVariable $RegSchedTaskError
}
Stop-Transcript
After I completed a recent upgrade of Windows 10 to 1803, I noticed that my English (United Kingdom) language had been joined by English (United States) and that I couldn’t remove this from the system.
A lot of my recent work has involved using Microsoft Intune to apply Microsoft Modern Management constructs and principles, delivering a cloud-first approach to provisioning new Windows 10 endpoints for an organisation.
Since Microsoft migrated Intune management from the classic interface to the Azure Portal, the ability to execute installers for legacy line of business applications has been reduced. The idea is that the modern workplace consumes data via apps from an app store, and this is evident in Microsoft’s support for the Microsoft Store for Business and Universal Windows Platform .appx packages in Intune; however, this is not always feasible. There are still legacy line of business applications that require an MSI or EXE based installer, and whilst Intune supports Line of Business installers that are MSI based, there is the limitation that the MSI must contain all the code required to install the application. There is currently no support for EXE based installers in the Azure Portal for Intune.
Back at Microsoft Ignite 2017, Microsoft announced the availability of the Intune Management Extension and the support to execute PowerShell scripts on Windows 10 Endpoints via Microsoft Intune (Read More). This got me thinking about how to extend the functionality of Microsoft Intune to deliver a more traditional (MDT / SCCM) provisioning process for legacy applications on modern managed Windows 10 devices.
If you could store your legacy line of business applications in a web accessible location (with appropriate security controls to prevent unauthorised access), you could then use the Intune Management Extension and PowerShell scripts to download the application install payload to a temporary location and execute it, overcoming the limitation of the Intune portal.
Looking around the Internet I came across this blog post by MVP Peter van der Woude which integrates the Chocolatey package manager and Intune. With a bit of reworking I amended the PowerShell code to download and install the AEM agent onto a target machine.
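I won't reproduce my exact script here, but the shape of it is a short download-and-execute routine along the lines of the sketch below. The download URL and the silent-install switch are placeholders; substitute your own agent download link and the vendor's documented switches.

# Sketch only - URL and installer switches are placeholders for your own environment
$downloadUrl = "https://yourstorage.blob.core.windows.net/installers/AgentSetup.exe"
$installerPath = "$env:TEMP\AgentSetup.exe"

# Download the installer payload to a temporary location
(New-Object System.Net.WebClient).DownloadFile($downloadUrl, $installerPath)

# Execute the payload silently and wait for it to complete
Start-Process -FilePath $installerPath -ArgumentList "/S" -Wait

# Clean up the temporary payload
Remove-Item $installerPath -Force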
Save the PowerShell script and then add to Intune as outlined in Peter’s blog post and wait for the code to execute on your endpoint. The process can be extended to run any executable based installer.
Whilst this is a fairly simplistic example, the concept could be extended to download a compressed archive, extract it and then execute the installer as required.
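A sketch of that variation might look like this, again with placeholder paths and installer switches:

# Download a zipped payload, extract it and run the installer inside - all names are placeholders
$zipUrl = "https://yourstorage.blob.core.windows.net/installers/LegacyApp.zip"
$zipPath = "$env:TEMP\LegacyApp.zip"
$extractPath = "$env:TEMP\LegacyApp"

(New-Object System.Net.WebClient).DownloadFile($zipUrl, $zipPath)
Expand-Archive -Path $zipPath -DestinationPath $extractPath -Force
Start-Process -FilePath "$extractPath\setup.exe" -ArgumentList "/quiet" -Wait
Remove-Item $zipPath, $extractPath -Recurse -Force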
As I deploy more and more instances of Microsoft Intune I am having to configure managed applications for Android and iOS enrolled devices. Whilst the iOS app store has been neatly integrated into the Azure portal, Android apps need to be added by their relevant App URL (frustrating, and something I hope that Microsoft / Google can fix in the near future).
When configuring the mobile app and app protection policies it can be useful to have the correct Microsoft suite (Outlook, Word, Excel, PowerPoint, OneDrive, Skype for Business etc.) installed automatically on the end user device. The list below is hopefully a time saver to get to each of the applications without having to click through or search the Google Play store manually:
In this second quick article following on from my recent reporting requirement, the need this time is to report on the enabled user accounts that have not logged in in the past X days. Again, a quick Google turned up the following article on WindowsITPro (Use Get-ADUser to Find Inactive AD Users).
I took the Search-ADAccount cmdlet and created some filters to exclude disabled accounts, as well as adding a parameter that can be passed to the script to specify the maximum age, in days, of a user account (the default is 90 days).
To execute the script, run .\Get-InactiveAccounts.ps1 to report on accounts inactive for more than 90 days, or use the InactiveDays parameter to specify the age of accounts to report (e.g. .\Get-InactiveAccounts.ps1 -InactiveDays 180).
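The core of the script is only a few lines; a minimal sketch of the approach described above (the properties selected for output here are my own choice rather than the exact output of the original) looks like this:

# Get-InactiveAccounts.ps1 - sketch of the Search-ADAccount approach
Param (
    [int]$InactiveDays = 90  # maximum age, in days, of the last logon
)
Import-Module ActiveDirectory

# Find user accounts with no logon inside the window, then drop any disabled accounts
Search-ADAccount -AccountInactive -TimeSpan (New-TimeSpan -Days $InactiveDays) -UsersOnly |
    Where-Object { $_.Enabled -eq $true } |
    Sort-Object Name |
    Select-Object Name, SamAccountName, LastLogonDate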
As part of some recent work to assist a client with reporting on their active users and the dates those users last changed their passwords, I evolved a script written by Carl Gray here (PowerShell: Get-ADUser to retrieve password last set and expiry information) into a short PowerShell script that reports the enabled Active Directory users and the date they last set their password.
Copy the code below and save it on your server as Get-PasswordLastChange.ps1, then run it from the command line. The script will produce a CSV file and save it in the same directory as the script.
#Configure Output File
$myDir = Split-Path -Parent $MyInvocation.MyCommand.Path
$timestamp = Get-Date -UFormat %Y%m%d-%H%M
$random = -join(48..57+65..90+97..122 | ForEach-Object {[char]$_} | Get-Random -Count 6)
$reportfile = "$mydir\PasswordLastSet-$timestamp-$random.csv"
Import-Module ActiveDirectory
Get-ADUser -filter * -properties passwordlastset, passwordneverexpires | `
Where {($_.userAccountControl -band 2) -eq $False} | `
sort-object name | `
select-object Name, passwordlastset, passwordneverexpires | `
Export-csv -path $reportfile -NoTypeInformation
Write-Host -ForegroundColor White "Report written to $reportfile in current path."
Get-Item $reportfile
I have been working on an issue for one of my clients who have migrated email from an on-premises Exchange server to Microsoft Office 365. The migration was handled using a third party migration tool to synchronise email content into their Office 365 tenant, something that didn’t go smoothly but isn’t wholly relevant to this post. The major thing that wasn’t fully set up was the synchronisation between their existing on-premises Active Directory and their 365 tenant, and users initially struggled with passwords that were not in sync between the two platforms. One thing that did happen, however, was that the Exchange server was decommissioned and any Exchange attributes were removed from the Active Directory user accounts.
Looking into this I started by upgrading the installed version of DirSync to the latest release from Microsoft (click here), which automatically detected the previous installation and upgraded it. As described in the upgrade guide, I completed the initial configuration and provided the following to the wizard:
Office 365 administrative login,
my Active Directory details,
the fact that it is not a hybrid deployment (remember Exchange has already been uninstalled),
that I wanted to sync passwords and
to not perform an initial sync (I need to troubleshoot what hasn’t been working).
I opened the not so easy to find Synchronisation Service Manager (C:\Program Files\Windows Azure Active Directory Sync\SYNCBUS\Synchronization Service\UIShell) and checked the configuration of the data to be synchronised.
We are synchronising our users based on their location in Active Directory and have created two Organisational Units (OUs) under two separate User OUs within our directory; only one of these had been selected at this stage.
The user was showing up and the synchronisation log was showing success messages. I updated the Department attribute for the user within Active Directory and forced a sync to Office 365; no errors were reported in the Sync Manager and the Office 365 user was updated correctly. This seemed so simple that I informed my client we would enable the changes this week and everything would go back to one password and, apart from the shortcoming of Microsoft Outlook not participating in the Single Sign-On that the rest of Office does, users would be happy.
Or so I thought until I came to actually enable the remainder of the users…
I opened Synchronisation Service Manager, browsed to Management Agents and edited the Active Directory Connector, edited the Containers I wanted to sync and clicked OK, only to be given the message:
Join/Protection validation error ‘publicFolder’ is no longer listed in the list of selected object classes
Looking through the options I found that under Select Object Type, publicFolder wasn’t listed, so after clicking Show All and scrolling down a long way I enabled it and was able to save the updated configuration.
First hurdle out of the way, I moved a user into my new OU and forced another synchronisation… which failed.
Clicking on the error I was redirected to a Microsoft KB article (KB2647098). Reading through the article I knew there were no duplicate accounts, either locally in my AD domain or in the 365 tenant, so I started to look into the proxyAddresses attribute to see if Exchange had left something in place when it was removed from the network. When I looked at the user in question in ADSIEdit I found there were no proxyAddresses listed at all:
Checking the one synchronised user I noticed that they did have the proxy addresses listed so I added them manually in the format
Saving this and re-running the sync gave me a success; my user was now showing as synchronised with Office 365. I added another couple of users, but this time when I synchronised I received a different error message:
I checked that the accounts were enabled both in Office 365 and in Active Directory and restarted the DirSync service, to no avail. After looking at this and a handful of other users I could see that this was going to take a while to complete manually, so it needed to be automated in some way. Rather than spend time researching how to write the appropriate PowerShell commands to complete this change, I switched to my trusty friend ADModify.net (https://admodify.codeplex.com/), which I have used many times in the past to update attributes in bulk within Active Directory.
For those of you that haven’t used it, ADModify.net is a very competent tool for modifying the attributes of a number of AD objects in bulk.
Extract the contents of the download and run the executable, then click on Modify Attributes when prompted.
Select your Domain and preferred Domain Controller from the menu on the left, click the green icon and then browse to your OU to modify.
Click on Add to List at the bottom to add the discovered objects, select the ones you wish to modify and click on Next
On the email addresses tab choose Add SMTP Address and enter the email address in the format you want to use. Set Primary and update the General tab as well then click Go.
At this stage the tool will update the AD attributes for the list of objects selected earlier, output a popup with the status of any failures, and write an XML based log file to the directory the executable was run from. Review any errors and then continue to add any other addresses.
Here I have added an address that matches their UPN login to match the configuration I can see for my Office 365 users.
Now that these have been run against all the AD accounts that need to synchronise I restarted the DirSync services once more and completed my sync without issues.
The conclusion from all this is to make sure that everything is in place to synchronise your Active Directory and Office 365 tenant *before* decommissioning Exchange, but also that if you do find yourself in this situation, tools such as ADModify.net can save the hassle of editing all the records by hand.
After eight years working at Wavex I have taken the decision to move on for a new career challenge, and I am now a Senior Infrastructure Engineer at IT Lab. The past eight years have been a very good learning experience: I have developed from a young and inexperienced Service Desk Engineer into a more confident and technical IT professional, with the ability to communicate on both a technical and commercial level to explain the problem and solution in a given situation.
As part of this change I have decided to remove some of the more juvenile aspects of my website and tidy up some of the blog posts that were written before I started my professional career, as I realised that many of these were inappropriate for a modern audience. All the blog content has moved from the subfolder “blog” to the root of my website, and I am hoping that, as my new role develops, there will be new content added over the coming months.
Thank you to all those at Wavex for their time in the past and here’s to the future.