I was recently setting up some different scenarios and tests using a VM with Windows Autopilot for Windows 10 1803 vs 1809. Being lazy, and thinking that I was being “smart”, I reused the same VM but with different VHDX disks for 1803 and 1809. Each Windows instance had a unique hardware hash (see below snippet).
However, the import failed with the message “Device is already registered to the same Tenant. Error code: 806 – ZtdDeviceAlreadyAssigned”.
The error message hints at the problem: the device is already registered. But when you delete the device from within the Autopilot profiles section, it does NOT appear to actually be deleted, so the device remains and subsequent imports fail with the same error. This happens because the delete is triggered to remove the device from the Microsoft Store for Business, but the change hasn’t yet synchronized back into Intune.
My experience has been that there is variance in how quickly the delete is reflected. So if you need to expedite the scheduled sync cycle, simply click the Sync button. That should get things back into a position where you can successfully import the new Autopilot hash CSV file.
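For reference, the hardware hash CSV mentioned above can be gathered with the community Get-WindowsAutoPilotInfo script from the PowerShell Gallery. A minimal sketch (the output file name is just an example):

```powershell
# One-time: install the community script from the PowerShell Gallery
Install-Script -Name Get-WindowsAutoPilotInfo -Force

# Collect the local machine's hardware hash into a CSV
# (the file name here is just an example)
Get-WindowsAutoPilotInfo.ps1 -OutputFile .\AutopilotHWID.csv
```

It is the import of this CSV that fails with error 806 while the old device record still lingers.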
When setting up hybrid Azure AD join with an on-premises Windows 10 environment, if you encounter an error that “The system tried to delete the JOIN of a drive that is not joined.”, there is a good chance that the device has not yet synchronized into Azure AD.
A few tips to help you isolate the cause and get past this issue:
- First, confirm the device exists in Azure Active Directory (or not). In the Azure portal, navigate to Azure Active Directory > Devices > All devices.
- Review the steps in Troubleshooting hybrid Azure Active Directory joined Windows 10 and Windows Server 2016 devices. Note that this article points back to another article, How to configure hybrid Azure Active Directory joined devices, which presently contains much more helpful troubleshooting information.
- In the most current Azure AD Connect releases, use the built-in Troubleshooter. Then, in the PowerShell window that launches, run both troubleshooting options: Object Sync and Password Hash Sync.
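On the device itself, you can also check the join state with the built-in dsregcmd utility. A quick sketch:

```powershell
# Show the device's full Azure AD join status
dsregcmd /status

# Filter the output to the key fields
dsregcmd /status | Select-String 'AzureAdJoined|DomainJoined'
```

If AzureAdJoined shows NO, the device has not completed the hybrid join, which lines up with the error above.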
In my case, the troubleshooting guides were useful to confirm that I had configured everything correctly. Then the Azure AD Connect troubleshooter reported an error that “Password Hash Synchronization cloud configuration is disabled”. Searching that issue on the Internet led me to discover that the likely cause was a mismatch between the password of the Azure AD account “On-Premises Directory Synchronization Service Account” and the password currently set in the local synchronization service.
To fix that, first set a new password for the “On-Premises Directory Synchronization Service Account”. You can try setting it in Azure directly; however, given that it’s a special account, it may be necessary to reset the password through PowerShell with the Azure AD cmdlets. While I’m not getting into the full end-to-end setup and use of those add-on Azure PowerShell cmdlets, the commands could be as simple as:
Connect-AzureAD
$newPassword = ConvertTo-SecureString 'MyP@ssw0rd!' -AsPlainText -Force
Set-AzureADUserPassword -ObjectId abc123def456xyz980 -Password $newPassword -ForceChangePasswordNextLogin $false
Next, start the Synchronization Service Manager program, then click Connectors. Locate the Windows Azure Active Directory connector and click Properties.
Finally, set the password. Voila, devices will now sync to Azure AD on the next synchronization!
Beginning in Windows 10 1709, Hyper-V networking included a “Default Switch” to help simplify Internet connectivity to guest VMs. The idea is that this switch would automatically share whatever Internet connection is used by the Host, then NAT the addresses to the guests. This sharing is accomplished using the Internet Connection Sharing (ICS) service on the Host.
While in theory this makes guest networking easier, one particular challenge with this solution still exists today with Windows 10 1803: occasionally the guest loses its ability to work through the host’s connection, leaving the guest with no Internet access. So, if you’re faced with this issue, try restarting the ICS service on the host to restore connectivity.
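The restart can be done from the Services console, or from an elevated PowerShell prompt; the ICS service name is SharedAccess:

```powershell
# Restart Internet Connection Sharing (ICS) on the Hyper-V host
# Run from an elevated prompt
Restart-Service -Name SharedAccess -Force
```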
For Internet connection issues with the default switch on a Windows 7 guest VM, I found the best solution to be changing the “Automatic metric” settings on the NICs in the VM. This workaround/solution was posted in a TechNet forum thread, but the details are copied below for ease of reading.
“Go to <Network and Sharing><Change Adapter Settings> and right click your wired and wireless adapters one at a time to change the properties. Select IPv4, click <Properties> then click <Advanced>. For each one, clear the “Automatic metric” check box and assign the metric value manually. I set the wireless to 1 and the wired to 2, which gave me the behavior I wanted.”
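If you prefer the command line over the adapter properties dialog, the same metric change can be sketched with netsh (the adapter names below are examples and will vary per machine):

```powershell
# Assign manual interface metrics; adapter names are examples, run elevated
netsh interface ipv4 set interface "Wireless Network Connection" metric=1
netsh interface ipv4 set interface "Local Area Connection" metric=2
```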
When using Azure AD Premium’s Application Proxy feature, you may receive an error that states “Status code: BadGateway” along with the line “The service detected a possible loop. Make sure that the internal URL doesn’t point to the external URL of any application.”
Example web page error:
The cause – when making a change to the internal URL, I had inadvertently set the internal URL to the external URL. The surprising thing is that App Proxy actually allowed me to save the change!
Recently, while helping a customer migrate to ConfigMgr 2012 R2 SP1 + CU1, the site had major problems fully completing DP upgrades. First, it would start the uninstall of a CM07 secondary site, then do nothing. So we would restart the process, which reassigned the DP and then stalled again… but even worse, the process never completed and couldn’t be restarted. The status never reached full completion, and the old content (SMSPKG, PCK files, etc.) was not removed. Additionally, the migration status never went beyond “Reassigning distribution point”, as in the image below.
Ultimately I had to engage Microsoft to get an answer. Even they had to dig through the SQL stored procedure (sp_MIG_UpgradeDistributionPoint) to understand for themselves what the conversion process does. Essentially the stages boil down to these high-level steps:
- Uninstall of the 2007 components, then delete the values in MIG_DistributionPointSource in the database
- Drop a .dpu file into the distmgr.box (for more info, see TechNet article “DP converts, but content fails”)
- Convert DP values of various package and DP mapping values in SQL
- And finally, set the DPUpgrade status where Action=2 and Status=0 for complete. The example below shows an incomplete “hung” state of a DP with Action=2 and Status=1.
To get these statuses, the following SQL queries were used so that we would know when the remote server had completed the uninstall of the secondary site and was ready to have a manual .dpu file dropped into distmgr.box to initiate the content conversion.
-- Result must be empty
select * from MIG_DistributionPointSource where PKGServer like '%ServerName%'

-- Result must have content listed
select * from PkgServers_G where NALPath like '%ServerName%'
select * from PkgServers_L where NALPath like '%ServerName%'
select * from PkgStatus where PkgServer like '%ServerName%'
select * from ContentDPMap where ServerName like '%ServerName%'
select * from DistributionStatus where DPNALPath like '%ServerName%'

-- Result must have Action=2 and Status=1 to know server is ready to convert content
select * from DPUpgradeStatus where NALPath like '%ServerName%'

-- Use the DPID number to create the .dpu file name
select * from DistributionPoints where ServerName like '%ServerName%'
The secondary site conversion woes occurred in primarily three ways:
- First problem – a rerun of the stored procedure did not occur after the uninstall of the 2007 components. While this remains unsolved and would take ‘significant’ effort to diagnose, PSS provided a query to identify whether the site is ready for the content conversion, which can then be initiated by creating a dummy .dpu entry.
- Second problem – A manual restart of the conversion tools led to a “failed to convert content” error. This was occurring because a couple dozen of the packages had mysteriously updated their source files, so there was a hash mismatch between 2007 and 2012. The solution was to delete the identified packages in 2012, re-migrate them from 2007, and then restart the failed DP conversion (or drop a dummy .dpu).
- Third problem – On a “completed” DP, files were leftover on the DP, both the PCK and extracted SMSPKG$ files. Since we did not migrate ALL packages that were on the 2007 DP into 2012, those files are left behind such that they still could be converted at some point if desired. Otherwise they need to be manually deleted after the conversion process completes. Then, any package that had a CM07 advertisement that is set to “run from server” will migrate over to CM12 with the option to “Copy content to a package share on DP” – meaning the files in the SMSPKG$ share will remain. These files should not be deleted from the SMSPKG$, but rather it would be better to change the package setting to not copy the contents (if appropriate for that package).
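As a sketch of the dummy .dpu approach mentioned above (the install path below is an example, and the file name comes from the DPID value returned by the last SQL query):

```powershell
# Create an empty .dpu file in distmgr.box to kick off content conversion.
# Install path is an example for your site server; the file name is the DPID
# value from the DistributionPoints query above.
$inbox = 'D:\Program Files\Microsoft Configuration Manager\inboxes\distmgr.box'
New-Item -Path (Join-Path $inbox '16.dpu') -ItemType File
```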
I recently needed to add multi-language clients that I had missed during the initial install of ConfigMgr 2012 R2 SP1. Typically this is accomplished through the site maintenance option of the setup wizard. New with the ConfigMgr 2012 R2 SP1 install is that you first install ConfigMgr “RTM” SP2, then update the server with the lightweight R2 SP1 bits.
But when I ran the setup from the original ConfigMgr 2012 SP2 media, the only options available were to recover a site or uninstall the site. The option to perform site maintenance was grayed out (see image below)!
Fortunately there was an easy workaround. Instead of running setup from the media, run it from the current ConfigMgr installation directory, under \bin\X64. Optionally, run Configuration Manager Setup from the programs menu. Then voila, the option was available!
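If you’d rather not browse for the installation directory, its path is recorded in the registry. A hedged sketch (the key and value names below are as seen on typical primary site servers, so verify them on yours):

```powershell
# Look up the ConfigMgr install directory from the registry, then launch setup
$installDir = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\SMS\Setup').'Installation Directory'
Start-Process (Join-Path $installDir 'bin\X64\setup.exe')
```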