Deploying a Web Deploy Package to AWS Elastic Beanstalk

AWS provides a Visual Studio extension (the AWS Toolkit for Visual Studio) that makes interacting with your AWS services easy, including deploying to an Elastic Beanstalk environment, which is the recommended way of deploying to Beanstalk.

This works great, and if you are able to you should use the recommended approach, but there may be times when you don't have the extension available, or you already have a build system set up to produce Web Deploy packages. As far as I can tell, Elastic Beanstalk just uses MSDeploy (Web Deploy) packages under the hood, which makes it easy to deploy these without the extension!

1. Create a Package

If you don't already have a Web Deploy package, create one. This is simple in Visual Studio: open your solution, right-click the Web Application project and select Publish.

This will open the Publish Web dialog.

Select Custom, and give your profile a name (for example, the Beanstalk environment name). On the Connection screen, change the publish method to Web Deploy Package.

Enter Default Web Site for the site name, and choose a location on your machine where the package will be created.

Check that the settings are correct on the next screen, confirm your publish location on the Preview screen, and then click Publish.

Navigate to the folder where the package was created; you should see five files. The only one of interest here is the ZIP file.

2. Deploy the Package

Browse to your environment in the AWS Management Console.

Select Upload and Deploy.

Choose the ZIP file created earlier, and give this version a label (labels should be unique amongst the versions of this application).

Clicking Deploy will start deploying the code to this environment; you can monitor the status of your deployment in the logs in the Console.

If everything goes to plan, you should see a message in the logs saying “New application version was deployed to running EC2 instances.”

Next Steps

Just as you can automate these steps with the AWS Visual Studio Toolkit and its Deployment Tool command line program, the manual steps above can be automated too.

The package can be created using MSBuild and the Package target.
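
For example, something like the following should produce the same set of files as the Publish dialog (the project file name and output path here are placeholders, adjust for your build):

msbuild .\MyWebApp.csproj /t:Package /p:Configuration=Release /p:PackageLocation="C:\Temp\MyWebApp.zip"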

The deployment to AWS can then be automated using either the AWS CLI or the AWS Tools for PowerShell.
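
For example, a minimal sketch with the AWS CLI (the bucket, application, environment, and version label names below are placeholders): upload the ZIP to S3, register it as a new application version, then point the environment at that version.

aws s3 cp .\MyWebApp.zip s3://my-deploy-bucket/MyWebApp-v42.zip
aws elasticbeanstalk create-application-version --application-name MyApp --version-label v42 --source-bundle S3Bucket=my-deploy-bucket,S3Key=MyWebApp-v42.zip
aws elasticbeanstalk update-environment --environment-name MyApp-prod --version-label v42

The update-environment call triggers the same deployment you would see from Upload and Deploy in the console.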

List All Azure Resources in a CSV / Excel

Run the following script to list all the Azure resources under all of your subscriptions.

# settings
$defaultPath = "c:\Temp\azureresources.csv"
$csvDelimiter = ';'

# set azure account
[void](Login-AzureRmAccount)

# retrieve all subscriptions
$subscriptions = Get-AzureRmSubscription
$subscriptions | ft SubscriptionId, SubscriptionName

# select the azure subscriptions that you want to export
"Please enter subscription ids (comma separated, leave empty to use all subscriptions)"
$subscriptionIds = Read-Host
if ([String]::IsNullOrWhiteSpace($subscriptionIds)) {
    $subscriptionIds = @($subscriptions | select -ExpandProperty SubscriptionId)
}
elseif ($subscriptionIds.Contains(',')) {
    $subscriptionIds = $subscriptionIds.Split(',')
}
else {
    $subscriptionIds = @($subscriptionIds)
}

# configure csv output
"Enter destination path - leave it empty to use $defaultPath"
$path = Read-Host
if ([String]::IsNullOrWhiteSpace($path)) {
    $path = $defaultPath
}
if (Test-Path $path) {
    "File $path already exists. Delete? y/n [Default: y]"
    $remove = Read-Host
    if ([String]::IsNullOrWhiteSpace($remove) -or $remove.ToLower().Equals('y')) {
        Remove-Item $path
    }
}

"Start exporting data..."
foreach ($subscriptionId in $subscriptionIds) {
    # change azure subscription
    [void](Set-AzureRmContext -SubscriptionId $subscriptionId)

    # read the subscription name as we want to see it in the exported csv
    $subscriptionName = ($subscriptions | Where { $_.SubscriptionId -eq $subscriptionId }).SubscriptionName

    $subscriptionSelector = @{ Label = "SubscriptionName"; Expression = { $subscriptionName } }
    $tagSelector = @{ Name = "Tags"; Expression = { if ($_.Tags -ne $null) { $x = $_.Tags | % { "{ `"" + $_.Name + "`" : `"" + $_.Value + "`" }, " }; ("{ " + ([string]$x).TrimEnd(", ") + " }") } } }

    # get the resources from the azure subscription
    $export = Get-AzureRmResource | select *, $subscriptionSelector, $tagSelector -ExcludeProperty "Tags"
    $export | Export-Csv $path -Delimiter $csvDelimiter -Append -Force -NoTypeInformation
    "Exported " + $subscriptionId + " - " + $subscriptionName
}
"Export done!"
If you want to run this script from a scheduler, you need to save the Azure profile so that the script can pick it up. Run the following commands once:

Add-AzureRmAccount
Save-AzureRmProfile -Path "c:\temp\azureprofile.json"

Then, after checking that the file exists, load the profile at the start of the script (in place of the Login-AzureRmAccount call near the top):

Select-AzureRmProfile -Path $azureProfilePath
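
A minimal sketch of that bootstrap, assuming the profile path used above:

$azureProfilePath = "c:\temp\azureprofile.json"
if (Test-Path $azureProfilePath) {
    # reuse the saved profile so no interactive login is needed
    Select-AzureRmProfile -Path $azureProfilePath
}
else {
    # fall back to an interactive login and save the profile for next time
    Add-AzureRmAccount
    Save-AzureRmProfile -Path $azureProfilePath
}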

Azure SQL Backup to Azure Storage

Use the following script to back up an Azure SQL database to Azure Storage.
Replace the database details and storage details in the script below.
You will also need to import your Azure publish settings file to run this script;
refer to this article on how to download and import the publish settings file:

https://devopsandcloud.wordpress.com/2017/01/21/download-and-import-publish-settings-and-subscription-information/

# Check if Windows Azure PowerShell is available
try {
    Import-Module Azure -ErrorAction Stop
}
catch {
    throw "Windows Azure PowerShell not found! Please make sure to install it from http://www.windowsazure.com/en-us/downloads/#cmd-line-tools"
}

Import-AzurePublishSettingsFile "C:\jenkinsjobs\Pay-As-You-Go.publishsettings" # replace with your publish settings file path

$DatabaseServerName = "azureservername.database.windows.net"
$DatabaseName = "DBName"
$DatabasePassword = "azure sql password"
$DatabaseUsername = "azure sql user"
$StorageName = "storage name"
$StorageKey = "storage key"
$StorageContainerName = "containername"
$dateTime = Get-Date -Format u
$blobName = "$DatabaseName.$dateTime.bacpac"
Write-Host "Using blobName: $blobName"

# Create the database connection
$securedPassword = ConvertTo-SecureString -String $DatabasePassword -AsPlainText -Force
$serverCredential = New-Object System.Management.Automation.PSCredential($DatabaseUsername, $securedPassword)
$databaseContext = New-AzureSqlDatabaseServerContext -FullyQualifiedServerName $DatabaseServerName -Credential $serverCredential

# Create the storage connection
$storageContext = New-AzureStorageContext -StorageAccountName $StorageName -StorageAccountKey $StorageKey

# Initiate the export
$operationStatus = Start-AzureSqlDatabaseExport -StorageContext $storageContext -SqlConnectionContext $databaseContext -BlobName $blobName -DatabaseName $DatabaseName -StorageContainerName $StorageContainerName

# Wait for the operation to finish
do {
    if ($operationStatus) {
        $status = Get-AzureSqlDatabaseImportExportStatus -Request $operationStatus
        if ($status) {
            Start-Sleep -s 3
            $progress = $status.Status.ToString()
            Write-Host "Waiting for database export completion. Operation status: $progress"
        }
        else {
            Write-Host "Null status. Awaiting updates."
        }
    }
} until ($status.Status -eq "Completed")
Write-Host "Database export is complete"

Download and Import Publish Settings and Subscription Information

Run Windows PowerShell as an administrator

Choose Start, and in the Search box, type Windows PowerShell.

Right-click the Windows PowerShell link, and then choose Run as administrator.

At the Windows PowerShell command prompt, type the following command, and then press Enter.

Get-AzurePublishSettingsFile

A web browser opens at https://windows.azure.com/download/publishprofile.aspx so you can sign in to Windows Azure.
Sign in to the Windows Azure Management Portal, and then follow the instructions to download your Windows Azure publishing settings. Save the file as a .publishsettings file on your computer.

Note the file name and location.

In the Windows Azure PowerShell window, at the command prompt, type the following command, and then press Enter.

Import-AzurePublishSettingsFile <mysettings>.publishsettings

Replace <mysettings> with the file name of the .publishsettings file that you downloaded in the previous step.
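
To confirm the import worked, you can list the subscriptions now available to the session, for example:

Get-AzureSubscription | Select-Object SubscriptionName, SubscriptionId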

Backup Azure Storage

I always keep a backup of my Azure storage so that if code deletes something from it by mistake, I have a backup ready to restore the file from. Azure does replicate storage, but replication is not fail-proof against manual deletion: the replica will remove the blob too.

I use AzCopy to move my data from storage to storage, or from storage to Azure Files. This runs as a six-hourly job to sync data to the backup storage, giving me enough time to grab a copy from this manual replica if I delete something.

To download AzCopy, go to this link: http://aka.ms/downloadazcopy

Then use this PowerShell script to run AzCopy (replace the source and destination storage details with your own):

$theSource = @{ path = ''; accessKey = ''; recursion = ''; pattern = '' }
$theDestination = @{ path = ''; accessKey = '' }

$theSource.path = '/Source:https://STORAGENAME.blob.core.windows.net/CONTAINERNAME'
$theSource.AccessKey = '/SourceKey:KEY'
$theDestination.path = '/Dest:https://STORAGENAME.file.core.windows.net/FILESTORAGENAME'
$theDestination.AccessKey = '/DestKey:KEY'

$theSource.recursion = '/S /V /XO'
$supressConfirmationPrompt = '/Y'
$listingOnlyOption = '' # or '/L' - use this option if you just want to list.

$arguments = $theSource.path + " " + $theDestination.path + " " + $theSource.AccessKey + " " + $theDestination.AccessKey + " " + $theSource.recursion + " " + $supressConfirmationPrompt + " " + $listingOnlyOption

$pinfo = New-Object System.Diagnostics.ProcessStartInfo
$pinfo.FileName = "C:\AzCopy\AzCopy.exe"
$pinfo.Arguments = $arguments

$pinfo.RedirectStandardError = $true
$pinfo.RedirectStandardOutput = $true
$pinfo.UseShellExecute = $false

$p = New-Object System.Diagnostics.Process
$p.StartInfo = $pinfo
$p.Start() | Out-Null

# read the output streams before waiting for exit, to avoid a deadlock on full output buffers
$stdout = $p.StandardOutput.ReadToEnd()
$stderr = $p.StandardError.ReadToEnd()
$p.WaitForExit()
Write-Host "stdout: $stdout"
Write-Host "stderr: $stderr"
Write-Host "exit code: $($p.ExitCode)"

If you don't want to use storage keys but SAS tokens instead, replace /SourceKey and /DestKey with /SourceSAS and /DestSAS.
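
For example, the two access key lines would then look something like this (the SASTOKEN values are placeholders for tokens you generate yourself):

$theSource.AccessKey = '/SourceSAS:SASTOKEN'
$theDestination.AccessKey = '/DestSAS:SASTOKEN'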

Then, to schedule this job in Jenkins, run it as a batch command:

Powershell.exe -NonInteractive -ExecutionPolicy Bypass -File C:\JenkinsJobs\LLProdDatabaseBackups.ps1

Convert an Azure D Series VM to a DS Series VM

I have had a lot of scenarios where old Azure D Series VMs needed to be upgraded to DS Series so that Premium Storage disks could be added to them. You can do this manually by moving the VM disks around in the Azure portal, or just run the PowerShell script below and let it do the work. I picked this up from a blog and made a few fixes and changes to it, as it was a bit outdated and crashing, but it did the work for me in the end, and hopefully it helps someone out too.

<#

# Migrate a standalone virtual machine to a DS Series virtual machine with Premium Storage. Both VMs are in the same subscription.
.\MigrateVMToPremiumStorage.ps1 -SourceVMName "rajsourcevm2" -SourceServiceName "rajsourcevm2" -DestVMName "rajdsvm16" -DestServiceName "rajdsvm16svc" -Location "West US" -VMSize Standard_DS2 -DestStorageAccountName 'rajwestpremstg19' -DestStorageAccountContainer 'vhds' -VNetName rajvnettest3 -SubnetName FrontEndSubnet

#>

[CmdletBinding(DefaultParameterSetName="Default")]
Param
(
[Parameter (Mandatory = $true)]
[string] $SourceVMName,

[Parameter (Mandatory = $true)]
[string] $SourceServiceName,

[Parameter (Mandatory = $true)]
[string] $DestVMName,

[Parameter (Mandatory = $true)]
[string] $DestServiceName,

[Parameter (Mandatory = $true)]
#[ValidateSet('West US','East US 2','West Europe','East China','Southeast Asia','West Japan','Australia East', ignorecase=$true)]
[string] $Location,

[Parameter (Mandatory = $true)]
#[ValidateSet('Standard_DS1','Standard_DS2','Standard_DS3','Standard_DS4','Standard_DS11','Standard_DS12','Standard_DS13','Standard_DS14', ignorecase=$true)]
[string] $VMSize,

[Parameter (Mandatory = $true)]
[string] $DestStorageAccountName,

[Parameter (Mandatory = $true)]
[string] $DestStorageAccountContainer,

[Parameter (Mandatory = $false)]
[string] $VNetName,

[Parameter (Mandatory = $false)]
[string] $SubnetName
)

#print the version of the powershell cmdlets we are using
(Get-Module Azure).Version

#$VerbosePreference = "Continue"
$StorageAccountTypePremium = 'Premium_LRS'

#############################################################################################################
#validation section
#Perform as much upfront validation as possible
#############################################################################################################

#validate upfront that the service we are trying to create does not already exist
if((Get-AzureService -ServiceName $DestServiceName -ErrorAction SilentlyContinue) -ne $null)
{
Write-Error "Service [$DestServiceName] already exists"
return
}

#determine whether we are migrating the VM to a virtual network. If we are, verify that the VNet exists
if( !$VNetName -and !$SubnetName )
{
$DeployToVNet = $false
}
else
{
$DeployToVNet = $true
$vnetSite = Get-AzureVNetSite -VNetName $VNetName -ErrorAction SilentlyContinue

if (!$vnetSite)
{
Write-Error "Virtual Network [$VNetName] does not exist"
return
}
}

Write-Host "DeployToVNet is set to [$DeployToVNet]"

#TODO: add validation to make sure the destination VM size can accommodate the number of disks in the source VM

$DestStorageAccount = Get-AzureStorageAccount -StorageAccountName $DestStorageAccountName -ErrorAction SilentlyContinue

#check to see if the storage account exists and create a premium storage account if it does not exist
if(!$DestStorageAccount)
{
# Create a new storage account
Write-Output "";
Write-Output ("Configuring Destination Storage Account {0} in location {1}" -f $DestStorageAccountName, $Location);

New-AzureStorageAccount -StorageAccountName $DestStorageAccountName -Location $Location -Type $StorageAccountTypePremium -ErrorVariable errorVariable -ErrorAction SilentlyContinue | Out-Null

if (!($?))
{
throw "Cannot create the Storage Account [$DestStorageAccountName] on $Location. Error Detail: $errorVariable"
}

#re-read the account so we can report its AccountType (the original assigned the Out-Null pipeline, which left the variable empty)
$DestStorageAccount = Get-AzureStorageAccount -StorageAccountName $DestStorageAccountName
Write-Verbose "Created Destination Storage Account [$DestStorageAccountName] with AccountType of [$($DestStorageAccount.AccountType)]"
}
else
{
Write-Host "Destination Storage account [$DestStorageAccountName] already exists. Storage account type is [$($DestStorageAccount.AccountType)]"

#make sure if the account already exists it is of type premium storage
if( $DestStorageAccount.AccountType -ne $StorageAccountTypePremium )
{
Write-Error "Storage account [$DestStorageAccountName] account type of [$($DestStorageAccount.AccountType)] is invalid"
return
}
}

Write-Host "Source VM Name is [$SourceVMName] and Service Name is [$SourceServiceName]"

#Get the VM details
$SourceVM = Get-AzureVM -Name $SourceVMName -ServiceName $SourceServiceName -ErrorAction SilentlyContinue

if($SourceVM -eq $null)
{
Write-Error "Unable to find Virtual Machine [$SourceVMName] in Service Name [$SourceServiceName]"
return
}

Write-Host "vm name is [$($SourceVM.Name)] and vm status is [$($SourceVM.Status)]"

#need to shut down the existing VM before copying its disks
if($SourceVM.Status -eq "ReadyRole")
{
Write-Host "Shutting down virtual machine [$SourceVMName]"
#Shutdown the VM
Stop-AzureVM -ServiceName $SourceServiceName -Name $SourceVMName -Force
}

$osdisk = $SourceVM | Get-AzureOSDisk

Write-Host "OS Disk name is $($osdisk.DiskName) and disk location is $($osdisk.MediaLink)"

$disk_configs = @{}

# Used to track disk copy status
$diskCopyStates = @()

##################################################################################################################
# Kicks off the async copy of VHDs
##################################################################################################################

# Copies to the remote storage account
# Returns the blob copy state to poll against
function StartCopyVHD($sourceDiskUri, $diskName, $OS, $destStorageAccountName, $destContainer)
{
Write-Host "Destination Storage Account is [$destStorageAccountName], Destination Container is [$destContainer]"

#extract the name of the source storage account from the URI of the VHD
$sourceStorageAccountName = $sourceDiskUri.Host.Replace(".blob.core.windows.net", "")

$vhdName = $sourceDiskUri.Segments[$sourceDiskUri.Segments.Length - 1].Replace("%20", " ")
$sourceContainer = $sourceDiskUri.Segments[$sourceDiskUri.Segments.Length - 2].Replace("/", "")

$sourceStorageAccountKey = (Get-AzureStorageKey -StorageAccountName $sourceStorageAccountName).Primary
$sourceContext = New-AzureStorageContext -StorageAccountName $sourceStorageAccountName -StorageAccountKey $sourceStorageAccountKey

$destStorageAccountKey = (Get-AzureStorageKey -StorageAccountName $destStorageAccountName).Primary
$destContext = New-AzureStorageContext -StorageAccountName $destStorageAccountName -StorageAccountKey $destStorageAccountKey
if((Get-AzureStorageContainer -Name $destContainer -Context $destContext -ErrorAction SilentlyContinue) -eq $null)
{
New-AzureStorageContainer -Name $destContainer -Context $destContext | Out-Null

while((Get-AzureStorageContainer -Name $destContainer -Context $destContext -ErrorAction SilentlyContinue) -eq $null)
{
Write-Host "Pausing to ensure container $destContainer is created.." -ForegroundColor Green
Start-Sleep 15
}
}

# Save for later disk registration
$destinationUri = "https://$destStorageAccountName.blob.core.windows.net/$destContainer/$vhdName"

if($OS -eq $null)
{
$disk_configs.Add($diskName, "$destinationUri")
}
else
{
$disk_configs.Add($diskName, "$destinationUri;$OS")
}

#start the async copy of the VHD. It will overwrite any existing VHD
$copyState = Start-AzureStorageBlobCopy -SrcBlob $vhdName -SrcContainer $sourceContainer -SrcContext $sourceContext -DestContainer $destContainer -DestBlob $vhdName -DestContext $destContext -Force

return $copyState
}

##################################################################################################################
# Tracks the status of each blob copy and waits until all the blobs have been copied
##################################################################################################################

function TrackBlobCopyStatus()
{
param($diskCopyStates)
do
{
$copyComplete = $true
Write-Host "Checking Disk Copy Status for VM Copy" -ForegroundColor Green
foreach($diskCopy in $diskCopyStates)
{
#compare the copy status itself, not the Format-Table output
$state = ($diskCopy | Get-AzureStorageBlobCopyState).Status
if($state -ne "Success")
{
$copyComplete = $true
Write-Host "Current Status" -ForegroundColor Green
$hideHeader = $false
$inprogress = 0
$complete = 0
foreach($diskCopyTmp in $diskCopyStates)
{
$stateTmp = $diskCopyTmp | Get-AzureStorageBlobCopyState
$source = $stateTmp.Source
if($stateTmp.Status -eq "Success")
{
Write-Host (($stateTmp | Format-Table -HideTableHeaders:$hideHeader -AutoSize -Property Status,BytesCopied,TotalBytes,Source | Out-String)) -ForegroundColor Green
$complete++
}
elseif(($stateTmp.Status -like "*failed*") -or ($stateTmp.Status -like "*aborted*"))
{
Write-Error ($stateTmp | Format-Table -HideTableHeaders:$hideHeader -AutoSize -Property Status,BytesCopied,TotalBytes,Source | Out-String)
return $false
}
else
{
Write-Host (($stateTmp | Format-Table -HideTableHeaders:$hideHeader -AutoSize -Property Status,BytesCopied,TotalBytes,Source | Out-String)) -ForegroundColor DarkYellow
$copyComplete = $false
$inprogress++
}
$hideHeader = $true
}
if($copyComplete -eq $false)
{
Write-Host "$complete Blob Copies are completed with $inprogress that are still in progress." -ForegroundColor Magenta
Write-Host "Pausing 60 seconds before next status check." -ForegroundColor Green
Start-Sleep 60
}
else
{
Write-Host "Disk Copy Complete" -ForegroundColor Green
break
}
}
}
} while($copyComplete -ne $true)
Write-Host "Successfully copied all disks" -ForegroundColor Green
}

# Mark the start time of the script execution
$startTime = Get-Date

Write-Host "Destination storage account name is [$DestStorageAccountName]"

# Copy the disks using the async API from the source URL to the destination storage account
$diskCopyStates += StartCopyVHD -sourceDiskUri $osdisk.MediaLink -destStorageAccountName $DestStorageAccountName -destContainer $DestStorageAccountContainer -diskName $osdisk.DiskName -OS $osdisk.OS

# copy all the data disks
$SourceVM | Get-AzureDataDisk | foreach {

Write-Host "Disk Name [$($_.DiskName)], Size is [$($_.LogicalDiskSizeInGB)]"

#Premium storage does not allow disks smaller than 10 GB
if( $_.LogicalDiskSizeInGB -lt 10 )
{
Write-Warning "Data Disk [$($_.DiskName)] with size [$($_.LogicalDiskSizeInGB)] is less than 10GB so it cannot be added"
}
else
{
Write-Host "Destination storage account name is [$DestStorageAccountName]"
$diskCopyStates += StartCopyVHD -sourceDiskUri $_.MediaLink -destStorageAccountName $DestStorageAccountName -destContainer $DestStorageAccountContainer -diskName $_.DiskName
}
}

#check the status of the blob copies. This may take a while if you are doing cross-region copies.
#even in the same region a 127 GB disk takes nearly 10 minutes
TrackBlobCopyStatus -diskCopyStates $diskCopyStates

# Mark the finish time of the script execution
$finishTime = Get-Date

# Output the time consumed in seconds
$TotalTime = ($finishTime - $startTime).TotalSeconds
Write-Host "The disk copies completed in $TotalTime seconds." -ForegroundColor Green

Write-Host "Registering Copied Disks" -ForegroundColor Green

$luncount = 0 # used to generate a unique lun value for the data disks
$index = 0 # used to generate unique disk names
$OSDisk = $null

$datadisk_details = @{}

foreach($diskName in $disk_configs.Keys)
{
$index = $index + 1

$diskConfig = $disk_configs[$diskName].Split(";")

#since we are using the same subscription we need to update the diskName for it to be unique
$newDiskName = "$DestVMName" + "-disk-" + $index

Write-Host "Adding disk [$newDiskName]"

#check to see if this disk already exists
$azureDisk = Get-AzureDisk -DiskName $newDiskName -ErrorAction SilentlyContinue

if(!$azureDisk)
{

if($diskConfig.Length -gt 1)
{
Write-Host "Adding OS disk [$newDiskName] -OS [$($diskConfig[1])] -MediaLocation [$($diskConfig[0])]"

#Expect the OS disk to be the first disk in the array
$OSDisk = Add-AzureDisk -DiskName $newDiskName -OS $diskConfig[1] -MediaLocation $diskConfig[0]

$vmconfig = New-AzureVMConfig -Name $DestVMName -InstanceSize $VMSize -DiskName $OSDisk.DiskName

}
else
{
Write-Host "Adding Data disk [$newDiskName] -MediaLocation [$($diskConfig[0])]"

Add-AzureDisk -DiskName $newDiskName -MediaLocation $diskConfig[0]

$datadisk_details[$luncount] = $newDiskName

$luncount = $luncount + 1
}
}
else
{
Write-Error "Unable to add Azure Disk [$newDiskName] as it already exists"
Write-Error "You can use Remove-AzureDisk -DiskName $newDiskName to remove the old disk"
return
}
}

#add all the data disks to the VM configuration
foreach($lun in $datadisk_details.Keys)
{
$datadisk_name = $datadisk_details[$lun]

Write-Host "Adding data disk [$datadisk_name] to the VM configuration"

$vmconfig | Add-AzureDataDisk -Import -DiskName $datadisk_name -LUN $lun
}

#read all the endpoints in the source VM and create them in the destination VM
#NOTE: I don't copy ACLs yet. I need to add this.
$SourceVM | Get-AzureEndpoint | foreach {

if($_.LBSetName -eq $null)
{
Write-Host "Name is [$($_.Name)], Port is [$($_.Port)], LocalPort is [$($_.LocalPort)], Protocol is [$($_.Protocol)], EnableDirectServerReturn is [$($_.EnableDirectServerReturn)]"
$vmconfig | Add-AzureEndpoint -Name $_.Name -LocalPort $_.LocalPort -PublicPort $_.Port -Protocol $_.Protocol -DirectServerReturn $_.EnableDirectServerReturn
}
else
{
Write-Host "Name is [$($_.Name)], Port is [$($_.Port)], LocalPort is [$($_.LocalPort)], Protocol is [$($_.Protocol)], EnableDirectServerReturn is [$($_.EnableDirectServerReturn)], LBSetName is [$($_.LBSetName)]"
$vmconfig | Add-AzureEndpoint -Name $_.Name -LocalPort $_.LocalPort -PublicPort $_.Port -Protocol $_.Protocol -DirectServerReturn $_.EnableDirectServerReturn -LBSetName $_.LBSetName -DefaultProbe
}
}

#create the new VM, deploying into the virtual network if one was specified
if( $DeployToVNet )
{
Write-Host "Virtual Network Name is [$VNetName] and Subnet Name is [$SubnetName]"

$vmconfig | Set-AzureSubnet -SubnetNames $SubnetName
$vmconfig | New-AzureVM -ServiceName $DestServiceName -VNetName $VNetName -Location $Location
}
else
{
#Creating the virtual machine
$vmconfig | New-AzureVM -ServiceName $DestServiceName -Location $Location
}

#get any vm extensions
#there may be other types of extensions in the source vm. I don't copy them yet
$SourceVM | Get-AzureVMExtension | foreach {
Write-Host "ExtensionName [$($_.ExtensionName)] Publisher [$($_.Publisher)] Version [$($_.Version)] ReferenceName [$($_.ReferenceName)] State [$($_.State)] RoleName [$($_.RoleName)]"
Get-AzureVM -ServiceName $DestServiceName -Name $DestVMName -Verbose | Set-AzureVMExtension -ExtensionName $_.ExtensionName -Publisher $_.Publisher -Version $_.Version -ReferenceName $_.ReferenceName -Verbose | Update-AzureVM -Verbose
}

Backup Blob to another Blob

Select-AzureSubscription "SubscriptionName"

# I am making a VHD backup - the VHD blob to copy #
$blobName = "1436836594602.vhd"

# Source Storage Account Information #
$sourceStorageAccountName = "SomeName"
$sourceKey = "SourcePrimaryKey"
$sourceContext = New-AzureStorageContext -StorageAccountName $sourceStorageAccountName -StorageAccountKey $sourceKey
$sourceContainer = "vhds"

# Destination Storage Account Information #
$destinationStorageAccountName = "Backupprodvmdiskfiles"
$destinationKey = "DestinationPrimaryKey"
$destinationContext = New-AzureStorageContext -StorageAccountName $destinationStorageAccountName -StorageAccountKey $destinationKey

# Create the destination container #
$destinationContainerName = "vhds"
New-AzureStorageContainer -Name $destinationContainerName -Context $destinationContext

# Copy the blob #
$blobCopy = Start-AzureStorageBlobCopy -DestContainer $destinationContainerName `
-DestContext $destinationContext `
-SrcBlob $blobName `
-Context $sourceContext `
-SrcContainer $sourceContainer

# Poll every 30 seconds until the copy is no longer pending #
while(($blobCopy | Get-AzureStorageBlobCopyState).Status -eq "Pending")
{
Start-Sleep -s 30
$blobCopy | Get-AzureStorageBlobCopyState
}