FTP “550 Access is denied.”

This is a server-side permissions problem. The FTP user account does not have permission to create a new file, or overwrite an existing one, in the current remote directory. The fix is on the server side: either fix the directory permissions so the FTP user has write access, or fix the permissions of the FTP user itself, or both.
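For example, if the server is IIS FTP on Windows, granting the FTP account modify rights on the upload folder usually clears the 550. A minimal sketch, run from an elevated PowerShell prompt on the server (the folder path and account name below are placeholders, not values from the error itself):

# Grant the FTP account modify (M) rights on the folder and everything created inside it
icacls "C:\inetpub\ftproot\uploads" /grant "ftpuser:(OI)(CI)M"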


How to change the Azure subscription in PowerShell

Follow these steps –

  1. Import your publish settings file and set your Azure account:
     Import-AzurePublishSettingsFile yourpublishsettings.publishsettings
  2. List the subscriptions available to the account:
     Get-AzureSubscription
  3. Finally, select the subscription name you want to use:
     Select-AzureSubscription subscription_name

Select-AzureSubscription <subscription-name> will set the current subscription but will not make it the default, so the change will be gone the next time you open PowerShell.
If you’d like the subscription change to persist across sessions, use Select-AzureSubscription -Default <subscription name>
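Putting it together, a typical session looks like this (the publish settings path and subscription name are placeholders for your own):

# Set up the account from the downloaded publish settings file
Import-AzurePublishSettingsFile "C:\azure\mysubscription.publishsettings"

# See which subscriptions the account can use
Get-AzureSubscription | Select-Object SubscriptionName

# Make it the current subscription for this session
Select-AzureSubscription -SubscriptionName "MySubscription"

# ...and the default, so the choice persists across sessions
Select-AzureSubscription -SubscriptionName "MySubscription" -Default

# Confirm the current subscription
Get-AzureSubscription -Current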

How to assign a reserved / static public IP Address to a virtual machine on Azure

Using PowerShell, first you need to reserve a new IP if you don't have one already -

New-AzureReservedIP -ReservedIPName ReservedIP -Location "Australia East"

Then assign this reserved IP to a virtual machine -

Set-AzureReservedIPAssociation -ReservedIPName ReservedIP -ServiceName VMServiceName
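To double-check the reservation and see which cloud service it is associated with, the same classic module has Get-AzureReservedIP (a quick sketch; the name matches the reservation created above):

# Show the reserved IP, its address and the cloud service it is attached to
Get-AzureReservedIP -ReservedIPName ReservedIP

# Or list every reserved IP in the subscription
Get-AzureReservedIP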

Move Linux logs to AWS S3

One of the best and cheapest places I have found for backing up data is AWS S3. It's cheap, reasonably fast, and easy to manage using the command line, shell scripts and PowerShell.

Below are the steps to move daily generated logs to AWS S3. We will be using S3cmd, a Linux-based S3 client, to copy the data to S3.

To install S3cmd, follow these steps:

  1. As a superuser (root) go to /etc/yum.repos.d
  2. Download the s3tools.repo file for your distribution from s3tools.org. For instance, run  wget http://s3tools.org/repo/RHEL_6/s3tools.repo  if you’re on CentOS 6.x
  3. Run yum install s3cmd.

Then run s3cmd --configure to add your access key and secret key for the S3 bucket.

Next, copy all the log files using the following syntax:

s3cmd put log* s3://prod-logs/

Here ‘log’ is the prefix for the log files and prod-logs is my bucket name.

Next, if you want to remove the logs that have now been copied to the S3 bucket, use the below:

rm -rf log*

 

If you want to turn this into a batch script that moves files to the S3 bucket daily, follow the steps below -

Write a shell script to do the copy/move to S3 and set permissions

#!/bin/sh
# Upload today's logs to the S3 bucket, then remove the local copies
# only if the upload succeeded
s3cmd put log* s3://prod-logs/ && rm -f log*

Make the script executable

$ chmod +x ScriptName.sh

Run it every day just before midnight (23:59) by adding a cron job

$ crontab -e

59 23 * * * /ScriptName.sh

 

Save the cron job.

 

The above will move all your daily logs to S3.
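Since the intro mentioned PowerShell as another way to manage S3, here is a rough sketch of the same upload driven from a Windows box with the AWS Tools for PowerShell. This assumes the AWSPowerShell module is installed; the local path, keys and bucket name are placeholders mirroring the example above:

Import-Module AWSPowerShell

# One-off equivalent of s3cmd --configure: store the access and secret keys
Set-AWSCredentials -AccessKey "YOUR_ACCESS_KEY" -SecretKey "YOUR_SECRET_KEY" -StoreAs default

# Push each local log file to the prod-logs bucket
Get-ChildItem "C:\logs\log*" | ForEach-Object {
    Write-S3Object -BucketName "prod-logs" -File $_.FullName -Key $_.Name
}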

 

 

 

Get going with ExceptionLess

The other day, I was trying to come up with a centralized, real-time error and log reporting solution and came across https://exceptionless.com/. After some R&D, I found it to be a perfect on-premises, free log management tool that works really well with the entire .NET stack and JavaScript.

There is a hosted online version provided by ExceptionLess, but I am going to describe the process for self-hosting below.
ExceptionLess has two key components:
– An API to receive logs, messages and events. The core is written in .NET as a Web API, backed by MongoDB, Redis and Elasticsearch.
– A dashboard application written in AngularJS that calls the API and provides a decent enough dashboard to browse logs.

Each project has its own API key, so we can be sure the logs are kept separate for each project/module, and the dashboard has enough filters to separate them out. So we can have logs from different projects, with proper access control, on the same dashboard.
At the time of this blog, I used version 2.1 and the notes below are based on that. Version 3.0 will be out soon and will make internal hosting better. For those who do not want internal hosting, I recommend using the ExceptionLess hosted service. More details here: http://exceptionless.com/pricing

The installation is really straightforward, as with any .NET site.
– Install MongoDB, Redis, Elasticsearch and IIS URL Rewrite on the server.

Download the API code from https://github.com/exceptionless/Exceptionless, then rebuild/publish the API on IIS.
Update the connection strings in the Web.config file to point to your Elasticsearch and Redis servers. If you are doing everything on the same server, the standard ports are 6379 for Redis and 9200 for Elasticsearch.

Update the app settings (BaseURL, EnableSSL, WebsiteMode, etc.) in the Web.config file. For me these were http://serverDNS or http://serverIP, SSL = false and mode = "Production".

Update the mail settings in the Web.config file. I was using my own SMTP server so I updated it to this and all the emails worked fine.
<mailSettings>
  <smtp from="no-reply@no-reply.com">
    <network host="smtp.live.com" password="mypasswordgoeshere" port="587" userName="MyEmail@live.co.uk" enableSsl="true"/>
  </smtp>
</mailSettings>

Update the machineKey in the web.config file. You can use this link to generate a new one: http://www.a2zmenu.com/utility/Machine-Key-Generator.aspx

Update the ‘BaseURL’ to the UI URL I am setting up next.

To install the Angular frontend, use the release here:
https://github.com/exceptionless/Exceptionless.UI/releases

Update the app.config.*.js file with your settings. The key here is the BaseUrl; this should point to the API URL.

The first user I signed up as became the UI admin.
So my final API layer web.config is like this :

<connectionStrings>
  <add name="RedisConnectionString" connectionString="127.0.0.1:6379" />
  <add name="ElasticSearchConnectionString" connectionString="http://localhost:9200" />
</connectionStrings>
<appSettings>
  <!-- Base url for the ui used to build links in emails and other places. -->
  <add key="BaseURL" value="http://xxx.xxx.xxx.xxx:8081/" />
  <!-- Controls whether SSL is required. Only enable this if you have SSL configured. -->
  <add key="EnableSSL" value="false" />
  <!--
    Dev: Use this mode when debugging. (Outbound emails restricted)
    QA: Use this mode when deployed to staging. (Outbound emails restricted)
    Production: Use this mode when deployed to production.
  -->
  <add key="WebsiteMode" value="Production" />

And the app.config file on the UI layer is like this:
(function () {
  "use strict";

  angular.module('app.config', [])
    .constant('BASE_URL', 'http://xxx.xxx.xxx.xxx/api/v2')
    .constant('FACEBOOK_APPID', '')
    .constant('GITHUB_APPID', '')
    .constant('GOOGLE_APPID', '')
    .constant('INTERCOM_APPID', '')
    .constant('LIVE_APPID', '')
    .constant('STRIPE_PUBLISHABLE_KEY', '')
    .constant('SYSTEM_NOTIFICATION_MESSAGE', '')
    .constant('USE_HTML5_MODE', true)
    .constant('USE_SSL', false)
    .constant('VERSION', '2.0.441')
  ;
}());
You can also enable API layer debugging by using NLog.
To do that, I just updated my web.config with these settings:
<configSections>
  <section name="exceptionless" type="Exceptionless.ExceptionlessSection, Exceptionless.Extras" />
  <section name="nlog" type="NLog.Config.ConfigSectionHandler, NLog"/>
</configSections>
<nlog xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- Daily log files, deleted after 7 days -->
    <target name="Search.Web" xsi:type="File" fileName="C:\inetpub\wwwroot\errors.web.log" archiveFileName="C:\inetpub\wwwroot\errors.web.{#}.log" archiveEvery="Day" archiveNumbering="Rolling" maxArchiveFiles="7" />
  </targets>
  <rules>
    <logger name="*" minlevel="Info" writeTo="Search.Web" />
  </rules>
</nlog>

Then there are a lot of samples showing how to make calls to the API and see the results on the dashboard:

https://github.com/exceptionless/Exceptionless.Net

But in summary, you have to do the settings as below -

Install Exceptionless using the NuGet package manager.

Then update the config file:

<exceptionless apiKey="YOUR_API_KEY" serverUrl="http://localhost" enableSSL="false" />

Or

using Exceptionless.Configuration;
[assembly: Exceptionless("YOUR_API_KEY", ServerUrl = "http://localhost", EnableSSL = false)]

The API key here is provided to you when you create a new project in the UI.

And then make calls to the API to submit logs, events or messages.
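Before wiring up the .NET client, it can be handy to check that the self-hosted API is reachable by posting a test event directly. A hedged PowerShell sketch, assuming the v2 events endpoint and bearer-token authentication with the project API key (the URL and payload fields are illustrative, not taken from a real project):

$apiKey = "YOUR_API_KEY"
$body = '{ "type": "log", "source": "smoke-test", "message": "Hello from PowerShell" }'

# POST a single log event to the self-hosted API
Invoke-RestMethod -Uri "http://localhost/api/v2/events" `
    -Method Post `
    -ContentType "application/json" `
    -Headers @{ Authorization = "Bearer $apiKey" } `
    -Body $body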

Azure Data Factory – MySQL to Azure SQL

The following steps describe how to move data from an on-premises MySQL server to MSSQL on Azure.

Every Data Factory job has 4 key components –

Gateway, linked services, datasets and pipeline.

The gateway here is what provides access to your MySQL server. Usually, when setting up a data factory in the Azure portal, you will get a link to download and install the gateway on the server. I did it the other way around and went to this link to install the gateway: https://www.microsoft.com/en-au/download/details.aspx?id=39717

Then I copied the key for the data factory from the portal, and in a few seconds I could see in the portal that the data factory showed 1 gateway online.

I have one VM with a static IP that can connect to all the test DBs and has the gateway installed. I just keep changing the key to connect it to a different data factory.

Next are your linked services. These are like your connection strings to the various servers. Below I have added source and destination examples for a MySQL source and an Azure SQL target, but you can always change them. You can get more details on how to change the JSON based on your source and target here: https://msdn.microsoft.com/en-us/library/azure/dn835050.aspx

{
  "name": "SOURCE",
  "properties": {
    "description": "",
    "server": "127.0.0.1",
    "database": "DBName",
    "schema": "",
    "authenticationType": "Basic",
    "username": "user_id",
    "password": "**********",
    "gatewayName": "GatewayName-CheckPortal",
    "encryptedCredential": null,
    "type": "OnPremisesMySqlLinkedService"
  }
}

 

{
  "name": "TARGET",
  "properties": {
    "description": "",
    "connectionString": "Data Source=tcp:azuresql.database.windows.net,1433;Initial Catalog=DBName;User ID=User_Id;Password=**********;Encrypt=True;TrustServerCertificate=False;Application Name=\"Azure Data Factory Linked Service\"",
    "type": "AzureSqlLinkedService"
  }
}

 

Next are your datasets. Here you are defining the location of the data within your source and target. The main property here is 'location', which gives details of the dataset type and name and ties it back to the linked service. So here you are defining your source of data within the servers defined above. Again, visit this link in case you want to change your endpoints: https://msdn.microsoft.com/en-us/library/azure/dn835050.aspx

{
  "name": "Source",
  "properties": {
    "published": false,
    "location": {
      "type": "RelationalTableLocation",
      "tableName": "tableName",
      "linkedServiceName": "SOURCE"
    },
    "availability": {
      "frequency": "Hour",
      "interval": 1,
      "waitOnExternal": {}
    }
  }
}

{
  "name": "Target",
  "properties": {
    "published": false,
    "location": {
      "type": "AzureSqlTableLocation",
      "tableName": "TableName",
      "linkedServiceName": "TARGET"
    },
    "availability": {
      "frequency": "Hour",
      "interval": 1
    }
  }
}

What connects all of this in the end is the pipeline. The type here is 'CopyActivity' since we are copying data; the source query selects the data we want to move, and the sink has the name of the stored procedure we are calling to write the data into Azure SQL. You can keep this simple by just giving the table name on Azure SQL. Example: https://msdn.microsoft.com/en-us/library/azure/34d563cf-1163-47e5-96b8-9c7aec5f37d2#TableSink

{
  "name": "Pipeline_MySQL_To_AzureSQL",
  "properties": {
    "activities": [
      {
        "type": "CopyActivity",
        "transformation": {
          "source": {
            "type": "RelationalSource",
            "query": "select * from tableName limit 1000;"
          },
          "sink": {
            "type": "SqlSink",
            "sqlWriterStoredProcedureName": "spOverwriteSomeName",
            "sqlWriterTableType": "SomeTableType",
            "writeBatchSize": 0,
            "writeBatchTimeout": "00:00:00"
          }
        },
        "inputs": [
          {
            "name": "Source"
          }
        ],
        "outputs": [
          {
            "name": "Target"
          }
        ],
        "policy": {
          "timeout": "01:00:00",
          "concurrency": 1,
          "executionPriorityOrder": "NewestFirst",
          "retry": 2
        },
        "name": "MySQLToBlobCopyActivity"
      }
    ],
    "start": "2015-07-12T13:00:00Z",
    "end": "2015-07-12T16:00:00Z",
    "isPaused": false
  }
}

 

A few notes from my experience:

Make sure to add waitOnExternal in your source dataset (as in the Source dataset JSON above), or else you will see pending execution or pending validation in the portal. If you don't know what to add there, just keep it empty and the defaults will be picked up.

Make sure the gateway machine has MySQL Connector 6.6.5 installed if you are trying to connect to a MySQL server. See https://msdn.microsoft.com/en-us/library/mt171579.aspx and then get http://dev.mysql.com/downloads/file.php?id=412152

I also had the target tables ready with a proper clustered index, so that no error is thrown in case the source didn't have one.
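To make the sink above concrete, here is a hedged sketch of the Azure SQL objects it refers to, run through Invoke-Sqlcmd. The table, table type, stored procedure and column names are hypothetical placeholders that simply mirror the JSON (TableName, SomeTableType, spOverwriteSomeName), not a real schema; per the TableSink example linked above, the copy activity passes each batch of rows to the stored procedure as a table-valued parameter of the declared type, named after the target dataset's tableName:

# Connection details mirror the TARGET linked service above
$conn = @{ ServerInstance = "azuresql.database.windows.net"; Database = "DBName"; Username = "User_Id"; Password = "**********" }

# Target table (with a clustered index, as per the note above) and the table type named in sqlWriterTableType
Invoke-Sqlcmd @conn -Query @"
CREATE TABLE dbo.TableName
(
    Id      INT NOT NULL,
    Payload NVARCHAR(MAX),
    CONSTRAINT PK_TableName PRIMARY KEY CLUSTERED (Id)
);
CREATE TYPE dbo.SomeTableType AS TABLE
(
    Id      INT NOT NULL,
    Payload NVARCHAR(MAX)
);
"@

# Stored procedure named in sqlWriterStoredProcedureName; it receives each batch as @TableName and writes it out
Invoke-Sqlcmd @conn -Query @"
CREATE PROCEDURE dbo.spOverwriteSomeName
    @TableName dbo.SomeTableType READONLY
AS
BEGIN
    INSERT INTO dbo.TableName (Id, Payload)
    SELECT Id, Payload FROM @TableName;
END
"@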