
Sorry, this entry is only available in Italian.
NAKIVO Backup & Replication provides the ability to run a script before a job begins (a pre-job script) and after the job has completed (a post-job script).
Using an SSH client such as PuTTY, log in to the NAKIVO Director. The default SSH port for the NAKIVO Director is 2221, and the default credentials are:
Username: nkvuser
Password: QExS-6b%3D
For example:
ssh -p 2221 nkvuser@your-appliance-address
Then move to /opt and become root (you will be asked for the password again):
cd /opt
sudo su
Create a new folder and change its permissions:
mkdir backup
chmod 777 backup
Using, for example, WinSCP with the same configuration as above (port and credentials), browse to the new folder /opt/backup and upload your .sh scripts there.
Back in PuTTY, give the files execute permission:
cd /opt/backup
chmod +x ./your_file.sh
Take note of the full path of the script file: /opt/backup/your_file.sh
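As a minimal illustration, a pre-job script could simply record when the job starts; the log path and message below are our own placeholders, not anything NAKIVO prescribes:

```shell
#!/bin/sh
# Hypothetical pre-job script: log the job start time before NAKIVO runs the backup.
# LOGFILE is an assumption; adjust the path on your appliance.
LOGFILE=/tmp/nakivo_prejob.log

echo "pre-job started at $(date '+%Y-%m-%d %H:%M:%S')" >> "$LOGFILE"
# NAKIVO uses the script's exit status to decide whether the pre-job step succeeded,
# so make sure the last command exits with 0 on success.
```

A post-job script follows the same pattern, run after the job completes instead of before.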
Now, in the NAKIVO web interface, you have to hook your scripts into job execution. You need to use this guide:
The NAKIVO backup software license entitles you to NAKIVO support to resolve any problems. To contact support, click "Help" in the left menu and then "Request support". A new screen opens where you have to click "Create new bundle", which opens a form for generating the bundle that will be sent to NAKIVO. The NAKIVO software also lets you set individual jobs or devices to record a verbose log, which may help NAKIVO technicians solve your problem.
To enable verbose logging in a job, in the “Options” section you must select “Bottleneck detection”.
To enable verbose logging in the transporters (Source and Target) used in a job, select the individual transporter and select “Enable debug logging for this node”.
In this environment we have a NAKIVO appliance and a QNAP NAS used as the NAKIVO backup repository. We need to update the NAKIVO appliance, which is currently at version 10.6.
In the NAKIVO web interface, navigate to "Settings" – "Software Update". The procedure offers you version 10.7; proceed. You'll receive a warning that "remote transporters will not be updated automatically". After this procedure NAKIVO will be at version 10.7 and no further updates will be available. The web console, however, reports that your QNAP transporter is "out of date", so you need to update it before you can use it. And here is the problem: as we'll see, from the NAKIVO QNAP page you can only install version 10.9 of the QNAP transporter, which is newer than the current version of the NAKIVO appliance. So you first need to install version 10.9 on your NAKIVO appliance, and you have to do it manually.
From the NAKIVO update site, download the "Virtual Appliance" updater. You'll download the file
NAKIVO_Backup_Replication_v10.9.0.76010_Updater.sh
Using WinSCP, connect to your NAKIVO appliance via SSH and upload the .sh file to the folder /opt/nakivo/updates.
To connect to a NAKIVO appliance via SSH, the default credentials are:
Username: nkvuser
Password: QExS-6b%3D
Now follow these instructions to update the application: Nakivo manual.
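The file-handling part of the manual update can be sketched as follows. This is an assumption-laden illustration (the NAKIVO manual is authoritative for the actual procedure): we simulate under /tmp so the steps can be tried safely, while on the real appliance the directory is /opt/nakivo/updates.

```shell
#!/bin/sh
# Sketch of the manual update file handling (illustrative only).
UPDATES_DIR=/tmp/nakivo/updates   # real appliance path: /opt/nakivo/updates
UPDATER=NAKIVO_Backup_Replication_v10.9.0.76010_Updater.sh

mkdir -p "$UPDATES_DIR"
# In real life WinSCP performs the upload; here we create a placeholder file.
printf '#!/bin/sh\n' > "$UPDATES_DIR/$UPDATER"

# Make the updater executable; on the appliance you would then run it as root:
chmod +x "$UPDATES_DIR/$UPDATER"
# sudo "$UPDATES_DIR/$UPDATER"
```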
In our environment, the QNAP has NAKIVO Transporter app version 10.6.0, compatible with the starting version of our NAKIVO appliance. It is not possible to update this version automatically from the QNAP; you need to download the new transporter from the NAKIVO site and update it via the QNAP web console.
On the NAKIVO site, you have to choose between the Intel and the ARM transporter package. You'll download a .qpkg file.
So, open the QNAP web console and install it manually:
If you are unable to install the NAKIVO Transporter package because you receive an error reporting that the digital signature is invalid, you need to allow installation of applications without a valid digital signature. Click the Settings icon in the top-right corner of the App Center; on the General tab, check the option "Allow installation of applications without a valid digital signature".
Even though NAKIVO suggests using Chrome or Firefox for its web interface, we had problems using Chrome. We solved them by using Microsoft Edge.
You are trying to back up a SQL Server database to a mapped network drive. You have already mapped the drive in Windows and can see it in Windows Explorer, but the mapped drive does not appear when you open the backup procedure in SQL Server Management Studio.
First of all, you do not need the network drive previously mapped in Explorer. You have to map the drive from within SQL Server, and you will not be able to see it in Explorer.
You need to execute the commands below to enable xp_cmdshell, which is disabled by default for security reasons (please turn it off again once you are done with the work). Using SSMS, execute these commands:
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure 'xp_cmdshell', 1;
GO
RECONFIGURE;
GO
After this, the following command, which you can use in SSMS to test the previous operations, should return a successful result:
EXEC XP_CMDSHELL 'Dir C:'
To map the network drive you use the same command you would use at the Windows command prompt:
net use Z: \\networkShare\Test
So, using SSMS you have to run the command
EXEC XP_CMDSHELL 'net use Z: \\networkShare\Test'
Now you should test this connection with the command
EXEC XP_CMDSHELL 'Dir Z:'
But the most important goal is that you will now be able to see drive Z during the backup procedure in SSMS.
The command above completes successfully, without asking for a username and password, if the account running SQL Server has authorized access to the network share. If it does not, the easiest way is to pass the credentials to the "net use" command, as you would at the command prompt:
net use Z: \\networkShare\Test /u:domainname\username password
So, using SSMS you have to run the command
EXEC XP_CMDSHELL 'net use Z: \\networkShare\Test /u:domainname\username password'
To build a backup environment it is recommended to follow the 3-2-1 rule: keep at least three copies of your data, on two different media, with one copy off-site.
For the off-site copy we very often think of the cloud; it's a great solution, but the costs are still quite high if we need to move terabytes. If, on the other hand, we only have to save a few gigabytes, the solution is attractive.
For companies that need to move terabytes of data, a solution can be backup to a remote location, connected for example via a VPN. The cost of saving the data is then absorbed by the purchase cost of a fairly large NAS unit.
Let us try to detail such a solution.
The speed of the backup will be determined by the slowest internet connection speed between the 2 locations. Let’s suppose that the 2 offices are able to communicate at the speed of 300 Mb / s.
300 Mb/s = 300,000,000 b/s
300,000,000 b/s ÷ 8 = 37,500,000 B/s = 37.5 MB/s
To move 1 GB (1,000 MB) of data over a 300 Mb/s network takes 1,000 / 37.5 ≈ 26.7 seconds.
To move 1 GB of data over a 100 Mb/s network (12.5 MB/s) takes 1,000 / 12.5 = 80 seconds.
Backup software all have the ability to perform incremental backups but the first backup that is performed will inevitably be very long. So it needs to be planned carefully.
If we want to move a 20 GB virtual machine across a 300 Mb/s network, it will take about 9 minutes (20,000 MB / 37.5 MB/s ≈ 533 seconds); a 1 TB virtual machine will take about 7.5 hours. The incremental backups of the various products on the market, under "normal" conditions of server activity, typically bring subsequent backups down to around 20% of the time of the first backup.
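The arithmetic above can be wrapped in a small helper. The function name and the choice of awk are our own, for illustration only:

```shell
#!/bin/sh
# Transfer-time estimator matching the figures above.
# Usage: transfer_time <size in GB> <link speed in Mb/s>
transfer_time() {
    awk -v gb="$1" -v mbps="$2" 'BEGIN {
        mbytes_per_s = mbps / 8                     # Mb/s -> MB/s
        seconds = (gb * 1000) / mbytes_per_s        # 1 GB = 1,000 MB here
        printf "%.1f GB at %s Mb/s: %.0f s (%.1f h)\n", gb, mbps, seconds, seconds / 3600
    }'
}

transfer_time 1 300      # ~27 s
transfer_time 20 300     # ~533 s, about 9 minutes
transfer_time 1000 300   # ~7.4 h
```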
The source site's data is stored on a NAS at the destination site. Clearly, choosing a NAS equipped with a 10 Gb/s Ethernet card does not improve our remote backup, since the transfer speed over the VPN is below 1 Gb/s.
Could the NAS disks further reduce the copy speed? To answer this, let's look at a table that tries to put numbers on the write speeds of various disk systems. We took the data from Wikipedia and then processed it.
| Drive (Type / RPM) | MB/s (64 KB block, random) | MB/s (512 KB block, random) | MB/s, random average | MB/s (large block, sequential) | MB/s, sequential average |
|---|---|---|---|---|---|
| FC / 15K | 9.7 – 10.8 | 49.7 – 63.1 | 33.3 | 73.5 – 127.5 | 100.5 |
| SAS / 15K | 11.2 – 12.3 | 58.9 – 68.9 | 37.8 | 91.5 – 126.3 | 108.9 |
| FC / 10K | 8.3 – 9.2 | 40.9 – 53.1 | 27.9 | 58.1 – 107.2 | 82.65 |
| SAS / 10K | 8.3 – 9.2 | 40.9 – 53.1 | 27.9 | 58.1 – 107.2 | 82.65 |
| SAS/SATA / 7200 | 4.4 – 4.9 | 24.3 – 32.1 | 16.4 | 43.4 – 97.8 | 70.6 |
| SATA / 5400 | 3.5 | 22.6 | 13.05 | 47.1 (estimate) | |
| SSD | | | 520 | | 520 |
Backup software normally writes to disk sequentially, so, numbers in hand, even a 5400 rpm SATA drive could do the job in our scenario. Buying drives faster than 7200 rpm, on the other hand, would not bring an improvement.
To check the outcome of a Microsoft Azure Backup run, we can take advantage of the fact that, if the backup fails, specific events are generated in the CloudBackup event log.
Copy and paste the following code in a new file and modify it with your data (mail server, user, password, messages).
$SMTPServer = "YOUR SMTP SERVER"
$SMTPPort = "25"
$Username = "USERNAME TO ACCESS SERVER"
$Password = "PASSWORD"
$to = "Email recipient"
# $cc = "cc email recipient"
$subject = "Error Backup MyServer"
$body = "backup failed"
# $attachment = ""

$message = New-Object System.Net.Mail.MailMessage
$message.subject = $subject
$message.body = $body
$message.to.add($to)
# $message.cc.add($cc)
$message.from = $username
# $message.attachments.add($attachment)

$smtp = New-Object System.Net.Mail.SmtpClient($SMTPServer, $SMTPPort)
$smtp.EnableSSL = $true
$smtp.Credentials = New-Object System.Net.NetworkCredential($Username, $Password)
$smtp.send($message)
Write-Host "Mail Sent"
Save it as a file with the extension .ps1.
Now create a scheduled task that runs the .ps1 file, triggered by a custom event filter; this is the step that links the failure events to the mail script. Use the following XML query as the filter:

<QueryList>
  <Query Id="0" Path="CloudBackup">
    <Select Path="CloudBackup">*[System[(Level=1 or Level=2) and (EventID=5 or EventID=10 or EventID=11 or EventID=12 or EventID=13 or EventID=14 or EventID=16 or EventID=18)]]</Select>
  </Query>
</QueryList>
From now on, an email should be sent to you when the backup fails.
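The same trigger can also be registered from an elevated command prompt with schtasks. This is a hedged sketch: the task name and script path are our own placeholders, and the event filter is simplified to the severity levels only (the full XML query above can be pasted in via the Task Scheduler GUI instead):

```shell
rem Placeholder sketch (Windows command prompt): register a task that runs the
rem alert script whenever a critical or error event appears in the CloudBackup log.
rem "AzureBackupAlert" and C:\scripts\backup-alert.ps1 are hypothetical names.
schtasks /Create /TN "AzureBackupAlert" ^
  /SC ONEVENT /EC CloudBackup ^
  /MO "*[System[(Level=1 or Level=2)]]" ^
  /TR "powershell.exe -ExecutionPolicy Bypass -File C:\scripts\backup-alert.ps1"
```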