Nagios/Opsview: Check Symantec AV Definitions

This morning, whilst deploying a modified version of the Symantec Anti-Virus check from MonitoringExchange.org, I noticed that on my 64-bit hosts the check was not returning the correct data; instead of the expected output I was receiving the following error:

check_av.vbs(51, 1) Microsoft VBScript runtime error: Type mismatch: 'strValue'

Initially I thought this could be down to the new installs being Symantec Endpoint Protection rather than the Symantec Anti-Virus 10.x I had used on previous occasions, but the SEP installs on the 32-bit systems were working fine whilst the 64-bit ones were not.

A quick look in the registry showed me that the value read by the script is not there on the 64-bit version and has been moved to another location (HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Symantec\SharedDefs\DefWatch). I sat down with the script and quickly wrote in some extra code to change the search path depending on the operating system architecture. I also added some more error checking, so if the key doesn't exist the check now returns an UNKNOWN status and a relevant error message rather than exiting with an OK status.

As I use NSClient++ to monitor my Windows servers, I simply save the script to the NSClient++\scripts folder and add the following line to my NSC.ini under [NRPE Handlers]:

check_av=cscript.exe //NoLogo scripts\check_av.vbs /W:$ARG1$ /c:$ARG2$
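
Depending on your NSClient++ version you may also need to allow arguments to be passed over NRPE before the $ARG$ values above will work – typically something along the lines of the following in the same ini file (the exact section and setting name can vary between versions, so treat this as a pointer rather than gospel):

[NRPE]
allow_arguments=1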

Then from within Nagios or Opsview define the command for check_nrpe

check_nrpe -H $HOSTADDRESS$ -c check_av -a 2 3
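
If you are defining this in plain Nagios config files rather than through the Opsview UI, the command and a matching service might look something like the sketch below – the command name, host name and service template here are purely illustrative:

define command{
    command_name    check_nrpe_av
    command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c check_av -a $ARG1$ $ARG2$
    }

define service{
    use                     generic-service
    host_name               winserver01
    service_description     Symantec AV Definitions
    check_command           check_nrpe_av!2!3
    }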

The full script is listed below and is also available on Monitoring Exchange (link):

' Script: check_av.vbs
' Author: Matt White
' Version: 1.1
' Date: 01-03-2010
' Details: Check the current definitions for Symantec AntiVirus are within acceptable bounds
' Usage: cscript //nologo check_av.vbs /w:<days> /c:<days>

' Define Constants for the script exiting
Const intOK = 0
Const intWarning = 1
Const intCritical = 2
Const intUnknown = 3

' Create required objects
Set ObjShell = CreateObject("WScript.Shell")
Set ObjProcess = ObjShell.Environment("Process")

const HKEY_CURRENT_USER = &H80000001
const HKEY_LOCAL_MACHINE = &H80000002

Dim strKeyPath, strSymantecVer
Dim intWarnLevel, intCritLevel, intYear, intMonth , intDay, intVer_Major, intDateDifference
Dim year, Month , Day, Ver_Major
Dim arrValue

' Parse Arguments to find Warning and Critical Levels
If Wscript.Arguments.Named.Exists("w") Then
intWarnLevel = Cint(Wscript.Arguments.Named("w"))
Else
intWarnLevel = 2
End If

If Wscript.Arguments.Named.Exists("c") Then
intCritLevel = Cint(Wscript.Arguments.Named("c"))
Else
intCritLevel = 4
End If

' Determine CPU architecture for correct location of the registry key
strCPUArch = objProcess("PROCESSOR_ARCHITECTURE")
If InStr(1, strCPUArch, "x86") > 0 Then
    strKeyPath = "SOFTWARE\Symantec\SharedDefs\DefWatch"
ElseIf InStr(1, strCPUArch, "64") > 0 Then
    strKeyPath = "SOFTWARE\Wow6432Node\Symantec\SharedDefs\DefWatch"
End If

' Query Registry using WMI to obtain the definition value
Set oReg = GetObject("winmgmts:{impersonationLevel=impersonate}!\\.\root\default:StdRegProv")
oReg.GetBinaryValue HKEY_LOCAL_MACHINE, strKeyPath, "DefVersion", arrValue

' If the query doesn't return an array Quit - Unknown
If IsArray(arrValue) = vbFalse Then
    Wscript.Echo "UNKNOWN - Unable to read Definitions from the Registry"
    Wscript.Quit(intUnknown)
End If

' Generate output from the registry value
intYear = CLng("&H" & hex(arrValue(1)) & hex(arrValue(0)))
intMonth = CLng("&H" & hex(arrValue(3)) & hex(arrValue(2)))
intDay = CLng("&H" & hex(arrValue(7)) & hex(arrValue(6)))
intVer_Major = CLng("&H" & hex(arrValue(17)) & hex(arrValue(16)))
strSymantecVer= intYear & "-" & intMonth & "-" & intDay & " rev. " & intVer_Major
intDateDifference = DateDiff("d", intYear & "/" & intMonth & "/" & intDay, Date)

' Output current version and definition age as Performance data
Wscript.Echo("Current Definitions: " & strSymantecVer & " Which are " & intDateDifference & " days old" & "|age=" & intDateDifference)

If intDateDifference > intCritLevel Then
    Wscript.Quit(intCritical)
ElseIf intDateDifference > intWarnLevel Then
    Wscript.Quit(intWarning)
ElseIf intDateDifference <= intWarnLevel Then
    Wscript.Quit(intOK)
End If
Wscript.Quit(intUnknown)
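
Before hooking the script into NSClient++ you can sanity-check it directly on the Windows host. The command below follows the usage line at the top of the script, and the output format comes from the Wscript.Echo call above (the date, revision and age shown are placeholder values):

cscript //NoLogo check_av.vbs /w:2 /c:4
Current Definitions: 2010-2-26 rev. 2 Which are 3 days old|age=3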

Website Migrated and Theme issues

Whilst trying to make the home page and blog page look the same I managed to break the wp-admin section of my blog. Thanks to the guys at Loho.co.uk, who have migrated my whole site to a new platform where I can administer it more efficiently, I have been able to remove the faulty theme. Hopefully I can fix what I broke, but for now it's going to be the standard WP theme.

Making Windows Mobile work with Self-Signed certificates

If you try to synchronise a Windows Mobile PDA with Exchange Direct Push using SSL and the certificate is not issued by a Certification Authority (CA) that is in the PDA’s trusted certificate list then the device will not activate. Most commonly I have come across this with SBS servers that use the default self-signed certificate.

The solution should always be to purchase and install a certificate issued by a trusted CA; in these cases the PDA will start working automatically. If, however, you don't want to purchase a certificate, you can bypass the security checks that Windows Mobile imposes on ActiveSync. To do this you need to install the certificate on the PDA and modify the registry to accept the installed certificate as a trusted one.

As each time I have done this I haven't had the relevant PDA in front of me, I have found a useful tool called My Mobiler (http://mymobiler.com/), which lets you interact with the PDA from your desktop and saves you trying to talk the end user through making the changes themselves.

  1. Install the certificate on the PDA
    1. Browse to your Outlook Web Access URL in Internet Explorer and save the certificate locally to your desktop by clicking on the padlock icon
    2. Connect the PDA via USB to the PC and allow Active Sync to connect.
    3. Click Explore Device in Active Sync and copy the certificate to the folder that is open
    4. Open File Explorer on the PDA and click on the certificate (it should be in My Documents)
    5. You will likely receive errors that the certificate is not trusted. Click More and then Install
    6. You should receive confirmation the certificate has been installed successfully.
  2. Install PHM RegEdit on your PDA
    1. There are a number of places to download the .cab file on the Internet (link) save this to your desktop
    2. With the PDA connected Explore the device again and copy the .cab file to the device
    3. Open File Explorer and click on the .cab to install it (again it should be in My Documents)
    4. When prompted that the installer cannot be verified click Install
  3. Apply the registry fix
    1. Click Start and select Programs. Scroll down and click on PHM Registry Editor
    2. Expand the following path: HKEY_CURRENT_USER\SOFTWARE\Microsoft\ActiveSync\Partners
    3. You will see a list of GUID keys. Search through these for the one that contains the name “Microsoft Exchange”; this is the key you need to modify
    4. Click Edit and select new DWORD
    5. Name the DWORD “Secure” and leave the value as 0
    6. Exit the Registry Editor

If everything has worked correctly your PDA should now synchronise with Exchange

AdminSDHolder groups and Send As

Looked at an issue a colleague had today where the Send As permissions for a user were being removed automatically from their account, causing issues for their PA, who was no longer able to send email as them despite having it configured. The problem here was that the user in question was in one of the protected AdminSDHolder groups, and Active Directory will reset the Send As permissions for members of these groups on an hourly basis.

As well as stopping another user from sending as the user in question, this can also have implications if you run BlackBerry Enterprise Server, as the BES service account needs Send As permissions to forward email from a handset to another recipient.

Microsoft have released a KB article on this (907434) which details the situation further, but basically the solution should be to remove the user from the protected groups; if they need to perform the actions granted by those groups they should be given a second “admin” account for those tasks.

The list of groups that are affected by the AdminSDHolder changes are:

  • Administrators
  • Account Operators
  • Server Operators
  • Print Operators
  • Backup Operators
  • Domain Admins
  • Schema Admins
  • Enterprise Admins
  • Cert Publishers

ESXi enabling SNMP

Last night I wrote an article about how to monitor the health of an ESXi server (link here) and I wanted to explain a bit more about my findings with SNMP on an ESXi host.

My goal with the monitoring was to use the check_dell and check_hp commands I have found for Nagios/Opsview to monitor the hardware that ESX is running on. The ESXi installs I am working with have the Dell and HP management agents installed, so I thought that everything would work out of the box and enabling SNMP would let me query the different aspects of the hardware.

The official line from VMware was that SNMP is not enabled on ESXi and, with no console, can't be enabled. However, having read a recent post on the TechHead blog (link here), I knew that you could see the snmp.xml file, and the fact that it shows SNMP as disabled made me think it must be possible to enable it. I was right.

A quick Google came up with this article, and it turned out to be a fairly simple process:

First you need to enter the “unsupported” console on your ESXi server. To do this press Ctrl+Alt+F1 at your ESX console, then type the word unsupported (N.B. you will not see the text on your screen) and press Enter. If all goes well you should see a password prompt; enter your root password and you will get a warning that you are entering a mode that should only be used with VMware support, followed by a console.

Type the following command to open the snmp.xml file in the vi text editor:

vi /etc/vmware/snmp.xml

You should see a single line of text at the top of the screen which is the contents of the xml file. Press i to enter Insert mode and change

<enabled>false</enabled>

to

<enabled>true</enabled>

Then scroll across and add the community name you want the SNMP agent to respond on and place this between the following tags

<communities></communities>

so it should look like

<communities>public</communities>

I wasn't interested in setting up SNMP traps so left that section blank and quit vi by pressing Esc to exit insert mode and then typing :wq to write the file and quit the editor.

Finally we need to restart the services on the ESXi host, which can be done with the following command:

/sbin/services.sh restart

Great, SNMP is now enabled so I should be able to get the information from the HP/Dell management agents that I want. Wrong. My snmpwalk of the host provided little to no useful information about what I was trying to unlock.

opsview@LON-SVR-MON1:~$ snmpwalk -v 2c -c public 10.9.0.65
SNMPv2-MIB::sysDescr.0 = STRING: VMware ESX 4.0.0 build-219382 VMware, Inc. x86_64
SNMPv2-MIB::sysObjectID.0 = OID: SNMPv2-SMI::enterprises.6876.4.1
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (6061646) 16:50:16.46
SNMPv2-MIB::sysContact.0 = STRING: not set
SNMPv2-MIB::sysName.0 = STRING: lon-svr-esx2.domain.local
SNMPv2-MIB::sysLocation.0 = STRING: not set
SNMPv2-MIB::sysServices.0 = INTEGER: 72
SNMPv2-MIB::sysORLastChange.0 = Timeticks: (0) 0:00:00.00
SNMPv2-MIB::sysORID.1 = OID: SNMPv2-MIB::snmpMIB
SNMPv2-MIB::sysORID.2 = OID: IF-MIB::ifMIB
SNMPv2-MIB::sysORID.3 = OID: SNMPv2-SMI::enterprises.6876.1.10
SNMPv2-MIB::sysORID.4 = OID: SNMPv2-SMI::enterprises.6876.2.10
SNMPv2-MIB::sysORID.5 = OID: SNMPv2-SMI::enterprises.6876.3.10
SNMPv2-MIB::sysORDescr.1 = STRING: SNMPv2-MIB, RFC 3418
SNMPv2-MIB::sysORDescr.2 = STRING: IF-MIB, RFC 2863
SNMPv2-MIB::sysORDescr.3 = STRING: VMWARE-SYSTEM-MIB, REVISION 200801120000Z
SNMPv2-MIB::sysORDescr.4 = STRING: VMWARE-VMINFO-MIB, REVISION 200810230000Z
SNMPv2-MIB::sysORDescr.5 = STRING: VMWARE-RESOURCES-MIB, REVISION 200810150000Z
SNMPv2-MIB::sysORUpTime.1 = Timeticks: (0) 0:00:00.00
SNMPv2-MIB::sysORUpTime.2 = Timeticks: (0) 0:00:00.00
SNMPv2-MIB::sysORUpTime.3 = Timeticks: (0) 0:00:00.00
SNMPv2-MIB::sysORUpTime.4 = Timeticks: (0) 0:00:00.00
SNMPv2-MIB::sysORUpTime.5 = Timeticks: (0) 0:00:00.00
IF-MIB::ifNumber.0 = INTEGER: 4
IF-MIB::ifDescr.1 = STRING: Device vmnic0 at 02:00.0 bnx2
IF-MIB::ifDescr.2 = STRING: Device vmnic1 at 02:00.1 bnx2
IF-MIB::ifDescr.3 = STRING: Device vmnic2 at 03:00.0 bnx2
IF-MIB::ifDescr.4 = STRING: Device vmnic3 at 03:00.1 bnx2
IF-MIB::ifType.1 = INTEGER: ethernetCsmacd(6)
IF-MIB::ifType.2 = INTEGER: ethernetCsmacd(6)
IF-MIB::ifType.3 = INTEGER: ethernetCsmacd(6)
IF-MIB::ifType.4 = INTEGER: ethernetCsmacd(6)
IF-MIB::ifMtu.1 = INTEGER: 1500
IF-MIB::ifMtu.2 = INTEGER: 1500
IF-MIB::ifMtu.3 = INTEGER: 1500
IF-MIB::ifMtu.4 = INTEGER: 1500
IF-MIB::ifSpeed.1 = Gauge32: 1000000000
IF-MIB::ifSpeed.2 = Gauge32: 1000000000
IF-MIB::ifSpeed.3 = Gauge32: 0
IF-MIB::ifSpeed.4 = Gauge32: 0
IF-MIB::ifPhysAddress.1 = STRING: 18:a9:5:4e:a7:1c
IF-MIB::ifPhysAddress.2 = STRING: 18:a9:5:4e:a7:1e
IF-MIB::ifPhysAddress.3 = STRING: 18:a9:5:4e:a7:20
IF-MIB::ifPhysAddress.4 = STRING: 18:a9:5:4e:a7:22
IF-MIB::ifAdminStatus.1 = INTEGER: up(1)
IF-MIB::ifAdminStatus.2 = INTEGER: up(1)
IF-MIB::ifAdminStatus.3 = INTEGER: up(1)
IF-MIB::ifAdminStatus.4 = INTEGER: up(1)
IF-MIB::ifOperStatus.1 = INTEGER: up(1)
IF-MIB::ifOperStatus.2 = INTEGER: up(1)
IF-MIB::ifOperStatus.3 = INTEGER: down(2)
IF-MIB::ifOperStatus.4 = INTEGER: down(2)
IF-MIB::ifLastChange.1 = Timeticks: (0) 0:00:00.00
IF-MIB::ifLastChange.2 = Timeticks: (0) 0:00:00.00
IF-MIB::ifLastChange.3 = Timeticks: (0) 0:00:00.00
IF-MIB::ifLastChange.4 = Timeticks: (0) 0:00:00.00
SNMPv2-MIB::snmpInPkts.0 = Counter32: 187
SNMPv2-MIB::snmpInBadVersions.0 = Counter32: 0
SNMPv2-MIB::snmpInBadCommunityNames.0 = Counter32: 0
SNMPv2-MIB::snmpInBadCommunityUses.0 = Counter32: 0
SNMPv2-MIB::snmpInASNParseErrs.0 = Counter32: 0
SNMPv2-MIB::snmpEnableAuthenTraps.0 = INTEGER: disabled(2)
SNMPv2-MIB::snmpSilentDrops.0 = Counter32: 0
SNMPv2-MIB::snmpProxyDrops.0 = Counter32: 0

My thoughts now are simple. SNMP is not enabled in ESXi for the reason that there is not much there to query and you can use the CIM queries that I mentioned in the previous post to look at this instead.

Monitoring ESXi Server health using Nagios/Opsview

As part of a project I am currently working on I have a requirement to check that my clients’ infrastructure is working to the best of its ability. Whilst we perform regular checks to ensure the sites are running as expected we don’t currently have an easy way to check the health of the ESX hosts that the virtual servers run on. Until now.

I had spent a lot of time trying to “hack” SNMP to be enabled on the ESXi boxes, which involved editing the snmp.xml file in the “unsupported” console on the host, but after enabling it I found that it didn't give me the data I was looking for to run my checks against. Looking a bit further I found a Python script which queries the CIM service on the ESX host to check the ESX health status and report back to your monitoring platform what the current status of the host is.

Installation is fairly straightforward. The following details are for an Opsview install running on Ubuntu 8.04 LTS Server but should be easily adaptable to any installation if needs be.

First login to your server as normal and download the latest version of the pywbem module (http://archive.ubuntu.com/ubuntu/pool/universe/p/pywbem/pywbem_0.7.0.orig.tar.gz)

opsview@LON-SVR-MON1:~$ wget http://archive.ubuntu.com/ubuntu/pool/universe/p/pywbem/pywbem_0.7.0.orig.tar.gz

Once you have downloaded the module, extract it and run the Python installer as root:

opsview@LON-SVR-MON1:~$ tar -xzf pywbem_0.7.0.orig.tar.gz
opsview@LON-SVR-MON1:~$ cd pywbem-0.7.0/
opsview@LON-SVR-MON1:~/pywbem-0.7.0$ sudo python setup.py install

Next you need to download the check_esx_wbem.py script (http://communities.vmware.com/docs/DOC-7170) and place it in your libexec folder

opsview@LON-SVR-MON1:~/pywbem-0.7.0$ cd /usr/local/nagios/libexec/
opsview@LON-SVR-MON1:/usr/local/nagios/libexec# wget http://communities.vmware.com/servlet/JiveServlet/downloadBody/7170-102-5-4233/check_esx_wbem.py
opsview@LON-SVR-MON1:/usr/local/nagios/libexec# sudo chown nagios:nagios check_esx_wbem.py
opsview@LON-SVR-MON1:/usr/local/nagios/libexec# sudo chmod a+x check_esx_wbem.py

You can test this from the command line using the following command

opsview@LON-SVR-MON1:/usr/local/nagios/libexec# ./check_esx_wbem.py https://10.9.0.65:5989 root Password

In the case above I received the following output, but if everything is working as expected the script should return “OK”:

WARNING : Power Supply 3 Power Supplies<br>CRITICAL : Power Supply 2 Power Supply 2: Failure detected<br>

Now we have confirmed the script is running we need to add it to Opsview. The first step here is to reload Opsview to pick up the new plugin. Once complete, go to Configuration -> Service Checks and click Create New Service Check. Set up your check in a similar way to the image below (remember to substitute “root” and “Password” with a valid username and password to log in to your ESX host).

Save this service check and then apply it to your ESX hosts. If you have multiple ESX hosts with different usernames and passwords you don't need to create multiple service checks, as later versions of Opsview let you specify exceptions when you configure the check for a host.
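
If you are running plain Nagios rather than Opsview, the equivalent configuration might look something like this – the host name and the root/Password credentials are simply the illustrative values used earlier, and the service template is whatever you normally use:

define command{
    command_name    check_esx_wbem
    command_line    $USER1$/check_esx_wbem.py https://$HOSTADDRESS$:5989 $ARG1$ $ARG2$
    }

define service{
    use                     generic-service
    host_name               lon-svr-esx2
    service_description     ESX Hardware Health
    check_command           check_esx_wbem!root!Password
    }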

Once you have configured this, reload Opsview and wait for it to start checking the ESX server(s). Below is the screenshot from my server with its disconnected PSU.

This should now allow you to keep an eye on your ESX hosts alongside the rest of your network monitoring system.

Backup Exec 12.5 DFS File Restores

I thought that this deserves a special mention.

Backup Exec backs up the DFSr replicated folders using the shadow copy components, and in the past you were unable to redirect restored files to an alternate location. This could cause issues if you wanted to keep both versions of a file, as Backup Exec would overwrite the file and then perform an initial replication of that DFSr folder to the other servers in its replication group.

Whilst you could also perform an authoritative restore of the DFSr folder, this has recently caused me even more issues, resulting in support calls to Symantec and Microsoft to follow up on why this happens and what state my DFS is now in as a result of these restores.

During the initial support call to Symantec they advised me that, for the first time in Backup Exec, you can redirect the files you restore from the Volume Shadow Copy of the DFSr folders. Simply select the server and location in the File Redirection tab in Backup Exec and you will be able to dump the folder structure to wherever you want it, then copy the relevant files back into your DFS structure as needed.

Backups – They really are important

Introduction

You really cannot appreciate the need for a solid backup solution until you need to restore that crucial piece of business-critical data. Whether it's a whole server or just one Word document, it is always important to know that the files are available to be recovered. There is no single solution that works in all scenarios, and it is important to select the technologies that meet the needs of the individual site. This article will look at a number of different technologies and try to demonstrate how they can be used in a business environment, helping to negate the need to use companies like Kroll Ontrack to perform data recovery on hard drives, which can be incredibly costly.

Shadow Copy / Previous Version Client

“Shadow Copies for Shared Folders is a new file-storage technology in the Microsoft Windows Server 2003 operating systems. Shadow Copies for Shared Folders provides point-in-time copies of files that are located on a shared network resource, such as a file server. With Shadow Copies for Shared Folders, users can quickly recover deleted or changed files that are stored on the network without administrator assistance, which can increase productivity and reduce administrative costs.” (Shadow Copies for Shared Folders Technical Reference)

This technology is the basis of the Previous Versions client and allows recovery of accidentally deleted files without having to request tapes or an online restore, which may incur further delays in restoring the data. The snapshots are stored on your file server, so you should make sure that you have sufficient space to store all your data as well as the shadow copies. So that you don't run out of space on the server, a maximum size for the shadow copies is defined, and at each snapshot the server will calculate whether it can store the next snapshot in the data store without deleting older versions. When it can no longer store new snapshots, Shadow Copy will delete the oldest snapshots to make way for the newest changes.
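
As a rough illustration of managing that maximum size, you can review and adjust the shadow copy storage area from a command prompt on the file server itself with vssadmin – the drive letters and the 10GB limit below are only example values:

vssadmin list shadowstorage
vssadmin resize shadowstorage /For=D: /On=D: /MaxSize=10GB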

As mentioned, this is a nice technology for quickly recovering a few files or folders, but it should NOT be considered a backup solution on its own, as you are reliant on your server always being online and having sufficient space to store enough copies of the data to restore what you need. Shadow Copy does not allow for hardware failure: should the disk array in the server fail, you will lose the data as well as the previous version snapshots.

Tape Backup

Tape backups have been around almost as long as computers, and writing data to a magnetic tape is a tried and tested way of keeping a copy of the data that can be taken off-site to cover the loss of a server. Today backup tapes can store up to 1.6TB of data (depending on tape model and compression) on a single cartridge. As a result, tape backup is still widely used as the backup solution of choice in the workplace: after the initial expenditure of buying the tape drive and software to back up your infrastructure, there is little ongoing expense involved in maintaining a tape-based backup solution.

The key thing to remember when using tape-based backups is NOT to keep your backup tapes in the same building as the server that you are backing up. You can back up all your data and keep a full year of backups, but if they are sitting next to your server and there is a fire, you lose both the server and the tapes and are unable to restore the data. It is recommended that once data has been written to tape, the user responsible for changing the tapes moves the tape to a secure location. There are companies, such as Iron Mountain, who offer services to collect tapes on a regular basis and store them in a secure vault. This can give you the peace of mind that you only have the minimum number of tapes on site at any one time.

The number of different backups you keep is completely dependent on how far back you feel you need to recover data. One tape that is overwritten daily is not a safe solution, and while it is possible to use a completely new tape for each backup, this can quickly become a costly way of backing up data. The most common backup hierarchy is the Grandfather-Father-Son rotation. In this scenario your Son backups are usually your daily backups; at the end of each week the Friday/weekend backup is kept as the Father, and at the start of the new week a new set of Son backups is created. At the end of the month the last Father backup is promoted to Grandfather and the process starts again at the beginning of the new month. It is recommended that the Grandfather backups be kept as a reference of the data at that point in time. Over the course of a year using this rotation you will need 21 tapes (4 tapes for Monday to Thursday, 5 tapes for the Friday/weekend backups and 12 month-end tapes). If you would like to keep two weeks of daily backups you will need a further 4 tapes to cover the second week.

Online Backup

If you have data spread across multiple sites or you don't want to be forced to change tapes on a nightly basis, an online backup solution may prove to be viable. In the same way that a tape backup captures your data on a nightly basis and writes it to a magnetic tape, the software here connects to a 3rd party data server and uploads the data to be stored there.

Rather than taking a full backup of all the files each night, online backup solutions usually take an initial base backup on site, which is integrated into the off-site storage platform, and then each night an incremental backup copies the changes since the previous backup to the platform. As a result, files are stored based on the number of impressions that are pushed to the backup platform: a file backed up on day 1 that doesn't change for 2 months will only have a second impression saved at that point, whereas a file that changes daily will write a new impression each time it is backed up. The number of impressions you keep is dependent on how much you are willing to pay for storage.

When planning for an online backup it is important to work out how much data will change on a daily basis and therefore needs to be sent across the Internet to the storage platform. If your Internet connection doesn't have sufficient bandwidth you will not be able to take a full snapshot each night and could end up with gaps in your backups that prevent complete restoration of all the data.
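
As a rough, purely illustrative sizing exercise: if around 5GB of data changes each day and your connection can sustain a 2Mbit/s upload, the nightly transfer needs roughly 5 × 8 × 1024 ÷ 2 ≈ 20,500 seconds, or about 5.7 hours – workable overnight, but double the change rate or halve the bandwidth and the backup starts running into the working day.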

Disaster Recovery Site

If the nature of your business means you cannot afford to be offline whilst your IT infrastructure is restored, then a DR site may be worth considering. If your infrastructure is severely crippled you can switch core services to another site and your users are able to continue working with minimal disruption.

Microsoft developed the DFS Replication technology in Server 2003 R2 to enable file shares to be replicated between multiple servers in near real time. If your primary file server fails you simply need to switch your referral server to your DR site and users will be able to access data through the same file shares, and shouldn't notice the changeover. Databases such as Microsoft Exchange or SQL are not as easy to replicate in this way, as the database files are constantly changing with each access. In these cases 3rd party applications such as DoubleTake or XOSoft (formerly WANSync) can be used to make sure that your databases are replicated in real time to the DR site so they can be switched over as needed. In these scenarios users are able to keep working whilst the core infrastructure is recovered, and any changes made whilst working in the disaster recovery scenario can be replicated back to the main offices.

The disaster recovery solution is not a cheap one, as you need to pay for a second set of servers to replicate the data to and run in an alternate site such as a data centre; however, the running costs need to be weighed against the cost to the company whilst services are restored.

What should YOU do?

What you do now is a very individual decision based around the needs of your business. There are companies that implement all four technologies mentioned here to provide resilience should there be an issue with any one of the backups; however, this is a costly approach that is not viable for many small companies. For most, implementing either the tape or the online backup along with the Shadow Copy snapshots will provide enough security to restore the data should files be deleted or a server fail.

It should be noted, however, that the backup to tape or offsite should never be taken for granted and ignored. As part of any backup strategy you should be running test restores from your backup media to ensure that you can recover the data you have backed up.

HOWTO: Build an open source monitoring solution – Part 1: Build the Server

Introduction

No matter what size of network you are responsible for, you should always know what is happening on it so that any issues are rectified as soon as possible and hopefully with minimal disruption to your users. Obviously the needs of a small company are different to those of a large corporation, and this guide is not aimed at people who have a single server, a single switch and a few PCs, but more at the sysadmin who needs to keep an eye on a handful of servers and managed switches (although you can still keep an eye on that single server with this setup).

I have split the guide up into a number of sections which, for me at least, is a logical way to install the different components. All the technologies used in this guide are free to setup and if you have an old server lying around the cost to set this up is simply your time.

OK. Enough with the intro let’s start with building the server.

Part 1 – Build the Server

What you need:

  • Server to run this off – a decent PC will suffice for small setups. I am building this as a virtual host on an ESX server
  • Ubuntu 8.04 Server (Download it here). Make sure it's the Server Edition, and not 8.10, or this won't work. N.B. you can use other Linux distributions but this guide is based around Ubuntu 8.04 Server

Installation process:

I tried to insert pictures at each step of the installation process but it made the post look untidy so I have created a list of steps that you will complete along the way as you setup your server. If you want to have a look at the screenshots check out the image gallery at the bottom of the post.

  1. Download the ISO from your nearest mirror and burn to a CD (if you are building a virtual machine you can skip burning this to a cd). Stick the CD into your server and power it on
  2. The first thing you will see is a prompt to select your language. Select your preference from here with the arrow keys and press enter – I am going to choose English (screenshot)
  3. You will next be asked what you want to do. This should be fairly self-explanatory what each option does. We want to “Install Ubuntu Server” (screenshot)
  4. The installer will load the Kernel off the CD and you will be presented with a blue/grey screen asking which language you want to use (Yes you are asked twice). Once again use the arrow keys to select the option you want and press Enter. Again I am selecting English here. (screenshot)
  5. Your next prompt asks which type of English you would like; choose your localisation here. I am choosing United Kingdom.
  6. The next prompt asks you to select your keyboard layout. If you know what keyboard you have connected then select No and you will be asked to select it on the next screens otherwise choose Yes and you will be asked to press keys on the keyboard and the installer will work out what you are using. (screenshot1 screenshot2)
  7. After this has completed the installer will load some more components for the setup and try to acquire an IP address from a DHCP server on your network. This is fine as we will be setting the address statically later in the guide. (screenshot)
  8. Once it has an IP address you need to set your hostname. If you have a naming convention for your site then follow it (e.g. ACME-SVR-MON1); it's better than just leaving the default of ubuntu. (screenshot)
  9. Once this is done the installer will ask how you want to partition your disk. I am going to go with the simplest option, “Guided – use entire disk”, to give me a nice big partition over the whole drive to work with. If you are confident partitioning a disk then you can choose manual, but that is outside the scope of this guide. (screenshot)
  10. Having chosen the option you need to choose the disk you want to partition. If there is only one disk in the server then you should only see one option here. Select the relevant disk and press Enter. You will be asked one more time to confirm the changes that will be made so review the page and select Yes to proceed.(screenshot)
  11. Ubuntu will now partition the hard drive and start to install the basic OS. This will take a few minutes so go and brew a cuppa. (screenshot)
  12. Enjoyed your drink? Good. Now back to the setup process. You need to set up the user account that you will use to access the system. First enter your full name, then your username and finally choose a password. (screenshot1 screenshot2)
  13. The next step is to install the relevant core packages. Before doing this you will be asked if there is an HTTP proxy between the monitoring server and the Internet; if there is, enter the address here, otherwise leave it blank and choose Continue (screenshot)
  14. In this example we are selecting LAMP (Linux, Apache, MySQL, PHP) to provide a web interface and database functionality, OpenSSH to give us remote access and Mail to enable our monitoring server to notify us when there are issues. (screenshot)
  15. You next need to enter the password for your root MySQL account and confirm it. Please don't leave this blank as it's a big security hole if you do. (screenshot1 screenshot2)
  16. After this you will be prompted for how you want to configure your email. I recommend you choose the Satellite System option as this will allow you to push all email generated by the server to your mail server for delivery. After selecting this option you need to choose the system name (what appears after the @ sign) and then the smart host you are going to relay all your mail through (screenshot1 screenshot2 screenshot3)
  17. Once this is done, go away and make yourself another drink as this next step takes another 5-10 minutes to complete depending on the speed of your server. When you come back the install will be complete. Remove the CD and press Enter to reboot your server. (screenshot)

Initial Login and basic configuration

Now that the installation is complete and your server has rebooted you should see a screen similar to the one below. This is your login screen; enter the username and password you set up in step 12 and log in to the server.

Base Ubuntu install

Now you are logged in, we need to set the IP address so that it is static and check that the correct DNS servers are listed. Because of the changes we are making we need to run the next few commands as the root account on the server. Your user account has permission to run commands as root; you just need to tell the server that you want to carry out the changes – a bit like UAC in Windows Vista.

To access the shell as the root user type the following command at the console and press enter.

sudo -s

Enter your password that you logged in with and press enter. Your command line should change from

matt@ACME-SVR-MON1:~$

to

root@ACME-SVR-MON1:~#

Anything you enter now will be run as the root user.

To set the IP address to be static we need to edit the network interfaces configuration file. This is a plain text file that tells the server what IP address, subnet mask, gateway etc. to assign to the different interfaces on your server. There are a number of text editors available, but I find nano to be a simple and easy to use editor. Type the following command and press Enter to open the config file:

nano /etc/network/interfaces

The file will show you the following default configuration for your server:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp

This needs to be changed so that the primary network interface (eth0) no longer looks to the DHCP server but instead has a static address. The code below shows a customised interfaces file; add in the relevant lines and substitute the correct values for your network. (N.B. don't use the number pad to enter the values here, as nano doesn't seem to register that NumLock is turned on and it can cause issues.)

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
address 192.168.1.3
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.254

Once this has been done press Ctrl+X to exit nano. You will be asked if you want to save the file – press Y to confirm and exit. Your configuration will be saved and you will return to the root command line; however, your IP address will not have changed yet, as we need to restart the networking service for this to take effect. Type the following command and press Enter:

/etc/init.d/networking restart

If this is successful you should see the following:

 * Reconfiguring network interfaces...                                   [ OK ]

If you do not see this you have made a mistake in the config file. Open it up, check that each line is correct and then try to restart the networking service again. To confirm your server is now listening on the correct IP address we use the ifconfig command – this is very similar to the ipconfig command in Windows.
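
Run it on its own at the root prompt:

ifconfig

It gives output similar to this: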

eth0      Link encap:Ethernet  HWaddr 00:0c:29:ef:62:67
          inet addr:192.168.1.3  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:feef:6267/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6601515 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7587624 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:997379356 (951.1 MB)  TX bytes:759778115 (724.5 MB)
          Interrupt:16 Base address:0x1424

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:966156 errors:0 dropped:0 overruns:0 frame:0
          TX packets:966156 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:106793708 (101.8 MB)  TX bytes:106793708 (101.8 MB)

There is one thing left to check: that your DNS servers have been successfully added to the server. If your DHCP setup process was successful we shouldn't need to change anything, but it's good to make sure it's all working. Type the following command and you should see a number of lines saying “nameserver” with the IP address of your DNS servers listed next to them:

more /etc/resolv.conf

Running this on my server gave me:

search home.bisnet
nameserver 192.168.1.1
nameserver 192.168.1.4

If you want to test DNS resolution then try to ping www.google.co.uk and you should get a reply. (N.B. Unlike Windows' ping, this will run until you stop it; once you are happy you are getting replies, press Ctrl+C to stop the ping.)

When you are happy this is working press Ctrl+D to log out of the root command line and back to your normal account.

Congratulations. You have now setup your basic server. In Part 2 of this guide I will go through installing the applications you will use as well as show you the basics of configuring them.

Screenshots from the Installation Process

Giving this blog a purpose

Having spent a long time ignoring this blog, or simply linking to amusing things on the net that I found through sites like stumbleupon.com, I think it's time to try and focus what I am writing about and see if I can get a good set of useful articles written.

Having thought about it for about 5 minutes this morning I decided that it should be something related to what I do on a daily basis, but also something that I have an interest in – otherwise what's the point? Virtualisation was a first thought, but I already read a good blog about VMware (http://www.techhead.co.uk) which I would probably just end up plagiarising, and that isn't the reason for this. The other thing that I am keen on at the moment in the world of technology is network monitoring and the technologies you can use for it.

Now I will say that I'm quite biased when I am looking at setting up a monitoring solution, as I don't really want to pay for extra hardware or software to monitor everything. This does mean I will look for a good open source application (or applications) to carry out a task, which I can customise, rather than paying for a boxed product that does some of what I want to do but not everything.

Now, I still like sharing interesting pages I find on the web, but I may need to split the blog into 2 sections to look more professional… I still haven't decided yet, but don't worry, the random site links will still be there!

So what’s my first entry under the new incarnation of the blog? I think I will write up the “Howto” on building an open source monitoring machine that can keep an eye on your network. Expect it in a few days.