Exchange 2007 G2…

  • 64-bit only
  • 50 databases per server
  • 50 storage groups
  • 5 DBs per storage group
  • No MAPI support
  • Preferred backup method will be VSS coupled with replication
  • Two supported replication options
    • local continuous replication (LCR) – local replica to a distinct disk
      • LCR is log-based replication within the local server; backups can happen from the replica
    • cluster continuous replication (CCR) – MSCS-clustered replication between nodes
      • CCR is based on MSCS – log shipping to a replica server distinct from the primary server
  • VSS backups of Exchange
    • Backup programs will require an Exchange Server 2007-aware VSS requestor
    • eseutil is no longer required
    • Windows Server 2003 NTBackup does not include an Exchange 2007-aware VSS requestor
    • Exchange Server 2007 databases and transaction logs must be backed up via the VSS requestor
  • Supported VSS backup methods (summarized in the sketch after this list)
    • full: .edb, .log, and .chk files are backed up; logs are truncated after the backup completes
    • copy: the same as full, but no log truncation
    • incremental: only *.log files; logs are truncated
    • differential: all *.log files back to the last full backup; no truncation
  • Recovery
    • VSS now allows restores directly to a recovery storage group
    • restores to an alternate location: a different path, server, or storage group
    • log files can be replayed without mounting the database first
    • recovery will be much more granular: mailbox or even message level
  • I would expect that MS will leverage Data Protection Manager (DPM) to facilitate enhanced RPO and RTO
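
To keep the four backup types straight, here is a small Python sketch that simply restates the list above as data; the file patterns and truncation flags come straight from these notes, and nothing here is an Exchange or VSS API.

```python
# The four VSS backup types from the notes above, restated as data.
# (Illustrative only; this is not an Exchange or VSS API.)
VSS_BACKUP_TYPES = {
    # type:          (file types backed up,               logs truncated?)
    "full":          (["*.edb", "*.log", "*.chk"],        True),
    "copy":          (["*.edb", "*.log", "*.chk"],        False),
    "incremental":   (["*.log"],                          True),
    "differential":  (["*.log (back to the last full)"],  False),
}

for backup_type, (files, truncates) in VSS_BACKUP_TYPES.items():
    print(f"{backup_type:<12} backs up {', '.join(files):<35} "
          f"log truncation: {'yes' if truncates else 'no'}")
```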

-RJB

“blogosphere” – Part 1

After an exhausting 5 weeks on the road, I took the opportunity this weekend to relax, read and recharge. I finished a book my wife gave me for my birthday entitled “Naked Conversations” by Robert Scoble and Shel Israel. For those of you who don’t know Robert Scoble, he is the author of http://scobleizer.wordpress.com/. Robert was one of the early bloggers at Microsoft, and his blog continues to be one of the most popular on the web.

The book provided tons of great tips on how to publicize blogs. Because there are so many ways to build blog awareness, I thought I would cover them in a multi-part blog series. Last night I set up FeedBurner, and I am hoping that this will drive increased traffic to my site.

FeedBurner provides some great tools:

  • Republish feeds as HTML pages: http://feeds.feedburner.com/GotItSolutions
  • Add a feed banner to your e-mail signature
  • PingShot, which notifies popular blog rating services when you publish
  • Email subscriptions, delivered by FeedBurner
  • FeedCount, which displays the number of subscribed users

I am really excited about this; in the last six hours I have added 8 subscribers, but only time will tell how effective this actually is. I recommend checking it out.

http://feeds.feedburner.com/GotItSolutions

For those of you who are using WordPress, I highly recommend the FeedBurner plugin.

-RJB

“The Evolution of Disaster Recovery” Podcast – Part 1

Now that the Evolution of Disaster Recovery roadshow is over, it’s time to start releasing the podcast. Because each seminar was just over 3 hours, we are releasing it in 4 parts.

Part 1 – The state of data protection. This includes a discussion around backups, backup to disk, virtual tape libraries, CDP, archiving, and more.

Part 2 – Edge to core data consolidation. Here we talk about using Cisco WAAS products to consolidate our data in a centralized location. This simplifies the management of our infrastructure, and makes preparing for DR much easier.

Part 3 – Leveraging Server Virtualization for Business Restart. Now that we understand how to protect our data and have it in a centralized location, we need to figure out how to make this data usable. Virtualization enables us to do this.

Part 4 – This section is a blend of questions from each of the 9 cities we presented in.

A copy of the presentation can be found here if you’d like to follow along.

-RJB

Thank you!

Today we finished the 3 week, 9 city “Evolution of Disaster Recovery” tour with our most interactive session thus far. Congratulations, San Diego, you were by far one of the most interactive groups. We are thinking about what our next road show topic might be, maybe “Intelligent Information Management for the SMB”, and I am interested in what the folks who attended this road show would like to hear about. Thanks again for taking time out of your busy schedules to come and listen to us. If you have any suggestions on the next show topic, please do not hesitate to post a comment. Thanks again!

-RJB

“Content Addressable Storage” (CAS) made easy…

In line with my previous post, “The simple value of archiving…”, I thought I would post a CAS (Content Addressable Storage) overview, because it is another topic from the “Evolution of Disaster Recovery” seminar series that I felt we could have gone much deeper on. Unlike traditional storage, “Content Addressable Storage” (CAS) uses a content address, a globally unique ID derived from the binary contents of the file, in contrast to traditional file systems such as UFS and NTFS, which use file names and locations to identify files. A content address is typically calculated by running the file contents through a digest algorithm (e.g. MD5, SHA-1, SHA-256, SHA-512); the resulting hash identifies the file contents.

Exercise 1:
If you are curious to see how hashing works, you can download md5sum from here. Next, open MS Word, Notepad, VI, etc., type something, and save the document as “filename.doc”. Now open a command prompt (DOS window) and run “md5sum filename.doc”; this will return something like “b3a6616fb5cee0f1669b1d13dd4c98cb *filename.doc”. Now open “filename.doc” and change a couple of letters. Do not delete or add characters, because that would change the file size and make the demonstration a little less powerful. For instance, if you typed “Hello dad”, change it to “Hello mom” and save as “filename.doc”; the file should be identical in size to the previous version. Run “md5sum filename.doc” again. The output is a globally unique identifier, and it is different because the algorithm examines the binary makeup of the file, not the file name, location, etc. Right now your file system holds only one document called “filename.doc”, the one containing “Hello mom”; the version containing “Hello dad” is gone. If this had been saved to a content addressable storage device, both instances would have been preserved, because although they share the same name they are in fact unique. After this exercise you can probably see the value for compliance, corporate governance, revision control, etc.
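
For anyone who would rather script it, here is a minimal Python sketch of the same idea using the standard hashlib module in place of the md5sum utility; the “Hello dad” / “Hello mom” strings are just the placeholder contents from the exercise.

```python
import hashlib

def md5_of(data: bytes) -> str:
    """Return the MD5 digest of the given bytes, the same value md5sum reports for a file."""
    return hashlib.md5(data).hexdigest()

# Two versions of the "same" file: same name on disk, same size,
# only two characters of content changed.
version_1 = b"Hello dad"
version_2 = b"Hello mom"

print(md5_of(version_1))  # digest of the original contents
print(md5_of(version_2))  # a completely different digest, even though the size is identical
```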

Exercise 2:
Create another document named “filename.doc” and run “md5sum filename.doc”. Now copy “filename.doc” to “filename2.doc” and “filename3.doc” and run md5sum on the two new copies; you will notice that the hash is identical. On a traditional file system we have consumed 3x the space required, because the only identifier is the file name, which is unique. On a CAS device the file names would be stored as pointers to a single copy of the contents; this is what we call single instance storage. The practical application of this scenario dramatically reduces the storage capacity required by eliminating much of the duplication present on most traditional file systems.
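
The same behavior sketched in Python: three file names hash to one content address, so a CAS device, modeled here by a plain dictionary purely as a stand-in, stores the contents only once.

```python
import hashlib

contents = b"Hello mom"

# Three file names, one identical set of contents.
files = {
    "filename.doc": contents,
    "filename2.doc": contents,
    "filename3.doc": contents,
}

store = {}     # content address -> binary contents (a stand-in for the CAS device)
pointers = {}  # file name -> content address

for name, data in files.items():
    address = hashlib.md5(data).hexdigest()
    store[address] = data      # written once, no matter how many names reference it
    pointers[name] = address

print(len(files), "file names ->", len(store), "stored instance(s)")  # 3 file names -> 1 stored instance(s)
```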

The following graphic is a simplistic representation of how “Content Addressable Storage” works:

[Graphic: Content Addressable Storage overview]

Hope this adds some additional clarity to the discussions we had at the seminars. If you have any comments, concerns, corrections or questions, please comment on this post.

-RJB

The simple value of archiving…

I am flying from the east coast to the west coast to close out the “Evolution of Disaster Recovery” seminar series, with shows 7, 8 and 9 in LA, Orange County and San Diego respectively. As part of the seminar I spent a significant amount of time discussing Backup, Recovery and Archiving (BURA). I thought it might be a worthwhile endeavor to document this a little further, and seeing that I have 6 hours to kill, there is no time like the present.

The following example assumes a 5 week backup rotation, which is fairly common. If you have a backup rotation that also archives monthly backups for one year and yearly backups for 7 years, etc., the model grows exponentially. Typically, backup policies consist of incremental backups taken Monday through Thursday with a weekly full backup on Friday. Tape sets are vaulted for 4 weeks; on week six the oldest set of tapes is recycled, so this rotation maintains 4 offsite copies at any point in time. Assuming a typical weekly rate of change of 10%, data duplication is massive, which extends backup times and raises cost due to the amount of media required. The following is a graphical representation of a typical 5 week rotation:

typical 5 week backup rotation
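
As a rough back-of-the-envelope illustration of that duplication, here is a small Python calculation that considers only the weekly full backups; the 1 TB production data set is an assumed figure, and the 10% weekly change is treated as non-overlapping from week to week.

```python
# Rough, hypothetical numbers to illustrate duplication in a 5 week rotation.
production_tb = 1.0    # assumed size of the production data set (TB)
weekly_change = 0.10   # assumed weekly rate of change
vaulted_weeks = 4      # full tape sets held offsite at any point in time

offsite_tb = vaulted_weeks * production_tb
# Unique data across the vaulted sets: one full copy plus the changed
# portion of each subsequent week (assuming the changes do not overlap).
unique_tb = production_tb + (vaulted_weeks - 1) * production_tb * weekly_change
duplicate_tb = offsite_tb - unique_tb

print(f"Offsite capacity consumed: {offsite_tb:.1f} TB")
print(f"Roughly {duplicate_tb:.1f} TB of that is duplicated, unchanged data")
```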

By introducing an archiving strategy we can greatly reduce the amount of data that is duplicated in the offsite facility and remove stale or unwanted data from the production data set, which greatly improves backup and recovery times. The archive is an active archive, which means that archived data is replaced by a stub (not a shortcut; stubs are not traversed during backups) and moved to an archiving platform of choice such as ATA (Advanced Technology Attachment), NAS (Network Attached Storage), CAS (Content Addressable Storage), tape, optical, etc. The user experience is seamless. A sample of what an archiving strategy might look like is represented by the following graphic:

typical 5 week backup rotation archiving

Some duplication will continue to exist because we may have frequently accessed data that we choose not to archive. The archive itself is static: any data that is read or modified is pulled back into the production data set, so there is no need to back up the archive on a daily or weekly basis. We refresh the archive backup following an archiving run, which in this example takes place monthly.

-RJB

Cleaning up Windows….

As a follow-up to my previous post on modifying the prefetch registry key, I found a nice little FREE tool to clean up Windows temp files and the prefetch directory for those of you who don’t want to mess with the registry settings and do the manual cleanup. The tool is CCleaner and can be found here.

-RJB