Flash Cache evolution

Last week, I presented on Server Side Flash Cache at Storage Expo NL.
The talk covered the things to look out for, such as Write-Through versus Write-Back caching and the pros and cons of each.

While I was preparing that presentation, Diablo Technologies made an announcement, which I read about in a post on Chris Evans’ site. This technology, putting flash storage in the memory DIMM form factor, got me thinking about flash as a cache. Just this morning I read Chris Mellor’s post on The Register about similar developments, which to me is another confirmation of my thoughts.

Many storage vendors are in one way or another trying to get a piece of the server side flash pie. Some startups are doing so by offering an independent software product, like PernixData FVP, Proximal Data AutoCache with I/O Intelligence and SanDisk’s FlashSoft.

Others are trying to bind more revenue to themselves by combining hardware and software, for example Fusion-io, EMC (although EMC XtremSW also supports most other SSD and flash devices) and NetApp Flash Accel.
I’m not claiming any one to be better than the other, because it takes more than just a good product to make it a good solution for your situation.

What I’m going to say is my opinion and mine alone. You are free to disagree of course (and many probably will) and if you wish you can post your remarks in the comments.

What I think is going to happen is that the flash cache software, with all its complexity and awesome features, will be incorporated into the well-known operating systems. Microsoft, VMware and even the open source community are, in my opinion, smart enough to cook their own recipe and come up with their own flavor that makes it possible to turn a flash or SSD device into a read/write cache. As usual, some of the ISV solutions will get swallowed by one of the big companies. To me, it makes a lot of sense to incorporate flash cache software into the standard operating systems. This should be a standard feature for systems like Microsoft Server (SQL, Exchange, Hyper-V and more) or the enterprise Linux distributions. It’s obvious that VMware is already working on it, and it’s just a matter of time before it becomes available. What will then happen to the ISVs that haven’t yet been bought by the big ones?

I think it will be a matter of two to three years before flash cache software solutions become obsolete. Should you hold back on your plans to use them? No, of course not. Three years is still a long time to sit on your hands and not solve the issues you might have. Of course you should start using server side caching if it solves issues you have today. And take into account that I am not an analyst. I might be so very wrong…

Diablo Technologies’ announcement actually spawned another thought. At first I was convinced that in a few years’ time, every computer system (server, laptop or desktop) motherboard would have some kind of SSD device baked into the circuitry, and HDDs would no longer be an option when buying a computer or server system. The lifespan of an SSD or flash device can already be long enough to match that of the motherboard. In blades and many consumer products this is already the case.
The use of flash in the memory DIMM format does not change my view on this. Although this DIMM flash makes it possible to save your system’s state even during power outages, making it more robust, I don’t think operating systems are going to boot without a device that mimics a drive with a boot sector anytime soon.

If the flash DIMMs are going to be used as caching devices, the performance improvement will be even bigger than with PCIe flash cards, because the DIMM sits closer to the CPU than the PCIe cards do. It might be a fair blow to the all-flash system builders, but I don’t think it will put them out of business. The DIMM flash modules will have one big drawback: they are not shared, unless an extremely fast memory interconnect is used to join multiple systems at the brain. Even then, this limits the possible distance between systems. I think the DIMM flash modules will have more of an effect on the SSD and flash card vendors. They will have to dive into the DIMM world to get a piece of that action.
The SSD and flash card vendors will not only have to compete with the other storage vendors in this arena, but also with the server vendors, as they too will try to get hold of that market.

Exciting as always.

 


05-11-2013

Storage Expo NL 2013

Well,

I’m on my way home from what I consider a successful Storage Expo event. We saw a lot of visitors, and from what I’ve heard from the exhibitors, they had some excellent conversations with (potential) customers. This is, after all, what the event is all about.

For vendors and partners it is a kind of reunion where you get to catch up with people you don’t see often. But lead generation is obviously the primary goal for most exhibitors.
I am part of the Storage Expo advisory board and feel somewhat responsible for the success of the Storage Expo in terms of presentations and sessions. The last few years we’ve seen visitor numbers decline a bit, and we attributed this to the recession: engineers and management-level employees were no longer allowed to spend time on anything other than actual work. Another problem was coming up with good sessions to organize, with a catchy or appealing topic to discuss. Although the storage business is very busy, not every subject is interesting enough for a presentation or attracts visitors, and it is kind of hard to think of presentation topics six months before the actual event takes place.

From the feedback I got, the sessions were well received this year. I did a presentation on “Server Side Flash Cache”, which discusses the caveats to consider when planning for such a solution. While making the presentation I discovered there are many differences between the vendors in how they go about implementing the software, and sometimes the hardware. These differences led me to think that an overview of the technical details would be a useful reference to put together for customers, so stay tuned for a new post on this topic in the near future. I’m still waiting on the feedback and evaluation of my presentation, of course. Since it was my first presentation in front of a new and unknown audience, I was kind of anxious. The mailing will be sent out sometime next week, and I’m looking forward to the feedback. My presentation, as well as those of the other presenters, will be available for download from the Storage Expo website from next week on. I will update this post with the proper links.

So, looking back on the Storage Expo, I think we had a successful show this year and it’s definitely viable to continue this show for the next couple of years. We don’t want this event to die off like the one in the UK did a few years back. I’ve been looking into getting the dates changed a bit though, because there is always a one-day overlap with the Storage Networking World Europe event in Frankfurt. But if you consider the difficult planning and organization that goes into such events, you can imagine that you don’t just change some dates; the venue is usually booked years in advance.
On the other hand, interesting speakers are already in the neighborhood when they speak at SNW Europe, so having them come over for the Storage Expo usually isn’t all that difficult.

There aren’t many independent trade shows in the storage industry in Europe, so it’s good to keep SNW Europe and of course the Storage Expo Benelux going.

 


31-10-2013

IBM features or enhancements requested by end-users

 

IBM is a huge organization with lots of developers working on a bunch of products. They have a lot to choose from, but still, sometimes one of their products is missing a feature long after others have added it.

 

Development cycles are perceived as sluggish and lengthy because it feels like it takes forever for IBM to incorporate new functions. This is true for most huge players in the IT industry. They have to consider a huge customer base with an enormous variety of equipment and software their products have to work nicely with. So the quality process simply takes a long time, because it needs to be extremely thorough and a lot has to be tested before anything can be released to the public. The huge vendors have a lot to lose if things don’t work.

Read the rest of this entry →


01-06-2013

Curiosity about UNMAP and array based replication

Most storage and server operating system vendors support the SCSI UNMAP function to release unused data blocks on the storage system and free up unused space. If they don’t support it now, they will over time.
We all know this is needed in addition to the thin provisioning features to keep volumes thin provisioned over time. If the thin provisioned volumes are used as source volumes for replication to another storage array (local or remote), the target volumes can be thin provisioned as well (in most cases anyway). If the source volumes fill up, so will the target volumes. Without the UNMAP function, unused space remains claimed by the operating system and the thin provisioned volume will end up being a thick provisioned volume. When the unused space is released using UNMAP, will this also be forwarded to the replication target volumes? Will the unused space be freed on the target storage array as well?
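
As a side note: on a Linux host, UNMAP surfaces as “discard” support, and you can trigger the reclaim yourself. Below is a minimal sketch, assuming a thin-provisioned LUN that shows up as /dev/sdb and is mounted on /data (both names are just examples), using the standard fstrim utility.

    """Minimal sketch: check whether a Linux block device advertises
    discard (SCSI UNMAP) support and, if so, ask the filesystem to
    release its unused blocks with fstrim. Device and mount point
    names are examples only."""

    import subprocess
    from pathlib import Path

    def discard_supported(device: str) -> bool:
        """A non-zero discard_granularity means the device accepts UNMAP/TRIM."""
        sysfs = Path(f"/sys/block/{device}/queue/discard_granularity")
        return sysfs.exists() and int(sysfs.read_text()) > 0

    def reclaim(mount_point: str) -> None:
        """Send UNMAP/discard for all unused blocks of the mounted filesystem."""
        subprocess.run(["fstrim", "--verbose", mount_point], check=True)

    if __name__ == "__main__":
        if discard_supported("sdb"):      # the thin-provisioned LUN as seen by the host
            reclaim("/data")              # the filesystem living on that LUN
        else:
            print("sdb does not advertise UNMAP/discard support")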

Read the rest of this entry →


01-06-2013

Storage Field Day delegate potential

A couple of days back, Arjan wrote a post on becoming a delegate for Storage Field Day, or any other Field Day for that matter, which triggered me to write this one.

It’s not meant to correct anything Arjan wrote, because his was a good post. I would just like to add some thoughts.

I’ve been a delegate two times now and would love to be a delegate at future “* Field Days”. Sometimes I wonder if my contribution is valuable enough to justify sponsors/vendors paying for the trip. Then again, much value comes from your involvement in the event and the feedback you provide. Blogging about it creates exposure for a longer period of time, and depending on the number of readers you have that could be huge exposure or just a little.

Read the rest of this entry →


23-05-2013

SFD3 Launch of ExaBlox

At the most recent Storage Field Day we were again witness to the “unveiling” of a storage startup. This time we had the honor of being present as ExaBlox came out of stealth mode. The presenters from ExaBlox were Tad Hunt (CTO and Co-Founder) and Douglas Brockett (CEO). Tad used to work at Bell Labs. You might want to check their LinkedIn profiles for more details. It is quite obvious they aren’t newbies in the world of storage.

Read the rest of this entry →


16-05-2013

Target Driven Zoning

Last week I read a post by Erik Smith regarding experiments in Target Driven Zoning.

Target Driven Zoning was first introduced in January 2012, and the first lab experiments are now surfacing.

Many storage admins working with Fibre Channel fabrics know exactly how much of a pain it can be to manage a large installation with thousands of ports. Erik’s post indicates EMC has a strict single-initiator-target (or even single-initiator-single-target) zoning policy, but it is my experience that almost all vendors with FC connectivity use this policy. As a result, a clustered system with multiple ports can result in a humongous zone count.

In a heterogeneous environment with thousands of ports it’s a hassle to find the right zone or create a zone with the right WWPNs. And then you have to do it all again for the other, redundant fabric, because we are probably using a resilient dual-fabric design.

One of the larger problems in these environments is housekeeping. Because it’s so much work, not many admins actually go into the fabric management software to track down unused ports or zones. So in the long run you end up with a lot of pollution, which only gets harder to clean up.

Target Driven Zoning is an initiative that would be a good solution here, as it would automate much of the aforementioned administrative burden. If TDZ eventually becomes generally accepted and is implemented by the storage vendors and FC switch vendors, it would create, update and delete zones automatically based on the LUN masking information you provide in the storage device’s management interface. So when a host gets decommissioned (deleted from the LUN masking database), its zones would automatically be removed from the active zone sets in the SAN(s).
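
To make the idea a bit more concrete, here is a toy sketch of what “zoning driven by LUN masking” boils down to: every initiator WWPN that is masked to a target port gets its own single-initiator-single-target zone, and a host that is removed from the masking database simply disappears from the generated zone set. This is only an illustration of the concept, not the actual T11 mechanics, and all WWPNs are made up.

    """Toy illustration of the Target Driven Zoning idea: derive
    single-initiator-single-target zones from a LUN masking table.
    Concept only, not the T11 protocol; all WWPNs are made up."""

    # LUN masking as configured on the array: initiator WWPN -> masked target port WWPNs
    masking = {
        "10:00:00:05:1e:aa:bb:01": ["50:05:07:68:01:40:aa:01"],
        "10:00:00:05:1e:aa:bb:02": ["50:05:07:68:01:40:aa:01",
                                    "50:05:07:68:01:40:aa:02"],
    }

    def derive_zones(masking):
        """Build one zone per initiator/target pair, named after both WWPNs."""
        zones = {}
        for initiator, targets in masking.items():
            for target in targets:
                name = "tdz_" + initiator.replace(":", "")[-6:] + "_" + target.replace(":", "")[-6:]
                zones[name] = {initiator, target}
        return zones

    if __name__ == "__main__":
        # Decommissioning a host = deleting it from 'masking' and regenerating:
        # its zones simply no longer appear in the result.
        for name, members in sorted(derive_zones(masking).items()):
            print(name, sorted(members))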

Since only storage devices that use LUN masking of some form will benefit from this, it will most probably not remove all your manual zoning labor. Large installations will likely also have FC-connected tape devices that still require manual zoning, and maybe there are even other devices connected to your SAN that will not use TDZ. But the remaining work that cannot be automated using TDZ will be just a very small piece of the large manual labor pie, so the benefit will be huge.

As I see it now, work is being done on this and the T11 standards are also being documented and approved. It took Erik and friends just over a year to have the first “proof-of-concept” results, which I think is actually pretty darn fast. If you are curious about the progress, you can use the links at the beginning of this post to keep track of Erik and his work.

I am still wondering, however, how the target device identifies the zone that needs to be created, updated or deleted. There must be some unique information that lets the target device know which zone needs modification. Or will the target tell the switch to modify all the zones the initiator is part of? Ah well, maybe in a next post.

Be sure to read up on this at:


06-05-2013

Flash Cache Acceleration with PernixData & SanDisk FlashSoft

Just a few days ago I was lucky enough to be a delegate at the 3rd Storage Field Day event, organized by Gestalt IT.

We had presentations from various well-established companies as well as from a few startups. Most presentations were excellent and we were able to have a deep dive into the details of various products.

The presentations that made the biggest impression on me were those by PernixData and SanDisk’s FlashSoft. To be honest, at first I didn’t have high expectations of SanDisk’s FlashSoft, since I was not aware SanDisk had enterprise products; I still thought of them as a manufacturer of consumer products. It’s a good thing this presentation taught me otherwise.

SanDisk

PernixData

For a while now we have seen the trend of sticking SSD drives or FLASH cards in servers, which should lead to extremely high IO with minimal latency. In most cases this works excellently, provided the configuration or application is tuned for such usage. Although the performance can be exceptionally high, it hasn’t gotten much traction yet. In enterprise environments it is quite normal to have the centralized storage capacity replicated to another location for availability or recoverability purposes. The FLASH and SSD that is local to the server can’t be replicated unless the server or application takes care of that. The downside is that there is no coherency or consistency between the local SSD or FLASH storage and the back-end storage array, rendering the replication useless.

Another problem is that the local SSD or FLASH storage cannot be shared with another server or application in a clustered setup, so a failover configuration is not possible.

SanDisk FlashSoft and PernixData take a different approach here: they use the SSD or FLASH storage as a cache device. The server holding the SSD or FLASH device can achieve an enormous performance improvement and still make sure all data eventually gets flushed to the back-end arrays, so all data in the array remains coherent. Using the SSD or FLASH devices as a cache does pose a problem if data resides in the cache for long without being destaged to the back-end: the data in the back-end will be out of sync, an outage can result in data corruption, and replication becomes pointless.

That’s why there are a few modes of write caching; a toy sketch contrasting the two follows the list below.

  • Write-back cache is where data is stored in cache and a write-complete acknowledgment is returned to the application without actually writing the data to the back-end storage media. Data is at risk if an outage occurs before the data is actually written to back-end media. A huge benefit, however, is that multiple writes can be combined, consolidated and ordered, thereby saving valuable IOs to the back-end. Read IOs can also be served from the cache, again saving IOs to the back-end (depending on the read cache hit rate, of course).
  • Write-through cache is where the write-complete acknowledgement is not given to the application until the write IO is safely written to the back-end storage device. There is no real performance benefit on the first actual write, but subsequent reads might be served from cache, if the written blocks are also stored in the local SSD or FLASH cache devices. This method is safer than write-back cache if you want to replicate the back-end storage or share the back-end storage with other hosts as you would in a cluster.
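
The toy sketch below is purely conceptual (it is not how PernixData or FlashSoft implement anything); it just contrasts the two modes and shows why write-back can save back-end IOs by coalescing repeated writes before destaging them.

    """Toy sketch contrasting write-through and write-back caching.
    Conceptual only; not PernixData's or FlashSoft's implementation."""

    class BackendArray:
        """Stand-in for the shared storage array behind the cache."""
        def __init__(self):
            self.blocks = {}
            self.write_ios = 0

        def write(self, lba, data):
            self.blocks[lba] = data
            self.write_ios += 1

    class FlashCache:
        """Server-side flash cache sitting in front of the array."""
        def __init__(self, backend, write_back=False):
            self.backend = backend
            self.write_back = write_back
            self.cache = {}       # lba -> data, also used to serve read hits
            self.dirty = set()    # blocks acknowledged but not yet destaged

        def write(self, lba, data):
            self.cache[lba] = data
            if self.write_back:
                self.dirty.add(lba)              # ack now; data at risk until destaged
            else:
                self.backend.write(lba, data)    # write-through: ack only after the array write

        def destage(self):
            """Flush dirty blocks; repeated writes to one block collapse into one IO."""
            for lba in sorted(self.dirty):
                self.backend.write(lba, self.cache[lba])
            self.dirty.clear()

    if __name__ == "__main__":
        array = BackendArray()
        cache = FlashCache(array, write_back=True)
        for i in range(10):                # ten overwrites of the same block...
            cache.write(42, f"version {i}")
        cache.destage()
        print(array.write_ios)             # ...reach the array as a single write IO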

Now, back to the presenters and their products.

PernixData and FlashSoft are software companies. They support a number of SSD or FLASH devices in the server that can be used as caching devices. Both products are very similar, but differ on some points. As time passes, these differences will shrink as both products evolve, mature and gain more features.

PernixData was co-founded in February 2012 by Satyam Vaghani, the brains behind (among other things) VMware VMFS and VAAI, and Poojan Kumar, who also worked for VMware and Oracle before that. Satyam knows exactly how VMware works and has very deep knowledge of VMware and all storage principles related to it. But don’t make the mistake of underestimating SanDisk’s role in this arena. SanDisk acquired the California-based start-up FlashSoft in February 2012 and added it to complete their stack of enterprise products, which now ranges from various hardware platforms to software. FlashSoft is a shipping product, whereas PernixData is still in beta.

PernixData is installed on all ESX hosts and the management and reporting utility is installed as a plugin into vCenter. The same applies to FlashSoft, but a separate management and policy utility is installed on a server outside vCenter, although this server might as well be the vCenter server itself.

Both PernixData and FlashSoft are kernel-mode add-ons that are loaded into the VMware hypervisor kernel. Installing these modules is a non-disruptive operation, and after installation acceleration can be enabled immediately, provided supported SSD or FLASH devices are already available in the ESX hosts.

PernixData needs to be installed on all servers in the ESX cluster, but not all ESX hosts need to have SSD or FLASH devices installed. In vCenter you will then be able to create a flash cluster resource in which all hosts with SSD or FLASH devices can be grouped. When accelerating volumes (ESX LUNs), VMDKs or VMFS datastores you can select a protection level: 0 for no cache replication, or 1 for a single remote copy on an SSD or FLASH device on another server in the cluster, if available. If you select 2, you can even create two copies on two different servers in the cluster if available, or on another server with copies on different flash devices if available. SanDisk FlashSoft isn’t at this level of high availability yet, but is getting there in the next release. The way PernixData and FlashSoft solve the failover details of cache coherency differs; unfortunately an embargo prevents me from revealing those details.

As for SSD or FLASH devices, PernixData and FlashSoft differ slightly too. PernixData supports all SSD and FLASH devices that are on the VMware Hardware Compatibility List, whereas FlashSoft currently supports most well-known devices. For a detailed list of FlashSoft-supported devices I would refer you to their site, but the information doesn’t seem to be publicly available.

The clustered cache accelerator is a true clustered feature with HA failover support, vMotion and all. If a VM is vMotioned to another ESX host in the cluster with an SSD or FLASH acceleration device, the cached data is also migrated to the other host over the ESX network interfaces. This might cause a slight delay or a slight increase in latency, which allegedly is not noticeable by the VM. A failover or vMotion to or from a server without a cache acceleration device is still possible and supported. As I said, FlashSoft isn’t at this level yet, but is getting there shortly.

For a single non-clustered system with acceleration, FlashSoft will work in write-back mode, but in a clustered configuration FlashSoft needs to be in write-through mode. But again, the future holds new cool features.

Currently, PernixData and FlashSoft only work with VMware vSphere 5.0 and up, though FlashSoft also works with certain Windows and Linux bare-metal servers. Neither vendor supports Microsoft Hyper-V yet, but the plan is that eventually Hyper-V will also be supported. It’s quite obvious that VMware is by far the biggest market.

Both PernixData and FlashSoft have made sure that not a single change has to be made to the VMs or applications for the acceleration to work. The acceleration is completely transparent to the host (bare metal or VM) and no configuration change or agent software is needed.

At this time, acceleration is only possible on block IO devices. File-level acceleration might be a thing of the future, where VMs or databases on NFS might see massive improvements in performance. This is just a small market in comparison, so I think not much priority is given to this type of acceleration.

Why use Flash Cache acceleration?

The technology behind SSD or FLASH acceleration enables IO consolidation to be performed on a host and therefore lowers the number of IOs fired at the back-end array. This will lower the overall load on the storage arrays by a significant percentage. You will then be able to postpone investing in a newer or faster storage array, or increase the total IO load on a host or ESX cluster. You will achieve higher utilization of existing arrays and servers without upgrading in that area.

You might want to look at this from a cost perspective though. The acceleration software isn’t free. Without having the exact figures, I would guess the MSRP is about $3,000 to $5,000 per host, and then you need the SSD or FLASH devices, which go for anything from $2,000 to as high as you like per device, depending on your capacity needs.

You will eventually need to compare the cost of SSD or FLASH acceleration to the cost of upgrading your storage array.
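
Purely as an illustration of that comparison, here is a back-of-the-envelope calculation using the rough guesses above. None of these numbers are vendor quotes, and the array upgrade figure is completely hypothetical.

    """Back-of-the-envelope cost comparison. All numbers are rough guesses
    or hypothetical, not vendor quotes; adjust them to your own situation."""

    hosts = 8
    software_per_host = 4000      # somewhere in the guessed $3,000 - $5,000 MSRP range
    flash_per_host = 2500         # an entry-level SSD or flash card per host
    array_upgrade = 60000         # hypothetical cost of a newer/faster storage array

    acceleration = hosts * (software_per_host + flash_per_host)

    print(f"Server side acceleration: ${acceleration:,}")
    print(f"Storage array upgrade:    ${array_upgrade:,}")
    print("Acceleration is cheaper" if acceleration < array_upgrade
          else "The array upgrade is cheaper")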

Conclusion

Both companies are very similar, although there are some slight differences. The differences are temporary however, since both companies are working hard on improvements and new features. You would think the creator of the VMware VMFS stack and VAAI would have the greatest advantage and that PernixData would have the brightest future in this field. But don’t forget SanDisk is a Fortune 500 company with huge cash flow and a large team of developers. They too see the value of this market and are working eagerly to secure their part. And don’t forget, all the major storage vendors will have SSD or FLASH acceleration software in the end, so you might as well check with your preferred vendor on their developments in this area. Just be sure to give them a thorough investigation and use all the information you can find to make sure your vendor’s product doesn’t suck.

If you are looking into SSD or FLASH acceleration, make sure you read all you can about it. You could start with these blogs and videos.

PernixData

SanDisk

 

Disclaimer: I was invited to SFD3 and all travel and accommodations were paid for. I was not compensated for the time spent at SFD3, nor was I obligated to write about the sponsors and/or presentations. I did so of my own accord and have written down my own perception of the SFD3 event.


03-05-2013

Storage Field Day 3rd edition, Arrival day

Well,

the first day is over… and fortunately so. I’ve been up for 24 hours by now, and I am absolutely in desperate need of some sleep.

Today actually isn’t a real Tech/Storage Field Day, but a day where all delegates arrive and get together for an evening dinner to get to know each other. Some have been to a Tech Field Day before; others are new to the event.

Although very exhausting, it was good to meet them all and get to know them better during dinner.

Now I will hit the sack and get ready for tomorrow’s line-up. You too should get ready to get all the latest and greatest about

 Marvell 

on this blog, on the blogs of all the other delegates, and on Twitter with hashtag #SFD3.


24-04-2013

Debriefing Software SVC & Storwize Cloud Based SRM

I recently had the opportunity to get in-depth working knowledge of the “Debriefing Software” cloud-based Storage Resource Management solution. The tool has won several awards already, so they must be doing things right.

Debriefing Software is based in Denmark, and their servers are also in a datacenter in Denmark, in case you want to know.

Although its name could suggest it is a general Storage Resource Management tool, it is clearly targeted at “IBM Tivoli Storage Manager” customers and/or “IBM SAN Volume Controller & Storwize” customers.
It is a cloud-based Storage Resource Management tool. I haven’t found many real cloud-based SRM tools, and I would love to make a comparison or team up with other bloggers who have knowledge of or experience with other cloud SRM tools.
As I said, this one targets IBM customers in particular. The company info lists IBM as a partner, but with this SRM tool I also consider them one of the competitors to IBM’s TotalStorage/Tivoli Productivity Center.
Debriefing is one of IBM’s beta-test customers, so they have early access to IBM code levels to start their own testing and feature enhancements. The Wizard Storage Portal (WSP for short) closely follows IBM’s development of new features and tries to implement them in the portal as quickly as possible. As far as I was able to tell, Debriefing has a very short time to market on new developments. They are mostly ahead of customers preparing their scheduled upgrades to new versions of the IBM products they might have.

Comment from Jesper Matthiesen (CTO and Founder, Debriefing Software):

We are starting test of the SVC 7.1 firmware in a few days.

The WSP is completely cloud-based, meaning you need no additional equipment in your datacenter. All the collected information is sent to the Debriefing servers. All you need on a Windows server is a low-footprint collecting agent, called a Probe, and a transfer agent, which conveniently is called an Agent.

The SVC/Storwize data collection

The Probe collects the data from the SVC/Storwize nodes by means of SSH with public/private key authentication. It fires some commands at the SVC clusters and retrieves a recent config backup XML file and the various performance metrics files; the Probe performs an SVC config backup every hour. The Probe and Agent can be on the same host, but also on different hosts, as long as they both have access to a common file share to exchange the data. User accounts for communication between the Probe and the SVC are configured locally, and are never communicated to the Debriefing servers. In case of an internet connection disruption between the Agent and the Debriefing servers, the Probe will continue to collect data from your SVC clusters and keep it until the upload to Debriefing eventually succeeds.
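
For those curious about what such a collection step looks like, here is a minimal sketch of the SSH part using the paramiko library. The host name, user and key path are made-up examples, the exact CLI commands can differ between SVC/Storwize code levels, and the real Probe obviously does a lot more (scheduling, fetching the performance files, and so on).

    """Minimal sketch of the SSH-based collection step (paramiko).
    Host, user and key path are examples; exact command names can differ
    between SVC/Storwize code levels. The real Probe does far more."""

    import paramiko

    def collect(host, user="monitor", key_file="/home/probe/.ssh/svc_probe"):
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=user, key_filename=key_file)   # public/private key auth

        # Trigger a fresh config backup on the cluster (produces svc.config.backup.xml)
        client.exec_command("svcconfig backup")

        # Grab some cluster information as an example of the data being gathered
        _, stdout, _ = client.exec_command("lssystem -delim ,")
        output = stdout.read().decode()

        client.close()
        return output

    if __name__ == "__main__":
        print(collect("svc-cluster-01.example.com"))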

Keep in mind though that the Agent isn’t really good at handling large numbers of files, because it continuously does a directory scan. With thousands of files this becomes too much to handle and never seems to complete, but a few days’ worth of files is no problem.

The Agent uses HTTPS to communicate with its upload servers at the Debriefing site. You can change some settings regarding data collection to match your own needs, but this is limited to the interval, not what you want to collect.

The Probe and Agent are plugins to a tool called the “Wizard Control Center”. The WCC helps you keep the plugins up to date: it can check for new versions of the plugins, but you have to install them yourself. The download and update part of the tool is somewhat sluggish and the version information of the plugins is hard to navigate; the version window is just too darn small. Although you probably won’t use this a lot, some rework wouldn’t hurt.

From what I experienced, the portal (website) is very stable. Bugs are fixed quickly and feedback from the support department is getting better and better. Customer-suggested improvements to the tool are evaluated and can be incorporated within a couple of months. I have never seen a vendor be that responsive to customer requests; I have never even seen customer-suggested improvements make it into a large vendor’s tool at all. The Debriefing guys have limited resources, so it seems only fair to say that not all requests will make it into the tool, and the time it takes may vary depending on their development workload. But if your improvement will benefit more Debriefing users, they will surely adopt your suggestion.

Some nice features

I will not list all the features, just some of the ones I like best.

The Wizard Storage Portal lets you set performance expectations on pools, mdisks and volumes. You can enter, for instance, the number of IOPS a storage pool should be able to deliver, or the maximum latency you want to see in a pool. If that IOPS or latency limit is reached, an alert can be issued to a storage admin so he or she can take action. If you have a lot of pools and volumes, entering the expectations will be a dreadful task; you might want to open a ticket with Debriefing so they can mass-update the values for you.
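
Conceptually, the expectation mechanism boils down to something like the snippet below. The numbers are made up and this is not Debriefing’s actual implementation, just an illustration of the idea.

    """Toy illustration of the expectation/alert idea: compare measured
    pool metrics against the limits you entered. Numbers are made up and
    this is not Debriefing's implementation."""

    expectations = {                   # per pool: what you expect it to cope with
        "pool_ssd":  {"max_iops": 20000, "max_latency_ms": 2.0},
        "pool_sata": {"max_iops": 1500,  "max_latency_ms": 20.0},
    }

    measured = {                       # latest samples from the performance data
        "pool_ssd":  {"iops": 21500, "latency_ms": 1.4},
        "pool_sata": {"iops": 900,   "latency_ms": 35.0},
    }

    for pool, limits in expectations.items():
        sample = measured[pool]
        if sample["iops"] >= limits["max_iops"]:
            print(f"ALERT {pool}: {sample['iops']} IOPS has reached the expected maximum of {limits['max_iops']}")
        if sample["latency_ms"] > limits["max_latency_ms"]:
            print(f"ALERT {pool}: latency {sample['latency_ms']} ms exceeds {limits['max_latency_ms']} ms")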

In a recent release (I believe it was 3.18) they made heat-maps available, based on the performance information in the database. You can view heat-maps for many objects in TSM and SVC/Storwize. As an example I have taken a volume heat-map of a selection; the notepad window is there to hide the volume names.

So is it all good?

No, there are certainly some flaws in the tool/portal. The Wizard Storage Portal holds many default views and reports. For the TSM part of the portal many scheduled reports are available, but scheduled reports are missing from the SVC/Storwize part; if you want to add a scheduled report for SVC/Storwize you will find only the TSM options there. The tool was initially built for the TSM platform and most evolution in the tool is in the TSM area, and some of that evolution is missing for the SVC/Storwize part. But don’t misread this: the tool is not less usable because of it, you just won’t have many custom or scheduled reports at your disposal.

Comment from Jesper Matthiesen (CTO and Founder, Debriefing Software):

Scheduled Reports would also be great to have in the SVC/Storwize plugin.

When WSP started out in 2003, the scheduled reports was the first feature that was added, because that was the way backup reporting was made back then. Backup happened at night, reporting was evaluated in the morning, and problems was fixed in the afternoon. Now, backup happens 24/7 – so a Scheduled Report is often outdated before its even read. And the use of online and UPDATED reports on the portal, combined with alerts has taken over. The things that most of our customers asks for in Scheduled Reports, concerns capacity usage for charge back purposes. For that, we have the WSP Dynamic API, that allows deep integration with other portals, cmdb and ERP systems.
We can easily add Scheduled reports for SVC also.
Ideas are very welcome.

 

I wasn’t able to investigate the TSM part of the tool much, so most of my content is based on the SVC/Storwize part of the tool. Be sure to take that into consideration if you are looking at this tool.

Documentation is limited to something I’d call a whitepaper; proper end-user documentation is completely missing, and you can’t download or find any. Debriefing is aware of this and improvements are on their way: webpage content and context-aware in-page help functions are being worked on. That will help users in using the tool or understanding the report they are reading, but I don’t think it will help customers incorporate the tool into their environment. Some architectural and technical information is still needed.

There is some inconsistency in the screens: some reports have text filters, some don’t, and some reports have radio buttons for selecting options while others have checkboxes. These are just minor inconsistencies, but you might notice them while working with the tool. I know some redesign is being worked on, so if you read this post after WSP version 3.19, things might already have changed.

As a last comment on what isn’t good: the tool provides a lot of (historic) performance information in beautiful graphs, tables and even heat-maps (which is great, because the heat-maps are available for many objects like hosts, volumes, pools and more), but (historic) capacity information is missing. You can see the current capacity usage per cluster, pool and host, but if you want to look at trending you will find that you can’t. The historic capacity information is in their database, so it could be used; they just need to add some reports for it, and they are working on that too.

Comment from Jesper Matthiesen (CTO and Founder, Debriefing Software):

So true!!
There are alerts that will warn about capacity shortage.
It’s an area for improvement.

 

As for their Wizard Storage Portal logo: it is obviously a matter of taste, and I think it looks nice, but I can’t help reading it as VSP, the Hitachi Virtual Storage Platform.

Can it replace IBM’s TPC?

Well, depending on what you want, it can. I consider it mature enough to use for reporting on your TSM (based on what I have seen) and SVC/Storwize environments, as long as you don’t need SAN/fabric reporting of some kind. If you want to collect information from hosts, as the “TPC for Data” module does, you can’t use Debriefing.

So, if you limit your reporting to just the TSM and SVC/Storwize environments, it could replace TPC. If you need more, you will have to look at TPC.

Comment from Jesper Matthiesen (CTO and Founder, Debriefing Software):

What I hear from customers, is that WSP’s monitoring/alerting for TSM and SVC is more flexible than TPC.
They also appreciate the usage based invoicing model of WSP – no upfront investment is required.

 

 


26-03-2013