Vodafone coverage in Landgraaf/Pinkpop

Dear Vodafone,

Coverage in the Landgraaf region has been hopeless for years. Several times I have been told that something would be done about it soon, and just as often I have been told that a fix is not on the short-term planning. It has also been promised several times that 4G would solve this problem by itself, because that connection was supposed to be much better and have better range.
I am no mobile telecommunications specialist, but in the IT world I work in, when speed goes up, the maximum usable distance actually gets shorter. Usually even proportionally, and therefore predictably.




06 2014

Flash Cache evolution

Last week, I presented on Server Side Flash Cache at the Storage Expo NL.
The presentation covered the things to check, such as Write-Through versus Write-Back caching, and why either one is good or bad.

While preparing that presentation, Diablo Technologies made an announcement, which I read about in a post on Chris Evans' site. This technology of putting flash storage in the memory DIMM format got me thinking about flash as a cache. Just this morning I read Chris Mellor's post on The Register about similar developments, which to me is another confirmation of my thoughts.

Many storage vendors are in one way or another trying to get a piece of the server-side flash pie. Some startups do so by offering an independent software product, like PernixData FVP, Proximal Data AutoCache with I/O Intelligence, and SanDisk's FlashSoft.

Others are trying to bind more revenue to themselves by combining hardware and software solutions, for example Fusion-io, EMC (although EMC XtremSW also supports most other SSD and flash devices) and NetApp Flash Accel.
I'm not claiming any one to be better than the other, because it takes more than just a good product to make a good solution for your situation.

What I’m going to say is my opinion and mine alone. You are free to disagree of course (and many probably will) and if you wish you can post your remarks in the comments.

What I think is going to happen is that the flash cache software, with all its complexity and awesome features, will be incorporated into the well-known operating systems. Microsoft, VMware and even the open source community are, in my opinion, smart enough to cook their own recipe and come up with their own flavor that turns a flash or SSD device into a read/write cache. As usual, some of the ISV solutions will get swallowed by one of the big companies. To me, it makes a lot of sense to incorporate flash cache software into the standard operating systems. This should be a standard feature for systems like Microsoft Server (SQL, Exchange, Hyper-V and more) or the enterprise Linux distributions. It's obvious that VMware is already working on it, and it's just a matter of time before it becomes available. What will then happen to the ISVs that have not yet been bought by the big ones?

I think it will be a matter of two to three years before flash cache software solutions become obsolete. Should you hold back on your plans to use them? No, of course not. Three years is still a long time to sit on your hands and not solve the issues you might have. Of course you should start using server-side caching if it solves issues you have. And take into account that I am not an analyst; I might be so very wrong…

Diablo Technologies' announcement actually spawned another thought. At first I was convinced that within a few years, every computer system (server, laptop or desktop) motherboard would have some kind of SSD device baked into the circuitry, and HDDs would no longer be an option when buying a computer or server system. The lifespan of SSD or flash can already be long enough to outlast the motherboard. In blades and many consumer products this is already the case.
The use of flash in the memory DIMM format does not change my view on this. Although DIMM flash makes it possible to save your system's state even during power outages, making it more robust, I don't think the operating systems are going to boot without a device that mimics a drive with a boot sector anytime soon.

If the flash DIMMs are going to be used as caching devices, this will be an even bigger performance improvement than using the PCI flash cards, because the DIMM sits closer to the CPU than the PCI cards do. It might be a fair blow to the all-flash system builders, but I don't think it will put them out of business. The DIMM flash modules will have one big drawback: they are not shared, unless an extremely fast memory interconnect is used to join multiple systems at the brain. Even then, this limits the possible distance between systems. I think the DIMM flash modules will have more effect on the SSD and flash card vendors. They will have to dive into the DIMM world to get a piece of that action.
The SSD and flash card vendors will not only have to compete with the other storage vendors in this arena, but also with the server vendors, as they too will try to get hold of that market.

Exciting as always.




11 2013

Storage Expo NL 2013


I'm on my way home from what I consider a successful Storage Expo event. We saw a lot of visitors, and from what I've heard from the exhibitors, they had some excellent conversations with (potential) customers. This is, after all, what the event is all about.

For vendors and partners it is a kind of reunion at which you get to catch up with people you don't see often. But lead generation is the primary goal for most exhibitors, obviously.
I am part of the Storage Expo advisory board and feel somewhat responsible for the success of the Storage Expo in terms of presentations and sessions. The last few years we've seen visitor numbers decline a bit, and we attributed this to the recession, because engineers and management-level employees were no longer allowed to spend time on anything other than actual work. Another problem was coming up with good sessions to organize, with a catchy or appealing topic to discuss. Although the storage business is very busy, not every topic makes for an interesting presentation or attracts visitors. It is kind of hard to think of presentation topics six months before the actual event takes place.

From the feedback I got, the sessions were well received this year. I did a presentation on "Server Side Flash Cache", which discusses the caveats to consider when planning for such a solution. While making the presentation I discovered there are many differences between the vendors in how they go about implementing the software and sometimes the hardware. These differences led me to think that an overview of the technical details would be a good reference for customers. So stay tuned for a new post on this topic in the near future. I'm still waiting on the feedback and evaluation of my presentation, of course. Since it was my first presentation in front of a new and unknown audience, I was kind of anxious. The mailing will be sent out sometime next week, and I'm looking forward to the feedback. My presentation, as well as those of the other presenters, will be available for download from the Storage Expo website from next week on. I will update this post with the proper links.

So, looking back on the Storage Expo, I think we had a successful show this year, and it's definitely viable to continue this show for the next couple of years. We don't want this event to die off like the one in the UK a few years back. I've been looking to get the dates changed a bit though, because there is always a one-day overlap with the Storage Networking World Europe event in Frankfurt. But if you consider the difficult planning and organization that goes into such events, you can imagine that you don't just change some dates; the venue usually is booked years in advance.
On the other hand, interesting speakers are already in the neighborhood when they speak at SNW Europe, so having them come over for the Storage Expo usually isn't all that difficult.

There aren’t many independent trade shows in the storage industry in Europe, so it’s good to keep SNW Europe and of course the Storage Expo Benelux going.




10 2013

IBM feature or enhancements requested by end-users


IBM is a huge organization with lots of developers working on a bunch of products. They have a lot to choose from, but still, sometimes a product is missing a feature long after others have added it.


Development cycles are perceived as sluggish and lengthy because it feels like it takes forever for IBM to incorporate new functions. This is true for most huge players in the IT industry. They have to consider a huge customer base with an enormous variety of equipment and software they have to work nicely with. So the quality process takes long because it needs to be extremely thorough, and a lot has to be tested before anything can be released to the public. The huge vendors have a lot to lose if things don't work.




06 2013

Curiosity about UNMAP and array based replication

Most storage and server operating system vendors support the SCSI UNMAP function to release unused data blocks in the storage system and free up unused space. If they don't support it now, they will over time.
We all know this is needed in addition to thin provisioning features to keep volumes thin provisioned over time. If thin provisioned volumes are used as source volumes for replication to another storage array (local or remote), the target volumes can be thin provisioned as well (in most cases anyway). If the source volumes fill up, so will the target volumes. Without the UNMAP function, unused space remains claimed by the operating system, and the thin provisioned volume will end up being a thick provisioned volume. When the unused space is released using UNMAP, will this also be forwarded to the replication target volumes? Will the unused space be freed on the target storage array as well?
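To make the question concrete, here is a tiny illustrative model, entirely my own assumption and not any vendor's behavior: a thin-provisioned source volume replicating to a target, where a hypothetical `forward_unmap` flag decides whether the target stays thin.

```python
# Illustrative model only: does an UNMAP on the source propagate
# to the replication target? All names here are invented.

class ThinVolume:
    def __init__(self):
        self.allocated = set()  # block numbers currently backed by storage

    def write(self, block):
        self.allocated.add(block)

    def unmap(self, block):
        self.allocated.discard(block)

class ReplicatedPair:
    def __init__(self, forward_unmap):
        self.source = ThinVolume()
        self.target = ThinVolume()
        self.forward_unmap = forward_unmap  # hypothetical array capability

    def write(self, block):
        self.source.write(block)
        self.target.write(block)            # writes always replicate

    def unmap(self, block):
        self.source.unmap(block)
        if self.forward_unmap:              # the open question from the post
            self.target.unmap(block)

pair = ReplicatedPair(forward_unmap=False)
for b in range(100):
    pair.write(b)
for b in range(50):
    pair.unmap(b)

print(len(pair.source.allocated))  # 50: the source is thin again
print(len(pair.target.allocated))  # 100: the target silently stays "thick"
```

If the array does not forward the UNMAP, the target keeps the stale allocations, which is exactly the scenario the question above worries about.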




06 2013

Storage Field Day delegate potential

A couple of days back, Arjan wrote a post on becoming a delegate for Storage Field Day (or any other Field Day, for that matter), which triggered me to write this one.

It's not in any way meant to correct anything Arjan wrote, because it's a good post. I would just like to add some thoughts.

I've been a delegate two times now and would love to be a delegate at future "* Field Days". Sometimes I wonder if my contribution is valuable enough to justify sponsors/vendors paying for the trip. Then again, much value comes from your involvement in the event and the feedback you provide. Blogging about it creates exposure for a longer period of time, and depending on the number of readers you have, that could be huge exposure or just some.




05 2013

SFD3 Launch of ExaBlox

At the most recent Storage Field Day we were again witness to the "unveiling" of a storage startup. This time we had the honor of seeing ExaBlox come out of stealth mode. The presenters from ExaBlox were Tad Hunt (CTO and Co-Founder) and Douglas Brockett (CEO). Tad used to work at Bell Labs. You might want to check their LinkedIn profiles for more details. It is quite obvious they aren't newbies in the world of storage.





05 2013

Target Driven Zoning

Last week I read a post by Erik Smith regarding experiments in Target Driven Zoning.

Target Driven Zoning was introduced in January 2012, and the first lab experiments are now surfacing.

Many storage admins working with Fibre Channel fabrics know exactly how much pain it can be to manage a large installation with thousands of ports. Erik's post indicates EMC has a strict single-initiator-target (or even single-initiator-single-target) zoning policy, but in my experience almost all vendors with FC connectivity use this policy. As a result, a clustered system with multiple ports sometimes results in a humongous zone count.

In a heterogeneous environment with thousands of ports it's a hassle to find the right zone or create a zone with the right WWPNs. And then you have to do it all again for the other, redundant fabric, because we are probably using a resilient dual-fabric design.
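A back-of-the-envelope sketch shows why the zone count explodes under a single-initiator-single-target policy; the port counts below are invented purely for illustration.

```python
# Rough zone-count estimate for a single-initiator-single-target policy
# in a resilient dual-fabric design. All numbers are made up.

def zone_count(initiator_ports, target_ports, fabrics=2):
    # one zone per (initiator port, target port) pair, repeated per fabric
    per_fabric = initiator_ports * target_ports
    return per_fabric * fabrics

# e.g. 200 hosts, one HBA port each per fabric, zoned to 8 array ports
print(zone_count(200, 8))  # 3200 zones to create and keep tidy
```

Even a modest environment quickly runs into thousands of zones, which is the housekeeping problem described below.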

One of the larger problems in these environments is housekeeping. Because it's so much work, not many admins actually go into the fabric management software to track down unused ports or zones. So in the long run, you end up with a heavily polluted configuration that only gets harder to clean up.

Target Driven Zoning is an initiative that would be a good solution here, as it would automate much of this administrative burden. If TDZ eventually becomes generally accepted and implemented by the storage vendors and FC switch vendors, it would create, update, and delete zones automatically based on the LUN masking information you provide in the storage device's management interface. So when a host gets decommissioned (deleted from the LUN masking database), its zones would automatically be removed from the active zone sets in the SAN(s).
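The idea can be sketched roughly as deriving the zone set from the LUN masking table, so that removing a host's masking entries also removes its zones. This is only my sketch of the concept, not the actual TDZ protocol, and all WWPN placeholders are made up.

```python
# Sketch of the TDZ concept: the zone set is a pure function of the
# LUN masking table. WWPNs below are illustrative placeholders.

def zones_from_masking(masking_table):
    """masking_table: {target_wwpn: [initiator_wwpn, ...]}"""
    zones = set()
    for target, initiators in masking_table.items():
        for initiator in initiators:
            zones.add((initiator, target))  # single-initiator-single-target
    return zones

masking = {
    "50:00:...:01": ["10:00:...:aa", "10:00:...:bb"],
    "50:00:...:02": ["10:00:...:aa"],
}
active = zones_from_masking(masking)

# Decommission host "10:00:...:bb": drop it from masking, re-derive.
masking["50:00:...:01"].remove("10:00:...:bb")
print(active - zones_from_masking(masking))  # the zone to auto-delete
```

The point of the sketch: once zoning is derived from masking, there is nothing left to forget during decommissioning.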

Since only storage devices that use LUN masking, in some form, will benefit from this, it will most probably not remove all your manual zoning labor. Large installations will likely also have FC-connected tape devices that still require manual zoning, and maybe there are other devices connected to your SAN that will not use TDZ. But the remaining work that cannot be automated using TDZ will be just a very small piece of the manual labor pie, so the benefit will be huge.

As I see it now, work on this is ongoing, and the T11 standards are being documented and approved. It took Erik and friends just over a year to have the first proof-of-concept results, which I think is actually pretty darn fast. If you are curious about the progress, you can use the links in the beginning of this post to keep track of Erik and his work.

I am still wondering, however, how the target device identifies the zone that needs to be created, updated, or deleted. There must be some unique information that lets the target device know which zone needs modification. Or will the target tell the switch to modify all the zones the initiator is part of? Ah well, maybe in a next post.

Be sure to read up on this at:



05 2013

Flash Cache Acceleration with PernixData & SanDisk FlashSoft

Just a few days ago I was lucky enough to be a delegate at the 3rd Storage Field Day event, organized by Gestalt IT.

We had presentations from various well-established companies as well as from a few startups. Most presentations were excellent and we were able to have a deep dive into the details of various products.

The presentations that made the most impression on me were those by PernixData and SanDisk's FlashSoft team. To be honest, at first I didn't have high expectations of SanDisk's FlashSoft, since I was not aware SanDisk had enterprise products; I still thought of them as a manufacturer of consumer products. It's a good thing this presentation could teach me otherwise.



For a while now we have seen the trend of sticking SSD drives or flash cards in servers, which should lead to extremely high IO with minimal latency. In most cases this works excellently, provided the configuration or application is tuned for such usage. Although the performance can be exceptionally high, it hasn't gotten much traction yet. In enterprise environments, it is quite normal to have the centralized storage capacity replicated to another location for availability or recoverability purposes. The flash and SSD storage that is local to the server couldn't be replicated unless the server or application took care of that. The downside is that there is no coherency or consistency between the local SSD or flash storage and the back-end storage array, rendering the replication useless.

Another problem is that the local SSD or flash storage cannot be shared with another server or application for clustering, so a failover configuration was not possible.

SanDisk FlashSoft and PernixData are quite different in this area: they use the SSD or flash storage as a cache device. The server holding the SSD or flash device can achieve an enormous performance improvement while still making sure all data will eventually be flushed to the back-end arrays, so all the data in the array stays coherent. Using SSD and flash devices as cache does pose a problem if data resides in the cache for long without being destaged to the back-end: the data in the back-end will be out of sync, an outage will result in data corruption, and replication is pointless.

That’s why there are a few modes of write caching.

  • Write-back cache is where data is stored in cache and a write-complete acknowledgment is returned to the application without actually writing the data to the back-end storage media. Data is at risk if an outage occurs before the data is actually written to back-end media. A huge benefit, however, is that multiple writes can be combined, consolidated and ordered, thereby saving valuable IOs to the back-end. Read IOs can also be served from the cache, again saving IOs to the back-end (depending on the read cache hit rate, of course).
  • Write-through cache is where the write-complete acknowledgment is not given to the application until the write IO is safely written to the back-end storage device. There is no real performance benefit on the first actual write, but subsequent reads might be served from cache if the written blocks are also stored in the local SSD or flash cache devices. This method is safer than write-back cache if you want to replicate the back-end storage or share the back-end storage with other hosts, as you would in a cluster.
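The two modes can be sketched in a few lines of Python; this is a deliberately simplified model, not any vendor's implementation.

```python
# Minimal model of write-back vs write-through caching.
# "backend" stands in for the storage array.

class Cache:
    def __init__(self, backend, write_back):
        self.backend = backend      # dict: block -> data
        self.cache = {}
        self.dirty = set()          # blocks not yet destaged (write-back only)
        self.write_back = write_back

    def write(self, block, data):
        self.cache[block] = data
        if self.write_back:
            self.dirty.add(block)       # ack now, destage later
        else:
            self.backend[block] = data  # write-through: backend first, then ack

    def destage(self):
        # flush dirty blocks; an outage before this point loses data
        for block in self.dirty:
            self.backend[block] = self.cache[block]
        self.dirty.clear()

backend = {}
wb = Cache(backend, write_back=True)
wb.write(1, "x")
print(1 in backend)   # False: back-end is out of sync until destage
wb.destage()
print(1 in backend)   # True: now coherent with the array
```

The window between `write` and `destage` is exactly the window in which write-back caching puts replication consistency at risk.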

Now, back to the presenters and their products.

PernixData and FlashSoft are software companies. They support a number of SSD and flash devices in the server that can be used as caching devices. Both products are very similar, but differ on some points. As time passes, these differences will shrink as both products evolve, mature and gain features.

PernixData was co-founded in February 2012 by Satyam Vaghani, the brains behind at least VMware VMFS and VAAI, and Poojan Kumar, who also worked for VMware and for Oracle before that. Satyam knows exactly how VMware works and has very deep knowledge of VMware and all storage principles related to it. But don't make the mistake of underestimating SanDisk's role in this arena. SanDisk acquired California-based startup FlashSoft in February 2012 and added it to complete their stack of enterprise products, which now ranges from various hardware platforms to software. FlashSoft is a shipping product, whereas PernixData is still in beta.

PernixData is installed on all ESX hosts and the management and reporting utility is installed as a plugin into vCenter. The same applies to FlashSoft, but a separate management and policy utility is installed on a server outside vCenter, although this server might as well be the vCenter server itself.

Both PernixData and FlashSoft are kernel-mode add-ons that are loaded into the VMware hypervisor kernel. Installing these modules is a non-disruptive operation, and after installation acceleration can immediately be enabled, provided supported SSD or flash devices are already available in the ESX hosts.

PernixData needs to be installed on all servers in the ESX cluster, but not all ESX hosts need to have SSD or flash devices installed. In vCenter you will then be able to create a flash cluster resource in which all hosts with SSD or flash devices can be grouped. When accelerating volumes (ESX LUNs), VMDKs or VMFS volumes, you can select a protection level: 0 for no cache replication; 1 for a single remote copy on an SSD or flash device on another server in the cluster, if available; or 2 for two copies on two different servers in the cluster, if available, or on another server with copies on different flash devices. SanDisk FlashSoft isn't at this level of high availability yet, but is getting there in the next release. The way PernixData and FlashSoft solve the failover details of cache coherency differs, but unfortunately an embargo prevents me from revealing those details.

As for SSD and flash devices, PernixData and FlashSoft slightly differ too. PernixData supports all SSD and flash devices that are on the VMware Hardware Compatibility List, while FlashSoft currently supports most well-known devices. For a detailed list of FlashSoft-supported devices I would refer you to their site, but the information doesn't seem to be publicly available.

The clustered cache accelerator is a true clustered feature, with HA failover support, vMotion and all. If a VM is vMotioned to another ESX host in the cluster with an SSD or flash acceleration device, the cached data will also be migrated to the other host over the ESX network interfaces. This might cause a slight delay or a slight increase in latency, though allegedly not noticeable by the VM. A failover or vMotion to or from a server without a cache acceleration device is still possible and supported. As I said, FlashSoft isn't at this level yet, but is getting there on short notice.

For a single non-clustered system with acceleration, FlashSoft will work in write-back mode, but in a clustered configuration FlashSoft needs to be in write-through mode. But again, the future holds new cool features.

Currently, PernixData and FlashSoft only work with VMware vSphere 5.0 and up, although FlashSoft also works with certain Windows and Linux bare-metal servers. Neither vendor supports Microsoft Hyper-V yet, but the plan is that eventually Hyper-V will also be supported. It's quite obvious that VMware is by far the biggest market.

Both PernixData and FlashSoft have made sure that not a single change has to be made to the VMs or applications for the acceleration to work. The acceleration is completely transparent to the host (bare metal or VM), and no configuration change or agent software is needed.

At this time, acceleration is only possible on block IO devices. File-level acceleration might be a thing of the future, where VMs or databases on NFS might see massive improvements in performance. But this is just a small market in comparison, so I think not much priority is given to this type of acceleration.

Why use Flash Cache acceleration?

The technology behind SSD or flash acceleration enables IO consolidation to be performed on a host, and therefore lowers the number of IOs fired at the back-end array. This lowers the overall load on the storage arrays by a significant percentage. You will then be able to postpone investing in a newer or faster storage array, or increase the total IO load on a host or ESX cluster. You will achieve higher utilization of existing arrays and servers without upgrading in that area.

You might want to look at this from a cost perspective though. The acceleration software isn't free. Without having the exact figures, I would guess the MSRP is about $3,000 to $5,000 per host, and then you need the SSD or flash devices, which go for anywhere from $2,000 to as high as you like per device, depending on your capacity needs.

You will eventually need to compare the cost of SSD or FLASH acceleration to the cost of upgrading your storage array.
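As a rough illustration of that comparison, here is a trivial calculation; every number in it is an assumption based on the guesses above, not a quote from any vendor.

```python
# Rough cost comparison: server-side caching vs. an array upgrade.
# All figures are assumptions for illustration only.

hosts = 8
software_per_host = 4000      # midpoint of the guessed $3,000-$5,000 MSRP
flash_device_per_host = 3000  # entirely configuration-dependent

caching_cost = hosts * (software_per_host + flash_device_per_host)
array_upgrade_cost = 100000   # hypothetical cost of a faster array

print(caching_cost)                           # 56000
print(caching_cost < array_upgrade_cost)      # True: caching wins here
```

Of course the break-even point shifts with host count and device prices, which is why you need to run your own numbers.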


Both companies are very similar, although there are some slight differences. The differences are temporary, however, since both companies are working hard on improvements and new features. You would think the creator of the VMware VMFS stack and VAAI would have the greatest advantage, and that PernixData would have the brightest future in this field. But don't forget SanDisk is a Fortune 500 company with huge cash flow and a large team of developers. They too see the value of this market and are working eagerly to secure their part. And don't forget that all major storage vendors will have SSD or flash acceleration software in the end, so you might as well check your preferred vendor for their developments in this area. Just be sure to give them a thorough investigation; use all the information you can find to make sure your vendor's product doesn't suck.

If you are looking into SSD or FLASH acceleration, make sure you read all you can about it. You could start with these blogs and videos.


  • PernixData : Satyam Vaghani Introduces PernixData
  • PernixData : An Overview of the PernixData Flash Virtualization Platform
  • PernixData : Demonstrating the PernixData Flash Virtualization Platform
  • PernixData : PernixData Technology Deep Dive



  • SanDisk, Flashsoft, and the Economics of Caching
  • SanDisk, Flashsoft, Core Caching Technology
  • SanDisk, FlashSoft for VMware vSphere
  • SanDisk, FlashSoft Performance Results

Disclaimer: I was invited to SFD3 and all travel and accommodations were paid for. I was not compensated for the time spent at SFD3, nor was I obligated to write about the sponsors and/or presentations. I did so by my own decision, and I have written my own perception of the SFD3 event.



05 2013

Storage Field Day 3rd edition, Arrival day


The first day is over… and fortunately so. I've been up for 24 hours by now, and I am absolutely in desperate need of some sleep.

Today actually isn't a real Tech/Storage Field Day but a day on which all delegates arrive and get together for an evening dinner where we get to know each other. Some have been to a Tech Field Day before; others are new to the event.

Although very exhausting, it was good to meet them all and get to know them better during dinner.

Now I will hit the sack and get ready for tomorrow's line-up. You too should get ready for all the latest and greatest about


on this blog, on the blogs of all the other delegates, and on Twitter with hashtag #SFD3.



04 2013