Apart from the Windows Cloud stuff there were some other speakers and demos that interested me, including a neat use of Silverlight at the Hard Rock Café website, which you may have seen before as it has been bandied around as the pièce de résistance of Silverlight 2 since about March this year.
When you click that link you are presented with the Hard Rock Café memorabilia website. In the middle you will see an image that can be ‘Deep Zoomed’ into, and each node (item) can be interacted with separately. The resolution of the item pictures is amazing, and the level of detail you can zoom in to is awe-inspiring. I have just spent a few minutes reading a contract drawn up for a Beatles performance in 1965; the text is perfectly clear and takes no time at all to load. According to the speaker (whose name I forget, sorry), image optimisation has been a big part of the development of ‘Deep Zoom’ in Silverlight 2, in an attempt to deliver the best quality content to the widest range of users, on narrowband and broadband of all speeds. This is definitely something I can applaud: when I find myself on the road using my 3G data stick I still want to view high-resolution pictures in order to send them on or post them to the web, but I end up hanging around for what seems like an age for rich content to load, usually having to start again more than once after passing through tunnels, which is more than mildly frustrating.
The hot topic of the day, however, was Hyper-V and OS virtualisation.
All but two Windows Server 2008 editions come with Hyper-V services built in. If you don’t know what Hyper-V is, let me try to explain it as quickly as possible so that we can move on:
You have one server. Hyper-V can then split this server into two (or more) “virtual” servers, which means two servers running on one piece of hardware. To anyone but the sysadmins they respond and look like two completely separate pieces of kit, but they aren’t. (Something VMware has been doing for years.)
The whole point of all this is that, if it suits your business model, you could potentially chop your server and datacentre spend into tiny little pieces and make huge savings. Servers these days are so powerful, and often only a small percentage of that power is utilised, so one could easily gobble up three others and still have power (CPU, RAM, etc.) to spare in an emergency.
For example, if you had four file servers each running pretty steadily at 10% utilisation, why buy four pieces of kit that are only going to be using 40 out of a possible 400% when you could buy one piece of kit that would use 40 out of 100% and do the same job? It’s the choice between paying £10,000 for one server or £40,000 for four, when in the end they’ll both be doing the same job. The green freaks amongst you will be happy too, because only a quarter of the power is being consumed.
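If you want to sanity-check that back-of-the-envelope maths, here’s a quick sketch (the figures are the illustrative ones above, not real server quotes):

```python
# Back-of-the-envelope consolidation maths, using the illustrative
# figures from the example above (assumed numbers, not real quotes).
cost_per_server = 10_000          # £ per physical box
utilisation_per_workload = 0.10   # each file server ticks over at ~10%
workloads = 4

# Option 1: one physical server per workload.
separate_cost = workloads * cost_per_server                      # £40,000

# Option 2: consolidate all four workloads onto one box with Hyper-V.
consolidated_cost = cost_per_server                              # £10,000
consolidated_utilisation = workloads * utilisation_per_workload  # 40% of one box

saving = 1 - consolidated_cost / separate_cost
print(f"Hardware saving: {saving:.0%}")
print(f"Consolidated box utilisation: {consolidated_utilisation:.0%}")
```

Which confirms the 75% hardware saving, with the single box running at a comfortable 40%.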
OK. Maybe that’s a little simplistic, but you get the gist of what I’m trying to say, right?
Of course, it ends up introducing other problems, such as a single point of failure, but even if you ran it as a high-availability cluster you’d still be saving 75% on your hardware bill by buying two servers instead of eight.
Also, by happy coincidence, Microsoft Hyper-V Server (notice there’s no ‘Windows’ in the name, as there is no GUI) was released yesterday, October 1st, as a free download and is free for anybody to use. It doesn’t have a GUI, it doesn’t support high-availability clustering; in fact, it doesn’t support very much, but it would be great in a development or testing environment.
During these times of recession, depression and smaller purses, Hyper-V may have come at exactly the right time for Microsoft to maximise profit. Way to go, Steve.