
A Simple Approach to Automating IT Services


Recently, I've been delivering workshops focused on software-defined infrastructure and IT automation solutions.  I'm finding clients to be excited and eager to begin providing IT offerings through self-service catalogs:  desktops, servers, databases, web servers, email, storage, backup and recovery, etc.  Beyond traditional IT offerings, innovation and imagination are leading to the simplified delivery of complex business offerings such as On-Boarding-as-a-Service, Audit-as-a-Service, Disaster Recovery-as-a-Service, and Dev/Test-as-a-Service (PaaS+).

 

Exciting times for sure!  But it's easy to get distracted by all the shiny new technologies offered by cloud automation vendors. We tend to forget that the most challenging (yet most valuable) assets within IT are the people and processes, not the technologies.  It's easy (relatively) to automate the heck out of every manual step that exists today and declare victory, but that's akin to "paving cowpaths."

 

[Image: IMG_4387.jpg]

 

Next thing you know, you've built an expensive super-highway and bridge (to nowhere?) over a meandering dirt path. Yes, it's a somewhat smoother ride but is it really faster, less complex, and more efficient?  Does it provide more value than before?  In most cases, clients that simply automate their manual provisioning processes discover they've received little value at a big expense.

 

[Image: IMG_4388.jpg]

 

So, before you start automating every manual process and decision point within an IT service, take time to reexamine how things are done today. Maybe paving the cowpath is the right thing to do, but more often than not, it's better to consider alternatives to today's processing. How should you go about examining your service provisioning processes? You could use a fancy framework to assess existing processes, but I prefer speed and simplicity. To me, reinventing how you provision IT services boils down to examining the process steps, decision points, and actions required to deliver services. (This goes for "Day 2" activities as well, such as requesting increased capacity for an existing service.)


In order to understand whether a process is needed and valuable, you must assess (measure) it to determine its necessity and value. Once you understand the need and "cost" of a process, you can ask questions such as:

  • Can/Should we automate the process?
  • Is there a better way to execute the process? (better relative to the process features listed in the next section)
  • Can the process be re-sequenced within the broader provisioning of the service so that it's more efficient?
  • Do we still need this process?
  • Should we even offer this service? (rationalization)

 

When assessing each process within the provisioning of an IT service, I look at the value of five process features (starting from the top right of the picture):

[Image: IMG_4386.JPG]

 

1. Duration:  How long does it take for this process to complete?  For example, does a request for increased email storage sit in a manager's inbox for 1 week?

2. Transparency:  Is the process visible to all those concerned?  If I submitted the request for a new server, can I easily know the status of the request?  Is the process being logged?  Can we audit the execution of the process for accountability?

3. Ease of Use:  Is the process easy to execute?  Can it be performed by an organization and not just a specific person?  It's hard to believe how often I hear a client say, "Only Jane or John can do that."  Tough luck if Jane and John are on leave.

4. Cost:  What does this process cost to execute?  Cost is of course crucial because it gets calculated into the cost/price of the service offered.  The cost of a process is typically labor but can come from other sources such as office materials or shipping costs.

5. Governance and Security:  This may apply more to decision points within the provisioning of a service, but oftentimes there's an over-elaborate process of notifications, sign-offs, and justification.  For instance: "write a justification as to why you need an extra 100 GB of storage for your Wiki-Server."

 

Fairly simple, not perfect.  For instance, you may feel it's not enough to measure Duration alone and that it should be broken out into Actual Working Time and Wait Time.  Fine by me, but I prefer to keep it simple.  Besides, the impact of Actual Working Time should be reflected in the Cost feature of a process.  And we may revisit a process's Duration if it takes longer than acceptable; at that point, we may learn that Actual Working Time is a problem to be addressed.

 

 

I measure and score each process feature using a simple green, yellow, or red indicator:

  • If the features of a process are all green (or mostly green), I tend to leave the process as-is and automate if possible.
  • If the features of a process are all red (or mostly red), I try to fix the process or find a better alternative process to automate. Better yet, if possible, I remove the offending process from provisioning.
  • And then you have everything in between: the "yellowish" process.  Maybe it's all yellow features, maybe some green and some red.  Whatever the combined rating, it's unclear how necessary and effective this process is within the provisioning of an IT service.  To me, this is a judgement call for the IT organization to make.  Perhaps the process is very expensive, but the ease with which it is executed is great; the organization decides to keep the process because ease of use counts the most.  These are the tradeoffs to consider.  In some cases, just automating the process turns the "red" features "green", and that's enough to decide to keep the existing process for automation.  (A small scoring sketch follows below.)
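
To make the rating concrete, here is a minimal sketch in PowerShell of how such a scorecard could be tallied. The function name, feature names, and thresholds are all illustrative assumptions, not part of any formal framework:

# Minimal sketch: tally a process's five feature ratings and suggest an action.
# All names and thresholds here are illustrative assumptions.
function Get-ProcessRecommendation {
    param(
        [hashtable]$Features  # feature name -> 'Green' | 'Yellow' | 'Red'
    )
    $red   = @($Features.Values | Where-Object { $_ -eq 'Red' }).Count
    $green = @($Features.Values | Where-Object { $_ -eq 'Green' }).Count

    if ($green -ge 4) { return 'Leave as-is and automate if possible' }
    if ($red   -ge 4) { return 'Fix, replace, or remove the process' }
    return 'Judgment call: weigh the tradeoffs before automating'
}

# Example: a request for increased email storage, scored on the five features.
$emailStorageRequest = @{
    Duration     = 'Red'      # sits in a manager's inbox for a week
    Transparency = 'Yellow'
    EaseOfUse    = 'Green'
    Cost         = 'Yellow'
    Governance   = 'Red'
}
Get-ProcessRecommendation -Features $emailStorageRequest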

 

Again, simple.  But I think it's a great way to venture into automating IT services, especially the provisioning of services to end-users.  Certainly there are other aspects to examine, such as shared processing across provisioned services, the effectiveness of decision-trees, chain-of-command, and even the portfolio rationalization of the services offered in general.  All well and good, but in my opinion, you need to start simple.  You need to start small in scope and gain some quick wins.  Most importantly, you need to experiment and learn early on so that you can adjust your strategy as you progress to providing more services, and more complex services.


Cloning VMs with PowerCLI (New-VM)


It's a bit late to the party, but let's try cloning a VM with PowerCLI.

 

This post uses PowerCLI 5.5 R2.

PowerCLI> Get-PowerCLIVersion | select UserFriendlyVersion

 

UserFriendlyVersion

-------------------

VMware vSphere PowerCLI 5.5 Release 2 build 1671586

 

First, connect to vCenter with PowerCLI.

(The vCenter address used here is vc55u1-1.vmad.local.)

PowerCLI> Connect-VIServer vc55u1-1.vmad.local

 

Name                           Port  User

----                           ----  ----

vc55u1-1.vmad.local            443   VMAD\Administrator

 

Now, let's clone this VM, vm001.

PowerCLI> Get-VM vm001

 

Name                 PowerState Num CPUs MemoryGB

----                 ---------- -------- --------

vm001                PoweredOff 1        0.250

 

You can clone a VM with the New-VM cmdlet.

Specify the source VM with -VM and the name of the new VM with -Name.

Incidentally, when cloning from a VM template, specify -Template instead of -VM.
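
As a quick sketch of the template case (the template name "template01" below is hypothetical; it is not part of this environment):

PowerCLI> New-VM -Template template01 -Name vm101 -VMHost hv55n1.vmad.local -Datastore ds_hv55n1_01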

 

This example also specifies a few other options:

  • -VMHost → the ESXi host on which to create the VM.
  • -Datastore → the datastore on which to create the VM.
  • -DiskStorageFormat → the format of the virtual disks (VMDK files).

PowerCLI> New-VM -VM vm001 -Name vm002 -VMHost hv55n1.vmad.local -Datastore ds_hv55n1_01 -DiskStorageFormat EagerZeroedThick

 

Name                 PowerState Num CPUs MemoryGB

----                 ---------- -------- --------

vm002                PoweredOff 1        0.250

 

You can also pass the source VM to New-VM through a pipe (|), as shown below.

This time, the clone goes to a different datastore.

Since it is an NFS datastore, the disk format is set to Thin.

PowerCLI> Get-VM vm001 | New-VM -Name vm003 -VMHost hv55n1.vmad.local -Datastore ds_nfs_181 -DiskStorageFormat Thin

 

Name                 PowerState Num CPUs MemoryGB

----                 ---------- -------- --------

vm003                PoweredOff 1        0.250

 

The VMs were created on the specified ESXi host and datastores.

PowerCLI> Get-VM vm00? | select Name,VMHost,{$_|Get-Datastore} | sort Name | ft -AutoSize

 

Name  VMHost            $_|Get-Datastore

----  ------            ----------------

vm001 hv55n1.vmad.local ds_hv55n1_01

vm002 hv55n1.vmad.local ds_hv55n1_01

vm003 hv55n1.vmad.local ds_nfs_181

 

The VMDK files are also in the disk formats that were specified.

PowerCLI> Get-VM vm00? | Get-HardDisk | select Parent,Name,StorageFormat,@{N="GB";E={[int]$_.CapacityGB}},Filename | ft -AutoSize

 

Parent Name           StorageFormat GB Filename

------ ----           ------------- -- --------

vm001  Hard disk 1            Thick 10 [ds_hv55n1_01] vm001/vm001.vmdk

vm002  Hard disk 1 EagerZeroedThick 10 [ds_hv55n1_01] vm002/vm002.vmdk

vm003  Hard disk 1             Thin 10 [ds_nfs_181] vm003/vm003.vmdk

 

The vNIC type (such as VMXNET3) and port group are inherited from the source VM.

The MAC addresses, however, are properly regenerated.

PowerCLI> Get-VM vm00? | Get-NetworkAdapter | select Parent,Name,Type,NetworkName,MacAddress | sort Parent | ft -AutoSize

 

Parent Name                 Type NetworkName  MacAddress

------ ----                 ---- -----------  ----------

vm001  Network adapter 1 Vmxnet3 pg-vlan-0005 00:50:56:90:30:b0

vm002  Network adapter 1 Vmxnet3 pg-vlan-0005 00:50:56:90:38:59

vm003  Network adapter 1 Vmxnet3 pg-vlan-0005 00:50:56:90:00:e6

 

If you want to create a large number of VMs, you can create them in bulk, as shown below.

(In this example the source VM, ESXi host, and so on are hard-coded...)

(The lines beginning with the PowerCLI prompt, including the ">>" continuations, are the command input.)

PowerCLI> 4..10 | foreach {

>> $vm_name = "vm" + $_.ToString("000")

>> New-VM -VM vm001 -Name $vm_name -VMHost hv55n1.vmad.local -Datastore ds_nfs_181

>> }

>>

 

Name                 PowerState Num CPUs MemoryGB

----                 ---------- -------- --------

vm004                PoweredOff 1        0.250

vm005                PoweredOff 1        0.250

vm006                PoweredOff 1        0.250

vm007                PoweredOff 1        0.250

vm008                PoweredOff 1        0.250

vm009                PoweredOff 1        0.250

vm010                PoweredOff 1        0.250

 

You can also prepare a CSV file in advance, read it in, and run New-VM from it.

Create a file like this ahead of time...

PowerCLI> cat C:\work\vm_list.txt

SrcVM, NewVM, ESXi,            Datastore

vm001, vm011, hv55n1.vmad.local, ds_nfs_181

vm001, vm012, hv55n1.vmad.local, ds_nfs_181

vm001, vm013, hv55n2.vmad.local, ds_nfs_181

vm001, vm014, hv55n2.vmad.local, ds_nfs_181

 

Read it in as CSV and run New-VM. The VMs are cloned exactly as specified in the file...

PowerCLI> $vm_list = Import-Csv C:\work\vm_list.txt

PowerCLI> $vm_list | foreach {

>> New-VM -VM $_.SrcVM -Name $_.NewVM -VMHost $_.ESXi -Datastore $_.Datastore

>> }

>>

 

Name                 PowerState Num CPUs MemoryGB

----                 ---------- -------- --------

vm011                PoweredOff 1        0.250

vm012                PoweredOff 1        0.250

vm013                PoweredOff 1        0.250

vm014                PoweredOff 1        0.250

 

Finally, you can delete VMs with the Remove-VM cmdlet. By default it only unregisters the VM from the vCenter inventory, but if you specify -DeletePermanently, the VMDK files themselves are deleted as well.

 

To avoid accidentally deleting VMs you still need, I recommend narrowing down the targets with Get-VM before piping them to Remove-VM.
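
As an extra safety net, you can sketch a dry run with PowerShell's standard -WhatIf switch first (assuming, as with destructive PowerCLI cmdlets in general, that Remove-VM honors the common parameter):

PowerCLI> Get-VM vm00[2-9],vm01? | Remove-VM -DeletePermanently -WhatIf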

 

First, check the target VMs.

 

PowerCLI C:\> Get-VM vm00[2-9], vm01?

 

Name                 PowerState Num CPUs MemoryGB

----                 ---------- -------- --------

vm002                PoweredOff 1        0.250

vm003                PoweredOff 1        0.250

vm004                PoweredOff 1        0.250

vm005                PoweredOff 1        0.250

vm006                PoweredOff 1        0.250

vm007                PoweredOff 1        0.250

vm008                PoweredOff 1        0.250

vm009                PoweredOff 1        0.250

vm010                PoweredOff 1        0.250

vm011                PoweredOff 1        0.250

vm012                PoweredOff 1        0.250

vm013                PoweredOff 1        0.250

vm014                PoweredOff 1        0.250

 

 

Pipe the verified Get-VM output straight to Remove-VM.

Only vm001 is left undeleted.

PowerCLI C:\> Get-VM vm00[2-9],vm01? | Remove-VM -DeletePermanently -Confirm:$false

PowerCLI C:\> Get-VM vm0??

 

Name                 PowerState Num CPUs MemoryGB

----                 ---------- -------- --------

vm001                PoweredOff 1        0.250

 

That wraps up VM cloning with PowerCLI.

Downgrade VM hardware version 10 to 9

VMware Newsletter 6.29


From the editor's virtual desk

Hi everyone, my week was consumed with a number of very worthwhile activities. First up, I worked with my customers on a training plan and certification matrix. This is such a worthwhile exercise: it allows the organisation to plan not only when their employees might be away on training, but also to ensure that each employee is getting the training they require and is able to become certified should they choose.

 

I also spent time working through our roadmaps, as I am soon to present these to my customers. I find this such a great exercise, and it always makes me pretty excited to learn about our upcoming technology and then get to present it to my customers.

 

We are also nearing the most fantastic event on the VMware calendar: VMworld in San Francisco. I will do my best to bring you as much news about this event before, during, and after it as I possibly can.

 

I wish you a fantastic week and look forward to speaking to you next week. Please of course feel free to get in touch with me anytime.

 

Virtually Yours

Neil Isserow

Senior Technical Account Manager

VMware Australia

 

Local Events

 

Local Training Classes

Tinyurl.com/au7z3cr

VMware vSphere: Optimize and Scale [V5.5] 04-08 Aug

VMware vSphere: Fast Track [V5.5] 18-22 Aug

 

 

VMware Bloggers

 

ESXi unexpected shutdown


Dear All,

 

Please help me with the error logs generated when my ESXi shut down unexpectedly.

 

Thanks

Naved Ansari

Voting for OpenStack Paris Summit Talks


The talks presented at the OpenStack Design Summit are selected based on votes from end-users like you.

 

Voting for the Paris Design Summit started a couple of days ago and will end on Wednesday, August 6th at 10pm PST. So please take a look at the talks below, and if they look interesting, vote for them. Once again, we expect to have a very strong showing of VMware at the OpenStack summit.  Some talks are by VMware customers who are using technologies like NSX and vSphere in their deployments.  Other presentations are by VMware staff, either about VMware technologies in OpenStack or about ways that VMware is helping the OpenStack community.

 

In both cases, we need your help making sure these sessions are selected, so please vote today!


Sessions by Customers running VMware with OpenStack:

 

Hands on Lab Sessions by VMware:

 

Sessions from VMware:

 

 

 

Sessions from VMware OpenStack Partners related to VMware:

GSS Tech Seminar (2014. 05. 09) Determining Network/Storage Firmware and driver version in ESXi/ESX 4.x & 5.x

Future-Proof Your Virtual SAN With New Trade-in Program


You may have heard that VMware VSA is no longer available and that support for it will end in September 2018. Going forward, the alternative virtual storage solution from VMware – Virtual SAN – will pose some serious challenges for VMware users. The first is cost: VSAN licensing costs are substantial, and increased storage, memory, and networking requirements add extra cost to the system. Second, Virtual SAN and similar solutions aren't always suitable for storage at the edge, where the core requirement is a cost-effective solution with a minimal IT footprint. Also, there's no easy upgrade path from VMware VSA to VMware VSAN, making the transition difficult and resource-intensive.

If you’d like to get ahead and move to a solution with a clear future that’s being adopted at a fast-growing rate in the industry, we’d like to offer you a trade. For a limited time, you can exchange your VMware VSA for StorMagic’s SvSAN – and only for the costs of your ongoing maintenance that are undoubtedly already a part of your IT budget.

SvSAN, a software solution that delivers a virtualized shared storage platform, enables enterprises to eliminate downtime and ensure high availability of business critical applications at the edge where disruptions directly equate to losses in revenue and service. By leveraging existing server storage and presenting it as a virtual SAN, SvSAN supports a wide range of organizations – from those with 10 edge sites to those with 10,000 or more – through minimal IT infrastructure. It also offers IT organizations a clear migration path from any existing storage solutions, minimizing transition burdens.

For more information on this new trade-in program, or to register, visit www.stormagic.com/future. Or, for a chance to try SvSAN for yourself through a free 60-day trial, visit www.stormagic.com/trial.


New Exams in the Datacenter Virtualization Track


[Image: VCAP-DCD-1.jpg]

 

On August 4, 2014, VMware announced the availability of the VCAP-DCD exam based on vSphere 5.5; the code for the new exam is VDCD550.

 

By passing the VDCD550 or the VDCD510 exam (based on vSphere 5.0/5.1), the candidate earns the VMware Certified Advanced Professional 5 – Data Center Design (VCAP5-DCD) certification.

 

With this announcement, the three main certifications in the Datacenter Virtualization track are available for vSphere versions 5.0/5.1 and 5.5, under the exam codes below:

 

VMware Certified Professional 5 - Data Center Virtualization (VCP5-DCV)

vSphere 5.5 Based Exam – Exam Code VCP550 (available since January 21, 2014)

vSphere 5.0/5.1 Based Exam – Exam Code VCP510

 

VMware Certified Advanced Professional 5 - Data Center Administration (VCAP5-DCA)

vSphere 5.5 Based Exam – Exam Code VDCA550 (available since April 7, 2014)

vSphere 5.0 Based Exam – Exam Code VDCA510

 

VMware Certified Advanced Professional 5 - Data Center Design (VCAP5-DCD)

vSphere 5.5 Based Exam – Exam Code VDCD550 (available since August 4, 2014)

vSphere 5.0/5.1 Based Exam – Exam Code VDCD510

 

If you are preparing to take one of the exams above, be sure to download the blueprint for the correct exam so you are not caught by surprise on exam day.

 

One more important point: now that the new exams are available, VMware will likely retire the exams based on the older versions very soon.

Welcome CloudCred Version III


Announcing:

CloudCred Version III

It's all about your CloudCred Prestige

 

 

Reach 20K points and you will be ready to move to the next level of play.

There are 15 Prestige levels; each has its own tasks.

Get ready to test your knowledge!  Explore harder tasks the higher you climb.

Level 2 Tasks · Level 3 Tasks · Level 4 Tasks and beyond…

 

CloudCred has recruited some of the most senior administrators (VCDX & vExperts) to craft more challenging tasks as you climb up through the levels. You will not be exposed to advanced-level tasks until you reach your next Prestige.

 

     

 

 

Leaderboards are now focused on where you are and getting to the next level.  Race your team members up the ladder to the next shield. But understand, each time you Prestige, you drop down to the bottom of the Leaderboard and begin again with a score of zero.  But don't worry.  All the repeatable tasks will open back up to help you on your way!

Reach 20K and look for the Prestige button on your profile page!

                               

 

 

Contests and Prizes will be able to operate within levels in order to allow players at differing skill levels to compete for rewards and prizes.

So, get ready to Play, Learn, and WIN in a whole new way!

CloudCred Version III - Releasing August 9, 2014.

 

- Your CloudCred Admin Team

 

Fine Print for Existing Players:

For those who have been playing: you have automatically been set to the appropriate Prestige level, and the remainder of your points has been saved. Nothing has been lost. You will now see your appropriate Prestige level and badge, and your rank on the Leaderboards is based on your remaining points.

For those trying to win a 3D printer: Total points have been recorded. Leaders will be calculated and published once a week.

 

 

[VMware] Virtual SAN Design and Sizing Guide for Horizon View


A sizing and design guide has been released covering what to consider when deploying Horizon View in an environment where Virtual SAN is available.

The guide explains the characteristics of Virtual SAN and Horizon View, then walks through hosts, storage, and networking item by item, and also covers configuration design.

[Image: guide.jpg]

 

 

If you are interested, please give it a read.

Download it here...

[VMware] OS Optimization Tool 2014


The latest version of the OS Optimization Tool, the master-image optimization tool for Horizon View first published in 2013, has been released on Flings.

 

 

In this version, the following features were enhanced or added:

  • Updated optimization templates for Windows 7 and 8
  • Added optimization templates for Windows Server 2008-2012
  • Integrated the remote and local tools
  • Improved template management
  • Added reporting of optimization results

 

[Image: osoptimizationtool2014.jpg]

 

Note that a "Windows Server 2008-2012" template has been added.

 

 

You can download it here.

Business Continuity and Disaster Recovery Design Workshop (BCDR Workshop) Summary - Module I


Hi all,
During my study for VCAP-DCD, I listened to the Business Continuity and Disaster Recovery Design Workshop (BCDR Workshop) by VMware. It's an online course of about 4.5 hours and can be found at this link.

This workshop really builds a base for practicing DR sites and business continuity plans, and it's really helpful. If you're at VCP level or lower, you may find this summary incomplete and need to review the full modules, but if you're at VCAP level or higher, I think it can sum up the modules for you.

 

This is a summary of the first module of the workshop, containing the most important notes I took while listening to it online. Let's start:

 


Disaster Definition:

Disaster definition isn't the same for all organizations, but in general it means an event that causes major damage to the business or the organization.
It may be classified by cause (natural/man-made) or by area of effect (catastrophe: a wide geographical area; disaster: a certain building or data center; service disruption: failure of a single application or component inside the data center). Any disaster and its effects can be mitigated using the entire DR plan or parts of it.



DR Site Types:

Dedicated vs. Non-dedicated: A dedicated DR site is a site with idle hardware to be used only by failed-over systems in case of a disaster, while a non-dedicated DR site is a site – usually a regional campus – where there's another production environment and some of its capacity is reserved for failover in case of a disaster. Dedicated DR sites – and only the dedicated type – can be hot, warm, or cold.

Hot vs. Warm vs. Cold: A hot DR site can be failed over to within minutes to hours of a disaster. A warm DR site requires hours to a few days to be ready for failover. A cold DR site requires many days to be ready for failover.



Disaster Recovery Plan (DRP) vs. Business Continuity Plan (BCP):

DRP: A plan containing all the procedures and steps to be taken during and right after a disaster to fail all systems over to the DR site and get them back online as fast as possible. It also includes all the procedures to protect personnel and assets during the disaster.

BCP: A plan containing all the procedures required to run the systems and keep them online at the DR site with the maximum capacity available there. For a non-dedicated DR site, the BCP may also include procedures for running both the recovered systems and the site's own production systems side by side without interference. It also includes all the steps and procedures required to fail the systems back to the original site after recovering from the disaster.



Steps of Creating DRPs & BCPs:

1-) Management Buy-in: Management should agree on the costs the DRP & BCP require, including the software required for replication, the hardware required, and any other facilities. All levels of management should participate in developing the DRP & BCP, testing them, and executing them when required.

2-) Performing Business Impact Analysis (BIA): This includes:

a-) Identify Key Assets: Determine the most important items to be protected, such as software, user data, blueprints, and implementation documents. In addition, it's important to identify the critical business functions, map them to the identified key assets, and map how these critical functions depend on each other and on the key assets for continuity of the business.

b-) Define Loss Criteria: Define the impact of losing any of the business's key assets in order to set the priority of these assets to the business.

c-) Define Maximum Tolerated Downtime (MTD): MTD is the maximum downtime of any key asset after which major damage to the business will occur and business continuity can't be maintained. MTD is defined in the following categories (a small classification sketch follows the list):

i-) Critical: minutes to hours of downtime.

ii-) Urgent: hours to 1 day of downtime.

iii-) Important: up to 3 days of downtime.

iv-) Normal: up to 14 days of downtime.

v-) Non-important: up to 30 days or more of downtime.
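
As an illustrative sketch only, the categories above could be encoded as a small PowerShell helper; the function name and the exact hour boundaries (especially the 4-hour cutoff for Critical, which the workshop leaves as "minutes to hours") are my own assumptions:

# Illustrative sketch: map an asset's tolerated downtime (in hours) to the
# MTD categories above. Name and boundaries are assumptions, not from the workshop.
function Get-MtdCategory {
    param([double]$ToleratedDowntimeHours)
    if     ($ToleratedDowntimeHours -lt 4)   { return 'Critical' }      # minutes to hours
    elseif ($ToleratedDowntimeHours -le 24)  { return 'Urgent' }        # hours to 1 day
    elseif ($ToleratedDowntimeHours -le 72)  { return 'Important' }     # within 3 days
    elseif ($ToleratedDowntimeHours -le 336) { return 'Normal' }        # up to 14 days
    else                                     { return 'Non-important' } # ~30 days or more
}

Get-MtdCategory -ToleratedDowntimeHours 8   # -> Urgent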

3-) Define RPO (Recovery Point Objective): The RPO indicates how much data loss the business can tolerate, measured in time. For example, an RPO of 1 hour means that data must be restorable to its state 1 hour before the disaster. Data not covered by the RPO is lost forever: in this example, the data from the last hour before the disaster is lost forever, and the business can tolerate that loss.

4-) Define RTO (Recovery Time Objective): The RTO indicates how much downtime the business can tolerate without major damage.

5-) Perform Risk Assessment: Identify all the risks around the business, their likelihood, and their possible impact, in order to avoid them. The risk assessment should include all natural and man-made risks.

6-) Examine Regulatory Compliance: Always check for any legal requirements that the DRP and BCP must fulfill.

7-) Develop DRP: With all the previous steps complete, all the required analysis is done and the DRP can be developed correctly. The DRP should contain the pre-defined RPO/RTO, the key assets to protect, and the procedures to bring all critical systems back online.

8-) Design DR Systems: This includes choosing the DR site and deciding whether it will be dedicated/non-dedicated and hot/warm/cold. It also includes designing the storage replication system with planned backups, the network required for replication and for operating the DR system in case of a disaster, and all the hardware/software required to fail over the main site.

9-) Create Run-books: A run-book is a document containing all the steps and procedures required to fail the system over to the DR site in case of a disaster. It includes a step-by-step guide for rebuilding the system from scratch, reloading the critical applications and user data, and bringing the applications back into operation so users can continue to work. Each DR site should have its own run-books, each covering a certain key asset and ordered for use based on the DRP and on the RPO & RTO of each key asset. Any run-book should take into consideration the differences between the main site and the DR site in configuration and facilities. Run-books are hard to maintain because the systems and applications in use change quickly, as do their dependencies, which affects their restore techniques and restore order.

10-) Develop BCP: The BCP should contain all the steps required to maintain the daily operations of systems and applications at the DR site, such as daily backups. It should also include detailed solutions for all expected problems – resulting from the lack of some resources and facilities at the DR site – as well as the detailed procedures to fail all systems and applications back to the main site after recovery from the disaster.

11-) Test DRP and BCP: The DRP & BCP should be tested frequently to reveal any problems with them. Testing must be done carefully so as not to disrupt production systems, especially in the case of a non-dedicated DR site.

 


Share the Knowledge ...

 

Update Log:

08/11/2014: Updated the Dedicated vs. Non-dedicated DR Sites comparison.

VMware Newsletter 6.30



 

From the editor's virtual desk

Hi everyone, I have had a great and very productive week with many discussions with customers about their plans for managing the costs of their virtual environments. This is a hot topic and one that I believe VMware is well positioned to address. As we have spent the past 10+ years working with our customers in their datacenters, helping them virtualise, save money, and become more agile, we have also been hard at work on the financial side of things. This has become even more important of late, with many customers under pressure to provide not only operational requirements for their cloud deployments but also financial cost and justification. The Hybrid Cloud and VMware's VCHS solution provide even more tangible reasons to extend the datacenter to the cloud, but again, costing is imperative.

 

I have written a number of blog posts on this subject, and it is one that I am very interested in pursuing further. Feel free to take a read of my latest blog post at: https://www.linkedin.com/today/post/article/20140727100816-4007630-revisiting-the-costs-for-a-virtual-environment?trk=mp-edit-rr-posts

 

With VMworld US fast approaching, I am dedicating a new section to this amazing event. I hope you enjoy it, and if you haven't already done so, join @VMWARETAM on Twitter and VMWARETAM on Facebook for more updates.

 

Have a fantastic and productive week.

 

Virtually Yours

Neil Isserow

Senior Technical Account Manager

VMware Australia

@VMWARETAM

fb.com/vmwaretam

 

Local Events (BRISBANE)

- VMUG
VMware vCAC 6.0
http://www.vmug.com/p/cm/ld/fid=7274&source=5

Local Training Classes

Tinyurl.com/au7z3cr

VMware vSphere: Fast Track [V5.5] 18-22 Aug

VMware vSphere: Install, Configure, Manage [V5.5] 25-29 Aug

 

VMware vSphere: Troubleshooting Workshop [V5.5] 01-05 Sept

 

@VMworld Tweets

New blog post: "The #VMworld 2014 Partner Lounge Insider" bit.ly/1rTxjEJ

Here are your chances for networking at VMworld 2014 US: http://bit.ly/1nZbzps  #Learn #Meet #Party

#TBT http://bit.ly/1qOpsc1  We are finalizing this year's party. Find out how we plan to top last year: http://bit.ly/1qOpsbZ

Save 25% on 3 hot topic training courses at #VMworld. Join us early or attend online remotely! http://bit.ly/1pK3AIe

 

VMware Bloggers

BCDR made easy? Virtualization can make you a believer (http://blogs.vmware.com/smb/?p=2941)

Business continuity and disaster recovery (BCDR) solutions represent a form of business insurance of your IT. Important? By all means. But traditional BCDR solutions are costly and complicated–frankly, a gold-plated policy for the privileged few. And even for those who can afford to pony up for the premiums, there is the added challenge of integrating […]

External Bloggers

 

 



AWS adds Zocalo Enterprise File Sync Share and Collaboration



In case you missed it, today Amazon Web Services (AWS) announced Zocalo, an enterprise-class storage and file-sharing service. As you might have guessed, being file sync and share on cloud storage, Zocalo can be seen as a competitor or alternative to other services including Box, Dropbox, and Google, among many others in the enterprise file sync and share (EFSS) space.

Amazon Zocalo enterprise storage and sharing service

AWS Enterprise File Sync Share (EFSS) Zocalo overview and summary:

  • Document collaboration (comments and sharing), including availability with AWS WorkSpaces
  • Central common hub for sharing documents along with those owned by a user
  • Select the AWS regions where data is stored, along with setting up user policies and audit trails
  • Sharing of various types of documents: worksheets, web pages, presentations, text, and PDF among other files
  • Support for Windows and other PCs, Macs, tablets, and other mobile devices
  • Cost effective (priced at $5 per user per month for 200GB of storage)
  • Free 30-day trial for up to 50 users, each with 200GB (i.e., up to 10TB total)
  • Secure, leveraging existing AWS regions and tools (encryption in transit and at rest)
  • Active Directory credentials integration

Learn more in the Zocalo FAQ found here

Register for the limited free Zocalo trial here

Additional Zocalo product details can be found here

AWS also announced, as part of its Mobile Services, Cognito, a mobile service for simple user identity and data synchronization, along with SNS, Mobile Analytics, and other enhancements. Learn more about AWS Cognito here and Mobile Services here.

Check out other AWS updates, news and enhancements here

Ok, nuff said

Cheers
gs


Is there an information or data recession? Are you using less storage? (With Polls)



StorageIO industry trends

Is there an information recession where you are creating, processing, moving or saving less data?

Are you using less data storage than in the past either locally online, offline or remote including via clouds?

IMHO there is no such thing as a data or information recession. Granted, storage is being used more effectively by some, while economic pressures or competition force budgets to be stretched further. Likewise, people and data are living longer and getting larger.

In conversations with IT professionals, particularly real customers (e.g. not vendors, VARs, analysts, blogalysts, consultants, or media), I routinely hear that people continue to need to store more information; however, their data storage usage and acquisition patterns are changing. For some this means using what they have more effectively by leveraging data footprint reduction (DFR), which includes archiving, compression, dedupe, thin provisioning, and changing how and when data is protected. It also means using different types of storage, from flash SSD to HDD to SSHD to tape, as well as cloud, in different ways spanning block, file, and object storage, local and remote.

A common question that comes up, particularly around vendor earnings announcements, is whether the data storage industry is in decline, given that some vendors are experiencing poor results.

Look beyond vendor revenue metrics

As background reading, you might want to check out this post here (IT and storage economics 101, supply and demand), which candidly should be common sense.

If all you looked at were a vendor's revenue or margin numbers as an indicator of how well an industry such as data storage (which includes traditional and legacy players as well as cloud) is doing, you would not be getting the full picture.

What needs to be factored into the picture is how much storage is being shipped (from components such as drives to systems and appliances) as well as delivered by service providers.

Looking at storage systems vendors from a revenue and earnings perspective, you would get mixed indicators depending on whom you include, not to mention how those vendors break out revenue by product or report the number of units shipped. For example, looking at the public vendors EMC, HDS, HP, IBM, NetApp, Nimble, and Oracle (among others), as well as the private ones (if you can see the data) such as Dell, Pure, Simplivity, Solidfire, and Tintri, results in different analyses. Some are doing better than others on revenues and margins; however, try to get clarity on the number of units or systems shipped (for actual revenue vs. loaners (planting seeds for future revenue or trials) or demos).

Then look at the service providers such as AWS, CenturyLink, Google, HP, IBM, Microsoft, Rackspace, or Verizon (among others): you should see growth; however, getting clarity on how much revenue plus margin they actually generate for storage specifically, vs. broad general buckets, can be tricky.

Now look at the component suppliers, such as Seagate and Western Digital (WD) for HDDs and SSHDs, who also provide flash SSDs and other technology. Also look at the other flash component suppliers, such as Avago/LSI (whose flash business is being bought by Seagate), FusionIO, SanDisk, Samsung, Micron, and Intel, among others (this does not include the systems vendors who OEM those or other products to build systems or appliances). These and other component suppliers give another indicator of the health of the industry, both in revenue and margin and in footprint (i.e., how many devices are being shipped). For example, the legacy and startup storage systems and appliance vendors may have soft or lower revenue numbers; however, are they shipping the same amount of product, or less? Likewise, the cloud and service providers may be showing more revenue and more product being acquired, however at what margin?

What this all means?

Growing amounts of information?

Look at revenue numbers in the proper context as well as in the bigger picture.

If the same number of component devices (e.g. processors, HDD, SSD, SSHD, memory, etc) are being shipped or more, that is an indicator of continued or increased demand. Likewise if there is more competition and options for IT organizations there will be price competition between vendors as well as service providers.

All of this means that while IT organizations budgets stay stretched, their available dollars or euros should be able to buy (or rent) them more storage space capacity.

Likewise using various data and storage management techniques including DFR, the available space capacity can be stretched further.

So this then begs the question: if the management of storage is important, why are we not hearing vendors talk about software-defined storage management, instead of chasing each other to out-software-define each other's storage?

Ah, that's for a different post ;).

So what say you?

Are you using less storage?

Do you have less data being created?

Are you using storage and your available budget more effectively?

Please take a few minutes and cast your vote (and see the results).

Sorry, I have no Amex or Amazon gift cards or other giveaways to offer for participating, as nobody is secretly sponsoring this poll or post; it's simply sharing and conveying information for you and others to see and gain insight from.

 

Is there an information or data recession? (Click here to view poll and cast your vote)

 

How about are you using or buying more storage, could there be a data storage recession?  (Click here to view poll and cast your vote)

 

Some more reading links

  • IT and storage economics 101, supply and demand
  • Green IT deferral blamed on economic recession might be result of green gap
  • Industry trend: People plus data are aging and living longer
  • Is There a Data and I/O Activity Recession?
  • Supporting IT growth demand during economic uncertain times
  • The Human Face of Big Data, a Book Review
  • Garbage data in, garbage information out, big data or big garbage?
  • Little data, big data and very big data (VBD) or big BS?

Ok, nuff said (for now)

Cheers gs

GSS Tech Seminar (2014. 05. 16) Get Technical Support from VMware

GSS Tech Seminar (2014. 05. 23) Upgrading vCenter Server 5.5 Best Practices (2053132)

GSS Tech Seminar (2014. 08. 01) Horizon View SSL Certificate Part I


Hello,

 

A recording of the August 1, 2014 seminar is available at the link below.

 

https://vmware.webex.com/vmware/lsr.php?RCID=103ea4390e154288adace7038c743457

It covers how to configure SSL certificates in Horizon View and is part of an ongoing seminar series.


The slides used in the presentation are attached. The seminar was originally planned for July 25, but since no advance notice had gone out, it was held on August 1 instead.


Thank you.

GSS Tech Seminar (2014. 08. 08) Horizon View SSL Certificates Part II
