Channel: VMware Communities : Blog List - All Communities

Check if a virtual machine's time is synced with the vSphere host


Hello,

if you want to check whether a virtual machine's time is synced with the vSphere host, you can use this PowerShell/PowerCLI one-liner:

 

Get-VM | Select Name, @{N="SyncTimeWithHost";E={$_.ExtensionData.Config.Tools.SyncTimeWithHost}}

 

 

(As usual, remember to use the Connect-VIServer cmdlet before!)
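If you only want the VMs that still have host time sync enabled (for example, to review them before changing the setting), a small variation of the same one-liner works. This is just a sketch using the same ExtensionData property:

```powershell
# List only the VMs whose VMware Tools are configured to sync time with the host
Get-VM | Where-Object { $_.ExtensionData.Config.Tools.SyncTimeWithHost } |
    Select-Object Name
```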

 

 

Best regards,

Pablo


Play, learn, and WIN - These two won MakerBots! You can too.


CloudCred congratulates two top players who, on October 15, 2014, won the site's top 180-Day Prize:

 

A MakerBot Replicator 3D Printer


 

 

Both winners are great leaders in our community and can be followed on Twitter or at their blogs:

 


 

 

Ravi Venkatasubbaiah - "Ravi" topped both the 180-Day VSAN and Log Insight World Leaderboards

 

@Ravi_Venk - The Daily Caffeine



 

 

Allan Trambouze - "VirtualQuebec" scored first on the CloudCred overall 180-Day Leaderboard, as well as the NSX World Leaderboard.

 

@VirtualQuebec - Virtual Quebec


Is the party over? No.

There are plenty of ways to jump in and benefit from the resources provided at CloudCredibility.com.

 

  • Over 1,500 specific technical tasks in more than 20 categories, created both to sharpen your expertise and to encourage you to share insights with other players.
  • Anniversary Tasks that help new players find what's popular and give them an easy place to start.
  • Over 40 Hands-on Lab Tasks from VMworld 2014.
  • VSAN World, featuring over 100 tasks from the introductory to the complex.
  • New tasks released weekly, plus Triple-Point Tuesday bonus points every week!
  • Prizes awarded for point achievement, and specific contests running regularly.

 

Simply visit CloudCredibility.com and create an account. In less than five minutes, you can Play, Learn, and Win with Ravi, VirtualQuebec, and hundreds of others. See you there!

Virtualizing Microsoft Active Directory Domain Services (AD DS)-Windows 2012 on vSphere Best Practices


Hi All ..

 

Active Directory Domain Services (AD DS) is the core of most IT infrastructures nowadays: it's the authentication and authorization center of the environment. Since its debut in the 1990s, AD DS has evolved considerably, culminating in the Windows Server 2012 release with many new features. Luckily, some of these exist specifically to make virtualizing Domain Controllers (DCs) possible with minimal effort and to leverage all of virtualization's advantages. These features are:

 

1-) VM Generation ID (Virtualization Safeguard): An ID that is generated and added to the .vmx file of any Windows 2012 VM deployed on a supported hypervisor (vSphere 5.0 U2 or later). It's used to trace the VM life-cycle. When the VM is powered on for the first time, a new ID is generated by a special generator driver included in Windows 2012 by default. The ID changes only when the VM is imported, deployed from a template or a copy, hot/cold cloned, reverted to a hot/cold snapshot, restored from a backup, or replicated to another VM. The ID is saved in the VM's AD computer object as the msDS-GenerationId attribute, which lets AD keep track of the VM version and detect whether the VM was restored or reverted to a different version. This is what enables the next two features: AD DC Snapshot and AD DC Cloning.

 

 

2-) AD DC Snapshot: Before Windows 2012, any attempt to use a snapshot with a DC could corrupt the DC database. That's because of a known issue called "USN Rollback", which ends up creating different objects on different DCs with the same identifiers. Long story short: each DC maintains a version number for its database called the Update Sequence Number (USN). When an object is created on a DC, for example, that DC's USN is incremented by 1; its database now has a higher version number, which is why other DCs accept replication from it. When a DC is reverted to a snapshot, its USN is rolled back to its old state. As the admin creates new objects, the USN on that DC increments until it reaches, or even passes, the USN on the other DCs, but with different objects behind it; until then the DC will neither initiate nor accept any replication, because it holds the same USN as the other DCs while containing different objects. With VM Generation ID, each DC now also carries a VM version that is validated together with the USN. If a DC is reverted to a snapshot, its VM Generation ID changes, so its database presents a new combination of USN and VM Generation ID. This alerts the other DCs to initiate a two-way replication: they acquire any objects created on the reverted DC and push any objects it is missing.

 

3-) AD DC Cloning: Cloning a DC is a new Windows 2012 feature that facilitates scaling the AD infrastructure when needed. It also uses the VM Generation ID to track the version of the DC's database and thus prevents corruption from the USN Rollback issue. Using Windows PowerShell, an XML file is created to store the configuration to be applied when the DC machine is cloned. Then, after cloning and configuration, a new DC exists with a new VM Generation ID at the same point in time as the source DC. This avoids replicating the entire AD database to a newly promoted DC, which is extremely useful in large environments or during a DR process. For a step-by-step guide to the cloning process, check this paper.

 

When virtualizing an AD DC on vSphere, it's recommended to follow Microsoft's sizing best practices as well as leverage every feature vSphere provides to deliver the needed level of performance and availability. AD DC isn't a performance-intensive application, and its availability is easy to maintain at the required level. The following is a collection of Microsoft and VMware best practices for virtualizing an AD DC based on Windows 2012. I divided these best practices into six categories that align with the known design qualifiers: Availability, Manageability, Performance, Recoverability and Security, plus one more qualifier: Scalability.

 

Availability:

1-) Try to spread the FSMO roles across several AD DC VMs and keep those VMs on different hosts using VM anti-affinity rules.

2-) Try to place your DC VMs on separate back-end storage arrays. If that isn't available, host one DC VM on a local datastore as protection against a shared-storage failure. Keep in mind that a DC VM on a local datastore can't use features like HA or vMotion.

3-) Try to spread your DC VMs across multiple clusters on different physical racks or blade chassis using soft ("should") VM-Host anti-affinity rules. At minimum, dedicate a management cluster separated from the production cluster.

4-) Make sure to set the HA restart priority for all DC VMs in your HA cluster(s) to High, so they are restarted before any other VMs in case of a host failure.

5-) Try to use VM Monitoring to watch the AD DC VMs and restart them in case of a guest OS failure.
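Items 1 and 4 above can be scripted with PowerCLI. A minimal sketch, assuming a cluster named Prod-Cluster and DC VMs named DC01/DC02 (all three names are placeholders for your environment):

```powershell
# Keep the DC VMs on different hosts (DRS anti-affinity rule)
$dcVms = Get-VM -Name DC01, DC02
New-DrsRule -Cluster (Get-Cluster -Name Prod-Cluster) -Name "Separate-DCs" `
    -KeepTogether:$false -VM $dcVms

# Restart the DCs before other VMs after a host failure (HA restart priority)
$dcVms | Set-VM -HARestartPriority High -Confirm:$false
```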

 

Manageability:

1-) Time Sync:

Time synchronization is one of the most important things in AD DS environments. As stated by the VMware best practices for virtualizing AD DS on Windows 2012 (linked here), it's recommended to follow a hierarchical time sync as follows:

- Sync the PDC in the Forest Root Domain to an external Stratum 1 NTP server.

- Sync all other PDCs in the child domains of the forest to the Root PDC or any other DC in the Root Domain.

- Sync all ESXi hosts in the virtual infrastructure to the same Stratum 1 NTP server.

- Sync all workstations in every domain to the nearest DC in their respective domains.

To configure the PDC to time-sync with an external NTP server using a GPO, check this link.

Also, it's recommended to completely disable time sync between VMs and hosts via VMware Tools (even after you uncheck the box on the VM settings page, the VM can still sync with the host through VMware Tools on startup, resume, snapshot operations, etc.), according to the following KB.

This leaves the PDC VM syncing its time only with the time source configured via GPO.
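Setting the advanced options described in that KB can be scripted with PowerCLI. A sketch, assuming a DC VM named DC01 (a placeholder); verify the exact option set against the KB referenced above:

```powershell
# Disable the VMware Tools time-sync events for a DC VM, per the VMware KB
$settings = @{
    "time.synchronize.continue"      = "0"   # after taking a snapshot
    "time.synchronize.restore"       = "0"   # after reverting to a snapshot
    "time.synchronize.resume.disk"   = "0"   # after resume / migration
    "time.synchronize.shrink"        = "0"   # after disk shrink
    "time.synchronize.tools.startup" = "0"   # at VMware Tools startup
}
$vm = Get-VM -Name DC01
foreach ($s in $settings.GetEnumerator()) {
    New-AdvancedSetting -Entity $vm -Name $s.Key -Value $s.Value `
        -Confirm:$false -Force
}
```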

 

2-) Use Best Practices Analyzer (BPA):

It’s recommended to run the BPA for AD DS to make sure your configuration is coherent with Microsoft's recommended configuration. In some cases, and for valid reasons, you may deviate from Microsoft's recommendations. For more information, refer to this link.

 

3-) Use AD DS Replication Tool:

This tool, offered by Microsoft for free, can detect replication issues between the DCs in your environment and shows them alongside the related KB articles for solving them. It’s the next generation of the REPADMIN CLI tool. Download it from this link.

 

4-) Snapshots:

As mentioned above, with an AD DC on Windows 2012 you can use snapshots without worrying about reverting to an old snapshot and the related USN Rollback issue. AD DC on Windows 2012 leverages the new VM-Generation ID feature that makes the DC virtualization-aware, so any hot/cold snapshot can be reverted to safely. Check the VMware best practices for virtualizing AD DS on Windows 2012 (linked here) for more information about VM-Generation ID and the related virtualization safeguards.

 

Performance:

1-) vCPU Sizing:

Site Size                   No. of vCPUs
<500 users per site         Single vCPU
<10,000 users per site      2 vCPUs
>10,000 users per site      3+ vCPUs

This assumes that the directory's primary work is user authentication. Any additional workload, like Exchange Server, may require additional vCPUs. Capacity monitoring helps determine the correct number of vCPUs.

 

2-) Memory Sizing:

Like any other database application, an AD DC benefits from RAM by caching the AD database; the ideal case is to cache the entire AD database in RAM for maximum performance. This is preferred in environments where AD is integrated with other solutions, like Exchange servers. The following guideline is a starting point:

Site Size                              Min. RAM Size
<500 users per domain per site         512 MB
500-1,000 users per domain per site    1 GB
>1,000 users per domain per site       2 GB

To size RAM correctly, start with the minimum required and, after the DC has been deployed for an extended period, use Windows Performance Monitor to watch the "Database Cache % Hit" counter of the lsass database instance. Add RAM if required using the vSphere Hot Add feature (keep in mind it must be enabled before the DC VM is powered on). When RAM is sized well enough to cache a proper portion of the database, this ratio should be near 100%.

Keep in mind that this sizing covers AD Domain Services only, i.e. additional RAM is required for the guest OS.
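Inside the guest, that cache hit ratio can also be sampled with Windows PowerShell. A sketch; the counter path below is an assumption based on the ESE "Database" performance object that lsass exposes on a DC, so verify the exact path on your system with `Get-Counter -ListSet Database`:

```powershell
# Sample the AD database cache hit ratio on the DC (run inside the guest OS)
Get-Counter -Counter "\Database(lsass)\Database Cache % Hit" `
    -SampleInterval 5 -MaxSamples 12 |
    Select-Object -ExpandProperty CounterSamples |
    Select-Object Timestamp, CookedValue
```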

 

3-) Storage Sizing:

The following general equations size the required storage for a DC:

"Storage required = OS storage + AD DB storage + AD DB logs storage + SYSVOL folder storage + Global Catalog storage + any data stored in the application partition + any 3rd-party storage"

"AD DB storage = 0.4 GB per 1,000 users ≈ 0.4 MB * total no. of users"

"AD DB logs storage = 25% of AD DB storage"

"SYSVOL folder ≈ 500 MB+"          (may increase with a high number of GPOs)

"Global Catalog storage = 50% of AD DB storage for each additional domain"

"Any data stored in the application partition" is to be estimated.

"Any 3rd-party storage" includes any installed OS patches, the paging file, and backup or anti-virus agents.
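The equations above are easy to wrap in a small helper for quick estimates. A sketch; the function name, the default OS footprint and the default 3rd-party allowance are assumptions for illustration, not figures from the source:

```powershell
# Hypothetical helper implementing the sizing equations above (estimates only)
function Get-DcStorageEstimateGB {
    param(
        [int]$UserCount,
        [int]$AdditionalDomains = 0,   # domains beyond this DC's own domain
        [double]$OsGB = 40,            # assumed OS footprint
        [double]$ThirdPartyGB = 10     # assumed patches, paging file, agents
    )
    $dbGB     = (0.4 * $UserCount) / 1024          # 0.4 MB per user
    $logsGB   = 0.25 * $dbGB                       # 25% of AD DB storage
    $sysvolGB = 0.5                                # ~500 MB, more with many GPOs
    $gcGB     = 0.5 * $dbGB * $AdditionalDomains   # 50% of DB per extra domain
    [math]::Round($OsGB + $dbGB + $logsGB + $sysvolGB + $gcGB + $ThirdPartyGB, 1)
}

# Example: 5,000 users, one additional domain
Get-DcStorageEstimateGB -UserCount 5000 -AdditionalDomains 1   # → 53.9
```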

The following table shows the Read/Write behavior of each of AD DC components:

AD DC Component    Read/Write         RAID Recommended
AD DB              Read-intensive     RAID 5
AD DB Logs         Write-intensive    RAID 1/10
OS                 Read/Write         RAID 1

Keep in mind that for large environments with many solutions integrated with AD, separating the OS, AD DB and AD DB logs onto different disks and vSCSI adapters is recommended for I/O isolation. In such environments, caching most of the AD DB in RAM gives a performance boost.

 

4-) Network Sizing:

The AD DC VM should use a VMXNET3 vNIC, which gives the maximum network performance with the least CPU load; this should be sufficient on a 1 Gb physical network. The port group the AD DC VM connects to should have teamed physical NICs for redundancy.
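Checking for (and converting to) VMXNET3 is scriptable with PowerCLI. A sketch; the "DC*" name pattern is an assumption, and note that changing the adapter type replaces the vNIC, so the guest may need its network settings reapplied:

```powershell
# Find DC VMs whose vNIC is not VMXNET3
Get-VM -Name DC* | Get-NetworkAdapter |
    Where-Object { $_.Type -ne "Vmxnet3" } |
    Select-Object Parent, Name, Type

# Convert a vNIC to VMXNET3 (power the VM off first)
# Get-VM -Name DC01 | Get-NetworkAdapter |
#     Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false
```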

 

Recoverability:

1-) Try to use VSS-aware backup software to safely back up your AD database. An entire AD DC VM on Windows 2012 can be backed up and restored with VSS-based backup software, because AD DC on Windows 2012 leverages the new VM-Generation ID feature that makes the DC virtualization-aware, so a full-VM restore can be done safely. Check the VMware best practices for virtualizing AD DS on Windows 2012 (linked here) for more information about VM-Generation ID and the related virtualization safeguards.

2-) Make sure to back up every DC's System State. The System State contains the AD DB, AD DB logs, the SYSVOL folder and other OS-critical components such as the registry files.

3-) For DR, you can use native AD DC replication to replicate the AD database between the main site and the DR site. This approach requires minimal management overhead and provides good DR capability; its only gap is that it can't protect the five FSMO role holders.

4-) Another DR approach is to leverage VMware SRM together with the VM-Generation ID capability of Windows 2012. SRM continuously replicates the AD DC VMs (using SRM-managed replication or array-based replication) and fails them over in case of disaster. This protects the FSMO role holders as well as providing AD infrastructure to the failed-over VMs in the DR site.

 

Security:

1-) All security procedures applied to physical DCs should also be applied to DC VMs, like role-based access policies and hard-drive encryption.

2-) Follow the VMware Hardening Guide (v5.1/v5.5) for more procedures to secure both your VMs and vCenter Server.

 

Scalability:

1-) For greater scalability, try to upgrade your AD DCs to Windows Server 2012. AD DC on Windows 2012 leverages the new VM-Generation ID feature that makes the DC virtualization-aware, so it can be cloned easily and safely. Check the VMware best practices for virtualizing AD DS on Windows 2012 (linked here) for more information and a step-by-step cloning guide. Cloning helps when an urgent expansion of the AD DC infrastructure is needed, during a DR process, or for testing. It also avoids the heavy network utilization caused by replicating the entire database to newly promoted-from-scratch DCs. Keep in mind that cold cloning is the only method supported by both VMware and Microsoft; hot cloning isn't supported in production by either.

 

I hope this comprehensive (unintentionally long) guide helps you virtualize this simple but important application while providing the required levels of availability and performance. For more information or a detailed explanation of any item, check the References section.

Share the knowledge ...

 

References:

-- Virtualizing MS Business Critical Applications, by Matt Liebowitz and Alex Fontana.

-- MS Infrastructure Planning and Design Guide (IPD).

-- Virtualizing AD Domain Services on vSphere.

-- vSphere Design, Sybex, 2nd Edition, by Scott Lowe, Kendrick Coleman and Forbes Guthrie.

Virtualizing Business Critical Applications (BCAs) Series


Hi all ...

 

During my journey towards VCAP-DCD (all pray for me), I found a nice topic on the exam blueprint: "Gathering and Analyzing Business Application Requirements". When I began to examine it, I found it wasn't only about what its headline states, but also about the best practices for deploying some Business Critical Applications (BCAs) in your vSphere environment. It covered Microsoft Exchange, SQL and SharePoint, Enterprise Java Applications, SAP HANA and Oracle. In most environments, if not all, these are considered Tier 1 applications that require wide-open eyes and careful attention, especially when migrating them to the virtual world on a vSphere infrastructure.

 

I tried to summarize everything I could find in my readings on this topic, as I know it's a critical one, and mastering it requires deep knowledge of vSphere's capabilities and how to leverage them to serve these applications. In addition, this topic is a point on the VCAP-DCD exam blueprint, and one of its trickiest points, if not the trickiest of all.

 

First, let's define what a Business Critical Application (BCA) is:
"A Business Critical Application is one without which the business either stops or suffers great revenue losses. Losing that application is critical, and the business always requires the highest levels of performance, availability and recoverability (in case of a disaster) for it."

 

Now, someone will ask why we should take the difficult road of virtualizing BCAs as long as they run physically without any problems. The answer is plain and simple: better availability, the same performance (maybe better when scaling out), easier recovery, and all at a lower cost. The vSphere platform is capable of meeting these applications' performance requirements. In addition, VMware has its own HA capabilities that can be used alone or alongside other clustering solutions for the highest levels of availability. HA isn't the only cluster-level feature available: VMware also offers DRS, which load-balances and distributes workloads across ESXi hosts to maintain the required performance for BCAs without affecting lower-tier applications. Last but not least, VMware offers its own DR solution, Site Recovery Manager (SRM), which automates the DR process and lets the responsible personnel test their DR plan whenever they want.

 

After defining these two points, we will now discuss the best practices for deploying Business Critical Applications in your vSphere environment, covering:
Microsoft AD DS, Microsoft Cluster Services, Microsoft Exchange, Microsoft SQL, Microsoft SharePoint, Oracle DB, SAP HANA and Enterprise Java Applications.

I tried as much as possible to relate everything to the main design qualifiers (Availability, Manageability, Performance, Recoverability and Security - AMPRS). I also added another aspect, Scalability, as I felt it is important when designing for such applications. Where applicable, cost is also weighed against all of these qualifiers.


Now, let's start:

1- Virtualizing Microsoft AD DS Best Practices.

2- Virtualizing Microsoft Clustering Services (MSCS) Best Practices.

3- Virtualizing Microsoft Exchange Best Practices.

4- Virtualizing Microsoft SQL Best Practices.

5- Virtualizing Microsoft SharePoint Best Practices.

6- Virtualizing Oracle DB Best Practices.

7- Virtualizing SAP HANA Best Practices.

8- Virtualizing Enterprise Java Applications Best Practices.

 

Share the knowledge ...

 


VMware Newsletter 6.40



 

From the editor's virtual desk

Hi everyone. Well, I had a really cool opportunity this week to meet with one of our extremely important analysts at our Palo Alto campus. The session was to discuss the metrics that a VMware Technical Account Manager collects every quarter with their customers.

 

Some background on this service: every quarter we (TAMs) run a utility on our customers' systems that lets us deliver a very detailed report of their current environment. This information is also fed back to an internal data warehouse, where it is analysed and returned to all TAMs as a dashboard summary of the many metrics we collect across the TAM world. This gives us amazing insight into the world of virtualization for our customers and allows us to compare a customer's environment, using a variety of metrics, with the industry averages that have been collected and analysed.

 

If you are a TAM customer, or thinking of becoming one, I suggest taking advantage of this really valuable exercise that TAMs perform for their customers.

 

VMworld Europe is about to begin. I hope that those of you who are attending or participating have a wonderful time. I look forward to all of the announcements from the event and will try to bring you all of the news in the newsletter.

 

I hope you have a great week, and I look forward to chatting with you again next week. Thank you for reading the newsletter, and of course please get in touch with anything that you might have on your mind.

 

Virtually Yours

Neil Isserow

Staff Technical Account Manager

VMware Inc - San Francisco, CA

nisserow @ vmware . com

 

LOCAL TRAINING CLASSES

 

VMWARE BLOGGERS

VMware Newsletter 6.41


From the editor's virtual desk

Hi everyone, I hope your week was as good as mine. I got to spend the day at Dreamforce, the annual Salesforce convention in San Francisco. It was absolutely awesome. I attended a number of excellent presentations and took a look around the exhibition areas as well. It was really a great event to attend and for me it was my first introduction to San Francisco events since I arrived.

 

Of course there was another notable event this past week: VMworld Europe, an event I would love to attend one day. I have received much feedback that it was a huge success; if you attended, I hope you enjoyed it. Watch for many of the announcements from the week in the newsletter.

 

A number of items are worth noting this week. There is a new VMware KB article with guidance on Transparent Page Sharing (TPS) and the upcoming changes that are taking place, so please take a look; the article can be found here: https://blogs.vmware.com/security/2014/10/transparent-page-sharing-additional-management-capabilities-new-default-settings.html.

 

Have a fantastic week, enjoy the newsletter and please don't forget to check the latest KB articles each week to ensure they do not affect your environment.

 

Virtually Yours

Neil Isserow

Staff Technical Account Manager

VMware Inc - San Francisco, CA

nisserow @ vmware . com

 

LOCAL TRAINING CLASSES

 

VMWARE BLOGGERS

Implementing a process to monitor some entity state from a vCO plug-in


A very interesting question came up in the Communities about vCO plug-in development: how to implement a background process within a plug-in to monitor plug-in objects:

Re: Launching asynchronous (background) processes from a plugin.

 

I answered based on what I've seen and done in some plug-ins... but I'd like to know if anyone else following this blog has experience implementing vCO triggers/monitoring tasks and wants to share it.

VMware vForum Online & CloudCred Contest



Get ready for the VMware vForum Online, an interactive half-day event specifically designed for business and IT professionals that includes breakout sessions, expert chats, and Hands-On Labs Online.

CloudCredibility.com will be teaming up once again with vForum Online to bring the fun & prizes to CloudCred players. Every time you complete a vForum Online Contest task, you will be entered to win either an iPad mini or one of two Google Chromecasts.

Pre-event tasks will be available starting this Monday, November 3. A preview of other contest tasks will also be coming your way.

So, mark your calendar! You won't want to miss this valuable event.


Why are my packets dropping?

NSX 6.1 Walk-Through covering Installation, Configuration, and Integration with vCAC 6.1

$
0
0

I've been working on putting together step-by-step how-tos for installing and configuring NSX as well as vCAC. Below are links to the step-by-step overviews for both NSX and vCAC. Be sure to check back, as additional use cases and more advanced configurations are being posted regularly.

 

NSX 6.1 Walk-Through

vCAC 6.x Walk-Through

VMware Newsletter 6.42



 

From the editor's virtual desk

Hi everyone, and welcome to this week's newsletter. A lot has been going on in VMware land this past week. With all of the many announcements now out from VMworld Europe, we can look forward to learning about many new solutions from VMware.

 

For those who have been following my relocation to San Francisco, this was a big week for me, as I moved into my permanent apartment rental. I am very excited about this and hope it will allow me to be more productive with my customers now that I can get properly settled. San Francisco is amazing, and I am loving everything it has to offer.

 

I won't take up any more of your time this week. Please enjoy reading the newsletter and all of the week's news, and of course pay careful attention to this week's KB articles in case they may affect you. If you are in any doubt, contact your VMware TAM or other VMware representative to check.

 

Virtually Yours

Neil Isserow

Staff Technical Account Manager

VMware Inc - San Francisco, CA

nisserow @ vmware . com

 

LOCAL TRAINING CLASSES

See http://tinyurl.com/m4wmen5 for the full list.

 

VMWARE BLOGGERS

VMware HeartBeat 6.6 Update 1 Implementation

Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review


Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review

 

This is the first post of a two part series, read the second post here.

 

Earlier this year I had the opportunity to test drive some Seagate 1200 12Gbs Enterprise SAS SSDs as a follow-up to earlier activity trying their Enterprise TurboBoost drives. Disclosure: Seagate has been a StorageIO client and was also the sponsor of this white paper and the associated proof-points mentioned in this post.

 

The question to ask yourself is not whether flash Solid State Device (SSD) technologies are in your future; instead, the questions are when, where, using what, how to configure, and related themes. SSDs, including traditional DRAM and NAND flash-based technologies, are like real estate where location matters; however, there are different types of properties to meet various needs. This means leveraging different types of NAND flash SSD technologies in different locations in a complementary and cooperative (aka hybrid) way. For example, NAND flash SSD as part of an enterprise tiered storage strategy can be implemented server-side using PCIe cards, SAS and SATA drives as targets or as cache along with software, as well as by leveraging SSD devices in storage systems or appliances.

Seagate 1200 SSD
Seagate 1200 Enterprise SAS 12Gbs SSD Image via Seagate.com

 

Another place where NAND flash can be found, complementing SSD devices, is in so-called Solid State Hybrid Drives (SSHD) or Hybrid Hard Disk Drives (HHDD), including a new generation that accelerates writes as well as reads, such as those Seagate refers to as Enterprise TurboBoost. The Enterprise TurboBoost drives (view the companion StorageIO Lab review TurboBoost white paper here) were previously known as Solid State Hybrid Drives (SSHD) or Hybrid Hard Disk Drives (HHDD). Read more about TurboBoost here and here.

 

The best server and storage I/O is the one you do not have to do

Keep in mind that the best server or storage I/O is the one you do not have to do, with the second best being the one with the least overhead, resolved as close to the processor (compute) as possible or practical. The following figure shows that the best place to resolve server and storage I/O is as close to the compute processor as possible; however, only a finite amount of storage memory can be located there. This is where the server memory and storage I/O hierarchy comes into play, which is also often thought of in the context of tiered storage, balancing performance and availability with cost and architectural limits.

 

Also shown is locality of reference, which refers to how close data is to where it is being used, and includes cache effectiveness or buffering. Hence a small amount of flash and DRAM cache in the right location can have a large benefit. If you can afford it, install as much DRAM along with flash storage as possible; however, if you are like most organizations, with finite budgets yet server and storage I/O challenges, then deploy a tiered flash storage strategy.

flash cache locality of reference
Server memory storage I/O hierarchy, locality of reference

 

Seagate 1200 12Gbs Enterprise SAS SSD's

Back to the Seagate 1200 12Gbs Enterprise SAS SSD, which is covered in this StorageIO Industry Trends Perspective thought-leadership white paper. The focus of the white paper is to look at how the Seagate 1200 Enterprise-class SSDs and 12Gbps SAS address current and next-generation tiered storage for virtual, cloud, and traditional Little and Big Data infrastructure environments.

Seagate 1200 Enterprise SSD

This includes providing proof-points running various workloads, including database TPC-B, TPC-E and Microsoft Exchange, in the StorageIO Labs, along with cache software comparing SSD, SSHD and different HDDs, including 12Gbs SAS 6TB near-line high-capacity drives.

 

Seagate 1200 Enterprise SSD Proof Points

The proof-points in this white paper are from an application-focused perspective, representing more of an end-to-end, real-world situation. While they are not included in this white paper, StorageIO has run traditional storage building-block focused workloads, which can be found at StorageIOblog (Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?). These include tools such as Iometer, iorate and vdbench, among others, for various IO sizes: mixed, random, sequential, reads and writes, along with "hot-band" across different numbers of threads (concurrent users). "Hot-Band" is part of the SNIA Emerald energy effectiveness metrics for looking at sustained storage performance using tools such as vdbench. Read more about various other server and storage I/O benchmarking tools and techniques here.

 

For the following series of proof-points (TPC-B, TPC-E and Exchange), a system under test (SUT) consisted of a physical server (described with the proof-points) configured with VMware ESXi, along with guest virtual machines (VMs) configured to run the storage I/O workload. In the case of the TPC workloads, other servers were used as application transaction requesters to drive the SQL Server database and the resulting server storage I/O workload. VMware was used in the proof-points to reflect a common industry trend of using virtual server infrastructures (VSI) to support applications including databases, email and others. For the proof-point scenarios, the SUT and the storage device under test were dedicated to that scenario (e.g. no other workload running) unless otherwise noted.

Server Storage I/O config
Server Storage I/O configuration for proof-points

Microsoft Exchange Email proof-point configuration

For this proof-point, Microsoft Jetstress Exchange performance workloads were placed (e.g. the Exchange Database - EDB file) on each of the different devices under test, with various metrics shown, including activity rates and response time for reads as well as writes. For the Exchange testing, the EDB was placed on the device being tested, while its log files were placed on a separate Seagate 400GB Enterprise 12Gbps SAS SSD.

 

Test configuration: Seagate 400GB 1200 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS, and a 3TB 7.2K SATA HDD. The email server was hosted as a guest on VMware vSphere/ESXi 5.5 running Microsoft SBS 2011 Service Pack 1, 64-bit. The guest VM (VMware vSphere 5.5) resided on an SSD-based datastore; the physical host had 14GB DRAM, a quad CPU (4 x 3.192GHz) Intel E3-1225 v3, and LSI 9300-series 12Gbps SAS adapters in a PCIe Gen 3 slot, with Jetstress 2010. All devices being tested were Raw Device Mapped (RDM) where the EDB resided. The VM was on a separate SSD-based datastore from the devices being tested. Log-file IOPs were handled by a separate, also persistent, SSD device (no delayed writes). The EDB was 300GB and the workload ran for 8 hours.

Microsoft Exchange VMware SSD performance
  Microsoft Exchange proof-points comparing various storage devices

TPC-B (Database, Data Warehouse, Batch updates) proof-point configuration

SSDs are a good fit for transactional database activity with reads and writes, as well as for query-based decision support systems (DSS), data warehouse and big data analytics. The following are proof-points of SSD capabilities for database activity. In addition to supporting database table files and objects along with transaction journal logs, other uses include metadata, import/export and other high-I/O, write-intensive scenarios. Two database workload profiles were tested: batch update (write-intensive) and transactional. Activity involved running Transaction Processing Performance Council (TPC) workloads TPC-B (batch update) and TPC-E (transactional/OLTP, simulating a financial trading system) against Microsoft SQL Server 2012 databases. Each test simulation had the SQL Server database (MDF) on a different device with the transaction log file (LDF) on a separate SSD. TPC-B results for a single device are shown below.

 

The TPC-B (write-intensive) results below show how the TPS work being done (blue) increases from left to right (more is better) for various numbers of simulated users. Also shown on the same line for each amount of TPS work being done is the average latency in seconds (right to left), where lower is better. Results are shown from top to bottom for each group of users (100, 50, 20 and 1) for the different drives being tested. Note how the SSD device does more work at a lower response time vs. the traditional HDDs.

 

Test configuration: Seagate 400GB 1200 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS and 3TB Seagate 7.2K SATA HDD. Workload generator and virtual clients ran Windows 7 Ultimate 64-bit. The Microsoft SQL Server 2012 database was on a Windows 7 guest. The guest VM (VMware vSphere 5.5) had a dedicated 14GB DRAM, quad CPU (4 x 3.192GHz) Intel E3-1225 v3, with LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, along with TPC-B (www.tpc.org) workloads.

 

The VM with the guest OS, along with SQL tempdb and masterdb, resided on a separate SSD-based datastore from the devices being tested (e.g. where the MDF (main database tables) and LDF (log file) resided). All devices being tested were Raw Device Mapped (RDM), independent persistent, with the database log file on a separate SSD device, also persistent (no delayed writes), using the VMware PVSCSI driver. MDF and LDF file sizes were 142GB and 26GB with a scale factor of 10000, with each step running for one hour (10-minute preamble). Note that these proof-points DO NOT use VMware or any other third-party cache software or I/O acceleration technologies, as those are covered later in a separate proof-point.

TPC-B sql server database SSD performance
TPC-B SQL Server database proof-points comparing various storage devices

TPC-E (Database, Financial Trading) proof-point configuration

The following shows results from the TPC-E test (OLTP/transactional workload) simulating a financial trading system. TPC-E is an industry standard workload that performs a mix of read and write database queries. Proof-points were performed with various numbers of users (10, 20, 50 and 100) to determine Transactions per Second (TPS, aka I/O rate) and response time in seconds. The TPC-E transactional results are shown for each device being tested across the different user workloads. The results show how TPC-E TPS work (blue) increases from left to right (more is better) for larger numbers of users, along with the corresponding latency (green) that goes from right to left (less is better). The Seagate Enterprise 1200 SSD is shown on top in the figure below with a red box around its results. Note how the SSD has a lower latency while doing more work compared to the traditional HDDs.
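As an aside, the relationship between simulated users (concurrency), TPS and average response time in charts like these can be sanity-checked with Little's Law (N = X x R). Here is a quick sketch in Python with illustrative numbers only, not the measured results from these proof-points:

```python
# Little's Law: concurrency N = throughput X * average response time R.
# Numbers below are illustrative, not measured results from these proof-points.

def response_time(users: int, tps: float) -> float:
    """Average response time (seconds) implied by Little's Law."""
    return users / tps

def throughput(users: int, resp_sec: float) -> float:
    """TPS implied by a user count and an average response time."""
    return users / resp_sec

# 50 simulated users sustaining 1000 TPS implies 50 ms average latency:
print(response_time(50, 1000.0))  # 0.05
# The same 50 users against a slower device at 200 TPS implies 250 ms:
print(response_time(50, 200.0))   # 0.25
```

This is why, in the charts, a device doing more TPS for the same number of users necessarily shows a lower average response time.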

 

Test configuration: Seagate 400GB 1200 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS and 3TB Seagate 7.2K SATA HDD. Workload generator and virtual clients ran Windows 7 Ultimate 64-bit. The Microsoft SQL Server 2012 database was on a Windows 7 guest. The guest VM (VMware vSphere 5.5) had a dedicated 14GB DRAM, quad CPU (4 x 3.192GHz) Intel E3-1225 v3, with LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, along with TPC-E (www.tpc.org) workloads.

 

The VM with the guest OS, along with SQL tempdb and masterdb, resided on a separate SSD-based datastore from the devices being tested (e.g. where the MDF (main database tables) and LDF (log file) resided). All devices being tested were Raw Device Mapped (RDM), independent persistent, with the database log file on a separate SSD device, also persistent (no delayed writes), using the VMware PVSCSI driver. MDF and LDF file sizes were 142GB and 26GB with a scale factor of 10000, with each step running for one hour (10-minute preamble). Note that these proof-points DO NOT use VMware or any other third-party cache software or I/O acceleration technologies, as those are covered later in a separate proof-point.

TPC-E sql server database SSD performance
TPC-E (Financial trading) SQL Server database proof-points comparing various storage devices

 

Continue reading part two of this two-part series here, including the virtual server storage I/O blender effect and solution.

 

Ok, nuff said (for now).

Cheers gs

Part II: Seagate 1200 12Gbps Enterprise SAS SSD StorageIO lab review


 

This is the second post of a two-part series; read the first post here.

 

Earlier this year I had the opportunity to test drive some Seagate 1200 12Gbps Enterprise SAS SSDs as a follow-up to some earlier activity trying their Enterprise TurboBoost drives. Disclosure: Seagate has been a StorageIO client and was also the sponsor of this white paper and the associated proof-points mentioned in this post.

 

The Server Storage I/O Blender Effect Bottleneck

The earlier proof-points focused on the SSD as a target or storage device. In the following proof-points, the Seagate Enterprise 1200 SSD is used as a shared read cache (write-through). Using a write-through read cache enables a given amount of SSD to deliver a performance benefit to other local and networked storage devices.
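As a rough illustration of the write-through read-cache behavior described here (this is a simplified in-memory sketch with hypothetical names, not the actual Virtunet implementation), reads are served from, and populate, the cache, while writes always go straight to the backing store so the cache never holds data the slower device does not:

```python
from collections import OrderedDict

class WriteThroughReadCache:
    """Simplified model of an SSD read cache in front of a slower backing store.

    Reads are served from the cache when possible (and populate it on a miss);
    writes always go to the backing store and refresh any cached copy, so the
    cache never holds data the backing store does not (write-through).
    """
    def __init__(self, backing: dict, capacity: int):
        self.backing = backing            # stands in for the slow HDD
        self.capacity = capacity          # stands in for the SSD cache size
        self.cache = OrderedDict()        # LRU order: oldest entries first
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)   # mark most recently used
            return self.cache[key]
        self.misses += 1
        value = self.backing[key]         # slow path: go to the backing HDD
        self._insert(key, value)
        return value

    def write(self, key, value):
        self.backing[key] = value         # write-through: HDD is always updated
        if key in self.cache:
            self._insert(key, value)      # keep the cached copy coherent

    def _insert(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used item
```

Because the cache never holds dirty data, losing the cache device cannot lose writes, which is part of why a read (write-through) cache is a lower-risk way to apply a modest amount of SSD than a write-back cache.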

traditional server storage I/O
Non-virtualized servers with dedicated storage and I/O paths.

 

Aggregation causes aggravation in the form of I/O bottlenecks when consolidating via server virtualization. The figure above shows non-virtualized servers with their own dedicated physical machine (PM) and I/O resources. When various servers are virtualized and hosted by a common host (physical machine), their various workloads compete for I/O and other resources. In addition to competing for I/O performance resources, these different servers also tend to have diverse workloads.

virtual server storage I/O blender
Virtual server storage I/O blender bottleneck (aggregation causes aggravation)

 

The figure above shows aggregation causing aggravation, with the result being I/O bottlenecks as various applications' performance needs converge and compete with each other. The aggregation and consolidation result is a blend of random, sequential, large, small, read and write characteristics. These different storage I/O characteristics are mixed up and need to be handled by the underlying I/O capabilities of the physical machine and hypervisor. As a result, a common deployment for SSD, in addition to being a target device for storing data, is as a cache to cut bottlenecks for traditional spinning HDDs.

 

The following figure shows a solution that introduces I/O caching with SSD to help mitigate or cut the effects of server consolidation causing performance aggravations.

Creating a server storage I/O blender bottleneck

Addressing the VMware Server Storage I/O blender with cache

Addressing server storage I/O blender and other bottlenecks

For these proof-points, the goal was to create an I/O bottleneck resulting from multiple VMs in a virtual server environment performing application work. In this proof-point, multiple competing VMs, including a SQL Server 2012 database and an Exchange server, shared the same underlying storage I/O infrastructure including HDDs. The 6TB (Enterprise Capacity) HDD was configured as a VMware datastore and allocated as virtual disks to the VMs. Workloads were then run concurrently to create an I/O bottleneck for both cached and non-cached results.

Server storage I/O with virtualization proof-point configuration topology

 

The following figure shows two sets of proof-points, cached (top) and non-cached (bottom), with three workloads. The workloads consisted of concurrent Exchange and SQL Server 2012 (TPC-B and TPC-E) running on separate virtual machines (VMs), all on the same physical machine host (SUT), with database transactions being driven by two separate servers. In these proof-points, the application data was placed onto the 6TB SAS HDD to create a bottleneck, and a portion of the SSD was used as a cache. Note that the Virtunet cache software allows you to use part of an SSD device for cache, with the balance used as a regular storage target should you want to do so.

 

If you paid attention to the earlier proof-points, you might notice that some of the results below are not as good as those seen in the Exchange, TPC-B and TPC-E results above. The reason is simply that the earlier proof-points were run without competing workloads, and the database along with log or journal files were placed on separate drives for performance. In the following proof-point, as part of creating a server storage I/O blender bottleneck, the Exchange, TPC-B and TPC-E workloads were all running concurrently with all data on the 6TB drive (something you normally would not want to do).

storage I/O blender solved
Solving the VMware Server Storage I/O blender with cache

 

The cached and non-cached mixed workloads shown above demonstrate how an SSD-based read cache can help to reduce I/O bottlenecks. This is an example of addressing the aggravation caused by aggregation of different competing workloads that are consolidated with server virtualization. For the workloads shown above, all data (database tables and logs) was placed on VMware virtual disks created from a datastore using a single 7.2K 6TB 12Gbps SAS HDD (e.g. Seagate Enterprise Capacity).

 

The guest VM system disks, which included paging, applications and other data files, were virtual disks using a separate datastore mapped to a single 7.2K 1TB HDD. Each workload ran for eight hours, with TPC-B and TPC-E having 50 simulated users. For the TPC-B and TPC-E workloads, two separate servers were used to drive the transaction requests to the SQL Server 2012 database. For the cached tests, a Seagate Enterprise 1200 400GB 12Gbps SAS SSD was used as the backing store for the cache software (Virtunet Systems Virtucache) that was installed and configured on the VMware host.

 

During the cached tests, the physical HDD for the data files (e.g. the 6TB HDD) and system volumes (1TB HDD) were read-cache enabled. All caching was disabled for the non-cached workloads. Note that this was only a read cache, which has the side benefit of off-loading those activities, enabling the HDD to focus on writes or read-ahead. Also note that while the combined TPC-E, TPC-B and Exchange databases, logs and associated files represented over 600GB of data, there was also the combined space, and thus cache impact, of the two system volumes and their data. This simple workload and configuration is representative of how SSD caching can complement high-capacity HDDs.

 

Seagate 6TB 12Gbs SAS high-capacity HDD

While the star and focus of this series of proof-points is the Seagate 1200 Enterprise 12Gbps SAS SSD, the caching software (Virtunet) and Enterprise TurboBoost drives also play key supporting and favorable roles. However, the 6TB 12Gbps SAS high-capacity drive caught my attention from a couple of different perspectives. Certainly the space capacity was interesting, along with a 12Gbps SAS interface well suited for near-line, high-capacity and dense tiered storage environments. However, for a high-capacity drive its performance is what really caught my attention, both in the standard Exchange, TPC-B and TPC-E workloads, as well as when combined with SSD and cache software.

 

This opens the door for a great combination: leveraging some amount of high-performance flash-based SSD (or TurboBoost drives) combined with cache software and high-capacity drives such as the 6TB device (Seagate now has larger versions available). Something else to mention is that the 6TB HDD, in addition to being available with either a 12Gbps SAS, 6Gbps SAS or 6Gbps SATA interface, also has enhanced durability with a read bit error rate of 10^15 (e.g. on average one unrecoverable read error per 10^15 bits read) and an AFR (annual failure rate) of 0.63% (see more speeds and feeds here). Hence, if you are concerned about using large-capacity HDDs and having them fail, make sure you go with those that have a high read bit error rate and a low AFR, which are more common with enterprise-class vs. lower-cost commodity or workstation drives. Note that these high-capacity enterprise HDDs are also available with Self-Encrypting Drive (SED) options.
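For perspective, some quick back-of-the-envelope math (derived from the stated specs above, not from additional measurements) shows why the 10^15 read bit error rate matters at 6TB capacities:

```python
# Expected unrecoverable read errors for one full sequential read of a drive,
# given a read bit error rate (BER) of one error per `ber_bits` bits read.

def expected_errors_per_full_read(capacity_bytes: float, ber_bits: float = 1e15) -> float:
    bits_read = capacity_bytes * 8
    return bits_read / ber_bits

# A 6TB (6 * 10**12 bytes) enterprise drive at a 10**15 BER:
print(expected_errors_per_full_read(6e12))        # ~0.048 expected errors per full read
# The same capacity at a desktop-class 10**14 BER would be ten times worse:
print(expected_errors_per_full_read(6e12, 1e14))  # ~0.48 expected errors per full read
```

In other words, at desktop-class error rates a handful of full reads of a 6TB drive would, on average, be expected to hit an unrecoverable error, which is why the higher enterprise-class BER matters as capacities grow.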

Summary

Read more in this StorageIO Industry Trends and Perspective (ITP) white paper, compliments of Seagate 1200 12Gbps SAS SSDs, and visit the Seagate Enterprise 1200 12Gbps SAS SSD page here. Moving forward there is the notion that flash SSD will be everywhere. There is a difference between all data on flash SSD vs. having some amount of SSD involved in preserving, serving and protecting (storing) information.

 

Key themes to keep in mind include:

  • Aggregation can cause aggravation which SSD can alleviate
  • A relative small amount of flash SSD in the right place can go a long way
  • Fast flash storage needs fast server storage I/O access hardware and software
  • Locality of reference with data close to applications is a performance enabler
  • Flash SSD everywhere does not mean everything has to be SSD based
  • Having some amount of flash in different places is important for flash everywhere
  • Different applications have various performance characteristics
  • SSD as a storage device or persistent cache can speed up IOPS and bandwidth

 

Flash and SSD are in your future; this comes back to the questions of how much flash SSD you need, along with where to put it, how to use it and when.

 

Ok, nuff said (for now).

Cheers gs

Is Computer Data Storage Complex? It Depends


 

I often get asked, or told, that computer data storage is complex, with so many options to choose from and apples-to-oranges comparisons, among other things.

 

On a recent trip to Europe, while being interviewed by a Dutch journalist in Nijkerk, Holland at a Brouwer Storage Consultancy event I was presenting at, the question came up again about storage complexity. Btw, you can read the article on data storage industry trends here (it's in Dutch).

 

I hesitated and thought for a moment, and responded that in some ways it's not as complex as some make it seem, although there is more to data storage than just cost per capacity. As I usually do when asked or told how complex data storage is, my response is mixed: yes, storage, data and information infrastructures are complex; however, let's put it in perspective. Is storage any more complex than other things?

 

Our conversation then evolved with an example: I find shopping for an automobile complex unless I know exactly what I'm looking for. After all, there are cars, trucks and SUVs; used or new; buy or lease; different manufacturers, makes and models; speeds and cargo capacity; management tools and interfaces; not to mention metrics and fuel.

 

This is where I usually mention how, IMHO, complex buying a new car or vehicle is with all the different options, that is, unless you know what you want, or know your selection criteria and options. Same with selecting a new laptop computer, tablet or smartphone, not to mention a long list of other things that to outsiders can also seem complex, intimidating or overwhelming. However, let's take a step back to look at storage, then return to compare some other things that may be confusing to those who are not focused on them.

Stepping back looking at storage

Similar to other technologies, there are different types of data storage to meet various needs from performance to space capacity as well as support various forms of scaling.

server and storage I/O flow
Server and storage I/O fundamentals

Storage options
Various types of storage devices including HDDs, SSHDs/HHDDs and SSDs

Storage type options
Various types of storage devices

Storage I/O decision making
Storage options, block, file, object, ssd, hdd, primary, secondary, local and cloud

Shopping for other things can be complex

During my return trip to the US from the Dutch event, I had a layover at London Heathrow (LHR), and walking the concourse it occurred to me that while there are complexities involved with different technologies, including storage, data and information infrastructures, there were other complexities as well.

 

Same thing with shoes, with so many different options, not to mention cell phones, laptops and tablets, or how about TVs?

 

I want to go on a trip: do I book based on the lowest cost for airfare, then hotel and car rental, or do I purchase a package? For the airfare, is it the cheapest option that takes all day to get from point A to B via plane changes at points C, D and E, not to mention paying extra fees, vs. paying a higher price for a direct flight with extra amenities?

 

Getting hungry, so what to do for dinner, and what type of cuisine or food?

Hand Baggage options
How about a new handbag or perhaps shoes?

Baggage options
How about a new backpack, brief case or luggage?

Beverage options
What to drink for a beverage, so many options unless you know what you want.

PDA options
Complexity of choosing what cell phone, PDA or other electronics

What to read options
How about what to read including print vs. online accessible content?

How about auto parts complexity

Once I got home from my European trip I had some mechanical things to tend to including replacing some spark plugs.

Auto part options
How about automobile parts from tires, to windshield wiper blades to spark plugs?

 

Sure, if you know the exact part number, and assuming that part number has not changed, then you can start shopping for the part. However, recently I had a part number based on a vehicle serial number (e.g. make, model, year, etc.) only to receive the wrong part. Sure, the part numbers were correct; however, somewhere along the line the manufacturer made a change and not all downstream vendors knew about it. Granted, I eventually received the correct part.

 

Back to tech and data infrastructures

Ok, hopefully you got the point from the above examples among many others in that we live in world full of options and those options can bring complexity.

 

What type of network or server? How about operating system, browser, database, programming or development language, as there are different needs and options?

 

Sure there are many storage options as not everything is the  same.

 

Likewise, while there can be a simple answer, with a trend of recommending what to use before the question is understood (perhaps due to a preference) or explained, the best or applicable answer may be "it depends." However, saying "it depends" may seem complex to those who just want a simple answer.

Closing Comments

So is storage more complex than other technologies, tools, products or services?

 

What say you?

 

Ok, nuff said, for now...

 

Cheers
  Gs


What does server storage I/O scaling mean to you?


 

Scaling means different things to various people depending on the context or what it is referring to.

 

For example, scaling can mean having or doing more of something, or less, as well as referring to how more, or less, of something is implemented.

 

Scaling occurs in a couple of different dimensions and ways:

  • Application workload attributes - Performance, Availability, Capacity, Economics (PACE)
  • Stability without compromise or increased complexity
  • Dimension and direction - Scaling-up (vertical), scaling-out (horizontal), scaling-down

Scaling PACE - Performance Availability Capacity Economics

Often I hear people talk about scaling only in the context of space capacity. However, there are other aspects, including performance and availability, as well as scaling-up or scaling-out. Scaling from an application workload perspective includes four main themes: performance, availability, capacity and economics (as well as energy).

  • Performance - Transactions, IOPS, bandwidth, response time, errors, quality of service
  • Availability - Accessibility, durability, reliability, HA, BC, DR, Backup/Restore, BR, data protection, security
  • Capacity - Space to store information or place for workload to run on a server, connectivity ports for networks
  • Economics - Capital and operating expenses, buy, rent, lease, subscription

Scaling with Stability

The latter of the above items should be thought of more in terms of a by-product, result or goal of implementing scaling. Scaling should not result in a compromise of some other attribute, such as increasing performance at a loss of capacity, or increased complexity. Scaling with stability also means that as you scale in some direction, or across some attribute (e.g. PACE), there should not be a corresponding increase in management complexity, or a loss of performance and availability. To use a popular buzz-term, scaling with stability means performance, availability, capacity and economics should scale linearly with their capabilities, or perhaps cost less.
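One simple way to put a number on "scaling with stability" is scaling efficiency: the measured throughput with N units of a resource divided by N times the single-unit throughput, where 1.0 means perfectly linear scaling. A quick sketch with made-up illustrative numbers:

```python
def scaling_efficiency(throughputs):
    """Efficiency at each unit count relative to perfectly linear scaling.

    throughputs[i] is the measured throughput with i+1 units of some resource
    (nodes, drives, controllers, ...). 1.0 means linear scaling; below 1.0
    means scaling is costing something (contention, complexity, overhead).
    """
    base = throughputs[0]
    return [t / (base * (i + 1)) for i, t in enumerate(throughputs)]

# Illustrative numbers for 1, 2 and 3 units of some resource:
print(scaling_efficiency([100.0, 190.0, 255.0]))  # [1.0, 0.95, 0.85]
```

A system that scales with stability keeps this efficiency close to 1.0 as units are added, rather than trailing off as contention and management overhead grow.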

Scaling directions: Scaling-up, scaling-down, scaling-out

server and storage i/o scale options

 

Some examples of scaling in different directions include:

  • Scaling-up (vertical scaling with bigger or faster)
  • Scaling-down (vertical scaling with less)
  • Scaling-out (horizontal scaling with more of what being scaled)
  • Scaling-up and out (combines vertical and horizontal)

 

Of course you can combine the above in various combinations, such as the example of scaling up and out, as well as apply different names and nomenclature to suit your needs or preferences. The following is a closer look at the above with some simple examples.

server and storage i/o scale up
Example of scaling up (vertically)

 

server and storage i/o scale down
Example of scaling-down (e.g. for smaller scenarios)

server and storage i/o scale out
Example of scaling-out (horizontally)

 

server and storage i/o scale out
Example of scaling-out and up (horizontally and vertically)

Summary and what this means

There are many aspects to scaling, as well as side-effects or impacts as a result of scaling.

 

Scaling can refer to different workload attributes as well as how to support those applications.

 

Regardless of what you view scaling as meaning, keep in mind the context of where and when it is used, and that others might have another view of scale.

 

Ok, nuff said (for now)...

Cheers gs

Lenovo ThinkServer TD340 StorageIO lab Review


Storage I/O trends

Lenovo ThinkServer TD340 Server and StorageIO lab Review

Earlier this year I did a review of the Lenovo ThinkServer TS140 in the StorageIO Labs (see the review here); in fact, I ended up buying a TS140 after the review, and a few months back picked up another one. This StorageIOlab review looks at the Lenovo ThinkServer TD340 Tower Server, which besides having a larger model number than the TS140, also has a lot more capabilities (server compute, memory, I/O slots and internal hot-swap storage bays). Pricing varies depending on specific configuration options; however, at the time of this post Lenovo was advertising a starting price of $1,509 USD for a specific configuration here. You will need to select different options to determine your specific cost.

Lenovo TD340

The TD340 is one of the servers that Lenovo had prior to its acquisition of the IBM x86 server business, which you can read about here. Note that the Lenovo acquisition of the IBM xSeries business group began in early October 2014 and is expected to be completed across different countries in early 2015. Read more about the IBM xSeries business unit here, here and here.

The Lenovo TD340 Experience

Let's start with the overall experience, which was very easy other than deciding what make and model to try. This included going from first answering some questions to get the process moving, to agreeing to keep the equipment safe, secure and insured, as well as not damaging anything. Part of the process also involved answering some configuration-related questions, and shortly thereafter a large box from Lenovo arrived.

TD340 is ready for use
TD340 with Keyboard and Mouse (Monitor and keyboard not included)

 

One of the reasons I have a photo of the TD340 on a desk is that I initially put it in an office environment, similar to what I did with the TS140, as Lenovo claimed it would be quiet enough to do so. I was not surprised, and indeed the TD340 is quiet enough to be used where you would normally find a workstation or mini-tower. By being so quiet, the TD340 is a good fit for environments that need a server that has to go into an office environment as opposed to a server or networking room.

Welcome to the TD340
Lenovo ThinkServer Setup

 

TD340 Setup
  Lenovo TD340 as tested in BIOS setup, note the dual Intel Xeon E5-2420 v2 processors

TD340 as tested

TD340 Selfie of whats inside
  TD340 "Selfie" with 4 x 8GB DDR3 DIMM (32GB) and PCIe slots (empty)

 

TD340 disk drive bays
  TD340 internal drive hot-swap bays

Speeds and Feeds

The TD340 that I tested was a Machine type 7087 model 002RUX which included 4 x 16GB DIMMs and both processor sockets occupied.

 

You can view the Lenovo TD340 data sheet with more speeds and feeds here, however the following is a summary.

  • Operating systems supported include various Windows Servers (2008-2012 R2), SUSE, RHEL, Citrix XenServer and VMware ESXi
  • Form factor is a 5U tower with weight starting at 62 pounds depending on configuration
  • Processors include support for up to two (2) Intel E5-2400 v2 series
  • Memory includes 12 DDR3 DRAM DIMM slots (LV RDIMM and UDIMM) for up to 192GB
  • Expansion slots vary depending on whether one or two CPU sockets are populated. With a single CPU socket installed: 1 x PCIe Gen3 FH/HL x8 mechanical, x4 electrical; 1 x PCIe Gen3 FH/HL x16 mechanical, x16 electrical; and a single PCI 32-bit/33 MHz FH/HL slot. With two CPU sockets installed, extra PCIe slots are enabled: one x PCIe Gen3 FH/HL x8 mechanical, x4 electrical; one x PCIe Gen3 FH/HL x16 mechanical, x16 electrical; three x PCIe Gen3 FH/HL x8 mechanical, x8 electrical; and a single PCI 5V 32-bit/33 MHz FH/HL
  • Two 5.25” media bays for CDs, DVDs or other devices
  • Integrated ThinkServer RAID (0/1/10/5) with optional RAID adapter models
  • Internal storage varies depending on model, including up to eight (8) x 3.5” hot-swap drives or 16 x 2.5” hot-swap drives (HDDs or SSDs)
  • Storage space capacity varies by the type and size of the drives being used
  • Networking interfaces include two (2) x GbE
  • Power supply options include a single 625 watt or 800 watt, or 1+1 redundant hot-swap 800 watt, plus five fixed fans
  • Management tools include ThinkServer Management Module and diagnostics

Lenovo TD340

What Did I do with the TD340

After initial check out in an office type environment, I moved the TD340 into the lab area where it joined other servers to be used for various things.

 

Some of those activities included using the Windows Server 2012 Essentials along with associated admin activities as well as installing VMware ESXi 5.5.

TD340 is ready for use
TD340 with Keyboard and Mouse (Monitor and keyboard not included)

What I liked

Unbelievably quiet, which may not seem like a big deal; however, if you are looking to deploy a server or system into a small office workspace, this becomes an important consideration. Otoh, if you are a power user and want a robust server that can be installed into a home media entertainment system, well, this might be a nice-to-have consideration ;). Speaking of I/O slots, naturally I'm interested in server storage I/O, so having multiple slots is a must have, along with a processor that is multi-core (pretty much standard these days) along with VT and EPT for supporting VMware (these were disabled in the BIOS; however, that was an easy fix).

What I did not like

The only thing I did not like was that I ran into a compatibility issue trying to use an LSI 9300 series 12Gbps SAS HBA, which Lenovo is aware of, and perhaps has even addressed by now. What I ran into is that the adapters work; however, I was not able to get the full performance out of the adapters as compared to other systems, including my slower Lenovo TS140s.

Summary

Overall I give Lenovo and the TD340 a "B+", which would have been an "A" had I not gotten myself into a BIOS situation, or been able to run the 12Gbps SAS PCIe Gen 3 cards at full speed. Likewise, the Lenovo service and support also helped to improve the experience. Otoh, if you are simply going to use the TD340 in a normal out-of-the-box mode, without customizing to add your own adapters or install your own operating system or hypervisors (beyond those supplied as part of the install setup tool kit), you may have an "A" or "A+" experience with the TD340.

 

Would I recommend the TD340 to others? Yes for those who need this type and class of server for Windows, *nix, Hyper-V or VMware environments.

 

Would I buy a TD340 for myself? Maybe, if that is the size and type of system I need; however, I have my eye on something bigger. On the other hand, for those who need a good value server for a SMB or ROBO environment with room to grow, the TD340 should be on your shopping list to compare with other solutions.


Disclosure: Thanks to the folks at Lenovo for sending and making the TD340 available for review and a hands on test experience including covering the cost of shipping both ways (the unit should now be back in your possession). Thus this is not a sponsored post as Lenovo is not paying for this (they did loan the server and covered two-way shipping), nor am I paying them, however I have bought some of their servers in the past for the StorageIOLab environment that are companions to some Dell and HP servers that I have also purchased.

 

Ok, nuff said

Cheers
  Gs

Good Ole P2V


This year I haven't had much opportunity to blog due to the type of work I have been involved in. However, as soon as I find the opportunity I grab it, and this one happens to be about P2V. I recently had to use P2V after a long time for a POC I can't share details about at this time. Having not done a P2V in ages, it is safe to say that my confidence was somewhat crushed by some of my initial tests. Here are some of the things I learned. Read more...

[Microsoft] VDA Licensing Policy Changed


One of the things raising the bar for VDI adoption is licensing. VDA in particular is a headache.

That is because VDA was originally required in order to connect to a virtual desktop from non-Windows OS devices such as thin clients and zero clients, it could only be purchased per device, and there was a limit of up to four usable devices per license.

 

 

 

These restrictions have been changed as follows:

  ・VDA can now also be purchased on a per-user basis.

  ・The limit on the number of devices that can connect under a VDA license has been removed.

 

 

 

This strikes me as a bold decision on Microsoft's part, but I would like them to go one step further and change the volume licensing policy so that client OSes can also be used freely from the cloud.

 

 

 

See here for the details.

StorageIO Out and About Update - VMworld 2014


StorageIO Out and About Update - VMworld 2014

 

Here is a quick video montage, or mash-up if you prefer, that Cory Peden (aka the Server and StorageIO intern, @Studentof_IT) put together using some video recorded while at VMworld 2014 in San Francisco. In this YouTube video we take a quick tour around the expo hall to see who, as well as what, we run into while out and about.

 

VMworld 2014 StorageIO Update

 

For those of you who were at VMworld 2014, this will give you a quick déjà vu of the sights and sounds, while those who were not there can see what they missed and plan for next year. Watch for appearances from Gina Minks (@Gminks), aka Gina Rosenthal (of BackupU), and Michael (not Dell) of Dell Data Protection, as well as Luigi Danakos (@Nerdblurt) of HP Data Protection, who lost his voice (tweet Luigi if you can help him find it). With Luigi we were able to get in a quick game of buzzword bingo before catching up with Marc Farley (@Gofarley) and John Howarth of Quaddra Software. Marc and John talk about their new solution from Quaddra, which will enable searching and discovering data across different storage systems and technologies.

 

Other visits include a quick look at an EVO:RAIL from Dell, along with a Docker for Smarties overview with Nathan LeClaire (@upthecyberpunks) of Docker (click here to watch the extended interview with Nathan).

Docker for smarties

 

Check out the conversation with Max Kolomyeytsev of StarWind Software (@starwindsan) before we get interrupted by a salesperson. During our walkabout, we also bump into Mark Peters (@englishmdp) of ESG, facing off video camera to video camera.

 

Watch for other things, including rack cabinets that look like compute servers yet have large video screens so they can be software defined for different demo purposes.

virtual software defined server

 

Watch for more Server and StorageIO Industry Trend Perspective podcasts, videos, and out-and-about updates soon; meanwhile, check out others here.

 

Ok, nuff said (for now)

 

Cheers gs
