
Workspace ONE Access PeopleSearch - how to sync your people information daily


In this blog, we will walk through the steps to sync your People information on a daily basis using AWS Lambda and CloudWatch.

 

I will be assuming that you already utilise Workspace ONE Access, that you have an Active Directory associated with it, and that it syncs regularly.

 

Prerequisites:

1. Log in to your Workspace ONE Access tenant as an administrator at https://<your_Workspace_ONE_Access_tenant_URL>/SAAS/auth/login

2. Switch to Administration console

3. Navigate to Identity & Access Management tab.

4. Open your directory information.

5. Go to Sync settings.

6. Confirm the sync frequency and check whether scheduled syncs are successful.


 

7. Familiarise yourself with the API calls in Workspace ONE Access.

8. Create a Service Client Token to be able to run API calls (Create Remote App Access Client)

9. Get your directory ID by opening Inspect Element in Chrome: choose Network, then XHR, and navigate to the Identity & Access Management tab. One of the requests loaded contains your directory ID.
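If you prefer not to dig through browser DevTools, the directory configs API should return the same information. The sketch below is a minimal, hypothetical example: the response field names ("items", "name", "directoryConfigId") are assumptions to illustrate the parsing, so verify the actual JSON shape against your own tenant before relying on it.

import json

def extract_directory_ids(payload):
    # Collect (name, directoryConfigId) pairs from a directory configs
    # response body. Field names are assumed; check them in your tenant.
    return [(d.get("name"), d.get("directoryConfigId"))
            for d in payload.get("items", [])]

# Hypothetical response shape, for illustration only:
sample = json.loads('{"items": [{"name": "corp.local", '
                    '"directoryConfigId": "<directory_Config_Id>"}]}')
print(extract_directory_ids(sample))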


 

Step 1: Configure PeopleSearch

This initial configuration shows how to enable People Search and arrive at the default once-per-week sync.

1. Click on the drop-down arrow on the Catalog tab button.

2. Choose Settings.

3. Navigate to People Search.

4. Check Enable and click Next.


5. Select your directory.

(Note: if you have multiple directories added, you can configure People Search for only one of them!)

6. Check all the attributes that you want People Search to sync and display in the People Tab in the end user catalog portal. Click Next.


 

7. Map the VMware Workspace ONE Access attribute names to the Active Directory attribute names. Click Next.


 

8. Specify the users that you want to sync. Click on Save & Sync.

9. Verify that the People tab appears in the end user portal.


 

Step 2: Import pictures into your Active Directory

1. Log in to a domain controller.

2. Run PowerShell as an Administrator and enter the following commands:

     $photo = [byte[]](Get-Content "<path_of_picture>" -Encoding byte)
     Set-ADUser <username> -Replace @{thumbnailPhoto=$photo}

     Example:
     $photo = [byte[]](Get-Content "C:\Users\Public\Pictures\Sample Pictures\cuser1_picture.jpg" -Encoding byte)
     Set-ADUser cuser1 -Replace @{thumbnailPhoto=$photo}

 

Step 3: Run a manual sync

There is no tab where you can check the sync schedule or status of the People Search sync. This can be done only via an API call, executed here in Postman.

1. Run the following API call:

GET : https://<your_Workspace_ONE_Access_tenant_URL>/SAAS/jersey/manager/api/connectormanagement/directoryconfigs/<directory_Config_Id>/syncprofile/photosyncprofile

Headers :

Content-Type : application/vnd.vmware.horizon.manager.connector.management.directory.sync.profile.photosync.schedule+json

Accept : application/vnd.vmware.horizon.manager.connector.management.directory.sync.profile.photosync.schedule+json

Authorization : HZN eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJqdG

You will get a 200 OK response with the following body:

{
    "syncSchedule": {
        "frequency": "weekly",
        "dayOfWeek": "sunday",
        "hours": 21,
        "minutes": 55,
        "seconds": 0
    },
    "photoAttribute": "thumbnailPhoto",
    "_links": {
        "self": {
            "href": "/SAAS/jersey/manager/api/connectormanagement/directoryconfigs/<directory_Config_Id>/syncprofile/photosyncprofile"
        },
        "hw-photo-sync": {
            "href": "/SAAS/jersey/manager/api/connectormanagement/directoryconfigs/<directory_Config_Id>/syncprofile/photosyncprofile/sync"
        }
    }
}

 

Currently, you cannot set any value for frequency other than "weekly". If you want pictures or any other People Search information synced sooner than Sunday evening, you can run the following API call:

POST : https://<your_Workspace_ONE_Access_tenant_URL>/SAAS/jersey/manager/api/connectormanagement/directoryconfigs/<directory_Config_Id>/syncprofile/photosyncprofile/sync

Headers :

Content-Type : application/vnd.vmware.horizon.manager.connector.management.directory.sync.profile.photosync.schedule+json

Accept : application/vnd.vmware.horizon.manager.connector.management.directory.sync.profile.photosync.schedule+json

Authorization : HZN eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJqdG

Request Body : {"ignoreSafeguards" : true}

Response : 200 OK
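The headers used in the two Postman calls above follow one pattern: the same vendor media type in Content-Type and Accept, plus the token with the HZN scheme in Authorization. As a small building block (a sketch only; the token is whatever your OAuth client credentials flow returns), they can be composed like this:

# Media type used by the photosyncprofile endpoints above.
MEDIA_TYPE = ("application/vnd.vmware.horizon.manager.connector."
              "management.directory.sync.profile.photosync.schedule+json")

def sync_headers(token, media_type=MEDIA_TYPE):
    # Workspace ONE Access expects the "HZN" scheme in the Authorization header.
    return {
        "Authorization": "HZN %s" % token,
        "Content-Type": media_type,
        "Accept": media_type,
    }

print(sync_headers("<access_token>"))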

 

Step 4: Schedule a daily sync with AWS Lambda and CloudWatch

In modern, dynamic companies, a weekly sync is not satisfying for your users. People change their pictures and want that reflected as soon as possible. You also want to find a colleague's phone number immediately when you need it, not wait a week. The best option is to automate the manual sync with a simple, efficient Python script, and so that its execution does not depend on your availability, schedule it to run daily. A very good and simple tool for this is AWS Lambda.

1. Write your Python script.

 

import json
import requests

user = "<your_service_client_token_id>"
shared_secret = "<the_shared_secret_of_your_service_client_token>"

def get_access_token():
    header = {'Content-Type': "application/x-www-form-urlencoded"}
    data = {'grant_type': 'client_credentials'}
    request = requests.post('https://<your_Workspace_ONE_Access_tenant_URL>/SAAS/auth/oauthtoken', headers=header, params=data, auth=(user, shared_secret))
    token = request.json()['access_token']
    return token

def manual_sync():
    token = get_access_token()
    header = {}
    header['Authorization'] = "HZN %s" % token
    header['Content-Type'] = "application/vnd.vmware.horizon.manager.connector.management.directory.sync.profile.photosync.sync+json"
    header['Accept'] = "application/vnd.vmware.horizon.manager.connector.management.directory.sync.profile.photosync.sync+json"
    body = json.dumps({'ignoreSafeguards': True})
    url = "https://<your_Workspace_ONE_Access_tenant_URL>/SAAS/jersey/manager/api/connectormanagement/directoryconfigs/<directory_Config_Id>/syncprofile/photosyncprofile/sync"
    response = requests.request('POST', url, headers=header, data=body)
    print(response)
    print(response.text)

manual_sync()

 

2. Log into your AWS Console and navigate to Lambda.

3. Click on Create Function.


4. Choose Author from scratch, give your function an applicable name and choose Python 3.7 as Runtime. Click again on Create function to proceed.


 

5. On the next screen, choose Edit code inline; you can write your code the same way you would in your preferred IDE. Please note that your main function has to be modified to work properly in Lambda, and that the requests library is not included in the Lambda Python runtime, so you may need to package it with your function or add it as a layer.

(Note: You need to add "event" and "context" as parameters of your function. The function does not have to be called; it has to be specified in the handler field.)

 

def manual_sync(event, context):
    token = get_access_token()
    header = {}
    header['Authorization'] = "HZN %s" % token
    header['Content-Type'] = "application/vnd.vmware.horizon.manager.connector.management.directory.sync.profile.photosync.sync+json"
    header['Accept'] = "application/vnd.vmware.horizon.manager.connector.management.directory.sync.profile.photosync.sync+json"
    header['cache-control'] = "no-cache"
    body = json.dumps({'ignoreSafeguards': True})
    url = "https://<your_Workspace_ONE_Access_tenant_URL>/SAAS/jersey/manager/api/connectormanagement/directoryconfigs/<directory_Config_Id>/syncprofile/photosyncprofile/sync"
    response = requests.request('POST', url, headers=header, data=body)
    print(response)
    print(response.text)

 


6. Click on Add trigger and choose CloudWatch Events/EventBridge.

7. From the list with rules, opt for Create a new rule.

8. Give your rule an applicable name, description and add a cron expression to set the time when you want your function to be executed.

(Note: Cron expressions are in UTC by default, and this cannot be changed. The example is for a rule that is triggered every day at 10:00am UTC.)
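For reference, an EventBridge schedule expression for a daily 10:00 UTC trigger uses the six-field cron syntax (minutes, hours, day-of-month, month, day-of-week, year), where one of day-of-month/day-of-week must be "?":

cron(0 10 * * ? *)

If the exact time does not matter, a rate expression such as rate(1 day) works as well.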


 

9. Click on Add.

10. Test your function and save it.

 

Enjoy your up-to-date information every day.


Adding Nvidia GPU Management Pack in VMware VROPS


Adding Nvidia GPU Management Pack in VROPS


1. In the vROPS UI, go to the Administration tab, click Solutions, and click + to add a new solution

 

 

2. Select the Nvidia vGPU vROPS Management Pack (NVIDIA_vGPU_Management_Pack_1.1.25168221_signed.pak)

 

 

3. Click Upload and then click Next

 

 

4. Accept the EULA and click Next to start the PAK installation

 

 

5. Click Finish to complete the PAK installation

 

 

6. Select Nvidia Virtual GPU solution and click the Gear icon to configure

 

 

7. Add the vCenter Server for Nvidia and provide the credentials, test the connection and save the settings

 

 

8. Click Close to complete adding the Nvidia GPU Management Pack in VMware vROPS.

Installing and Configuring vROPS Horizon View Broker Agent


Installing and Configuring vROPS Horizon View Broker Agent

1. Log in to the Horizon View Connection Server (Primary)

2. Launch the Horizon Broker Agent settings wizard (Run as Administrator)

3. Test and Pair the connection server with Horizon Adapter using the pairing credential. Click Next

4. Provide Horizon Administrator credential using the Active Directory Service Account

5. Enter the Events DB credentials (use the SQL user account configured for the View Events Database)

6. Add the App Volumes Managers: enter the IP address or FQDN of the App Volumes Manager instances with an Active Directory service account used for AVM integration

7. Choose default timeout values and click Next

8. Set the logging levels for the Broker Agent. The default is good for most circumstances

9. Ensure Broker Agent service status shows Running

10. Review the summary and click Finish to complete the configuration wizard.

11. Verify that the View Adapter is now listed as “Collecting” and “Data Receiving”

12.  Configuring vROPS Horizon View Broker Agent is complete!

Configuring LDAP Source in VMware vROPS


Configuring LDAP Source in VMware vROPS


1. In vROPS UI, go to Administration, Authentication Sources. Click on the “+” icon to add the new LDAP source

 

 

 

2. Provide the LDAP source details for your domain (e.g. AS.corp.local)

 

 

3. Click Ok.

4. Sync user groups from the selected LDAP source

 

 

5. Import Active Directory Security Group from the domain (e.g. as.corp.local\vdiadmins).

 

 

6. Grant Administrator role and allow access to all the objects.

 

 

7. Click Finish.

8. Click Yes to continue


9. Test the login with a domain credential to vROPS UI page.

Verifying vSAN SCSI-3 Persistent Reservations (SCSI-3 PR) from Linux


With vSphere 6.7 U3, native VMDKs now support SCSI-3 Persistent Reservations (SCSI-3 PR).

 

VMware vSAN 6.7 Update 3 Release Notes:

Windows Server Failover Clusters (WSFC) on native vSAN VMDKs. vSAN 6.7 Update 3 natively supports SCSI-3 PR, enabling Windows Server Failover Clusters to be deployed directly on VMDKs as first-class workloads. This capability lets you migrate legacy environments using physical RDMs or external storage protocols to vSAN.

 

Even in vSAN 6.7 U2 and earlier, SCSI-3 PR could be used on vSAN virtual disks by sharing a LUN exposed through the vSAN iSCSI Target (VIT) service, so clusterware such as WSFC (Windows Server Failover Clustering) could use it for its exclusive access control. From 6.7 U3 onward, however, native VMDKs that do not go through a VIT LUN (so-called shared VMDKs) can also be used with WSFC and similar software.

The following articles are useful references:

 

Configuring a shared disk resource for Windows Server Failover Cluster (WSFC)

and migrating SQL Server Failover Cluster Instance (FCI) from SAN (RDMs) to vSAN

https://kb.vmware.com/s/article/74786

 

Hosting Windows Server Failover Cluster (WSFC) with shared disk on VMware vSphere: Doing it right!

https://blogs.vmware.com/apps/2019/05/wsfc-on-vsphere.html

 

In this post, I take a look at SCSI-3 PR behavior from a Linux VM in both vSAN 6.7 U2 and vSAN 6.7 U3 environments.

 

The vSAN environment used here.

ESXi 6.7 U2 and ESXi 6.7 U3 hosts are registered to a vCenter 6.7 U3 instance, and a vSAN 6.7 U2 cluster and a vSAN 6.7 U3 cluster have been created from them, giving two clusters with different vSAN versions.

 

Cluster: vSAN-Cluster-67u2

  • vSAN 6.7 U2
  • ESXi 6.7 U2 (Build 13006603)
  • Running VMs: vm01-on-67u2, vm02-on-67u2

 

Cluster: vSAN-Cluster-67u3

  • vSAN 6.7 U3
  • ESXi 6.7 U3 (Build 14320388)
  • Running VMs: vm01-on-67u3, vm02-on-67u3

 

All ESXi hosts participating in the vSAN-Cluster-67u2 cluster are build 13006603, so ESXi 6.7 U2 means vSAN 6.7 U2.

vsan-env-05.png

 

On vSAN 6.7 U2, the vSAN on-disk format is version 7.

vsan-env-04.png

 

On the other hand, all ESXi hosts participating in the vSAN-Cluster-67u3 cluster are build 14320388, so ESXi 6.7 U3 means vSAN 6.7 U3.

 

 

On vSAN 6.7 U3, the vSAN on-disk format is version 10.

vsan-env-06.png

 

To use the new features introduced by a vSAN upgrade, the on-disk format must be upgraded to match the new vSAN version, as in this environment. The mapping between vSAN versions and on-disk formats is documented here:

 

Build numbers and versions of VMware vSAN

https://kb.vmware.com/s/article/2150753

 

The guest OS on vSAN.

The guest OS used for this test is Oracle Linux 7.

[root@vm01-on-67u2 ~]# cat /etc/oracle-release

Oracle Linux Server release 7.7

 

To verify SCSI-3 PR behavior, we use the Linux sg_persist command, which is included in the sg3_utils package shipped with the Linux distribution.

[root@vm01-on-67u2 ~]# which sg_persist

/usr/bin/sg_persist

[root@vm01-on-67u2 ~]# rpm -qf /usr/bin/sg_persist

sg3_utils-1.37-18.0.1.el7_7.2.x86_64

 

Each of these VMs has a 10 GB VMDK ("Hard disk 2") attached, placed on the vSAN datastore.

vsan-env-14.png

 

Hard disk 2 is attached to virtual SCSI controller 1, separate from the default SCSI controller 0, and SCSI bus sharing on controller 1 is set to "Physical".

vsan-env-15.png

 

The guest OS sees Hard disk 2 as /dev/sdb, a "VMware Virtual disk".

[root@vm01-on-67u2 ~]# lsscsi

[0:0:0:0]    disk    VMware   Virtual disk     2.0   /dev/sda

[1:0:0:0]    disk    VMware   Virtual disk     2.0   /dev/sdb

[4:0:0:0]    cd/dvd  NECVMWar VMware SATA CD00 1.00  /dev/sr0

 

/dev/sdb corresponds to Hard disk 2, so it is 10 GB.

[root@vm01-on-67u2 ~]# lsblk -i

NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT

sda           8:0    0   16G  0 disk

|-sda1        8:1    0    1G  0 part /boot

`-sda2        8:2    0   15G  0 part

  |-ol-root 249:0    0 13.4G  0 lvm  /

  `-ol-swap 249:1    0  1.6G  0 lvm  [SWAP]

sdb           8:16   0   10G  0 disk

sr0          11:0    1 1024M  0 rom

 

Native VMDKs and SCSI-3 PR on vSAN 6.7 U2.

Through vSAN 6.7 U2, native VMDKs do not support SCSI-3 PR. Running sg_persist shows that it is unsupported. Here, -s tries to read the SCSI-3 PR full status and -k tries to read the registered reservation keys.

[root@vm01-on-67u2 ~]# sg_persist -d /dev/sdb -s

  VMware    Virtual disk      2.0

  Peripheral device type: disk

PR in (Read full status): bad field in cdb including unsupported service action

[root@vm01-on-67u2 ~]# sg_persist -d /dev/sdb -k

  VMware    Virtual disk      2.0

  Peripheral device type: disk

PR in (Read keys): bad field in cdb including unsupported service action

 

vSAN iSCSI LUNs and SCSI-3 PR on vSAN 6.7 U2.

Even on vSAN 6.7 U2, LUNs exposed through the vSAN iSCSI Target (VIT) service do support SCSI-3 PR, so let's connect the Linux VM to a VIT LUN and verify the SCSI-3 PR behavior.

 

First, create an iSCSI target and LUN on vSAN. Here, a target with the IQN iqn.2016-09.jp.go-lab:vit67u2 is created and a 5 GB LUN is added to it.

vsan-env-13.png

 

Install iscsi-initiator-utils on the Linux guest (some command output is omitted):

[root@vm01-on-67u2 ~]# yum install -y iscsi-initiator-utils

[root@vm01-on-67u2 ~]# systemctl enable iscsid

[root@vm01-on-67u2 ~]# systemctl start iscsid

[root@vm01-on-67u2 ~]# systemctl is-active iscsid

active

 

Connect to the iSCSI target with the iscsiadm command:

[root@vm01-on-67u2 ~]# iscsiadm --mode discoverydb --type sendtargets --portal 192.168.1.33 --discover

[root@vm01-on-67u2 ~]# iscsiadm --mode node --portal 192.168.1.33:3260 --login

[root@vm01-on-67u2 ~]# iscsiadm -m session

tcp: [1] 192.168.1.33:3260,257 iqn.2016-09.jp.go-lab:vit67u2 (non-flash)

 

The attached iSCSI LUN is recognized as /dev/sdc, a "VMware Virtual SAN" device.

[root@vm01-on-67u2 ~]# lsscsi

[0:0:0:0]    disk    VMware   Virtual disk     2.0   /dev/sda

[1:0:0:0]    disk    VMware   Virtual disk     2.0   /dev/sdb

[4:0:0:0]    cd/dvd  NECVMWar VMware SATA CD00 1.00  /dev/sr0

[34:0:0:0]   disk    VMware   Virtual SAN      0001  /dev/sdc

 

/dev/sdc is the LUN from the earlier screenshot, so it is 5 GB.

[root@vm01-on-67u2 ~]# lsblk

NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT

sda           8:0    0   16G  0 disk

|-sda1        8:1    0    1G  0 part /boot

`-sda2        8:2    0   15G  0 part

  |-ol-root 249:0    0 13.4G  0 lvm  /

  `-ol-swap 249:1    0  1.6G  0 lvm  [SWAP]

sdb           8:16   0   10G  0 disk

sdc           8:32   0    5G  0 disk

sr0          11:0    1 1024M  0 rom

 

Now let's check with sg_persist. This iSCSI LUN does support SCSI-3 PR, so the output differs from what we saw for /dev/sdb earlier.

[root@vm01-on-67u2 ~]# sg_persist -d /dev/sdc -s

  VMware    Virtual SAN       0001

  Peripheral device type: disk

  PR generation=0x0

  No full status descriptors

[root@vm01-on-67u2 ~]# sg_persist -d /dev/sdc -k

  VMware    Virtual SAN       0001

  Peripheral device type: disk

  PR generation=0x0, there are NO registered reservation keys

 

Let's try setting a SCSI-3 PR reservation on the /dev/sdc device. First, register a key.

[root@vm01-on-67u2 ~]# sg_persist -d /dev/sdc -o --register -S 123

  VMware    Virtual SAN       0001

  Peripheral device type: disk

[root@vm01-on-67u2 ~]# sg_persist -d /dev/sdc -k

  VMware    Virtual SAN       0001

  Peripheral device type: disk

  PR generation=0x1, 1 registered reservation key follows:

    0x123

 

At this point, no reservation is held yet.

[root@vm01-on-67u2 ~]# sg_persist -d /dev/sdc -r

  VMware    Virtual SAN       0001

  Peripheral device type: disk

  PR generation=0x1, there is NO reservation held

 

Now, specifying the registered key (-K 123), set a reservation (-T 1) so that other hosts cannot write.

[root@vm01-on-67u2 ~]# sg_persist -d /dev/sdc -o --reserve -K 123 -T 1

  VMware    Virtual SAN       0001

  Peripheral device type: disk

[root@vm01-on-67u2 ~]# sg_persist -d /dev/sdc -r

  VMware    Virtual SAN       0001

  Peripheral device type: disk

  PR generation=0x1, Reservation follows:

    Key=0x123

    scope: LU_SCOPE,  type: Write Exclusive

 

From this Linux host, writes to the /dev/sdc device succeed. (oflag=sync is used so the write goes all the way through to the device.)

[root@vm01-on-67u2 ~]# dd if=/dev/zero of=/dev/sdc count=1 oflag=sync

1+0 レコード入力

1+0 レコード出力

512 バイト (512 B) コピーされました、 0.00960002 秒、 53.3 kB/秒

 

Meanwhile, another Linux host connected to the same LUN can no longer write. The host vm02-on-67u2 is connected to the same iSCSI LUN in the same way and also recognizes it as /dev/sdc, but...

[root@vm02-on-67u2 ~]# iscsiadm -m session

tcp: [1] 192.168.1.33:3260,257 iqn.2016-09.jp.go-lab:vit67u2 (non-flash)

[root@vm02-on-67u2 ~]# lsscsi

[0:0:0:0]    disk    VMware   Virtual disk     2.0   /dev/sda

[1:0:0:0]    disk    VMware   Virtual disk     2.0   /dev/sdb

[4:0:0:0]    cd/dvd  NECVMWar VMware SATA CD00 1.00  /dev/sr0

[34:0:0:0]   disk    VMware   Virtual SAN      0001  /dev/sdc

[root@vm02-on-67u2 ~]# lsblk /dev/sdc

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

sdc    8:32   0   5G  0 disk

 

...due to the SCSI-3 PR reservation, writes to the device fail with an I/O error.

[root@vm02-on-67u2 ~]# dd if=/dev/zero of=/dev/sdc count=1 oflag=sync

dd: `/dev/sdc' に書き込み中です: 入力/出力エラーです

1+0 レコード入力

0+0 レコード出力

0 バイト (0 B) コピーされました、 0.0144499 秒、 0.0 kB/秒

 

Once the reservation is released, other hosts can write again. Releasing (--release) the reservation on vm01-on-67u2...

[root@vm01-on-67u2 ~]# sg_persist -d /dev/sdc -o --release -K 123 -T 1

  VMware    Virtual SAN       0001

  Peripheral device type: disk

[root@vm01-on-67u2 ~]# sg_persist -d /dev/sdc -r

  VMware    Virtual SAN       0001

  Peripheral device type: disk

  PR generation=0x1, there is NO reservation held

 

...allows vm02-on-67u2 to write again.

[root@vm02-on-67u2 ~]# dd if=/dev/zero of=/dev/sdc count=1 oflag=sync

1+0 レコード入力

1+0 レコード出力

512 バイト (512 B) コピーされました、 0.0161903 秒、 31.6 kB/秒

 

Finally, the key registration is removed as well.

[root@vm01-on-67u2 ~]# sg_persist -d /dev/sdc -o --register -K 123

[root@vm01-on-67u2 ~]# sg_persist -d /dev/sdc -k

  VMware    Virtual SAN       0001

  Peripheral device type: disk

  PR generation=0x2, there are NO registered reservation keys

 

Native VMDKs and SCSI-3 PR on vSAN 6.7 U3.

On vSAN 6.7 U3, native VMDKs also support SCSI-3 PR.

 

Reading the keys with sg_persist produces the same output as for the iSCSI LUN, which suggests SCSI-3 PR support. (No key is registered yet at this point, hence "NO registered reservation keys".)

[root@vm01-on-67u3 ~]# sg_persist -d /dev/sdb -k

  VMware    Virtual disk      2.0

  Peripheral device type: disk

  PR generation=0x0, there are NO registered reservation keys

 

However, trying to read the full status fails (command failed).

[root@vm01-on-67u3 ~]# sg_persist -d /dev/sdb -s

  VMware    Virtual disk      2.0

  Peripheral device type: disk

persistent reservation in: scsi status: Busy

PR in (Read full status): command failed

 

Attempting to register a key failed in the same way.

[root@vm01-on-67u3 ~]# sg_persist -d /dev/sdb -o --register -S 123

  VMware    Virtual disk      2.0

  Peripheral device type: disk

persistent reserve out: scsi status: Busy

PR out: command failed

 

Native VMDK SCSI-3 PR support in vSAN 6.7 U3 appears to be aimed at supporting WSFC. If other clusterware with SCSI-3 PR requirements is to use shared native VMDKs on vSAN, it seems wise to verify the behavior with the actual software you plan to run.

 

That concludes this look at SCSI-3 PR on vSAN.

VMware Photon OS on Azure #Part 1 - introduction


Welcome to the launch of my blog here at communities.vmware.com and my first blog post! My name is Daniel Casota; I live in Switzerland and am excited to share with you more of what I'm doing in my homelab!

 

I have regained the joy of staying curious in IT. How can curiosity be lost? I would tell my 20-year-old self that events in life, good or bad, cannot be predicted, and that it is worth taking two steps to the side. Like many other IT system engineers, I do not have to set up my own company; rather, I stick to what I can take and give.

After twenty-five years in IT, I have finally realized all the benefits that blogging can bring. Writing things down makes me feel calmer when an idea whirling around my head is ready to jump out, and in some sense happier when tinkering with VMware software products. As a VMware enthusiast, server consolidation and maintaining datacenter infrastructure belonged to my daily duties, and still do. Detecting vSphere and Horizon workload anomalies, writing code snippets and assisting with updates and upgrades of software components made my day.

 

Moving forward quickly, customers did not ask whether it would be more affordable and more secure to have their VMware workloads managed by cloud providers. It just happened.

 

As this is a four-part blog series about Photon OS on Azure, I apologize in advance to readers for this first, really long blog post.

 

I realized that I was quite late

Adoption of Microsoft's interregional public cloud gained traction with the launch of the cloud-hosted versions of Office 365 and its rolling release model. In addition, with the license mobility rights for Windows Server in Azure, small and medium-sized companies can focus more on their value proposition and want less friction during their transition from traditional IT applications to cloud-native IT applications.

 

Some colleagues were already configuring hybrid tenants using Active Directory Federation Services, with Exchange on a software-as-a-service basis and the Cisco Spark collaboration suite for teams. I realized that I was quite late. Their customers' blueprint priority was on modernizing their C# apps, and, again, not on strategically keeping the lights on in their SME on-premises datacenter infrastructure.

 

Back in the homelab, I became disillusioned. What opportunities should I go for? Considering that "99.6% of all Swiss companies have fewer than 250 employees and that these SMEs account for two thirds of the employees in Switzerland", you cannot focus on VMware-biased IT infrastructure hosting only Microsoft Windows servers, can you? Hence, a year ago I decided to learn more about Azure.

 

To gain hands-on experience with Azure infrastructure services, I decided to mix the from-scratch learning path with VMware Photon OS. Photon OS, a VMware operating system, is an open source Linux container host for cloud-native applications. Like most people, I did not know much about Photon. But its minimal OS resource footprint, the built-in package-based lifecycle management system and the growing automation and integration support for IoT environments caught my attention.

 

Photon OS

Photon OS is a native 64-bit operating system and is supported on the ARM64 architecture as well. Companies are looking toward new economic opportunities using IoT technology.

 

The full installation of Photon OS comes with a ready-to-go environment and a set of packages (gcc, make, glibc-devel, linux-devel, etc.) for developing programs, and for system engineering as well. Please be benevolent; I am still a Linux beginner and have not had the opportunity in the last twenty-five years to develop software drivers for compute hardware. My last compiling, assembling and linking, in some sense, was with VMware ThinApp components ten years ago.

 

How do system administrators maintain a fleet of computers with different packages, files and configurations installed in different orders? Photon's so-called RPM-OSTree provides the relevant magic for identical, predictably installed systems. It should be installed first. Developer teams building their OSTree host and client environment should use a Lightwave directory service with a certificate authority. Lightwave Directory is an open source LDAP v3 directory service developed by VMware. Looking at all the Project Lightwave components, in short, it has similarities with Microsoft domain controller concepts, Microsoft's version of Kerberos, and domain name services.

 

As a Windows developer activity, building an .msi on a specific .NET baseline with if-then-else clauses for Windows OS flavours does not solve machine-to-machine dependencies. This might be a tech highlight: Photon's open source yum-compatible package manager, called tiny dandified yum (tdnf), together with OSTree preserves the package-management capabilities of yum with the added benefit of self-upgrade capability. For most classic Microsoft Windows package builders this is different.

Photon OS runs Docker containers, and it does so very performantly. With persistent data in mind, yes, you can reboot the machine into an existing image or into a newly created image build.

 

How do you monitor container workloads? The folks at Opvizor know a lot about monitoring vSphere environments. In their blog post they describe a handy monitoring solution for containers hosted on Photon OS.

 

If you start deploying Photon on vSphere, you should definitely read the posts from PowerShell and PowerCLI expert Luc Dekens, Deploy Photon 2.0 - Part 1 - LucD notes and Cloud-init - Part 3 - Photon OS - LucD notes, and the VMware Technology Network (VMTN) forum post vmware Photon OS 3.0 Customization by the users vin01 (Vineeth Kondapally) and LucD (Luc Dekens). I need to thank them for providing valuable information about creating Photon OS templates, and for pointing me in the right direction on why the rolling release model, aka the SDDC way, is important!

 

When I discussed the first blog post draft with Michael Rebmann, Senior Solution Architect at VMware, he (thanks, mate!) advised me to learn more about Kubernetes, as this is a main topic for the future. And back on Photon, he pointed out that Photon shares the same OS security patches with quite a few more VMware virtual appliances than I first thought. Here's the list (not complete!)

 

The VMware Photon team does listen to submitted bug reports and feature requests. There is no commercial support for Photon OS as a standalone operating system; simply use the project's GitHub Issues page or, if you wish to contribute code, make sure that you can build Photon OS and sign the VMware Open Source Software (OSS) Contributor License Agreement (CLA).

 

When it comes to the Photon OS minimal installation, my lesson learned is to keep the SDDC way in mind. Photon is a managed operating system for IoT gateway hardware (see Photon OS and VMware Pulse IoT Center 2.0 on Dell Edge Gateway 5000). And as an enthusiast, seeing niche options like the support of PowerCLI Core on Photon OS and a fully working PowerShell Gallery provider fascinates me.

Outside of bare metal or a vSphere environment, provisioning Photon OS is supported as an Amazon AMI machine, as a Google Compute machine, on Raspberry Pi 3 and as an Azure virtual machine. You can find the download bits at https://github.com/vmware/photon/wiki/Downloading-Photon-OS.

 

Azure basics

For Azure, I had to learn the basics from scratch. I think it was still a good decision. It was definitely the replacement of normal routine with something fresh and even a little uncomfortable. Where to begin?

I completed the Azure fundamentals path on Microsoft Learn. The Microsoft Learn path holds a mix of functionality I knew from the VMware Education Services training portal as well as from the VMware Hands-on Labs platform.

To allocate some homelab resources, the Microsoft Azure web portal offers a student or free Azure subscription. An Azure subscription is a logical container used to provision resources in Microsoft Azure. It holds the details of all your stored objects, such as virtual networks, storage accounts and much more. Azure offers free and paid subscription options. The most commonly used subscriptions are:

• Free

• Pay-As-You-Go

• Enterprise Agreement

• Student

This is worth remembering when deciding whether a paid Azure support plan will be needed.

 

The following figure depicts Azure hypervisor-based virtual servers defined within a single group on the same virtual network. As far as I know, Hyper-V is still the only supported Azure type-1 hypervisor. Virtual servers can run Windows OS and Linux OS (see Linux Integration Services for Hyper-V and Azure), but not macOS.

Figure 1 Azure resource manager model

 

This Azure Resource Manager (ARM) model is used so that resources can be deployed, managed and monitored as a group. The resource group contains virtual machines, including those defined in an Availability Set. Assigning virtual machines to an Availability Set causes them to be hosted on different fault domains and update domains in the underlying data center. A storage account provides shared storage for the virtual machines.

 

Azure provides three classic administration tools to control resource groups:

  • Azure portal (web interface)
  • Azure CLI (console)
  • Azure Powershell (console)

 

They all offer the same amount of control; any task that you can do with one of the tools, you can likely do with the other two. All three are cross-platform, running on Windows, macOS, and Linux. They differ in syntax, setup requirements, and in how they support automation.

 

Az is the formal name of the Azure CLI command; its subcommands and arguments let you work with Azure features and control Azure resources. Azure PowerShell or the Az CLI can work with resource groups, storage, virtual machines, Azure Active Directory, containers, and so on.

The Azure PowerShell module is an open source component available on GitHub. You can install the module onto your local machine through the Install-Module command. You need an elevated PowerShell session (run as administrator) to install modules from the PowerShell Gallery. To install the latest Azure PowerShell module, type the following command:

 

Install-Module -Name Az -AllowClobber

 

With a local install of Azure Powershell, you need to authenticate before you can execute Azure commands. The Connect-AzAccount cmdlet prompts for your Azure credentials and then connects to your Azure subscription. It has many optional parameters, but if all you need is an interactive prompt, no parameters are required:

 

connect-azaccount

 

How do we know from Powershell that we are connected to the Azure environment? You can use the Get-AzContext cmdlet, which displays subscription and account information.
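
Putting the two cmdlets together, a small login helper can prompt for credentials only when no context is cached yet. This is a sketch assuming the Az module is installed:

```powershell
# Reuse an existing Azure context if one is cached, otherwise log in interactively
$context = Get-AzContext
if (-not $context) {
    Connect-AzAccount
    $context = Get-AzContext
}
Write-Output ("Connected to subscription: " + $context.Subscription.Name)
```

This pattern avoids repeated interactive prompts when you re-run study scripts in the same session.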

 

Let's create some resources on Azure:

  • Resource group
  • Storage account
  • Virtual network
  • Virtual machines

Figure 2 Create a resource group

Some findings:

  • Most cmdlets like new-azresourcegroup are processed synchronously. Hence, the ProvisioningState directly returns "Succeeded" (or an error is thrown).
  • Most cmdlets with the "New-" prefix accept the two parameters resourcegroupname and location.
  • The binding of a resource group to a geolocation matters for the resource provider functions requested: not all locations offer the same set of provider services.

 

Figure 3 Azure locations listing

 

Nowadays you can virtually traverse through Azure regions and allocate resources, and you should get the same user experience.
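
Before binding a resource group to a location, you can check which locations exist and where a given provider service is available. This is a sketch using standard Az cmdlets; Microsoft.Compute/virtualMachines is used as the example resource type:

```powershell
# List all Azure locations available to the subscription
Get-AzLocation | Select-Object Location, DisplayName

# Check in which locations a specific resource type is offered,
# e.g. virtual machines from the Microsoft.Compute provider
((Get-AzResourceProvider -ProviderNamespace Microsoft.Compute).ResourceTypes |
    Where-Object { $_.ResourceTypeName -eq 'virtualMachines' }).Locations
```

This is a quick way to confirm that the provider services you need are present in your chosen region.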

Allocating storage for files, virtual machines, and images requires a storage account. As there are different storage types, redundancy levels, and geolocation replication options, the inner programmatic layer is provided by storage resource providers. It helped me to think of the published SKU types as a rolling release model.

 

Figure 4 Create a storage account

 

Using New-AzStorageAccount, besides the context of the resource group with its location and the name of the storage account, you need to specify the kind of storage and the SKU. A default storage account key is created automatically.

 

Let's create a virtual network. A simple Azure virtual network consists of a single subnet. The subnet IP range must belong to the vnet IP range. In the following example the subnet IP range is 192.168.1.0/24, and the virtual network range is 192.168.0.0/16. You must use the CIDR notation ("/") when using new-azvirtualnetworksubnetconfig and new-azvirtualnetwork.

 

Figure 5 Create a virtual network

 

Now that the resource group, storage account, and virtual network have been created, you surely want to create a virtual machine.

In comparison with an ESXi virtual machine or a Hyper-V virtual machine, in Azure you cannot bind the VM boot medium to a bootable ISO image.

One option is to make use of the already uploaded Azure images of the Azure marketplace. To traverse all public offerings, have a look at the following code snippet:

 

Get-AzVMImagePublisher -Location switzerlandnorth | ForEach-Object {
  Get-AzVMImageOffer -Location switzerlandnorth -PublisherName $_.PublisherName |
    Select-Object Offer, PublisherName
}

 

Be aware that it takes a while, and the listing is huge.

 

As far as I know, Azure offers no Photon OS images, neither from Bitnami, a VMware company, nor from VMware. From Docker Hub you may use Bitnami's Photon OS image.

 

On https://github.com/vmware/photon/wiki/Downloading-Photon-OS you find the release binaries as .vhd for Photon OS 3.0 Rev2 for Azure. It contains the minimal installation.

Azure does not support the virtual machine file formats .vmdk, .vdi, or .img. An on-premises Hyper-V hypervisor supports the newer Hyper-V format .vhdx, but this is not the case for Azure virtual machines. The .tar.gz-compressed .vhd file is 195 MB; the extracted .vhd file is 16 GB. This fixed-size, non-thin-provisioned file can be uploaded to a storage container. For this purpose we need to create a storage container inside the storage account. Before doing so, let's recapitulate the steps:

  1. Resource group
  2. Storage account
  3. Virtual network
  4. create a storage container when uploading files

 

As an alternative to the Azure Powershell cmdlet new-azstoragecontainer, the next example uses the Az CLI command az storage container create.
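
Side by side, the two variants look like this. This is a sketch; the resource group, storage account, and container names are examples matching the lab used in this series:

```powershell
# Example names for the lab environment
$ResourceGroupName  = "photonoslab-rg"
$StorageAccountName = "photonosaccount"
$ContainerName      = "disks"

# Azure Powershell: bind a storage context, then create the container
$key = (Get-AzStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName)[0].Value
$ctx = New-AzStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $key
New-AzStorageContainer -Name $ContainerName -Context $ctx -Permission Blob

# Az CLI equivalent
az storage container create --name $ContainerName --account-name $StorageAccountName --account-key $key
```

Both variants create the same container; the CLI call is used in the upload script later in this series.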

 

Figure 6 Create a storage container when uploading files

 

Now we upload the Photon OS vhd file, using the Az CLI command

 

az storage blob upload

 

The argument --type page tells the storage provider to create a so-called page blob, which is meant for random read/write storage such as .vhd files. az storage blob upload has some similarities to a copy command, but you need to specify the source file with the argument --file $vhdfile as well as the target blob with --name ${BlobName}.

 

Figure 7 Uploading file

 

The upload may take a while.

 

Figure 8 Uploaded file

 


Let's take a break here. A good additional resource for getting familiar with Azure fundamentals and managing resources in Azure is the getting started guide.

 

In Part 2 we will discuss some findings about the latest Azure virtual hardware generation and premium disk support, and we will take a closer look at the Powershell code to create a first Photon OS image.

VMware Photon OS on Azure #Part 2 – create an Azure Photon OS virtual machine


Photon OS and its integrated packaging capabilities are used inside many VMware virtual appliance software products. The open source standalone Linux operating system runs on VMware infrastructure as well as on public cloud infrastructure, as a secure virtual machine optimized for container workloads.

 

In blog post Part 1 we walked through a straightforward introduction to the Microsoft Azure public cloud, as interoperability is the main topic of this blog series. To provision an Azure Photon OS virtual machine inside our pre-created resources, we learned to specify our environment with the resource model parameters:

  • resourcegroup
  • location
  • storageaccount
  • storage container (created when uploading files)
  • virtual network with at least a subnet

 

Similar to vSphere CLI and PowerCLI for administrators, the Azure CLI and Azure Powershell provide useful interaction cmdlets. You can download the Azure CLI installer (Windows) or follow the "Install the Azure CLI" documentation page, and use install-module -name Az for the Azure Powershell installation.

 

Let's go through the following study script.

 

#
# change current directory to the .vhd file path of the locally extracted Photon OS binary
j:
cd j:\photon-azure-3.0-9355405.vhd.tar

# create a resourcegroup
$LocationName="switzerlandnorth"
$ResourceGroupName="photonoslab-rg"
new-azresourcegroup -name $ResourceGroupName -location $LocationName

# create a storageaccount
$StorageAccountName="photonosaccount"
new-azstorageaccount -ResourceGroupName $ResourceGroupName -name $StorageAccountName `
  -location $LocationName -kind storage -skuname Standard_LRS
$storageaccountkey=(get-azstorageaccountkey -ResourceGroupName $ResourceGroupName -name $StorageAccountName)

# create a virtual network with at least a subnet
$vnetaddressprefix="192.168.0.0/16"
$subnetaddressprefix="192.168.1.0/24"
$singlesubnet=new-azvirtualnetworkSubnetConfig -Name frontendSubnet -AddressPrefix $subnetaddressprefix
$vnet = new-AzVirtualNetwork -name "photonos-network" -ResourceGroupName $ResourceGroupName `
  -Location $LocationName -AddressPrefix $vnetaddressprefix -Subnet $SingleSubnet
$vnet | set-AzVirtualNetwork

# create a storage container when uploading files
$containername="disks"
az storage container create --name ${containername} --public-access blob `
  --account-name $StorageAccountName --account-key ($storageaccountkey[0]).value
# upload
$vhdfile=".\photon-azure-3.0-9355405.vhd"
$blobname="photon-azure-3.0-9355405.vhd"
az storage blob upload --account-name $StorageAccountName --account-key ($storageaccountkey[0]).value `
  --container-name ${Containername} --type page --file $vhdfile --name ${BlobName}

# create a network interface
$NiName="photonNI"
$nic = New-AzNetworkInterface -Name $NiName -ResourceGroupName $ResourceGroupName `
  -Location $LocationName -SubnetId $vnet.Subnets[0].Id

# create the Azure Photon OS virtual machine using the offer Standard_B1ms (1 vCPU, 2 GB RAM)
[System.Management.Automation.Credential()]$VMLocalcred = (Get-Credential -Message `
  'Enter username and password for the Azure Photon OS virtual machine local user account to be created. Password must be at least 12 characters long. Be aware of upper case and lowercase letters in username.')
$vmSize="Standard_B1ms"
$vmName="photon"
$URLOfUploadedVhd="https://${StorageAccountName}.blob.core.windows.net/${ContainerName}/${BlobName}"
az vm create --resource-group ${ResourceGroupName} --location ${LocationName} --name ${vmName} --size ${VMSize} `
  --storage-account ${StorageAccountName} --storage-container-name ${ContainerName} --nic $NiName `
  --image ${URLOfUploadedVhd} --use-unmanaged-disk --os-type linux --computer-name ${vmName} `
  --admin-username $($VMLocalcred.GetNetworkCredential().username) `
  --admin-password $($VMLocalcred.GetNetworkCredential().password) `
  --generate-ssh-keys --boot-diagnostics-storage https://${StorageAccountName}.blob.core.windows.net

Get-AzVM -ResourceGroupName $ResourceGroupName -Name $vmName

 

 

Line 2-5: The Photon OS Azure .vhd file must be downloaded and extracted to a local directory. On a Windows machine you can use a tool like 7-Zip to extract the .vhd file from the .tar.gz binary. We change the current directory to the Photon OS .vhd file path, as the az vm create command in the version used has some quirks if the working directory does not match the current path.

Line 6-10: creation of the resourcegroup. Change value of params resourcegroup name and location.

Line 11-16: creation of the storage account. Change value of param storageaccountname.

Line 17-24: creation of a virtual network with a single subnet. Change value of params vnetaddressprefix and subnetaddressprefix. Both require CIDR annotation.

Line 25-34: creation of a storage container and upload of the .vhd file. Change value of param containername. The value of param vhdfile must be the .vhd filename. The value of param blobname must have the filename extension .vhd as prerequisite for the az vm create parameter --image.

Line 35-39: precreate an Azure network interface for our Azure Photon OS virtual machine. Change value of param NiName.

The common arguments of cmdlet New-AzNetworkInterface are ResourceGroupname and Location, and with the precreated subnet and the specified network interface name.

Why do we create the Azure network interface first, and not the virtual machine?

In vSphere, a Virtual Standard vSwitch (VSS) port in use is not the same as the virtual NIC adapter of the ESXi VM connected 1:1 to it. The port in use is, in a sense, a virtual function of its VM, as no other VM can use this port simultaneously. Using the PowerCLI cmdlet new-networkadapter you directly specify the VM and the VirtualNetworkAdapterType. In a distributed vSwitch scenario you can additionally specify a port to which you want to connect the new network adapter.

This distinction is necessary because in Azure you create a network interface as its own object when using new-aznetworkinterface, with no need to bind it directly to a VM.

Let's go through the next code lines.

 

Line 40-52: create the Azure Photon OS virtual machine using the offer Standard_B1ms. Change the value of param vmName.

In the SDDC way, you can create your own Photon OS ISO with factory defaults for computer name, username, and password for any flavor of on-premises and cloud-based installation. The built-in Microsoft Azure Linux Agent (waagent) of the Photon OS Azure .vhd minimal installation processes the az vm create params --computer-name, --admin-username, and --admin-password. Specify as value of param VMLocalCred the credentials for a local user account to be created. The username cannot be root. You can change the root password in the post configuration.

 

We already uploaded the Photon OS Azure .vhd file to our pre-created storage container. The storage container blob got the URL https://${StorageAccountName}.blob.core.windows.net/${ContainerName}/${BlobName}. This URL is passed to az vm create as the value of the argument --image.

 

In Part 1 we specified for the Azure Photon OS lab the storage account arguments -kind storage and -skuname Standard_LRS. In the younger Azure days, "good enough" meant: okay, then please specify yourself what "good enough" is. You get charged for any type of operation, from compute, RAM, and storage to network resources in use. To get an idea about the OLA components underlying the storage SLAs, have a look at the information about storage container (page) blobs and disks here.

Page blob writes and reads are billed on a per-transaction basis. Estimate a low-bandwidth data stream with one 64 KB createContainer operation per minute and ten 64 KB getBlob operations per minute. The pricing estimation for a year would currently be about 2 Swiss francs for all write (~32 GB) and read operations.
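
The ~32 GB figure can be verified with simple arithmetic. This is a sketch of the transaction volume only; actual transaction prices vary per region and are not included:

```powershell
# One 64 KB write op and ten 64 KB read ops per minute, over a full year
$minutesPerYear  = 365 * 24 * 60            # 525600 minutes
$writeOpsPerYear = 1  * $minutesPerYear     # createContainer operations
$readOpsPerYear  = 10 * $minutesPerYear     # getBlob operations
$writtenGB       = $writeOpsPerYear * 64KB / 1GB   # ~32 GB written per year

"{0:N0} write ops, {1:N0} read ops, ~{2:N1} GB written per year" -f `
    $writeOpsPerYear, $readOpsPerYear, $writtenGB
```

Multiplying the operation counts by your region's per-10,000-transactions price gives the yearly transaction cost estimate.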

 

As --os-type we specify Linux. Prerequisites for a minimal Photon OS virtual machine installation are

  • 2GB of free RAM (recommended)
  • 512MB of free storage space (minimum)

 

As virtual machine size I use a minimal, burstable B-series offering Standard_B1ms with 1 CPU core, 2GB RAM, 4GB temporary storage (SSD) and with moderate network throughput. Change value of param vmSize to your needs.

 

An Azure virtual machine console isn't enabled by default. We specify the argument --boot-diagnostics-storage for enabling console interaction.

 

The argument --generate-ssh-keys is optional, as in this study script we will connect through the Azure serial console for Linux. --generate-ssh-keys creates SSH public and private key files in the /root/.ssh directory.

 

Line 53: As Powershell output we will see the created Azure Photon OS virtual machine.

 

I hope you have enjoyed part2. Please reach out in case of questions or suggestions. Finally in part3 we start with configuring Photon OS.


vSphere Storage Troubleshooting - Part 1: HBA & Connectivity


Storage infrastructure is one of the main parts of an IT environment, so good design and principled configuration make troubleshooting any possible issue in this area better and easier. One of the primary components of the storage infrastructure is the HBA, the connector of servers to the storage area. Many possible storage-related problems can therefore be traced back to the Host Bus Adapter installed in the ESXi host and to its physical connections to the SAN storage or SAN switches. So let's begin investigating storage troubleshooting step by step inside the VMware infrastructure.

A first situation may occur with a local disk array that is not detected as a local datastore. You can check the status of the internal disk controller (for example in an HP ProLiant server) by running the following command:

hpsa.png

 

cat /proc/driver/hpsa/hpsa0


The result will be shown like this:

(please pay attention to when I use the lowercase hba and when the capitalized form)

 

 

 

 

 

vmkmgmt.png

But if the datastore in question is not local and is instead a shared volume of an existing SAN storage in our infrastructure, then we must check the HBA status:

/usr/lib/vmware/vmkmgmt_keyval -a | less

 

The last mentioned command applies to ESXi 5.5 and higher, so for older versions you must check the following folders for the two most popular HBA vendors:

  •    Qlogic:   /proc/scsi/qla2xxxx
  •    Emulex: /proc/scsi/lpfc

 

 

Also, if you don't find the related vmhba adapter in the output of the following commands, it means the ESXi host has not detected your HBA yet:

  • vmkchdev -l | grep hba
  • esxcfg-info | grep HBA

vmkchdev.png

swfw.png

 

 

You can also run the swfw.sh command and combine it with grep to find information about the HBA devices connected to the ESXi host, including device model, driver, firmware, and the WWNN for an FC HBA (the InstanceID value):

/usr/lib/vmware/vm-support/bin/swfw.sh | grep HBA

 

 

 

core-device.png

 

 

In another situation, imagine you have deployed a new SAN storage inside a vSphere cluster, but you are not sure whether the HBA detected the provided LUN. As the first step, run the ESXCLI command below:

esxcli storage core device list

 

In the result shown, check the important fields, such as: Display Name, Device Type, Devfs Path, Vendor & Model.

Next you can run the following command, which gives you back more information about the HBA adapters and the state of each of them:

esxcli storage core adapter list

 

VMware Definition Tip1: NAA (Network Addressing Authority) or EUI (Extended Unique Identifier)  is the preferred method of identifying LUNs and the number that follows is generated by the storage device itself. Since the NAA or EUI is unique to the LUN, if the LUN is presented the same way across all ESXi hosts, the NAA or EUI identifier remains the same.
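
If the hosts are managed through vCenter, the same adapter inventory can also be gathered remotely with PowerCLI instead of the ESXi shell. This is a sketch; the vCenter and host names are examples:

```powershell
# Connect to vCenter and list the HBAs of a host, including model, driver, and status
Connect-VIServer -Server vcenter.lab.local
Get-VMHostHba -VMHost esxi01.lab.local |
    Select-Object Device, Type, Model, Driver, Status

# For FC HBAs, the WWNs help to verify SAN zoning
Get-VMHostHba -VMHost esxi01.lab.local -Type FibreChannel |
    Select-Object Device, @{N='WWN'; E={ "{0:X}" -f $_.PortWorldWideName }}
```

This is handy when you want to compare HBA state across many hosts without opening SSH sessions to each one.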

core-adapter.png

partition.png

 

This command will also show you the list of partitions detected by and available to the ESXi host:

esxcli storage core device partition list

 

VMware Definition Tip2: You may see two partition types, fb and fc: fb is the system ID for VMFS, and fc is the vmkernel core dump partition (vmkcore).

There are more useful storage commands, like the old-school CLI esxcfg-scsidevs (-a shows HBA devices, -m shows mapped VMFS volumes, and -l lists all known logical devices).

esxcfg-scsi.png

So finally, as the conclusion of this first part of troubleshooting problems related to the storage side of a vSphere environment, we understood that we need to check the status of the HBAs, how they are performing, and the disk devices, LUNs, and volumes connected through each of them. I hope this is helpful for you all.

Link to my personal blog: vSphere Storage Troubleshooting - Part 1: HBA & Connectivity


Using vSphere Auto Deploy


Using vSphere Auto Deploy

This chapter covers the following recipes:

  • Enabling vSphere's auto deploy service
  • Configuring a TFTP server with the files required to PXE boot servers
  • Configuring a DHCP server to work with auto deploy
  • Preparing the vSphere environment – creating a host profile, configuring the deploy rules and activating them
  • Enabling stateless caching
  • Enabling stateful install

 

Introduction

 

In a large environment, deploying and upgrading ESXi hosts is an activity that requires a lot of planning and manual work. For instance, if you were to deploy a set of 50 ESXi hosts in an environment, then you might need more than one engineer assigned to perform this task. The same would be the case if you were to upgrade or patch ESXi hosts. The upgrade or patching operation must be done on each host. Of course, you have vSphere update manager, which can be configured to schedule, stage, and remediate hosts, but again the remediation process consumes a considerable amount of time, depending on the type and size of the patch. VMware has found a way to reduce the amount of manual work and time required for deploying, patching, and upgrading ESXi hosts. They call it vSphere auto deploy. In this chapter, you will learn not only to design, activate, and configure vSphere auto deploy but also to provision ESXi hosts using it.

 

vSphere auto deploy architecture

 

vSphere auto deploy is a web server component that, once configured, can be used to quickly provision a large number of the ESXi hosts without the need to use the ESXi installation image to perform an installation on the physical machine. It can also be used to perform the upgrade or patching of the ESXi hosts without the need for vSphere update manager. Now, how is this achieved? vSphere auto deploy is a centralized web server component that lets you define rules that govern how the ESXi servers are provisioned. It, however, cannot work on its own. There are a few other components that play a supporting role for auto deploy to do its magic and here they are:

  • The auto deploy service
  • A DHCP server with scope options 66 and 67 configured
  • A TFTP server hosting files for a PXE boot
  • Servers with PXE (network boot) enabled in their BIOS
  • Host profiles configured at the vCenter server

SC1.png

 

The ESXi Host first begins to network boot by requesting an IP address from the DHCP Server. The DHCP Server responds with an IP address and the DHCP scope options providing the details of the TFTP Server. The ESXi Host then loads the PXE boot image from the TFTP Server to bootstrap the machine, and subsequently sends an HTTP Boot Request to the Auto Deploy Server to load an ESXi Image into the host's memory. The image is chosen based on the rules created at the Auto Deploy Server. The workflow is shown here:

 

SC2.png

 

Enabling vSphere auto deploy service

Auto deploy services, by default, are left disabled and need to be enabled explicitly. Understandably so, unless the environment warrants having specific features, they are left disabled to keep the resource consumption optimal. There are two specific services that need to be enabled to ensure that auto deploy functions as desired. In this recipe, we shall walk through the process of enabling the auto deploy service and image builder service on the vCenter Server Appliance.

 

The following procedure steps through enabling the appropriate services to activate Auto Deploy:

1. Log in to vCenter Server Appliance.

2. Navigate to Home | Administration | System Configuration as illustrated in the following screenshot:

SC3.png

 

3. Click on Nodes, select the intended vCenter instance, and then Related Objects as shown here:

SC4.png

 

4. Highlight Auto Deploy service and click on Start.

5. Click on Settings and set the startup type to Automatic as shown here:

SC5.png

 

6. Highlight ImageBuilder Service and click on Start.

7. Click on Settings and set the startup type to Automatic.

8. Confirm that services are started from the Recent Tasks pane:

 

How it works...

Auto deploy services are, by default, left to start manually although integrated with vCSA. Hence, if the environment warrants having the feature, the administrator has to enable the service and set it to start automatically with vCenter.

 

Configuring TFTP server with the files required to PXE boot

 

Trivial File Transfer Protocol (TFTP) enables a client to retrieve a file from, or transmit a file to, a remote host. This workflow concept is leveraged in the auto deploy process. Neither the protocol nor the workflow is proprietary to VMware. In this recipe, we shall use an open source utility to act as the TFTP server; there are other variants that can be used for similar purposes.

 

The following procedure would step you through configuring the TFTP server to be PXE boot ready:

1. Log in to vCenter Server Appliance.

2. Navigate to Home | vCenter | Configure | Auto Deploy

3. Click on Download TFTP Boot Zip instance as depicted here:

SC7.png

 

4. Extract the files to the TFTP server folder (TFTP-Root) as demonstrated in the following screenshot:

SC8.png

 

5. Start the TFTP service as shown here:

 

SC9.png

 

How it works...

TFTP is primarily used to exchange configuration or boot files between machines in an environment. It is relatively simple and provides no authentication mechanism. The TFTP server component can be installed and configured on a Windows or Linux machine. In this recipe, we have leveraged a third-party TFTP server and configured it to provide the relevant PXE files on demand. The TFTP server, with the specific PXE file downloaded from vCenter, aids the host in providing a HTTP boot request to the auto deploy server.

 

Configuring the DHCP server to work with auto deploy

Once the auto deploy services and the TFTP server are enabled, the next most important step in the process is to set up the DHCP server. The DHCP server responds to servers in scope with an IP address and specifically redirects the server to the intended TFTP server and boot filename. In this recipe, we shall walk through configuring a Windows-based DHCP server with the TFTP server details along with the PXE file that needs to be streamed to the soon-to-be ESXi host. Similar steps can also be repeated in a Unix variant of DHCP.

 

Getting ready

Ensure that the TFTP server has been set up as per the previous recipe. In addition, the steps in the following recipe would require access to the DHCP server that is leveraged in the environment with the appropriate privileges, to configure the DHCP scope options.

 

How to do it...

The following procedure would step through the process of configuring DHCP to enable PXE boot:

1. Log in to the server with the DHCP service enabled.

2. Run dhcpmgmt.msc.

3. Traverse to the scope created for the ESXi IP range intended for PXE boot.

4. Right click on Scope Options and click on Configure Options... as shown in the following screenshot:

SC10.png

 

5. Set values for scope options 066 Boot Server Host Name to that of the TFTP server.

6. Set values for scope options 067 Bootfile Name to the PXE file undionly.kpxe.vmw-hardwired as demonstrated here:

SC11.png

 

How it works...

 

When a machine is chosen to be provisioned with ESXi and is powered on, it does a PXE boot by fetching an IP address from the DHCP server. The DHCP scope configuration option 66 and 67 will direct the server to contact the TFTP server and load the bootable PXE image and an accompanying configuration file. There are three different ways in which you can configure the DHCP server for the auto deployed hosts:

1. Create a DHCP scope for the subnet to which the ESXi hosts will be connected, and configure scope options 66 and 67.

2. If there is already an existing DHCP scope for the subnet, then edit the scope options 66 and 67 accordingly.

3. Create a reservation under an existing or a newly created DHCP scope using the MAC address of the ESXi host.

Large-scale deployments avoid creating reservations based on the MAC addresses, because that adds a lot of manual work, whereas the use of the DHCP scope without any reservations is much preferred.
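
On a Windows DHCP server, the same scope options can also be set from PowerShell with the DhcpServer module instead of the dhcpmgmt.msc GUI. This is a sketch; the scope ID, TFTP server name, and boot file value are examples:

```powershell
# Set option 66 (Boot Server Host Name) and 67 (Bootfile Name) on the PXE scope
Set-DhcpServerv4OptionValue -ScopeId 192.168.1.0 -OptionId 66 -Value "tftp.lab.local"
Set-DhcpServerv4OptionValue -ScopeId 192.168.1.0 -OptionId 67 -Value "undionly.kpxe.vmw-hardwired"

# Verify the configured scope options
Get-DhcpServerv4OptionValue -ScopeId 192.168.1.0 | Where-Object { $_.OptionId -in 66, 67 }
```

Scripting the options this way is convenient when the same PXE configuration has to be applied to several scopes.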

 

Preparing vSphere environment – create host profile, configure the deploy rules and activate them

Thus far, we have ensured that auto deploy services are enabled, and the environmental setup is complete in terms of DHCP configuration and TFTP configuration. Next, we will need to prepare the vSphere environment to associate the appropriate ESXi image to the servers that are booting in the network. In this recipe, we will walk through the final steps of configuring auto deploy by creating a software depot with the correct image, then we will create auto deploy rules and activate them.

 

How to do it...

The following procedure prepares the vSphere environment to work with auto deploy:

1. Log in to vCenter Server.

2. Navigate to Home | Host Profiles as shown here:

3. Click on Extract Profile from host as shown:

 

4. Choose a reference host based on which new hosts can be deployed and click on Finish:

5. Navigate to Home | Auto Deploy.

6. Click on Software Depots | Import Software Depot, provide a suitable name and browse to the downloaded offline bundle as shown here:

7. Click on the Deploy Rules tab and then click on New Deploy Rule.

8. Provide a name for the rule and choose the pattern that should be used to identify the target host; in this example we have chosen the IP range defined in the DHCP scope. Multiple patterns can also be nested for further validation:

9. Choose an image profile from the list available in the software depot as shown here:

10. (Optional) Choose a host profile as shown here:

11. (Optional) In the Select host location screen, select the inventory and click on OK to complete:

12. Click on Activate/Deactivate rules.

13. Choose the newly created rule and click on Activate as shown here:

14. Confirm that the rule is Active as shown here:

How it works...

To prepare the vSphere environment for auto deploy, we perform the following steps:

1. Create a host profile from a reference host; a host profile conserves the effort of replicating much of the commonly used configuration parameters in the environment. There is a natural cohesion between this feature and auto deploy.

2. Create a software depot to store image profiles, typically more than one depending on the environment needs.

3. Create deploy rules to match specific hosts to specific images.
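
The same preparation can be scripted with the Auto Deploy PowerCLI cmdlets instead of the web client. This is a sketch; the depot path, image profile name, host profile name, and IP pattern are examples:

```powershell
# Add the ESXi offline bundle as a software depot
Add-EsxSoftwareDepot "C:\depot\ESXi-offline-bundle.zip"

# Create a rule that maps hosts in the DHCP range to an image profile and host profile
$rule = New-DeployRule -Name "LabRule" `
    -Item (Get-EsxImageProfile -Name "ESXi-standard"), (Get-VMHostProfile -Name "RefHostProfile") `
    -Pattern "ipv4=192.168.1.50-192.168.1.100"

# Activate the rule by adding it to the working rule set
Add-DeployRule -DeployRule $rule
```

Scripting the rules makes it easy to version-control them and to rebuild the auto deploy configuration in a disaster recovery scenario.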

 

In a complex and large infrastructure, there could be heterogeneous versions of products in terms of software, hardware, drivers, and so on. Hence, the auto deploy feature enables the creation of multiple image profiles and a set of rules through which targeted deployments can be performed. In addition, auto deploy use cases stretch beyond typical deployments to managing the life cycle of the hosts, by accommodating updates and upgrades as well.

There are two primary modes of auto deploy:

Stateless caching: On every reboot, the host continues to use vSphere auto deploy infrastructure to retrieve its image. However, if auto deploy server is inaccessible, it falls back to a cached image.

Stateful install: In this mode, an installation is performed on the disk and subsequent reboots would boot off the disk. This setting is controlled through the host profile setting system cache configuration.

 

Enabling stateless caching

In continuation of the previous recipe, an administrator can control whether an ESXi host boots from auto deploy on every reboot, or performs an installation through auto deploy and has subsequent reboots load the image from disk. The option to toggle between stateless and stateful is set by amending the host profile. In this recipe, we shall walk through the steps to enable stateless caching.

How to do it...

1. Log in to vCenter Server.

2. Navigate to Home | Host Profiles.

3. Select the host profile and click on Edit host profile.

4. Expand Advanced Configuration Settings and navigate to System Image Cache Configuration as shown here:

5. Select Enable stateless caching on the host or Enable stateless caching to a USB disk on the host.

6. Provide inputs for Arguments for first disk or leave them at the default: this is the order of preference of the disks on which caching would be performed. By default, it will detect and overwrite an existing ESXi installation. If the user indicates a specific disk make, model, or driver, the disk matching that preference is chosen for caching:

7. For the option Check to overwrite any VMFS volumes on the selected disk, leave it unchecked. This would ensure that if there were any VMs on the local VMFS volume, they are retained.

8. For the option Check to ignore any SSD devices connected to the host, leave it unchecked. You may need to enable this setting only if you have SSD for specific use cases for the local SSD, such as using vFlash Read Cache (vFRC).

 

How it works...

The host profile directs the installation mode in an auto deploy-based deployment. In a data center where we see blade architectures prevalent, the local storage is rather limited and data is more often stored in external storage with the exception of hyperconverged infrastructures. The stateless caching feature specifically aids in such scenarios to limit dependency on local storage. In addition, users may also choose the option to enable stateless caching to USB disk.

 

Enabling stateful install

While the stateless caching feature is predominantly built to tackle disk specific limitations on server hardware, the stateful install mode is more of a legacy installation through PXE mechanism. Apart from the installation procedure that is set to scale, it mimics the attributes of a standard manual installation. In this recipe, we shall walk through the steps to enable stateful install.

 

How to do it...

1. Log in to vCenter Server.

2. Navigate to Home | Host Profiles.

3. Select the host profile and click on Edit host profile.

4. Expand Advanced Configuration Settings and navigate to System Image Cache Configuration as shown here.

5. Select Enable stateful install on the host or Enable stateful install to a USB disk on the host:

6. Provide a value for Arguments for first disk or leave it at the default. This sets the order of preference for the disks to be used for installation. The administrator may also specify a particular disk make, model, or driver, in which case the disk matching that preference is chosen for installation.

7. For the option Check to overwrite any VMFS volumes on the selected disk, leave it unchecked. This ensures that any VMs on a local VMFS volume are retained.

8. For the option Check to ignore any SSD devices connected to the host, leave it unchecked. Enable this setting only if the local SSDs are reserved for specific use cases, such as vSphere Flash Read Cache (vFRC).

 

If this helped, please mark it helpful or correct.

PowerCLI Script to Automate vSwitch Configuration, Uplink Addition and Bulk Virtual Port Group Creation


I have created this script to automate configuring a vSwitch, its uplink, and bulk virtual port groups in a cluster. When you need to create hundreds or thousands of port groups, just append additional lines under the # This configures vSwitch0 and the VPG comment.

 

The intention of this script is to reduce manual work.

To run it, follow the procedure below:

  • Install PowerCLI 5.5/6.x on a jump box or the vCenter Server
  • Open VMware PowerCLI
  • Go to the path where you kept the script
  • Run the script: .\AddStandardSwitch.PS1
  • It will prompt for credentials to connect to the vCenter Server.
  • You can then watch the progress in PowerCLI

 

==============================================================================================================================

    <#

.SYNOPSIS

        Host Configuration for a Cluster

    .DESCRIPTION

        This script configures vSwitch0 on ESXi servers and creates vSwitch0 data networking for migration

 

    .NOTES

        Author: Nawal Singh

    .PARAMETER $VMCluster

        ESX(i) Host Configuration

          

.EXAMPLE

#********Reminder!!!   Open up a fresh PS window and then run this script.

    #>

#Connection to vCenter

 

 

$mycred = Get-Credential

Connect-VIServer "VCSA.local.com" -Credential $mycred

 

Write-Progress -Activity "Configuring Hosts" -Status "Working" ;

 

 

# Change this setting for the Cluster that will be configured

$VMCluster = "CL_MGMT01"

 

# This configures vSwitch0 and the VPG .

Get-Cluster $VMCluster | Get-VMHost  | New-VirtualSwitch -Name vSwitch0 -Nic vmnic2

Get-Cluster $VMCluster | Get-VMHost  | Get-VirtualSwitch -Name vSwitch0 | New-VirtualPortGroup -Name "PG_100" -vLanid 100

Get-Cluster $VMCluster | Get-VMHost  | Get-VirtualSwitch -Name vSwitch0 | New-VirtualPortGroup -Name "PG_200" -vLanid 200

Get-Cluster $VMCluster | Get-VMHost  | Get-VirtualSwitch -Name vSwitch0 | New-VirtualPortGroup -Name "PG_201" -vLanid 201

Get-Cluster $VMCluster | Get-VMHost  | Get-VirtualSwitch -Name vSwitch0 | New-VirtualPortGroup -Name "PG_MGMT_2010" -vLanid 2010

Get-Cluster $VMCluster | Get-VMHost  | Get-VirtualSwitch -Name vSwitch0 | New-VirtualPortGroup -Name "PG_vMotion_1010" -vLanid 1010

Get-Cluster $VMCluster | Get-VMHost  | Get-VirtualSwitch -Name vSwitch0 | New-VirtualPortGroup -Name "PG_FT_2023" -vLanid 2023

Get-Cluster $VMCluster | Get-VMHost  | Get-VirtualSwitch -Name vSwitch0 | New-VirtualPortGroup -Name "PG_Data_121" -vLanid 121

Get-Cluster $VMCluster | Get-VMHost  | Get-VirtualSwitch -Name vSwitch0 | New-VirtualPortGroup -Name "PG_OOB_Mgmt_105" -vLanid 105

Get-Cluster $VMCluster | Get-VMHost  | Get-VirtualSwitch -Name vSwitch0 | New-VirtualPortGroup -Name "PG_VMwareTEST_180" -vLanid 180

 

Disconnect-VIServer -Server *  -Force -Confirm:$false

===============================================================================================================================

Note: This has been tested in test, dev, and prod environments and worked perfectly.
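When the list of port groups grows into the hundreds, the repeated New-VirtualPortGroup lines in the script above can be collapsed into a data-driven loop. A minimal sketch, using the same sample names and VLAN IDs as the script:

```powershell
# Sketch: create port groups from a name -> VLAN ID table instead of one line each.
$portGroups = @{
    'PG_100'          = 100
    'PG_200'          = 200
    'PG_MGMT_2010'    = 2010
    'PG_vMotion_1010' = 1010
}

# One New-VirtualPortGroup call per entry, against vSwitch0 on every host in the cluster
$switches = Get-Cluster $VMCluster | Get-VMHost | Get-VirtualSwitch -Name vSwitch0
foreach ($pg in $portGroups.GetEnumerator()) {
    $switches | New-VirtualPortGroup -Name $pg.Key -VLanId $pg.Value
}
```

The table could equally be loaded from a CSV file with Import-Csv if the port group list is maintained elsewhere.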

 

Please mark this helpful or correct.

[VxRail] Migrating from an Existing Environment to a vSAN Environment, Revisited [Flings] [Cross vCenter Workload Migration Utility]


In this article, I will introduce the free Cross vCenter vMotion tool available from VMware Flings.

The previous articles in this series are:

 

        Migrating from an Existing Environment to a vSAN Environment: Part 1 [Introduction]

Migrating from an Existing Environment to a vSAN Environment: Part 2 [Installing PowerCLI]

Migrating from an Existing Environment to a vSAN Environment: Part 3 [vMotion from the Existing Environment]

[VxRail] Migrating from an Existing Environment to a vSAN Environment, Continued [Command Considerations for Production Environments]

 

 

It has been a while since the last post in this series.

This series has covered vMotion between vCenters in a non-shared SSO environment (i.e., without Enhanced Linked Mode) using PowerCLI.

This time, stepping away from PowerCLI, I will introduce a tool that accomplishes the same thing more easily.

 

What is VMware Flings?

The tool introduced here is published on VMware Flings.

     Flings | VMware Flings

Some of you may be wondering what exactly VMware Flings is.

The top of the VMware Flings site carries the following description:

     Flings are apps and tools built by our engineers and community that are intended to be explored.

 

In other words, it is a place where VMware's engineers publish the various handy tools and apps they have built (at least, that is my understanding).

The tools published there are generally not covered by VMware product support, but they can be quite useful.

 

Cross vCenter Workload Migration Utility

The tool can be downloaded from the following URL:

Cross vCenter Workload Migration Utility | VMware Flings

 

It is very easy to use: simply run it on a Windows PC with the JRE (Java Runtime Environment) installed.

You could probably figure it out from the instructions on the download page and the look and feel alone, but I will walk through it in this blog as well.

 

Advantages and Disadvantages

Advantages

Very easy to use

No commands to type

Loose environment requirements (PowerCLI is not required)

The port group each vNIC connects to, a pain point with PowerCLI, can be specified individually

 

Disadvantages

The tool itself is not supported (PowerCLI is also outside normal support, so perhaps there is little difference)

Troubleshooting a failure requires the same knowledge as an ordinary Cross vCenter vMotion

 

Environment Requirements

The requirements are as follows:

 

  • vCenter Server 6.0 Update 3 or above (ESXi hosts must also be 6.0u3+)
  • Java Runtime Environment 1.8-10
  • Web Browser
  • Please review https://kb.vmware.com/kb/2106952 for Cross vCenter vMotion requirements

 

There are no particular caveats. With vSphere 6.x, you can vMotion in either direction among 6.0, 6.5, and 6.7. (At the time of writing, compatibility with 7.x and later is unknown.)

Java is listed as 1.8-10, but my environment runs on Java build 1.8.0_231, so this does not seem to be a strict restriction.

As long as Java is installed it should not be limited to Windows, but I have not tried other platforms.

Chrome works fine as the browser.

The Cross vCenter vMotion restrictions are as described in the KB; the important point is that an Enterprise Plus license is required. Everything else can be worked around.

 

 

How to Use It

Note: This assumes a Windows environment, but it should work the same way on Linux.

 

First, download the file.

1.PNG

 

 

 

 

 

 

 

Usage is very simple: open PowerShell or a command prompt and run the downloaded jar file as follows.

PS C:\Users\Administrator\Downloads> java -jar .\xvm-3.1.jar

 

 

Specify the actual file name you downloaded; it varies by version.

 

 

The official instructions show how to pass the vCenter information at startup, but entering it later is simpler, so that is the approach I use here.

 

 

 

 

When startup completes, you will see output like the following.

 

PS C:\Users\Administrator\Downloads> java -jar .\xvm-3.1.jar

14:48:00 INFO  *** Cross vCenter vMotion Utility ***

14:48:02 INFO  Starting ApiController v3.1 on SAMPLEPCNAME with PID 8016 (C:\Users\Administrator\Downloads\xvm-3.1.jar started by kanedn in C:\Users\Administrator\Downloads)

14:48:04 DEBUG Running with Spring Boot v2.0.3.RELEASE, Spring v5.0.7.RELEASE

14:48:04 INFO  No active profile set, falling back to default profiles: default

14:48:09 INFO  Using app port 8443. The default port is 8443 and can be changed by using the -Dserver.port flag

14:48:09 DEBUG Initialized controller with empty state

14:48:13 INFO  Started ApiController in 12.276 seconds (JVM running for 15.589)

14:48:13 INFO  XVMotion app initialized successfully!

 

 

 

 

 

If you see output like the above, startup is complete.

 

 

 

 

The Java program is now running on your Windows PC, so access it in a browser.

 

2.PNG

https://localhost:8443

Port 8443 is used by default. It can be changed, but in most cases there is no need to.
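According to the startup log above ("The default port is 8443 and can be changed by using the -Dserver.port flag"), the port can be overridden at launch, for example:

```
# Run the utility on port 9443 instead of the default 8443
java -Dserver.port=9443 -jar .\xvm-3.1.jar
```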

 

 

 

 

 

 

Once the page opens, first click the Migrate button.

3.PNG

 

 

 

 

 

Next, click the Register button. On the screen that follows, register the source and target vCenters.

4.PNG

 

 

 

 

 

 

At this stage nothing should be registered yet, but if vCenters have already been registered, they are displayed like this:

5.PNG

 

 

 

 

 

 

To register a new vCenter, fill in the form with the required information and click Submit.

6.PNG

 

 

 

 

 

Register both the source and the target.

With both registered, it looks like this:

7.PNG

 

 

 

 

 

Once registration is done, click the Migrate button to move on to configuring the vMotion.

Note: despite the name vMotion Utility, cold migration is also possible.

 

Just enter the required details and click Submit.

8.PNG

 

When the migration starts, you are taken to the Task Information page.

 

9.PNG

 

Oops, this attempt immediately ended in an Error.

Clicking on the error shows the cause and other details under Info.

However, this information alone is often not enough to tell what went wrong.

 

10.PNG

 

 

 

If a problem occurs, you need to look at the logs on the vCenter side.

When a vMotion starts, a task is also created on the source vCenter. You can check the task's progress in the vSphere Client GUI.

If it fails, the task's error message can sometimes tell you what happened.

If the task message is not enough, you will need to look at the vCenter and ESXi logs.

 

Once the cause is removed and the vMotion goes well, the Task Information status changes to Running, the progress bar advances, and eventually reaches 100%.

Reaching 100% does not mean it is finished; wait for the status to change from Running to Success. It is not done until it shows Success.

Note that once the status is Running, closing the GUI does not interrupt the task; it runs to completion.

 

 

 

 

 

Troubleshooting

Although this was a lab environment, I ran into a good number of failures.

When a migration fails, I recommend investigating in the following order:

 

1. The error message in the Cross vCenter Workload Migration Utility web UI

2. The messages in the PowerShell window used to launch the utility

3. The recent tasks and their details in each vCenter GUI

4. The vCenter log (vpxd.log)

5. The ESXi logs (vpxa.log and hostd.log)

 

Below are some typical causes of failure; I hope they are useful as a reference when you run into trouble.

    1. The virtual hardware versions are compatible

    2. The DVS versions are the same (a stepping-stone VDS may be required)

    3. Specify a single host, not a cluster, as the target (targeting a cluster may fail if a host is Not Responding)

    4. The vCSAs can resolve each other's names

    5. There is connectivity between the vMotion networks

     

    For 1, the destination cluster must be interoperable with the VM's virtual hardware version. Be careful when migrating from a newer version to an older one.

     

    For 2, this is a DVS version restriction: the source and destination VDS versions must be the same. Since you can have multiple VDSes, creating a stepping-stone VDS of the same version resolves this.

    Note, however, that the stepping-stone VDS needs at least one uplink; without an uplink it will not appear as a VDS candidate.

     

    For 3, this is a best practice from experience: specify a single host, rather than a cluster, as the destination resource. If a host in the cluster has a problem (such as Not Responding), the migration can fail; in that case, explicitly specifying a single host avoids it. Note that this approach does not balance load automatically. DRS may rebalance the load after migration, but running every vMotion against a single host can hit the limit on simultaneous vMotions, so it is still safer to spread migrations across hosts from the start. See: Limits on Simultaneous Migrations

     

    For 4, this is a name resolution issue. vCenter needs name resolution when it looks up resources (vCenters, hosts, and so on) at the remote site, so pay attention to your DNS configuration.

     

    For 5, connectivity between the vMotion networks: naturally, vMotion cannot succeed unless the source and destination hosts can reach each other over the vMotion network.

    If vMotion network connectivity is difficult to arrange, a cold migration (with the VM powered off) uses the management network instead, which can serve as a workaround.

     

    Everything except 1 can usually be worked around, but a virtual hardware version, once upgraded, cannot be downgraded. If there is any chance you will migrate to a cluster running an older version, choose a hardware version compatible across 6.0 through 6.7 when creating the VM, and do not upgrade it carelessly.

     

     

     

    So, what do you think? Honestly, I think this is better than struggling through it with PowerCLI.

    It is roughly on par with PowerCLI in being unsupported, and with this tool there is no need to build out scripts the way you do with PowerCLI.

    I encourage you to give it a try.

    Network File Share via Per-App VPN using Workspace ONE


    I’m writing this post in the midst of the world trying to get a handle on the COVID-19 pandemic (date and time check: March 18, 2020). Because of COVID-19, a lot of countries have enforced social distancing and quarantine measures. Businesses are affected, and a lot of organizations now encourage working from home.

     

    Some of my customers who didn’t really consider giving their employees the ability to work remotely suddenly find themselves in a bind. Because now, it’s not simply an option or a nice-to-have feature, but a must-have requirement! And one of the asks I’ve been getting recently is for end users to be able to access the network file shares they use on their office PCs. Some folks don’t necessarily have all their files in the cloud, so this is still relevant.

     

    So if you’re already using Workspace ONE UEM for managing Windows 10 devices and wondering if you can get access to network file shares, you’ve come to the right place. Because yes, you CAN!

     

    Of course, we need a few things:

    1. Workspace ONE UEM
    2. VMware Tunnel deployed with per-app VPN enabled
    3. Enrolled Windows 10 machine with VMware Tunnel desktop
    4. Shared folder in your corporate network

     

    This post assumes you already have per-app VPN configured and working for Windows 10 in your environment. If you haven’t, refer to the VMware documentation here and here. This guide by Pim Van de Vis is an easy one to follow and walks you through the steps.

     

    Steps

    1. In the Workspace ONE UEM console, go to Groups and Settings\All Settings\System\Enterprise Integration\VMware Tunnel. [Pro Tip: Going to Groups & Settings\Configuration\Tunnel takes you to the same spot]

     

    2. Navigate to Device Traffic Rules and select Edit

     

    3. Click Add Windows or MacOS Application

    1-Device Traffic Rules.png

    4. Add SYSTEM as a Windows application. Refer to the screenshot below. Click Save.

     

    2-SYSTEM.png

    BONUS: You can also use Windows Remote Desktop, like in the screenshot below.

    3-RDP.png

    5. In the device traffic rules, add SYSTEM (and Remote Desktop, if you added it) to the application list with the action defined as Tunnel. This means that when you launch the application, traffic to the whitelisted destinations will go via the Tunnel. The rest of the traffic falls to the default rule, in this case bypass. Any other apps with the VPN profile will also fall to the default bypass rule.

    4-DTR.png

    6. After you click Save and Publish, the updated rules will be pushed to the devices. Check that your Windows Tunnel Client app has the updated rules. Note that you’ll only see it green/connected when you launch an app in the device traffic rules that uses the tunnel.

    5-Client.png

    Testing Time!

    Open File Explorer and type the IP (\\ip) or computer name of a machine in your corporate network. After authenticating with your Windows credentials (and assuming your account is allowed access to the folders), you will be able to view and edit documents.

    6-Shared folder.png

    7-Edit File.png
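If you prefer to mount the share rather than browse to it each time, a PowerShell sketch follows; the server name and share path are placeholders for your own environment:

```powershell
# Sketch: map the corporate share to a drive letter once the Tunnel is up.
# \\fileserver01\shared is a placeholder UNC path - substitute your own.
$cred = Get-Credential   # your Windows/domain credentials for the share
New-PSDrive -Name Z -PSProvider FileSystem -Root '\\fileserver01\shared' `
    -Credential $cred -Persist

Get-ChildItem Z:\        # list the share contents to confirm access
```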

     

    As with any VMware Tunnel implementation, ensure that you practice the principle of least privilege. Consult with your network team to allow the Tunnel appliance access only to the resources your end users are meant to reach.

    Credits to Alex Loh for helping test this out and providing all the screenshots.

    Identify VMDK sharing using PowerCLI


    # Connect to vCenter first
    Connect-VIServer vcentername

    $vmname = Read-Host -Prompt "Enter the VM Name"

    if (Get-VM $vmname -ErrorAction SilentlyContinue)
        {
        # Disks attached to the VM being checked
        $vdisk = Get-VM $vmname | Get-HardDisk

        $allvms = Get-VM

        foreach ($allvm in $allvms)
            {
            if ($allvm.Name -ne $vmname)
                {
                # Disks attached to each of the other VMs
                $vmdiskpath = Get-VM $allvm | Get-HardDisk

                foreach ($disk in $vdisk)
                    {
                    # A matching VMDK path means the disk is shared between the two VMs
                    if ($vmdiskpath.Filename -contains $disk.Filename)
                        {
                        Write-Host "There is a VMDK shared with $($allvm.Name)" -ForegroundColor DarkGreen
                        }
                    }
                }
            }
        }
    else
        {
        Write-Host "VM is not available in the vCenter, please check the VM name"
        }
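As an alternative sketch (assuming the same connected PowerCLI session), every shared VMDK in the inventory can be found in a single pass by grouping all disks on their backing file name, rather than comparing one VM against all the others:

```powershell
# Sketch: list every VMDK attached to more than one VM in one pass.
Get-VM | Get-HardDisk |
    Group-Object -Property Filename |
    Where-Object { $_.Count -gt 1 } |
    ForEach-Object {
        # Parent is the VM each hard disk belongs to
        $owners = ($_.Group | ForEach-Object { $_.Parent.Name }) -join ', '
        Write-Host "$($_.Name) is shared by: $owners"
    }
```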

    VMware TAM Source 11.20




    FROM THE EDITORS VIRTUAL DESK
    Hi everyone, welcome to the latest VMware TAM newsletter, which is being sent to you during the VMworld 2019 event taking place in Europe. We have added as much news and as many updates from the event as possible in this week's newsletter and will continue that next week as well. If you are attending, I hope you are having a really productive time and enjoying all of the amazing sessions being presented.

    I wish you all a fantastic week ahead and look forward to bringing you the latest VMware and virtualization news next week from VMworld and beyond.

    Virtually Yours
    VMware TAM Team

    Newsletter | SignUp | Archive | Twitter | FB | YT | LI

    -
    ANNOUNCEMENTS @ VMWORLD EUROPE
    VMworld 2019 Europe Day 1 General Session
    VMworld 2019 Europe Day 2 General Session
    Project Maestro Overview
    Announcing vRealize Network Insight 5.1
    VMware Introduces “Workspace ONE for Microsoft Endpoint Manager” to Enable Modern Management for Windows 10
    Announcing VMware NSX Distributed IDS/IPS
    VMware Tanzu Progress and Design Principles

    TAM WEBINARS @ VMWARE
    November 2019 – Bitfusion Technical Overview
    Date: Thursday, November 14th
    Time: 11:00am EST/ 10:00am CST/ 8:00am PST
    Duration: 1 Hour

    Synopsis:
    This session will provide a technical overview of the Bitfusion product along with a timeline of how the product will move forward at VMware. In the overview and demo, our tech team will discuss and show how Bitfusion’s FlexDirect provides network-attached GPUs (full or partial) in order to create an elastic infrastructure for today’s AI/ML workloads. VMware plans to integrate this technology into the core vSphere platform and use it as an entry point for integration with other hardware acceleration technologies (for example – FPGAs). Bitfusion was acquired by VMware in August of 2019.
    Guest speakers:
    Peter Buckingham – Director of Engineering, VMware (former VP Engineering at Bitfusion)
    Jim Brogan – Technical Marketing Manager, VMware (former Technical Engineer for Bitfusion)
    Registration Link:
    https://VMware.zoom.us/webinar/register/WN_AebrqlmCTHGycFJ8Ij5XKA

    New Version of Proactive Support with Skyline Now Available
    We’re excited to announce that a new release of Skyline Advisor is now available, with new features and functionality designed to improve customers’ proactive support experience.
    Highlights of the release include:

    • Consolidated Recommendations: Easily navigate through upgrade recommendations based on current product versions, and view the Findings that will be remediated with each upgrade.
    • Analysis Timestamp: Message at top of each page indicates when the most recent data analysis was performed.
    • Data Analysis Message Upon First Login: When customers log into Advisor for the first time after installing a Skyline Collector, they will see an alert message informing them that data analysis is in progress and they will begin seeing proactive Findings and Recommendations within 48-52 hours.

    To learn more about these updates, please visit:
    Skyline Blog
    Skyline Documentation

    NEWS AND DEVELOPMENTS FROM VMWARE

    Open Source Blog

    Network Virtualization Blog

    vSphere Blog

    Cloud management Blog

    Cloud Native Blog

    EUC Blog

    Cloud Foundation Blog

    EXTERNAL NEWS FROM 3RD PARTY BLOGGERS

    Virtually Ghetto

    ESX Virtualization

    Cormac Hogan

    Scott's Weblog

    vSphere-land

    NTPRO.NL

    Virten.net

    vEducate.co.uk

    vSwitchZero

    vNinja

    VMExplorer

    KB Articles

     

     

    DISCLAIMER
    While I do my best to publish unbiased information specifically related to VMware solutions there is always the possibility of blog posts that are unrelated, competitive or potentially conflicting that may creep into the newsletter. I apologize for this in advance if I offend anyone and do my best to ensure this does not happen. Please get in touch if you feel any inappropriate material has been published. All information in this newsletter is copyright of the original author. If you are an author and wish to no longer be used in this newsletter please get in touch.

    © 2019 VMware Inc. All rights reserved.

    VMware TAM Source 11.22


     

    FROM THE EDITORS VIRTUAL DESK
    Hi everyone, we are in the final stretch of 2019 and our 22nd newsletter of the year. We have tried to bring you a newsletter almost every 2 weeks for this past year with all of the news and other interesting items from the world of VMware and virtualization in general. This year we have noticed a definite shift in the amount of content related to containers, Kubernetes and modernizing applications in general. This is a theme that will be key in 2020, and VMware is providing many tools and technologies in this area such as Tanzu and Project Pacific. With that in mind we will be adding a new section to the newsletter on "Modern Applications" to include these related articles so please look out for this new section. We begin this week with a TAM Webinar on Container Orchestration using Kubernetes, details below. As always we will continue to bring as much important news and information to you for the final newsletters of 2019 and onward in 2020.

     

    Until the final newsletter of 2019, please enjoy this week's news and don't forget to check the new KB articles as well as upcoming TAM webinars and updates to Skyline below!

     

    Virtually Yours
    VMware TAM Team

    Newsletter | SignUp | Archive | Twitter | FB | YT | LI

     

    -
    TAM @ VMWARE
    Upcoming TAM Webinar:

    Topic: Container Orchestration using Kubernetes
    Lots of exciting things are happening at VMware with regards to container orchestration using Kubernetes. In this session, Scott Lowe will peer into the future of Kubernetes at VMware and provide some tips for making the most of what VMware is doing in this space.

     

    Guest speaker:
    Scott Lowe - Staff Kubernetes Architect, VMware (Cloud Native Apps Business Unit)  https://blog.scottlowe.org

     

    TAM Webinar Series
    Thursday, December 12, 2019
    11:00am EST/ 10:00am CST / 8:00am PST
    Register Here!

     

    SKYLINE UPDATE
    New Skyline Release Includes Dell EMC Support Assist Integration
    We’re excited to announce a new VMware Skyline release that includes several new features designed to provide even more value to you and your organization. Be sure to update your Skyline Collectors to the latest version (2.3) to take advantage of the new features and functionality. If you have the “Auto Update” feature enabled, your Collectors will update approximately within the next seven days. Please continue reading the blog post here!

     

    ---
    NEWS AND DEVELOPMENTS FROM VMWARE

    Open Source Blog

    Network Virtualization Blog

    vSphere Blog

    Cloud management Blog

    Cloud Native Blog

    EUC Blog

    Cloud Foundation Blog

    EXTERNAL NEWS FROM 3RD PARTY BLOGGERS

    Virtually Ghetto

    ESX Virtualization

    Cormac Hogan

    Scott's Weblog

    vSphere-land

    NTPRO.NL

    Virten.net

    vEducate.co.uk

    vSwitchZero

    vNinja

    VMExplorer

    KB Articles

    DISCLAIMER
    While I do my best to publish unbiased information specifically related to VMware solutions there is always the possibility of blog posts that are unrelated, competitive or potentially conflicting that may creep into the newsletter. I apologize for this in advance if I offend anyone and do my best to ensure this does not happen. Please get in touch if you feel any inappropriate material has been published. All information in this newsletter is copyright of the original author. If you are an author and wish to no longer be used in this newsletter please get in touch.

    © 2019 VMware Inc. All rights reserved.

    VMware TAM Source 11.21


     

    FROM THE EDITORS VIRTUAL DESK
    Hi everyone, we're heading towards the final month of 2019 with the conclusion of VMworld Europe last week, vFORUMs in other countries, and many of the key announcements gaining traction. One of the key items that VMware announced was "Project Pacific". From the blog post by Kit Colbert: "Today VMware announced Project Pacific, what I believe to be the biggest evolution of vSphere in easily the last decade. Simply put, we are re-architecting vSphere to deeply integrate and embed Kubernetes. The introduction of Project Pacific anchors the announcement of VMware Tanzu, a portfolio of products and services that transform how the enterprise builds software on Kubernetes." In addition, you can also sign up for the Beta here. Kubernetes and the related DevOps services and principles have become key for IT organizations to adopt. Getting started with VMware will provide an enterprise approach, which is more familiar to IT.

     

    If you haven't taken a look at VMware Skyline we have information on the latest edition below. Please take a look and see how Skyline can help with your VMware support. Skyline Documentation.

     

    I wish you a fantastic week ahead. Please don't forget to check the new KB items below as well as the updates from VMworld, including the VMworld Europe videos on the VMworld.com website (free registration is required for access).

     

    Virtually Yours
    VMware TAM Team

    Newsletter | SignUp | Archive | Twitter | FB | YT | LI

     

    -
    NEWS AND DEVELOPMENTS FROM VMWARE
    TAM @ VMWARE
    December 2019 – Container Orchestration using Kubernetes

    Date: Thursday, December 12th
    Time: 11:00am EST/ 10:00am CST/ 8:00am PST
    Duration: 1.5 Hour

    Synopsis:
    Lots of exciting things are happening at VMware with regards to container orchestration using Kubernetes. In this session, Scott Lowe will peer into the future of Kubernetes at VMware and provide some tips for making the most of what VMware is doing in this space.
    Guest speakers:
    Scott Lowe - Staff Kubernetes Architect, VMware (Cloud Native Apps Business Unit)
    Registration Link:
    https://vmware.zoom.us/webinar/register/WN_kmjffhX5Q46zdZHjPdhC5Q

    New Version of Proactive Support with Skyline Now Available
    We’re excited to announce that a new release of Skyline Advisor is now available, with new features and functionality designed to improve customers’ proactive support experience.
    Highlights of the release include:

    • Consolidated Recommendations: Easily navigate through upgrade recommendations based on current product versions, and view the Findings that will be remediated with each upgrade.
    • Analysis Timestamp: Message at top of each page indicates when the most recent data analysis was performed.
    • Data Analysis Message Upon First Login: When customers log into Advisor for the first time after installing a Skyline Collector, they will see an alert message informing them that data analysis is in progress and they will begin seeing proactive Findings and Recommendations within 48-52 hours.

    To learn more about these updates, please visit:
    Skyline Blog
    Skyline Documentation

    Open Source Blog

    Network Virtualization Blog

    vSphere Blog

    Cloud management Blog

    Cloud Native Blog

    EUC Blog

    Cloud Foundation Blog

    EXTERNAL NEWS FROM 3RD PARTY BLOGGERS

    Virtually Ghetto

    ESX Virtualization

    Cormac Hogan

    Scott's Weblog

    vSphere-land

    NTPRO.NL

    Virten.net

    vEducate.co.uk

    vSwitchZero

    vNinja

    VMExplorer

    KB Articles


    DISCLAIMER
    While I do my best to publish unbiased information specifically related to VMware solutions there is always the possibility of blog posts that are unrelated, competitive or potentially conflicting that may creep into the newsletter. I apologize for this in advance if I offend anyone and do my best to ensure this does not happen. Please get in touch if you feel any inappropriate material has been published. All information in this newsletter is copyright of the original author. If you are an author and wish to no longer be used in this newsletter please get in touch.

    © 2019 VMware Inc. All rights reserved.

    VMware TAM Source 12.01




    FROM THE EDITORS VIRTUAL DESK
    Hi everyone, welcome to the first newsletter to kick off 2020. I am really excited to be able to bring you all of the news from the world of Virtualization and VMware for the next 12 months. This is the 12th year of our newsletter and hopefully we will be able to deliver as much news and other information as possible to you that you will find useful. If you are new to the newsletter a quick recap on how it works. Every 2-3 weeks we collate as much useful news and other items of interest including upcoming TAM program webinars, VMware education updates and anything else that we feel is worthy of being in the newsletter. We also include the latest KB articles digest for review. All you need to do to receive this when it is sent is be a subscriber. If you received this from a colleague or via some other method feel free to subscribe yourself to the mailing list if appropriate, and of course pass this on to your fellow colleagues that might find it useful.

    We kick off the new year with lots of great news and information so please enjoy the first newsletter of 2020.

    Virtually Yours
    VMware TAM Team

    Newsletter | SignUp | Archive | Twitter | FB | YT | LI

    TAM@VMWARE
    January 2020 - VMware TAM Customer Webinar

    Please join us for the first webinar of 2020 when Principal Cloud Solutions Architect Adam Osterholt will provide an update on VMware’s public cloud strategy. He will also touch on some tooling that will streamline this journey. This is a session you don’t want to miss!
    Speaker: Adam Osterholt – Principal Cloud Solution Architect focused on VMware Cloud on AWS, SME for Cloud-Native Apps, and member of the OCTO Global Field at VMware.
    Time: Jan 9, 2020 11:00 AM in Eastern Time (US and Canada)
    REGISTER HERE!

    KUBERNETES@VMWARE
    Backup and migrate Kubernetes resources and persistent volumes
    Velero is an open source tool to safely backup and restore, perform disaster recovery, and migrate Kubernetes cluster resources and persistent volumes.
    Get it here: https://velero.io

    NEWS AND DEVELOPMENTS FROM VMWARE

    Open Source Blog

    Network Virtualization Blog

    vSphere Blog

    Cloud management Blog

    Cloud Native Blog

    EUC Blog

    Cloud Foundation Blog

    EXTERNAL NEWS FROM 3RD PARTY BLOGGERS

    Virtually Ghetto

    ESX Virtualization

    Cormac Hogan

    Scott's Weblog

    • Looking Back: 2019 Project Report Card
      As has been my custom over the last five years or so, in the early part of the year I like to share with my readers a li...
    • New Year, New Adventure
      I’ll skip the build-up and jump straight to the whole point of this post: a once-in-a-lifetime opportunity has com...
    • Technology Short Take 122
      Welcome to Technology Short Take #122! Luckily I did manage to get another Tech Short Take squeezed in for 2019, just so...
    • Technology Short Take 121
      Welcome to Technology Short Take #121! This may possibly be the last Tech Short Take of 2019 (not sure if I’ll be ...
    • Technology Short Take 120
      Welcome to Technology Short Take #120! Wow…hard to believe it’s been almost two months since the last Tech ...

    vSphere-land

    NTPRO.NL

    Virten.net

    vEducate.co.uk

    vSwitchZero

    vNinja

    VMExplorer

    KB Articles


    -

    DISCLAIMER
    While I do my best to publish unbiased information specifically related to VMware solutions there is always the possibility of blog posts that are unrelated, competitive or potentially conflicting that may creep into the newsletter. I apologize for this in advance if I offend anyone and do my best to ensure this does not happen. Please get in touch if you feel any inappropriate material has been published. All information in this newsletter is copyright of the original author. If you are an author and wish to no longer be used in this newsletter please get in touch.

    © 2019 VMware Inc. All rights reserved.

    VMware TAM Source 11.23




    FROM THE EDITORS VIRTUAL DESK
    Hi everyone, this is the final newsletter for 2019 and we have had a fantastic year of news from VMware and virtualization. I think for me the highlights were of course VMworld in both the US and Europe as well as the many vFORUM and VMUG events around the world that keep us all together to collaborate on our chosen technology. There have also been some major updates from VMware, specifically with the addition of various new organizations such as Carbon Black, Pivotal and Avi Networks, to name a few. These have also provided a platform for our increased collaboration in the Open Source community, particularly focused around Blockchain and Kubernetes.

    The newsletter has always tried to keep up with this ever changing landscape that has been shifting to a more DevOps centric service. Over the past 11 years we have tried to continue this trend and ensure that you are given the news and content that is relevant and matters to you. We are looking forward to doing the same in 2020 and I am sure there will be even more changes that we will be focusing on.

    This week, as we end 2019, I want to introduce you to an exciting new feature - VMware TAM Labs. These are videos that our TAMs have created, and continue to create, on specific solutions based on their own home labs or their work with customers. This is a really exciting addition and I urge you to take a look at the intro here and the featured lab below.

    So with that, the end of 2019 is upon us, and I wish you all a Happy Holidays.

    Virtually Yours
    VMware TAM Team

    Newsletter | SignUp | Archive | Twitter | FB | YT | LI
    -
    TAM @ VMWARE
    TAM LABS
    VMware TAM Lab Program Overview - 5 minutes
    Quick overview video which outlines the VMware TAM Lab program. In the video we discussed the following items: Purpose/Mission, Benefits, Session Examples, Social Channels, Customer Facing, Call to Action.

    TAM Lab 004 - vRA 7.5 Deployment via Lifecycle Manager - 1 Hour
    Deployment of a minimal configuration of vRealize Automation 7.5 via vRealize Suite Lifecycle Manager in Steve Tilkens' (https://twitter.com/stevetilkens) lab. The install completed successfully despite some challenges getting through the pre-validation checks.

    TAM CUSTOMER WEBINAR
    Recording: December TAM Customer Webinar: Container Orchestration – Scott Lowe
    For those that were unable to attend December’s TAM Customer Webinar live, a recording link is now available at: https://vmware.zoom.us/recording/share/9r8Ow5i5jl_iPl2F7E8t5go91G7SHBwvEYGbhb-BAGiwIumekTziMw.

    VMware Public Cloud Strategy and Tools
    Presenter: Adam Osterholt
    Date: Thursday, January 9th
    Time: 11:00am EST/ 10:00am CST/ 8:00am PST
    Duration: 1.5 Hour
    Synopsis:
    Please join us for the first webinar of 2020 when Principal Cloud Solutions Architect Adam Osterholt will provide an update on VMware’s public cloud strategy. He will also discuss some tooling that will streamline this journey.  This is a session you don’t want to miss!
    Guest speaker:
    Adam Osterholt – Principal Cloud Solution Architect focused on VMware Cloud on AWS, SME for Cloud-Native Apps, and member of the OCTO Global Field at VMware.
    Registration Link:
    https://vmware.zoom.us/webinar/register/WN_4tgO6phXTm62h_nNhT_Fiw

    SKYLINE UPDATE
    New Skyline Release Includes Dell EMC Support Assist Integration
    We’re excited to announce a new VMware Skyline release that includes several new features designed to provide even more value to you and your organization. Be sure to update your Skyline Collectors to the latest version (2.3) to take advantage of the new features and functionality. If you have the “Auto Update” feature enabled, your Collectors will update within approximately the next seven days. Please continue reading the blog post here!

    NEWS AND DEVELOPMENTS FROM VMWARE

    Open Source Blog

    Network Virtualization Blog

    vSphere Blog

    Cloud management Blog

    Cloud Native Blog

    EUC Blog

    Cloud Foundation Blog

    EXTERNAL NEWS FROM 3RD PARTY BLOGGERS

    Virtually Ghetto

    ESX Virtualization

    Cormac Hogan

    Scott's Weblog

    • Technology Short Take 121
      Welcome to Technology Short Take #121! This may possibly be the last Tech Short Take of 2019 (not sure if I’ll be ...
    • Technology Short Take 120
      Welcome to Technology Short Take #120! Wow…hard to believe it’s been almost two months since the last Tech ...
    • KubeCon 2019 Day 3 and Event Summary
      Keynotes Bryan Liles kicked off the day 3 morning keynotes with a discussion of “finding Kubernetes’ Rail...
    • KubeCon 2019 Day 2 Summary
      Keynotes This morning’s keynotes were, in my opinion, better than yesterday’s morning keynotes. (I missed...
    • KubeCon 2019 Day 1 Summary
      This week I’m in San Diego for KubeCon + CloudNativeCon. Instead of liveblogging each session individually, I thou...

    vSphere-land

    NTPRO.NL

    Virten.net

    vEducate.co.uk

    vSwitchZero

    vNinja

    VMExplorer

    KB Articles



    VMware TAM Source 12.02


     

    FROM THE EDITORS VIRTUAL DESK
    Hi everyone and welcome to the second newsletter of 2020. It’s already the middle of January and there is so much news and so many updates to share with you. If you are not aware, we also have a few social media options for you, including Twitter, Facebook and LinkedIn, to keep you connected to our TAM community. We regularly post to these sources and they are also a great way to keep up to date.

     

    In keeping with our theme of modernizing applications, my question this week is: how are you managing containers in your organization? The proliferation of containers will continue to increase, and our tools to manage and facilitate all that is great with containers also continue unabated, but connecting all of these together can be really hard. Organizations are finding that as soon as their container count grows, the complexity grows by orders of magnitude, often outpacing their ability to come up with a sufficient strategy and oftentimes putting the brakes on. As TAMs, we are ideally positioned to assist our customers and help them make the best possible use of their VMware and related technology, so keep us informed and reach out to us for guidance; we are always here to help.

     

    I look forward to speaking to you in the next edition.

     

    Virtually Yours
    VMware TAM Team

    Newsletter | SignUp | Archive | Twitter | FB | YT | LI

     

    -
    KUBERNETES@VMWARE
    If you are just getting started with Kubernetes and looking for some easy-to-digest training, head over to KubeAcademy. There are numerous Kubernetes and container sessions you can take, such as: Containers 101, Kubernetes 101, Kubernetes In-Depth, How to prepare for the CKA Exam, and Cluster Operations. These are all provided by VMware and are very easy to get started with.

     

    TAM@VMWARE
    WEBINAR
    Recording: January TAM Customer Webinar: VMware Public Cloud Strategy – Adam Osterholt
    For those that were unable to attend January’s TAM Customer Webinar live, a recording link is now available at:
    https://vmware.zoom.us/rec/share/6OtXFO2o9WlIf7Pr7GaYcI4FNJ3fT6a81yQcqPINn08-t_ErwC39648I7RVK4R2X
    -
    Upcoming: February 2020 – Deep Dive into vROps Self-Driving Operations and Troubleshooting
    Date: Thursday, February 13th
    Time: 11:00am EST/ 10:00am CST/ 8:00am PST
    Duration: 1.5 Hour
    Synopsis: This session will review the tenets of vRealize Operations Manager 8.0 and explore the Self-Driving functionality and features found in the new Troubleshooting Workbench.
    Guest speaker: John Dias – Senior Technical Marketing Architect, Cloud Management BU
    Registration Link: https://VMware.zoom.us/webinar/register/WN_YiUemqi5RUisqIC9zzHMOw

     

    VMWARE RUNS ON VMWARE
    Topic:
    “1-Click Happiness—How Automation Accelerates VMware IT Service Delivery”
    Date/Time: January 23, 2020, 9 – 10 a.m. PT
    Description: Service delivery at VMware IT involves the seamless deployment of application features and their associated support infrastructure into a production environment. Discover how—by using VMware vRealize® Automation™ (vRA) to automate end-to-end infrastructure delivery—our team reduced provisioning times from weeks to hours, while also lowering operating costs. In addition, you'll learn how VMware vRealize® Code Stream™ (vRCS) provides the ability to model and automate a complete application development pipeline with one button.
    Registration Link: Click here
    Our Blog: Click here

     

    UPDATES
    vRealize Network Insight 5.1.0:
    - vRNI 5.1 Download Page
    App Volumes 4.0:
    - Downloads
    - Release Notes
    - Documentation
    VMware Tools 11.0.5:
    - Direct Download
    - Release Notes
    - VMware Tools Documentation
    VMware Cloud Foundation 3.9.1:
    - https://docs.vmware.com/en/VMware-Cloud-Foundation/index.html
    - Cloud Builder Download Link: https://my.vmware.com/group/vmware/details?productId=945&downloadGroup=VCF391
    VMware Validated Design 5.1.1:
    - http://vmware.com/go/vvd
    vCloud Director For Service Providers 10.0.0.1:
    - Download link
    - Release notes

     

    Security Announcements!
    VMSA-2020-0002 - VMware Tools workaround addresses a local privilege escalation vulnerability (CVE-2020-3941)

     

    NEWS AND DEVELOPMENTS FROM VMWARE

    Open Source Blog

    Network Virtualization Blog

    vSphere Blog

    Cloud management Blog

    Cloud Native Blog

    EUC Blog

    Cloud Foundation Blog

    EXTERNAL NEWS FROM 3RD PARTY BLOGGERS

    Virtually Ghetto

    ESX Virtualization

    Cormac Hogan

    Scott's Weblog

    • Looking Back: 2019 Project Report Card
      As has been my custom over the last five years or so, in the early part of the year I like to share with my readers a li...
    • New Year, New Adventure
      I’ll skip the build-up and jump straight to the whole point of this post: a once-in-a-lifetime opportunity has com...
    • Technology Short Take 122
      Welcome to Technology Short Take #122! Luckily I did manage to get another Tech Short Take squeezed in for 2019, just so...
    • Technology Short Take 121
      Welcome to Technology Short Take #121! This may possibly be the last Tech Short Take of 2019 (not sure if I’ll be ...
    • Technology Short Take 120
      Welcome to Technology Short Take #120! Wow…hard to believe it’s been almost two months since the last Tech ...

    vSphere-land

    NTPRO.NL

    Virten.net

    vEducate.co.uk

    vSwitchZero

    vNinja

    VMExplorer

    KB Articles


    VMware TAM Source 12.03




    FROM THE EDITORS VIRTUAL DESK
    Hi everyone, this is already the third newsletter of 2020 and we are almost at the end of January, so much news! This week we have TAM Webinars, new KB articles, Security Advisories, Kubernetes updates and more. One of the questions I have been getting recently is how to get started with Kubernetes. There are so many resources and guides, the list is endless. To help you get started, I always recommend the following.

    In addition, you might want to take a look at the VMware offerings at cloud.vmware.com, where you will find information on things such as PKS (Pivotal) and more related to Kubernetes. I hope you find these useful for getting started, and I also suggest discussing them with your TAM or VMware representative.

    Until the next edition...
    Virtually Yours
    VMware TAM Team

    Newsletter | SignUp | Archive | Twitter | FB | YT | LI

    TAM @ VMWARE
    February 2020 – Deep Dive into vROps Self-Driving Operations and Troubleshooting

    Date: Thursday, February 13th
    Time: 11:00am EST/ 10:00am CST/ 8:00am PST
    Duration: 1.5 Hour
    Synopsis:
    This session will review the tenets of vRealize Operations Manager 8.0 and explore the Self-Driving functionality and features found in the new Troubleshooting Workbench.
    Guest speaker:
    John Dias – Senior Technical Marketing Architect, Cloud Management BU
    Registration Link:
    https://VMware.zoom.us/webinar/register/WN_YiUemqi5RUisqIC9zzHMOw

    SECURITY ADVISORY

    VMSA-2020-0002 - VMware Tools workaround addresses a local privilege escalation vulnerability (CVE-2020-3941)
    Impacted Products: VMware Tools for Windows (VMware Tools)

    NEWS AND DEVELOPMENTS FROM VMWARE

    Open Source Blog

    Network Virtualization Blog

    vSphere Blog

    Cloud management Blog

    Cloud Native Blog

    EUC Blog

    Cloud Foundation Blog

    EXTERNAL NEWS FROM 3RD PARTY BLOGGERS

    Virtually Ghetto

    ESX Virtualization

    Cormac Hogan

    Scott's Weblog

    vSphere-land

    NTPRO.NL

    Virten.net

    vEducate.co.uk

    vSwitchZero

    vNinja

    VMExplorer

    KB Articles



