Channel: VMware Communities : Blog List - All Communities

Trying Out the vSAN Unmap Feature


Why Unmap Is Needed

While doing vSAN support, I occasionally get inquiries along the lines of "We're running out of capacity! Tell us how to free up space."

I used to get the same kind of question back when I did storage support. There is no secret trick that only the vendor knows, so basically the answer is either to delete data or to add disks and expand capacity.

Note: In some cases free space can be recovered by making the customer's usage more efficient, but that is generally the job of a paid consulting service rather than support. Personally, I think that if you are going to spend money on consulting, you might as well spend it on disks or on learning it yourself.

 

Of course, most customers already understand this and go ahead and delete data. But from customers who are not very familiar with VMware or storage, we sometimes hear:

"I deleted data in the guest OS, but the free space on vSAN did not increase."

vSAN is basically thin provisioned, so space is allocated to a virtual machine's virtual disk little by little as the VM consumes it.

Once a region has been allocated it stays allocated, so even if it is no longer in use it cannot be reclaimed afterwards. (This applies to releases prior to vSAN 6.7 U1.)

vSAN objects can also be provisioned like lazy-zeroed thick instead of thin, but the default is thin, so unless you specify otherwise you get the behavior described above.

Concretely, it works like this:

 

Say you create a virtual machine with a 100 GB thin-provisioned virtual disk on the vSAN datastore and install an OS on it.

If only about 10 GB is used right after the install, the virtual disk also consumes only about 10 GB on the vSAN datastore.

Now suppose the guest OS writes 50 GB of data. An additional 50 GB is allocated to the virtual disk on the vSAN datastore, for a total of 60 GB.

If you then delete that 50 GB of data inside the guest OS, disk usage inside the guest drops, but since 60 GB has already been allocated, the virtual disk still occupies 60 GB on the vSAN datastore (even though the guest OS is only using about 10 GB).

In this state, 50 GB is allocated to the virtual disk but not actually used.

That 50 GB cannot be used by other virtual disks on the vSAN datastore, nor is it being used by the guest OS that owns the disk, so it is simply wasted space.

 

In other words, if virtual machines on a vSAN datastore keep going through this cycle, you end up in a situation where vSAN's used capacity is clearly larger than the amount of data actually in use.

 

To solve this problem, you need a mechanism for returning space that the guest OS no longer uses to the vSAN datastore.

That mechanism is called Unmap.

It has been supported since vSAN 6.7 U1 (and is disabled by default).

Using it, you can reclaim the space that would otherwise be wasted from a capacity-management point of view.

 

How to Use Unmap

As mentioned above, Unmap is a vSAN 6.7 U1 feature, so as prerequisites both ESXi and vCenter must be on vSphere 6.7 U1 or later, and the vSAN disk format version must be 7 or higher.

https://kb.vmware.com/s/article/2145267

https://kb.vmware.com/s/article/2150753

 

The Unmap feature is disabled by default; how to enable and use it is described in the VMware documentation:

Reclaiming Space with SCSI Unmap

UNMAP/TRIM Space Reclamation on vSAN | vSAN Space Efficiency Technologies | VMware

 

I will leave the detailed procedure and prerequisites (such as the virtual hardware version) to the official documentation above, but the overall flow is as follows:

    • Confirm that the prerequisites are met.
    • Enable the feature via RVC (see the sketch below).
    • Configure each guest OS.
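As a rough sketch of the RVC step only: the lines below assume you run RVC on the vCenter Server Appliance, the cluster path (/localhost/DC/computers/vSAN-Cluster) is a placeholder, and -e enables the feature. Please take the exact vsan.unmap_support syntax for your version from the VMware documentation linked above.

(command) # rvc administrator@vsphere.local@localhost
(command) > vsan.unmap_support /localhost/DC/computers/vSAN-Cluster -e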

 

What Needs to Happen on the Guest OS Side

The Unmap operation is not initiated from the vSAN side; it is initiated from the guest OS.

This is because the vSAN datastore has no way of knowing which blocks are actually in use inside the guest, so the guest OS has to tell the storage (vSAN) which blocks can be released.

On Windows Server 2012 and later, the operation that triggers Unmap (Trim) is enabled by default, so no extra work should be needed. In Linux environments, however, some configuration may be required depending on the OS and setup.
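If you want to double-check a Windows guest, one option (assuming you can open an elevated command prompt in the guest) is the fsutil query below; a value of 0 in the output means delete notifications (Trim) are enabled, and 1 means they are disabled.

(command) C:\> fsutil behavior query DisableDeleteNotify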

Note: This is only my understanding, but Trim and Unmap mean the same thing; Trim tends to be used in OS contexts and Unmap in storage contexts.

They may differ in the strict sense, but at least in this article I treat Trim and Unmap as the same thing.

Note: For checking and testing the Trim settings on Windows Server, the following blog post (in Japanese) is a useful reference.

https://www.idaten.ne.jp/portal/page/out/secolumn/vmware/column52.html

 

With CentOS 7 + LVM

My environment was CentOS 7 using LVM.

In that case, the following steps were needed.

Editing lvm.conf

With LVM, it is the LVM layer that actually manages the disks, so Trim/Unmap has to be enabled in LVM as well.

Specifically, in /etc/lvm/lvm.conf, change the line

issue_discards = 0

to

issue_discards = 1
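To confirm the change took effect, a simple grep of the file is enough; the line should now read issue_discards = 1.

(command) # grep issue_discards /etc/lvm/lvm.conf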

 

Editing fstab

Trim/Unmap also needs to be enabled at the file-system level.

Specifically, edit /etc/fstab and add the discard option to the target file system.

In my environment, the LVM volume /dev/mapper/centos-home was formatted with XFS and mounted as /home, so fstab contained the following line:

/dev/mapper/centos-home /home                   xfs     defaults

I changed it to

/dev/mapper/centos-home /home                   xfs     defaults,discard

adding the discard option.
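As an aside, instead of the discard mount option (which issues Trim synchronously as files are deleted), you could leave fstab untouched and run fstrim on a schedule. A minimal sketch, assuming your util-linux build ships the fstrim.timer unit (check first with the list-unit-files command below); the crontab line is just an illustrative alternative:

(command) # systemctl list-unit-files | grep fstrim
(command) # systemctl enable fstrim.timer && systemctl start fstrim.timer

# alternatively, an example weekly entry in /etc/crontab
0 3 * * 0 root /usr/sbin/fstrim -a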

 

Running fstrim

Once the configuration is done, you need to actually run Trim.

According to the VMware documentation, running the fstrim command is the recommended way to do this.

To run it against the /home file system, use the following form.

Note: Delete any unneeded files at the guest OS level beforehand.

Note: Right after changing the settings, the command sometimes failed for me; in that case, reboot once and try again.

 

(command) # fstrim -v /home

(output) /home: 1.7 TiB (1838816620544 bytes) trimmed

Note: The -v option is optional, but without it the command prints nothing at all, so I always add it (it gives me some peace of mind).
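If fstrim reports errors or nothing seems to be reclaimed, it can be worth confirming that discard requests can make it through the whole stack (virtual disk, LVM, file system). One simple check, assuming a reasonably recent util-linux, is lsblk -D: non-zero DISC-GRAN and DISC-MAX values indicate that the device advertises discard/Unmap support.

(command) # lsblk -D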

 

 

Checking the Result and Progress of Unmap

When you run fstrim, the command returns fairly quickly.

Of course, that does not mean the Unmap processing itself finished in that short time.

My understanding is that the OS only instructs the storage to unmap; the actual work is done on the storage side.

With vSAN, too, you cannot check the progress or result from the OS. Instead, you check vSAN's performance statistics or the consumed capacity shown in vSphere.

 

Checking Unmap I/O

vSAN's performance view has a dedicated metric for Unmap I/O, so you can check it there.

Unmap I/O is recorded on the host where the target virtual machine is running.

In other words, if VM A, on which you ran Unmap, is running on ESXi host B, you need to look at the vSAN statistics of ESXi B.

Log in to the vSphere Client (HTML5), select ESXi B in the Hosts and Clusters view, and go to Monitor → vSAN → Performance.

 

Before Unmap

Below are the statistics before Unmap. You can see that Unmap IOPS stays at zero.

1.PNG

 

After Unmap

Below are the statistics after Unmap. Unmap IOPS has gone up, and Trim/Unmap throughput is recorded at the same time.

It seems safe to say that Unmap is in progress while Unmap I/O is present, and finished once it stops.

2.PNG

 

About the Unmap Result

You can confirm the result in the vSphere Client (HTML5) by checking that vSAN's free capacity has increased, or that the capacity the target virtual machine consumes on the datastore has decreased.

I forgot to take screenshots this time, but if you try this yourself, I recommend comparing the capacity before and after.

 

 

So, what do you think? Since a capacity utilization of around 70% is recommended for vSAN, you will rarely run out of space all of a sudden, but as free space shrinks it can affect vSAN's operation, hurt performance, or make it impossible to maintain availability during a failure.

Even if you are not currently struggling with free space, being able to reclaim unused capacity is never a bad thing.

Unmap is an unglamorous but important feature, so if you are running a release older than vSAN 6.7 U1, please consider upgrading.

I hope this post helps with your day-to-day operations and management.


About the Issue Where OVA/OVF Deployment Fails from the vSphere Web Client


I recently started following the following vSphere-related blog:

VMware Support Insider

 

It regularly highlights trending KB articles for each product, so as a support engineer it is something I definitely want to keep up with.

 

In a recent update, I came across the following KB:

Transferring files through vSphere Client might fail (2147256)

 

It describes how uploading files to a datastore or deploying an OVA/OVF via the vSphere Client can fail. From a support point of view, it is the kind of issue that makes you go "ah, that one again" every time an inquiry comes in.

 

The reason this KB caught my attention is that I did not know about it myself.

I knew the symptom well and had walked customers through the workaround many times, but I had never once managed to find this KB.

 

However, on closer inspection, the error screen shown when this issue occurs displays this very KB number...

 

ovf error.PNG

 

Many people do not send us the error screen, so I had never noticed. It was hiding in plain sight.

(Or rather, please notice it before opening a support request!)

 

 

It made me painfully aware of how much I still have to learn as a support engineer, and at the same time glad that I started following VMware Support Insider.

 

 

(Bonus) A Bit of VMware KB Trivia

VMware KBs show a Create Date and a Last Updated date, but be aware that the Create Date is not necessarily the external publish date.

The Create Date appears to be when the KB was first created, which is not necessarily when it was made publicly available.

For example, https://kb.vmware.com/s/article/70650 has a Create Date of 2019/6/10, but since that KB describes an improvement delivered in vSphere 6.7 U3, it was not actually published before vSphere 6.7 U3 was released (2019/8/20).

You will rarely need to care about the Create Date or Last Modified date, but if you do, please keep the above in mind.

Getting VMware tools version and status


Sharing my script to display the VMware Tools version and status.

 

From a Host or cluster level

Get-VM -Location "pit-esx33.sj" | % { Get-View $_.Id } |
Select-Object Name, @{Name="ToolsVersion"; Expression={$_.Config.Tools.ToolsVersion}}, @{Name="ToolStatus"; Expression={$_.Guest.ToolsVersionStatus}} | Sort-Object Name

 

From a VM list

$vmNames = Get-Content -Path D:\vmNames.txt
Get-VM -Name $vmNames | % { Get-View $_.Id } | Select-Object Name, @{Name="ToolsVersion"; Expression={$_.Config.Tools.ToolsVersion}}, @{Name="ToolStatus"; Expression={$_.Guest.ToolsVersionStatus}} | Sort-Object Name
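If you want to keep the results, the same pipeline can be written out to a file. A minimal sketch in the same style, assuming an existing PowerCLI connection to vCenter; the output path C:\temp\vmtools-report.csv is just an example:

# Export VMware Tools version/status for the VM list to CSV
$vmNames = Get-Content -Path D:\vmNames.txt
Get-VM -Name $vmNames | % { Get-View $_.Id } |
    Select-Object Name,
        @{Name="ToolsVersion"; Expression={$_.Config.Tools.ToolsVersion}},
        @{Name="ToolStatus"; Expression={$_.Guest.ToolsVersionStatus}} |
    Sort-Object Name |
    Export-Csv -Path C:\temp\vmtools-report.csv -NoTypeInformation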

Dell EMC UnityVSA Deploy & Configure

Introduction - When Where What Why?


So, in 2020 I started this blog. My goal is to share some of my thoughts and predictions about where technology, and the use of that technology, is going. I will try to share my thoughts on harnessing the technology and processes currently available to fast-track the delivery of services and become a strategic partner to the business.

A little about me: I have been actively working on some type of computer since 1991. While still in high school I developed a passion for blinking lights, whether on a hard drive or a network router. Since then I have been developing skills that let me pursue any opportunity that presents itself. What I have found is that having a skill set that is extremely broad yet focused around process has allowed me to understand many aspects of information technology, and it lets me adapt and learn as new technology comes to the fore.

During my career I have had the opportunity to act in many roles, each linked in some way to a business benefit that was being developed, improved, or maintained. Customer satisfaction and happiness are directly proportional to how well these services meet expectations; you may have noticed I did not say requirements. Requirements are documented and measurable; what I am talking about is the perception that the service is meeting the business need. My experience has led me to cut through all the smoke and mirrors and evaluate whether a user's expectation can be met in a cost-effective and efficient manner that will continue to deliver value to all parties in the future.

I will not get it right every time, but my goal is to provide a unique perspective on all the technology we interface with and leverage to get ahead of the pack. There will be some alignment with trends and technology where I see them adding value, but in my current role my perspective on where the industry is and what works or doesn't is appreciated. Don't take my perspective and forcefully apply it to your use cases; you need to develop your own perspective and understanding of your unique challenges.

An Example of Importance of Management and Controlling Virtual Infrastructure Resources


In one of my projects I ran into a nasty problem in a vSphere environment. It unfolded as follows:

First, the VCSA hit a low disk space condition and crashed. After increasing the size of its VMDK files and fixing that first problem, I found that one of the ESXi hosts in the cluster was unreachable: it showed as disconnected and vCenter could not connect to it, even though both systems were reachable from my client machine. Over SSH the ESXi host was accessible, but vCenter could not connect to this one host.

All network parameters, storage zoning settings, time settings, and service configurations were identical across the hosts. Unfortunately, syslog had not been configured and we had no access to the scratch logs for the period in which the issue occurred (I don't know why). An attempt to restart all of the host's management agents hung: services.sh restart got stuck and nothing really happened, and restarting vpxa and hostd individually did not fix the issue either.

The only error on the disconnected host's Summary tab said that vSphere HA was not configured and suggested removing the host and adding it back to vCenter, but I could not reconnect it. My best guess is that the root cause was the startup sequence of the ESXi hosts and the storage systems, because the tech support team had restarted some of them after hitting the problem; HA then automatically tried to migrate VMs from the offline host to the remaining online hosts, which is the point I would call a "complex disaster." Being stuck, I decided to disable HA and DRS in the cluster settings, but nothing changed and the problem persisted. After fixing the VCSA problem I suspected that rebooting the host might resolve the second issue, but a running VM operation prevented that, migration did not work, and we were at a loss.

I then shut down some non-essential VMs belonging to the disconnected host. After some CPU/RAM resources were released, the management agent restart (services.sh restart) completed successfully this time.

Reconnecting the VCSA to the problematic ESXi host then worked, and the problem was gone for good.

Afterwards I wrote a procedure for that company's IT department, a virtualization checklist:

1. Pay attention to the logs of your virtual infrastructure assets. Keep them locally in a safe repository and also on a syslog server.

2. Always monitor the used and free CPU/memory resources of the cluster and never exceed their thresholds, because a host failure may otherwise trigger cascading failures.

3. Monitor the status of your virtual infrastructure management services, including vCenter Server and NSX Manager, and their disk usage. Run "df -h" from the CLI or check the status of their VMDKs in the GUI (I explained how in this post; see also the sketch after this list).

4. In critical situations, and even during planned maintenance, always shut down the ESXi hosts first and the storage systems second; when bringing everything back, start the storage first and then the hosts.

5. Finally, please DO NOT disconnect the VCSA's vNIC from its port group if it is part of a distributed vSwitch. They did, and it made reconnecting the VCSA a painful exercise. Even if you restore a new backup of the VCSA, do not remove network connectivity from the failed VCSA until the problem is actually solved.
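As a minimal sketch of the disk-usage check in item 3 (assuming shell access to the VCSA or another Linux-based management appliance; the 80% threshold is an arbitrary example):

# Warn about any filesystem that is more than 80% full
df -h | awk 'NR>1 && int($5) > 80 {print "WARNING:", $6, "is", $5, "full"}'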

Link to my personal blog: Undercity of Virtualization: An Example of Importance of Management and Controlling Virtual Infrastructure Resources

 

Time differentiate between ESXi host & NTP Server


Yes, exactly: another post about the NTP service and the important role of time synchronization between virtual infrastructure components. In another post I described a problem with ESXi 6.7 time settings and covered some useful CLIs for time configuration, both manual and automated. But in a lab scenario with many versions of ESXi hypervisors (because of the server hardware, some of them cannot be upgraded to a newer ESXi release), we planned to configure an NTP server as the time source for the whole virtual environment (PSC, vCenter, ESXi hosts, and so on).

Our first NTP server, however, was a Microsoft Windows Server 2012 machine, and there was a deceptive issue. Although the time configuration had been done correctly and time synchronization was succeeding, while monitoring the NTP packets with tcpdump I suddenly saw the time shift to a different timestamp.

   ntp-problem .PNGntpconf.PNG

As a first troubleshooting step, I thought it might be caused by the vCenter Server's time zone (which turned out to be correct) or by a mismatch between the NTP client and NTP server versions. To check the NTP version on ESXi, use the NTP query utility (ntpq --version); you can also edit the ntp.conf file to pin an exact NTP version (vi /etc/ntp.conf and add "version #" to the end of the server line). But NTP is a backward-compatible service, so I concluded this was not the cause.

After further investigation into the cause, we decided to change our NTP server, in this case to a MikroTik router appliance. After the initial setup and NTP configuration on the MikroTik OVF, we switched our time source to it. Then, after setting the time manually again with "esxcli hardware clock" and "esxcli system time", we configured host time synchronization with NTP. The initial manual setting is needed because the time delta between the host and the NTP server must be less than about one minute.

  ntpdsvc.PNG

Then, after restarting the NTP service on the host (/etc/init.d/ntpd restart), I checked again to make sure the problem had been resolved.
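For reference, a minimal set of commands for this kind of check on an ESXi host might look like the following (a sketch only; exact output varies by ESXi version):

# Compare the system time and the hardware clock on the host
esxcli system time get
esxcli hardware clock get
# Restart the NTP daemon and list the peers it is synchronizing with
/etc/init.d/ntpd restart
ntpq -p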

ntp-check2.PNG

link of post in my personal blog: Undercity of Virtualization: Time differentiate between ESXi host & NTP Server

Using the Okta RADIUS Agent for VMware Horizon


In this blog we are going to discuss adding Multi-Factor Authentication using Okta Verify with VMware Horizon by leveraging the Okta Radius Agent.

For more information on this integration, please see https://www.okta.com/integrations/mfa-for-virtual-desktops/vmware/

 

We are going to walk through 3 separate deployment options to leverage the Okta Radius Client:

 

  1. Using Workspace ONE Access (formerly known as VMware Identity Manager)
  2. Using Unified Access Gateway (UAG)
  3. Using Horizon Connection Servers

 

Let's start with installing and configuring the Okta Radius Agent.

 

Installing the Okta Radius Agent

For detailed instructions please see: https://help.okta.com/en/prod/Content/Topics/Directory/Agent_Installing_the_Okta_Radius_Agent.htm

 

  1. Download the Okta RADIUS Agent from the Okta Admin Portal by going to Settings -> Downloads
    Screen Shot 09-06-19 at 03.11 PM.PNG
  2. Once downloaded, launch the installer.
  3. On the intro screen, click Next.
  4. Click Next to accept the license agreement.
  5. Select the correct installation path and click Install.
  6. Create a secret that will be used when configuring the RADIUS clients.
    Screen Shot 09-06-19 at 03.13 PM 003b.png
  7. If you require a proxy, complete this section; otherwise click Next.
  8. Click Next.
  9. Enter your tenant name (Note: do not enter the full URL) with the appropriate instance.
    Screen Shot 09-06-19 at 03.19 PM 001.PNG
  10. You will be redirected to your Okta tenant to Authenticate
    Screen Shot 09-06-19 at 03.19 PM.PNG
  11. Click Allow Access
    Screen Shot 09-06-19 at 03.20 PM.PNG
  12. You can then complete the installation.

 

Configure the Okta Radius Agent

 

The configuration for the Okta Radius Agent will be done within the Okta Admin Portal

 

  1. Click on Applications -> Applications
  2. Click New Application
  3. Search for "VMware Horizon View (RADIUS)" and Click Add
  4. Click Next
  5. Enter the UDP Port (1812)
  6. Enter the radius secret you used previously
  7. Select the correct username to match your environment.
    This is a very important step. For an optimal user experience, this should match your Horizon credentials. If you have multiple AD domains in your Horizon environment, this should include the domain (i.e., UPN or email).
  8. Click Done
  9. Click on the VMware Horizon View (RADIUS) application.
  10. Click Edit for the Advanced Radius Settings
  11. If you want to enable PUSH Notification, make sure the top two boxes are checked
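Before wiring the agent into Workspace ONE Access, UAG, or the Connection Servers, it can help to confirm that the agent answers RADIUS requests at all. A minimal sketch, assuming a Linux machine with the freeradius-utils package installed and PAP in use; okta-agent.example.com, port 1812, the username, and the secret are placeholders. With push enabled, the request will wait until you approve it in Okta Verify.

# Send a test Access-Request (PAP) to the Okta RADIUS Agent
radtest <okta-username> <password> okta-agent.example.com:1812 0 <shared-secret>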

 

Using Workspace ONE Access (formerly known as VMware Identity Manager)

 

  1. In the Workspace ONE Access Admin Console, go to Identity & Access Management -> Setup -> Connectors
  2. Click on your Worker to edit your connector configuration
  3. Click on Auth Adapters
  4. Click on the Radius Auth Adapter
  5. This will launch a configuration page running on your connector server.
    You will need connectivity to your connector server to complete this step.
    If you are presented with an access denied page, you might need to temporarily change your policy to Password.
  6. Add your Radius Server Host name, Port and Shared Secret. (Leave the Authentication Type as PAP)
    Screen Shot 09-10-19 at 10.08 AM.PNG

  7. Click Save
  8. Return to the WS1 Access Admin Console and verify the Radius Auth Method is enabled. (You might need to refresh)
  9. Go to Identity & Access Management
  10. Click on Identity Providers
  11. Click on your Built-In Identity Provider
  12. Under Connector Authentication Methods, select Radius (Cloud Deployment)
  13. Click Save
  14. Click on Policies
  15. Edit your appropriate policy to include "Radius (cloud deployment)". In my example, I'm modifying the Win10 rule in the Default Policy.
    Screen Shot 09-10-19 at 10.15 AM.PNG
  16. Click Save, Next and Save.
  17. Open an Incognito Window and we'll test the configuration
    Note: If you ever lock yourself out, you can always go to: https://[TENANT].vmwareidentity.com/SAAS/auth/0 to login using your System Domain Account.
  18. You will be prompted to enter your Okta Credentials
  19. You should be prompted to approve the authentication on your Okta Verify Application
    Apowersoft_Screenshot_2019_09_10_13_30_12.jpg

Using Unified Access Gateway (UAG)

 

In environments where a Unified Access Gateway is deployed, most customers will want to configure MFA here, as this appliance typically sits on the network edge. We can configure UAG to prompt for MFA using Okta Verify and then pass the credentials to Horizon to complete the authentication into the Horizon client.

 

Note: If you have multiple AD domains, you will need to ensure your login through Okta contains the domain name (ie. UPN/Email).

 

  1. Log into your UAG Admin Console
  2. Under Authentication Settings, click the gear icon for RADIUS
  3. Enable RADIUS, Select PAP and enter the host name and port for the Okta Radius Agent.
  4. Click Save
  5. Expand Edge Service Settings and edit the Horizon Settings
  6. Click on "More" (at the bottom)
  7. Under Auth Methods, select radius-auth
  8. You will also need to turn on "Enable Windows SSO" to prevent a second login prompt in the Horizon client.
  9. Click Save
  10. Test your configuration by logging into the Horizon Portal. You will be prompted for your Okta username and password
    Screen Shot 09-10-19 at 02.09 PM.PNG
  11. You will then be prompted to approve the Okta Verify request on your device.

 

 

Using Horizon Connection Servers

 

Radius can be configured directly on the Horizon Connection Servers. This allows for MFA to be configured for both internal and external users (assuming internal users are not going through UAG).

 

Note: If you have multiple AD domains, you will need to ensure your login through Okta contains the domain name (ie. UPN/Email).

 

  1. Log into your Horizon Admin console
  2. Edit your Connection Server Settings
  3. Under Advanced Authentication, select Radius
  4. Select "Use the same username and password for RADIUS and Windows Authentication
  5. On the Authenticator drop down, select Create New Authenticator
  6. Enter your host name, port and secret for the Okta Radius Agent
    Screen Shot 09-10-19 at 01.41 PM.PNG
  7. Click OK
  8. Click OK.
  9. Test your configuration by logging into the Horizon Portal. You will be prompted for your Okta username and password
  10. You will then be prompted to approve the Okta Verify request on your device.

Migrate External PSC Deployment to Embedded PSC deployment Using Converge Tool


Here is a brief description of how the Converge Tool works.

 

The Converge Tool can migrate an external PSC deployment to an embedded PSC deployment, and it can also decommission the external PSCs after the migration.

 

Note: Your vCenter components must first be upgraded to the 6.7 Update 1 appliance.

 

diagram.png

 

 

Prerequisites

 

1. Disable VCHA if it is enabled on 6.5.

2. Reduce or disable the DRS automation level.

3. Take VMware snapshots of all vCenter components; if possible, take a VCSA backup as well.

4. Remove the secondary NIC, if one is assigned, before the upgrade.

 

 

Using the Tool:

 

Step 1 - The VCSA 6.7 Update 1 ISO contains the vcsa-converge-cli tool, so mount the ISO on any Windows/Linux machine.

 

   diagram1.png

Here, the converge folder contains the converge JSON template and the decommission folder contains the decommission JSON template.

 

Step 2 - Expand the vcsa-converge-cli directory and go to templates. Open the converge directory and copy the converge.json file to your local machine.

 

Step 3 - Open the converge JSON file in your favorite editor; it looks like the screenshot below.

  diagram2.png

 

  Step 4 - Fill out the following fields and save the file.

  1. Information about the managing vCenter or ESXi host.
  2. Information about the vCenter Server you wish to converge to embedded.
  3. (Optional) Active Directory information, if you wish to join the embedded vCenter to AD.
  4. (Optional) Information about any other external PSCs you have.

  Step 5 - From a command prompt, run: vcsa-converge-cli\win32>vcsa-util.exe converge --no-ssl-certificate --verbose C:\<location of your .json file>

diagram3.png

 

Step 6 - The VCSA has been converged to an embedded PSC successfully.

diagram4.png

Verify from the VAMI page that the vCenter now has an embedded Platform Services Controller.

Note: Before moving on to decommissioning, make sure there are no vCenter Servers still associated with the external PSCs you are about to decommission.

Decommissioning Steps:

Step 1: Copy the decommission JSON file to the local machine and fill out the required fields:

  1. Information about the managing vCenter or ESXi host of the external PSC.
  2. Information about the Platform Services Controller you wish to decommission.
  3. Information about the managing vCenter or ESXi host of an embedded vCenter in the SSO domain.
  4. Information about the embedded vCenter in the SSO domain.

 

Here is a screenshot for reference.

diagram5.png

 

Repeat the above steps for the second PSC, if you have one.

 

This completes the VCSA convergence.

How to Keep URLs from Getting Messy When Using a Japanese Title on a VMTN Blog Post


I am not sure how many people will actually benefit from this bit of knowledge, but I am sharing it anyway.

Note: VMTN runs on a platform called Jive, so this should also apply to other sites on the same platform.

 

Problem Statement

I publish VMware-related knowledge in Japanese almost daily, and until recently one of my annoyances was how messy the URLs get.

Specifically, they end up looking like this:

Blog title: 【VxRail】既存環境からvSAN 環境へのMigration:その① 【イントロダクション】

URL:https://communities.vmware.com/people/nkaneda/blog/2019/09/24/vxrail-%E6%97%A2%E5%AD%98%E7%92%B0%E5%A2%83%E3%81%8B%E3%82%89vsan-%E7%92%B0%E5%A2%83%E3%81%B8%E3%81%AEmigration-%E3%81%9D%E3%81%AE-%E3%82%A4%E3%83%B3%E3%83%88%E3%83%AD%E3%83%80%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3

 

As you can see, the title becomes the URL, so writing a post with a Japanese title means the multibyte characters get URL-encoded and the URL becomes quite messy.

I also use this blog for work (for sharing knowledge, for example), so I disliked the messy URLs; you cannot even tell which article a URL points to just by looking at it.

Incidentally, this mechanism has another drawback: in the example above, the ① in "その①" apparently is not recognized correctly as a platform-dependent character, so ② and ③ end up being treated as the same title. As a result, writing parts ② and ③ produces the same URL, which is a bug.

 

Solution

The solution is quite simple.

When you first create the post, give it a title consisting only of ASCII characters and save it with Save Draft.

The blog URL is generated from the title at that point.

Even if you change the title afterwards, the URL is kept, so you can prevent it from getting cluttered with encoded characters.

The screenshot below shows this post being created; you can see that the title was initially written using only ASCII characters.

That initial title is what is reflected in the URL you are accessing right now.

1.PNG

 

It is a trivial bit of knowledge, but it makes copying and pasting URLs much cleaner and reduces the chance of accidentally sharing a link to the wrong article.

How to Manage Android Private Apps Deployment in Workspace ONE (Part 1)


Part 1 of 3

Part 2

Part 3

 

This step-by-step guide shows how to upload internal apps (apks you’ve developed) to the Android Managed Play Store for your organization via Workspace ONE. Subsequent sections also show how to add other versions for Alpha/ Beta testing in the Google Play console, then manage assignment of those versions to specific devices/ users in Workspace ONE.

 

Pre-requisites:

  1. Workspace ONE environment already registered to Android EMM

1-1.png

  2. Apk file with an application ID that has not been published in the Android public play store.

 

A. Publish a New Application

1. Login to the Workspace ONE UEM console.

2. Go to Apps & Books\ Applications\ Native\ Public\ Click “Add Application”.

3. Select Platform: Android. Name can be kept blank. Click “Next”.

2-3.png

4. Select the private apps icon on the left.

2-4.png

5. Click the “+” button to add a new app.

2-5.png

6. Make sure to add a Name, then select “Upload APK”.

2-6.png

The “Create” button will be enabled if the app can be uploaded.

2-6-2.png

7. You will see the app in the Private apps section, and a notification that publishing in your store may take up to 10 minutes.

2-7.png

8. Close this screen. The app you just uploaded will appear in the app list under Public apps.

2-8.png

(Optional) To edit the logo shown in the console, click on the pencil icon beside the app.  Note that this only updates the icon in UEM, not in the Play store.

2-8-2.png

2-8-3.png

9. Save and assign the app.

2-9.png

10. Click Add Assignment

2-10.png

11. Pick the organization group/ smart group you would like to assign the app to. Click add.

2-11.png

12. Update Assignment pop-up will appear. Click “Save and Publish” to confirm. Then “Publish” at the Preview Assigned Devices page.

2-12.png

2-12-2.png

13. You will return to the app list screen. If the deployment is set to "Automatic", the app will be installed automatically on the device and will appear in both the Workspace ONE Hub/Catalog and the Google Play Store.

2-13.png

2-13-3.png2-13-2.png

 

(continues to Part 2)

How to Manage Android Private Apps Deployment in Workspace ONE (Part 2)


Part 2 of 3

Part 1

Part 3

B. Add a New App Version

The steps below outline how to publish apps to the alpha or beta testing tracks in the Google Play console and then assign those versions to Workspace ONE UEM smart groups.

1. Login to the Workspace ONE UEM console

2. Go to Apps & Books\ Applications\ Native\ Public\ Click “Add Application”

3. Select Platform: Android. Name can be kept blank. Click “Next”

4. Select the private apps icon on the left.

3-4.png

3-4-2.png

5. Click “Make advanced edits” under Advanced editing options. This will take you to the Google Play console login page.

3-5.png

6. After logging in to the Google Play console using the google account tied to your Workspace ONE tenant, go to your app and navigate to Release management\ App release. You can select alpha or beta track. In this example, we will add an apk to the Alpha track. Click “Manage” in Alpha track.

3-6.png

7. In organizations, click “Edit”

3-7.png

8. Check the organization corresponding to the Workspace ONE organization group. Click “Done”.

3-8.png

9. Click “Edit Release”

3-9.png

10. Add the apk file. After adding the apk, you will see details about the version code and size of the file.

3-10.png

11. Click “Save” at the bottom of the page, then “Review”.

3-11.png

12. View any of the warning messages and make changes to the app, as necessary.

3-12.png

3-12-2.png

13. Click “Start Rollout”, then “Confirm” at the pop-up window.

3-13.png

14. In UEM console, select the app under Apps & Books\ Native\ Click “Assign”

3-14.png

15. Click “Add Assignment”

3-15.png

16. Select the assignment group that you want to receive the new (alpha) version of the app. Enable Managed Access and select Alpha as the pre-release version. Click "Add".

3-16.png

17. On the verification screen, move up the priority of the group to which the pre-release version is assigned.

3-17.png

18. Then click “Save and Publish”

3-18.png

19. Click “Publish” to confirm the assignment. This will make the alpha version of the app available to the devices belonging to the smart group chosen in Step 16.

3-19.png

Note: The same process can be used for a Beta release.

 

(continues to Part 3)

How to Manage Android Private Apps Deployment in Workspace ONE (Part 3)


Part 3 of 3

Part 1

Part 2

C. Release Alpha/ Beta track app to Production

1. In the Google Play console, while your app is selected, go to Release Management\ App releases. In Alpha, select “Manage”.

4-1.png

2. Select “Release to Production” at the Release section.

4-2.png

3. You will see the new release to production page.

4-3.png

Scroll to the bottom and click “Save”, then “Review”.

4-3-2.png

4. Click “Start Rollout to Production”. This will release the Alpha/Beta apk to the Production track.

4-4.png

5. The Alpha (or Beta) track will now be empty and show it was promoted to Production.

4-5.png

In UEM, all devices that have the production version assigned will see the update available in the managed Play Store.

If the Alpha (Beta) track is superseded, devices on that track will receive the production version of the app.

UEM currently whitelists the track to which it first sees the device assigned (following the priority order in the app's Assignments in UEM).

 

Note: It may take some time for a new version of the app uploaded in the Play console (or via Workspace ONE in the iFrame) to be installed automatically on the work profile; refer to the Google article below. To install an available update manually, the end user can go to the managed Play Store and check the My work apps\ Updates section.

References

https://developers.google.com/android/work/play/emm-api/distribute#distribute_apps_for_closed_testing

Manage app updates - Managed Google Play Help

https://docs.vmware.com/en/VMware-Workspace-ONE-UEM/1912/Application_Management-for-Android/GUID-49EE45D2-A44A-4695-B0E5-E45BEFC8FDA9.html
