[00:04:43] *** Ownage has quit IRC
[00:22:53] <kdavy_> lol. http://i.imgur.com/8vEXL.jpg
[01:09:27] *** Faithful has quit IRC
[01:10:52] *** Faithful has joined #Citrix
[01:15:58] *** jamesd_laptop has joined #Citrix
[01:18:07] *** jamesd2 has quit IRC
[02:14:19] *** jduggan has quit IRC
[02:14:25] *** jduggan has joined #Citrix
[02:57:07] *** gblfxt has quit IRC
[02:57:44] *** gblfxt has joined #Citrix
[03:17:03] *** neillom has quit IRC
[03:17:12] *** unop has quit IRC
[03:20:27] *** unop has joined #Citrix
[03:20:36] *** neillom has joined #Citrix
[03:27:36] *** jamesd__ has joined #Citrix
[03:29:31] *** jamesd__ has joined #Citrix
[03:29:51] *** jamesd_laptop has quit IRC
[04:21:57] *** RidaGee has quit IRC
[04:23:16] *** ScottCochran has joined #Citrix
[04:28:09] *** RidaGee has joined #Citrix
[05:52:56] *** fkreign has quit IRC
[06:10:25] *** Jenius has joined #Citrix
[06:30:13] *** Faithful has quit IRC
[06:58:20] *** jamesd_laptop has joined #Citrix
[06:58:24] *** Elias_Rus has joined #Citrix
[07:01:44] *** jamesd__ has quit IRC
[10:08:08] *** Zed`_ has joined #Citrix
[10:16:22] *** Zed` has quit IRC
[10:49:26] *** Trixboxer has joined #Citrix
[10:51:24] *** jamesd_laptop has quit IRC
[10:51:36] *** jamesd2 has joined #Citrix
[11:02:21] *** jamesd2 has quit IRC
[11:06:18] *** jamesd2 has joined #Citrix
[11:23:25] *** [M]ax has quit IRC
[11:51:25] *** [M]ax has joined #Citrix
[11:55:40] *** jamesd_laptop has joined #Citrix
[11:56:03] *** jamesd_laptop has quit IRC
[11:56:42] *** jamesd_laptop has joined #Citrix
[11:57:23] *** osfp has joined #Citrix
[11:57:43] <osfp> hello can any one help to a XenDesktop poc ?
[11:58:02] *** jamesd2 has quit IRC
[12:01:40] <osfp> hello can any one help to a XenDesktop poc ?
[12:21:26] *** osfp has quit IRC
[12:47:00] *** Jenius has quit IRC
[13:11:42] *** RaycisCharles has joined #Citrix
[13:44:21] *** RidaGee has quit IRC
[15:19:14] *** BWMerlin has quit IRC
[15:30:57] *** jamesd_laptop has quit IRC
[15:31:45] *** jamesd_laptop has joined #Citrix
[15:33:14] *** jamesd_laptop has quit IRC
[15:33:58] *** jamesd_laptop has joined #Citrix
[15:35:40] *** jamesd_laptop has joined #Citrix
[15:43:19] *** waynerr__ has joined #Citrix
[15:46:51] *** waynerr has quit IRC
[15:49:55] *** RaycisCharles has quit IRC
[15:50:36] <waynerr__> hey there, does anyone know of a linux firewall application that can run as pv in citrix xen ?
[15:51:14] <waynerr__> i just want to use it as transparent proxy/gateway to connect to the internet, so doesnt need to have nids functions, mainly just squid
[16:27:30] *** lesrar has joined #Citrix
[16:29:00] *** waynerr__ has quit IRC
[16:29:54] *** Zed`_ has quit IRC
[16:31:24] *** Zed` has joined #Citrix
[17:11:25] *** lesrar has quit IRC
[17:14:12] *** waynerr has joined #Citrix
[17:19:01] *** waynerr has quit IRC
[17:22:06] *** waynerr__ has joined #Citrix
[17:32:19] <Trixboxer> Hi, how can I get total CPU utilization with xe ?
[17:44:50] <kdavy> Trixboxer: cpu utilization of the host or a specific VM?
[17:44:59] <Trixboxer> of VM
[17:45:30] <Trixboxer> I can do the sum by fetching but can I get a total value ?
[17:45:31] <Trixboxer> VCPUs-utilisation (MRO): 0: 0.179; 1: 0.194
[17:45:56] <kdavy> there is a xe-vm-stats command (don't remember exact name, but if you do xe-vm- and then tab-complete it you should be able to find it
[17:46:10] <kdavy> ah, yeah, that'll only give you per-vcpu stats
[17:46:22] <kdavy> you could also use xentop, which should give you total
[17:46:32] <Trixboxer> hmm
[17:47:07] <Trixboxer> ok
[17:48:59] <Trixboxer> xentop is not useful as I want to use xe
[17:49:21] <Trixboxer> searching for RRD fetch.. do you have any good doc on RRD Xenserver ?
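The per-vCPU output Trixboxer pastes above ("0: 0.179; 1: 0.194") can be summed into the single figure he is after. A minimal Python sketch over that sample string; actually fetching it (e.g. via `xe vm-param-get`) is assumed and not shown:

```python
def total_vcpu_utilisation(mro: str) -> float:
    """Sum the per-vCPU fractions from xe's 'VCPUs-utilisation (MRO)'
    field, e.g. '0: 0.179; 1: 0.194' -> 0.373."""
    return sum(float(entry.split(":")[1]) for entry in mro.split(";"))

# Sample value pasted in the channel above: two vCPUs at ~18% and ~19% each.
print(total_vcpu_utilisation("0: 0.179; 1: 0.194"))  # 0.373
```

Each `index: fraction` pair is split on the colon and only the fraction is kept, so the function works for any vCPU count.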
[17:54:16] <kdavy> Trixboxer: i remember it being extremely confusing to deal with RRF
[17:54:19] <kdavy> *RRD
[17:54:38] <kdavy> saw a doc about a year ago, but probably won't be able to find it now
[17:54:42] <Trixboxer> yeah, XML RPC or python.. just reading
[17:54:47] <Trixboxer> nvm
[18:58:16] *** fkreign has joined #Citrix
[18:58:26] *** caimlas__ has joined #Citrix
[18:58:35] *** fkreign has quit IRC
[19:01:17] *** vcg has joined #Citrix
[19:01:46] <vcg> anyone familiar with the new dvs options in 5.6 sp1?
[19:02:06] <kdavy> vcg: the dvs isn't production-ready for the most part
[19:02:51] <vcg> fair enough
[19:03:32] <vcg> what I am looking for is an alternative to vlans (my environment has the potential to exceed the limit of support vlans at 4095)
[19:03:58] <vcg> i had read about the cross server private network and was wondering if that could be a viable option
[19:04:50] <kdavy> vcg: maybe... 4095 vlans?! that's crazy
[19:05:06] <kdavy> cloud hosting, each customer gets own vlan?
[19:05:28] <vcg> yeah
[19:07:07] <vcg> essentially I am looking an isolation mechanism to keep individual clients separate
[19:08:43] <kdavy> that must be a pain... i only have 20 VLANs or so, though i'm doing partial shared resource model and therefore use other means of isolation
[19:10:12] <vcg> how are you implementing your isolation?
[19:12:38] <kdavy> complete lockdown, windows-level security, ACLs for dedicated resources, plus we're doing fully managed IT (clients don't have their own IT personnel for the most part)
[19:13:58] <vcg> we may be in a similar boat - MSP here as well
[19:14:49] <kdavy> and hub-and-spoke network architecture, where communication is denied between spokes unless explicitly allowed (like multiple sites of the same client)
[19:15:17] <kdavy> yeah, MSP and CSP
[19:15:57] <vcg> do all users exist in the same AD domain or are you providing individual domains/ad controllers for each client?
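On the RRD question discussed earlier: XenServer exposes its round-robin metric databases over plain HTTP rather than through `xe`, via an `/rrd_updates` handler. The handler path and the `start`/`host` parameter names below follow Citrix's "Using XenServer RRDs" documentation as best I recall it, so treat them as assumptions; the hostname is a placeholder. A sketch of building the query URL:

```python
import time
from urllib.parse import urlencode

def rrd_updates_url(host, start=None):
    """Build the URL for XenServer's HTTP RRD interface. /rrd_updates
    returns an XML document (rrdtool 'xport' style) containing every
    sample recorded since 'start' (seconds since the epoch); by default
    we ask for the last five minutes."""
    params = {
        "start": int(start if start is not None else time.time() - 300),
        "host": "true",  # include host-level metrics alongside per-VM ones
    }
    return "http://%s/rrd_updates?%s" % (host, urlencode(params))

# The request itself needs HTTP basic auth with a XenServer account,
# e.g. urllib.request plus HTTPBasicAuthHandler; omitted here.
print(rrd_updates_url("xenserver1.example.com"))
```

Summing the per-vCPU columns of the returned XML then gives the same total as parsing `xe` output, but with history attached.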
[19:16:07] <kdavy> same domain, separated by OUs
[19:16:17] <vcg> same here
[19:16:39] *** gblfxt has quit IRC
[19:17:01] <kdavy> when clients want fat clients in the domain, we may do RODC for larger sites, but normally don't
[19:17:28] *** gblfxt has joined #Citrix
[19:18:14] <vcg> i havent really considered isolating clients using a mechanism other than vlan but this 12 bit limit of the vlanid is going to cause a major issue - hence the need to isolate using a different method
[19:18:36] <kdavy> vcg, how big is your environment in terms of seats?
[19:19:10] <vcg> designing for scalability over 10k
[19:20:00] <vcg> i'm trying to build my reference architecture so it scales as flat as possible
[19:20:51] <kdavy> yeah, similar idea here. i'm focusing on 5-10k per colo site, with N+1 colo sites
[19:21:24] <kdavy> idea is to deploy several racks per datacenter and scale out, instead of scaling within a single site
[19:21:42] <vcg> same idea here
[19:22:00] *** RaycisCharles has joined #Citrix
[19:22:35] <kdavy> vcg: damn, we could have a looooong discussion on architecture :)
[19:22:57] <vcg> oh I don't doubt that one bit
[19:23:25] <vcg> are you a lone gun or work for/own an msp?
[19:23:57] <kdavy> work for an MSP, one level below president/CTO/CEO
[19:24:23] <kdavy> (those are 3 different people)
[19:24:56] <vcg> right on
[19:25:55] <vcg> in regards to isolation ... are you assigning a subnet to each customer and then driving acls to the subnet?
[19:26:05] <kdavy> yes, precisely that
[19:26:25] <kdavy> subnet per customer site, not within the colo site
[19:26:53] <vcg> so customer A can have virtuals that span colo sites?
[19:27:37] <kdavy> vcg, not yet - colo sites are designed for failover of a customer at a time, but the customer cannot span colos
[19:28:52] <kdavy> everything in the colo is as multi-tenant as possible, dedicated resources are used very sparingly because they're harder to support
[19:29:24] <vcg> are you using shared storage or local storage?
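A back-of-envelope check on the "12 bit limit of the vlanid" vcg raises above: 802.1Q reserves two of the 4096 possible IDs, so only 4094 VLANs are usable (the 4095 figure quoted earlier counts one reserved value). Tunnel-keyed isolation, such as the GRE tunnels behind cross-server private networks, has a far larger ID space; the 32-bit key width below is standard GRE, not anything XenServer-specific:

```python
# 802.1Q VLAN ID: 12 bits, with values 0x000 and 0xFFF reserved.
usable_vlans = 2 ** 12 - 2
print(usable_vlans)  # 4094 - too small for one-VLAN-per-customer past ~4k tenants

# GRE key field (RFC 2890): 32 bits, so key-per-customer isolation
# scales orders of magnitude further than the VLAN ID space.
gre_keys = 2 ** 32
print(gre_keys)  # 4294967296
```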
[19:29:30] <kdavy> shared
[19:30:20] <kdavy> the only things that use local storage are management servers, domain controllers and certain appliances
[19:31:10] <vcg> there is some shared storage in my environment but 99% of the vms don't require it
[19:31:18] <kdavy> we use diskless IBM blades
[19:31:58] <vcg> what type of switching gear?
[19:32:17] <kdavy> at the core? Cisco 6500 series
[19:32:37] <vcg> the green american workhorse
[19:32:54] <kdavy> yep. each BladeCenter then has its own switches that connect to the 6509
[19:33:15] <vcg> and what about the aggregation layer?
[19:33:29] *** GentileBen has joined #Citrix
[19:34:04] *** GentileBen has joined #Citrix
[19:34:48] <kdavy> by aggregation layer you mean the connectivity and routing between client sites?
[19:35:07] <vcg> yeah
[19:35:17] * kdavy is not much of a network guy, more focused on infrastructure, storage and virt
[19:35:31] <vcg> lol fair enough
[19:35:53] <kdavy> routing blades in the 6500, as well
[19:36:05] <vcg> gotcha
[19:36:26] <kdavy> and PIX/ASA/Juniper SRX for security
[19:36:36] <vcg> how many blades does each of your ibm chassis support?
[19:36:42] <kdavy> 14 blades
[19:36:57] <kdavy> single-width, we don't use any double-width blades
[19:37:00] <vcg> so does each blade chassis make a "pool"
[19:37:01] <kdavy> in 7U
[19:37:28] <kdavy> no, we distribute workloads across multiple chassis (the opposite approach)
[19:37:52] <kdavy> loss of one chassis is tolerated for any workload combination
[19:37:59] <vcg> how many physicals per pool?
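The "loss of one chassis is tolerated" design above implies a simple capacity rule: stripe each workload across N chassis and keep steady-state utilisation at or below (N-1)/N, so the survivors can absorb a failed chassis' share. A sketch of that rule (the chassis counts are illustrative, not taken from the conversation):

```python
def max_safe_load(chassis: int) -> float:
    """Highest steady-state utilisation that still survives the loss
    of any one chassis, assuming workloads are striped evenly."""
    if chassis < 2:
        raise ValueError("need at least two chassis to tolerate losing one")
    return (chassis - 1) / chassis

# With 4 chassis each may run at up to 75%; with only 2, a full 50%
# of capacity must be kept in reserve.
print(max_safe_load(4))  # 0.75
print(max_safe_load(2))  # 0.5
```

This is the same arithmetic behind the "N+1 colo sites" remark earlier: more units in the stripe means less idle headroom per unit.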
[19:38:25] <kdavy> 5 for XenServer
[19:38:39] <kdavy> 2 for ESXi (don't have the $$$ for vSphere enterprise)
[19:38:56] <vcg> still less than the supported 16
[19:39:03] <kdavy> a lot of the stuff is still on bare metal with failover clustering
[19:39:40] <vcg> are you guys providing any desktop services or is it all server vms
[19:40:03] <kdavy> desktop services are out of contract, case-by-case basis
[19:40:20] <kdavy> same for on-site support
[19:40:39] <kdavy> or you mean desktop services as in XenApp?
[19:40:51] <vcg> daas via xendesktop
[19:41:34] <kdavy> we do daas via xenapp for the most part - that's our primary front-end. xendesktop used very sparingly when there is no other option
[19:42:32] <kdavy> MS needs to hurry up with reasonable VDI licensing for SPLA providers... currently it is a joke
[19:42:32] <vcg> so no end user customization then
[19:42:44] <vcg> HA! your telling me
[19:43:25] <kdavy> no, there is end user customization on the desktop/app level, but users are not allowed to install anything obviously. all installations have to go through us
[19:43:53] <vcg> using roaming profiles and folder redirection for customization?
[19:44:05] <kdavy> er, rather, there is customization on the user profile level. yes, both
[19:44:23] <kdavy> moving to Citrix UPM from roaming profiles
[19:44:41] <vcg> any instances where users from client A and client B would connect to the same xenapp server?
[19:45:00] <kdavy> 85% of xenapp servers shared between multiple tenants
[19:45:30] <vcg> now there is one I had not even considered
[19:45:54] <kdavy> xenapp servers only host apps commonly used by everyone (office, adobe reader, etc) - everything else is launched from the network
[19:46:19] <vcg> meaning customer specific apps?
[19:46:22] <kdavy> yes
[19:47:51] <kdavy> we've made a couple modifications to the windows kernel to get around some common catchas (some apps refuse to launch from a network drive; some apps require full access to HKLM registry, etc)
[19:48:35] <vcg> so for this use case do you think xenapp is better than providing individual "desktop" vms via provisioning services?
[19:48:41] <kdavy> similar to Application Virualization, but apps don't require any packaging like with AppV or ThinApp
[19:49:20] <kdavy> xenapp is way better for this use case in my opinion, but dealing with it requires more know-how
[19:49:39] <vcg> it would sure keep hardware requirements down
[19:49:50] <vcg> and licensing cost
[19:50:37] <vcg> i always thought of xenapp as TS ... I had never considered applying personalization to it
[19:53:59] <kdavy> we started applying personalization to pure TS before even shifting to XenApp - main reason we switched was because of the ICA protocol
[19:54:34] <vcg> how many concurrent users are you getting out of your xenapp servers?
[19:55:00] <kdavy> 20 users on XenApp 5 on w2k3 32bit
[19:55:27] <kdavy> sometimes less depending on users, but everything is load-balanced. 20 users is the sweet spot
[19:56:51] <vcg> and for licensing all you need is xenapp via ctp and an rds cal through spla? (and obviously the outsource license on the xenapp server)
[19:56:57] <vcg> *csp
[19:57:48] <kdavy> yes, plus server CALs for other MS services like Exchange and SQL
[19:58:06] <vcg> right
[19:58:23] <vcg> so are your xenapp servers physicals or virtuals?
[19:58:31] <kdavy> virtual
[19:59:04] <kdavy> virtual 32bit provides better user density than physical 64bit, and much less compatibility headache
[19:59:36] <vcg> but your physicals are x64?
[19:59:48] <kdavy> yea, of course
[20:00:12] <vcg> why in the hell had I not even considered xenapp for this ...
[20:00:39] <kdavy> no idea :)
[20:01:42] <vcg> your clients using thin clients to connect in via access gateway or do they vpn in from offices?
[20:02:33] <kdavy> thin or fat clients, either via internet or via cisco vpn (we deploy managed routers for the tunnels in every client site)
[20:04:32] <vcg> your providing exchange as an add on as well?
[20:04:57] <kdavy> that's part of base service, since everyone needs e-mail
[20:06:36] <vcg> i've got some clients who use plain pop/imap or google apps so will likely be offering it as an add on
[20:07:34] <kdavy> that's up to you
[20:08:51] <kdavy> Exchange + Blackberry Enterprise is hard to beat in terms of features and reliability, and allows for support of all major phones out of the box
[20:08:53] *** GentileBen has left #Citrix
[20:09:37] <kdavy> major benefit is the fact that all data is stored in one place and the clients don't have to rely on multiple vendors
[20:09:49] <vcg> agreed ...
[20:10:00] <kdavy> since a lot of business apps also tie into Exchange one way or the other
[20:10:23] <vcg> are you guys part of the intuit hosting program for quickbooks?
[20:10:40] <kdavy> nope, though we do host quickbooks
[20:10:54] <vcg> client supplies license?
[20:10:57] <kdavy> yes
[20:11:45] <vcg> so are you running quickbooks over a network mapped drive?
[20:12:04] *** RaycisCharles is now known as OmNomDeBonBon
[20:12:17] <kdavy> yes :)
[20:13:51] <vcg> so do all of your clients require a virtualized server in their environment?
[20:14:30] <kdavy> nope, Quickbooks runs from a network drive directly on the hosted XenApp servers (the ones that are multitenant)
[20:14:52] <kdavy> don't ask me how we got it to work in this fashion - that's a trade secret
[20:15:49] <vcg> fair enough
[20:16:23] <vcg> so the only time you really need to work about network isolation and subnetting is when a given client does need 1 or more virtual servers in addition to their xenapp users?
[20:16:30] <vcg> need to wory*
[20:16:37] <vcg> i give up ... can't spell today
[20:17:02] <kdavy> essentially
[20:18:43] <vcg> this has been more than insightful ... I sure appreciate it
[20:19:08] <kdavy> though i've got one VLAN/subnet for dedicated hosted boxes, where all network communication between hosts in the subnet is explicitly denied - that covers most of the isolation need for dedicated boxes
[20:20:12] <vcg> but a dedicated box is reachable from your xenapp servers?
[20:24:14] <kdavy> yes
[20:25:46] <vcg> are you using the xenapp base edition from csp or the premium edition?
[20:25:53] <kdavy> that's where windows security and DFS come in
[20:26:04] <kdavy> premium edition - base doesn't cover XenServer licensing
[20:26:45] <vcg> so your hypervisor of choice is XS then?
[20:27:40] <kdavy> for xenapp VMs, yes, since they are under predictable high load during business hours and don't benefit much from memory overcommit or higher density
[20:28:39] <kdavy> for the rest - depends on the VM
[20:30:54] <kdavy> i have yet to find an economical use for Hyper-V
[20:34:04] <vcg> so do you colo all your own hardware or have you looked into leasing dedicated hardware from any of the dedicated providers?
[20:34:20] <kdavy> no, we colo everything ourselves
[20:35:05] <kdavy> leasing dedicated hw might be easier when you use local storage, but with shared storage that is hardly an option
[20:36:21] <vcg> my xendestop architecture used local storage but a xenapp deployment would require me to redesign that portion
[20:38:22] <kdavy> with xenapp it's still feasible to use local storage and file servers, but shared storage is easier
[20:38:54] <vcg> are you powering xenapp with provisioning services?
[20:42:41] <kdavy> yes and no. i am at the DR site, but have to still work out the logistics of PVS at the primary site - it's mostly a matter of procedures and training
[20:44:00] <vcg> so does xenapp support the same kind of usb redirection support that xendesktop does?
[21:01:39] *** Trixboxer has quit IRC
[21:10:32] *** draygo has quit IRC
[21:11:39] *** draygo has joined #Citrix
[21:28:51] *** waynerr has joined #Citrix
[22:03:10] *** kaffien has quit IRC
[22:03:15] *** kaffien has joined #Citrix
[22:03:15] *** kaffien has joined #Citrix
[22:23:53] <kdavy_> vcg, unfortunately not
[22:29:59] <vcg> well now theres a bummer
[22:43:36] <kdavy_> vcg, what kind of USB redirection do you need? VoIP USB headsets?
[22:43:53] <kdavy_> or just data storage devices, etc
[22:45:49] <vcg> printers, usb devices, potential headsets and webcams
[22:46:10] <kdavy_> vcg, printers and data storage devices - no problem with XenApp
[22:46:19] <kdavy_> headsets and webcams are not supported
[22:46:34] <vcg> what version of xenapp are you using?
[22:46:37] <kdavy_> 5.0
[22:46:54] <kdavy_> 6.0 has the same limitations
[22:47:37] <vcg> ahhh ... i thought I had read that they were supported
[22:47:44] <kdavy_> but for a variety of reasons using USB redirection with anything VoIP or Video is a bad idea, even with Xendesktop. Use standalone SIP compliant handsets and video conference phones instead
[22:48:13] <kdavy_> anything that introduces an extra latency-inducing hop in the raw device protocol is a bad bad idea
[22:49:04] <vcg> makes sense
[22:49:27] <vcg> what are the hardware specs on your xenapp servers that are supporting 20 users?
[22:49:41] <kdavy_> 3 vCPUs, 4 gigs of ram
[22:50:08] <kdavy_> host servers are IBM HS22 blades with dual quad Nehalem Xeons and 48 or 64Gb RAM
[22:50:09] <vcg> do you have a bottleneck somewhere else or could you get more users with a bigger xenapp server?
[22:50:38] <kdavy_> RAM is the main bottleneck in 32bit deployments - can't add more users per virtual server
[22:51:06] <kdavy_> but it's the cheapest one to scale, so i'm fine with that
[22:51:25] <vcg> ahhh I forgot you are using x86 for the virtuals
[22:51:32] *** pyrofallout has quit IRC
[22:52:34] <vcg> are your roaming profiles served out of a file server or as a direct san attachment?
[22:52:43] <kdavy_> a cluster of file servers
[22:53:28] <vcg> physicals or virtuals?
[22:53:35] <kdavy_> physical for now
[22:54:09] <kdavy_> direct SAN attachment is unfeasible when it comes to serving a filesystem, you'll instantly corrupt your data
[22:54:36] *** pyrofallout has joined #Citrix
[22:54:55] <vcg> spot on without using a cluster aware filesystem
[22:56:09] <vcg> have you ever looked into zfs for your shared storage?
[22:58:03] <kdavy_> hehe. i am using ZFS (Nexenta Enterprise)
[22:58:20] <kdavy_> but i'm exporting block-level storage from it via Fibre Channel
[22:59:20] <vcg> why enterprise vs core or openindiana?
[22:59:43] <kdavy_> HA Cluster + Target FC plugins are commercial only
[23:00:23] <vcg> you've got more capital than I have :)
[23:00:45] <vcg> i'm thinking openindiana with 10ge
[23:03:44] <kdavy_> not a bad option, though harder to manage
[23:05:16] <vcg> have you been pretty happy with nexenta?
[23:06:17] <kdavy_> yes, although dedup is not usable on the current version - there are a couple critical bugs in it
[23:06:53] *** Elias_Rus has quit IRC
[23:06:56] <vcg> are you using sata + ssd cache?
[23:07:48] <kdavy_> sas + fc hdd's, sata ssd cache with interposers
[23:09:11] <vcg> what kind of chassis is it in?
[23:09:59] <kdavy_> Xyratex 1600FC, LSI 620-J
[23:11:34] <vcg> no supermicro?
[23:11:55] <kdavy_> supermicro is used for the clustered controllers, not for storage chassis
[23:12:23] <vcg> do you have just the 2 drive shelves?
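The sizing discussed above (20 users per 3-vCPU/4 GB 32-bit XenApp VM, on HS22 blades with 48 or 64 GB) makes the RAM-bound density easy to estimate. The dom0/hypervisor reserve below is an assumed figure for illustration, not something stated in the conversation:

```python
host_ram_gb = 48      # HS22 blade, smaller of the two configs mentioned
vm_ram_gb = 4         # per XenApp VM (32-bit, so larger VMs add no users)
dom0_reserve_gb = 4   # assumed hypervisor/dom0 overhead - not from the log
users_per_vm = 20     # the stated sweet spot

vms_per_host = (host_ram_gb - dom0_reserve_gb) // vm_ram_gb
print(vms_per_host)                 # 11 VMs per blade
print(vms_per_host * users_per_vm)  # 220 concurrent users per blade
```

At 11 VMs that is 33 vCPUs on 16 hardware threads (dual quad-core Nehalem with hyper-threading), roughly 2:1 overcommit, which is consistent with RAM rather than CPU being the stated bottleneck.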
[23:12:47] <kdavy_> no, i have more than a rack of drives alone
[23:13:00] <kdavy_> those are the only two models i'm using
[23:13:39] <vcg> wow ... how many controllers?
[23:14:31] <kdavy_> two, you cant have more with HA cluster
[23:15:27] <kdavy_> plus i also have a Compellent SAN
[23:19:39] <vcg> so how many drive shelves do you power with just the 2 controllers?
[23:20:25] <kdavy_> 8 shelves currently - the rest are on Compellent
[23:21:08] <vcg> so do you prefer the compellent or the nexenta san?
[23:23:19] <kdavy_> i use compellent for space-intensive stuff; Nexenta for IOPS-intensive
[23:23:29] *** OmNomDeBonBon is now known as epiphite
[23:23:48] <kdavy_> both are good for what they do, but Nexenta is much cheaper to scale
[23:25:35] <vcg> so did you blog or otherwise publicly document your build out?
[23:28:29] <kdavy_> i did blog about the first portion of it, with dual controllers and one shelf. then just added more shelves
[23:28:30] <kdavy_> http://itcrashes.blogspot.com/2010/10/on-building-sans-from-scratch.html
[23:28:57] <kdavy_> don't have time to write an update :-/
[23:30:05] <vcg> ha ... i've read that post
[23:30:38] <kdavy_> nice, i didn't know people read my blog :-P
[23:32:10] <vcg> so were you with the msp when they first rolled out this hosted environment?
[23:32:41] <kdavy_> no, the hosted environment was around since 2000; i came onboard in 2008
[23:33:34] *** gblfxt has quit IRC
[23:34:35] <kdavy_> the environment started with NT4 Terminal Server Edition :)
[23:34:38] *** gblfxt has joined #Citrix
[23:35:15] <vcg> wow
[23:38:26] <kdavy_> back in 2000, the company was started by converting a former dance school into a datacenter floor, with front desk offices being used for the support department. we're still in the same building, though now using colocation
[23:40:01] <vcg> i've got all the colo space I need here in town ... i'm just trying to come up with a cost effective way to roll out this service and scale as needed
[23:41:03] <vcg> the initial deployment will be relatively small but I want to make sure that I can scale without having to re-architect
[23:41:41] <kdavy_> heh, you'll always have to re-architect, there is no way around that
[23:42:20] <kdavy_> you can delay that process by thinking as many details through the first time, but you'll miss something - guaranteed
[23:42:42] *** epiphite is now known as OmNomDeBonBon
[23:43:45] <vcg> what I am affraid of is paralysis by analysis ... i've yet to actually rack up a piece of hardware in the dc
[23:43:59] <vcg> and I can't even tell you how much time I have into the design
[23:46:43] <kdavy_> what's even worse is, even if you have the perfect design and a perfect platform, you still have to sell it
[23:48:30] <vcg> ive got a handful of managed service clients that want it yesterday
[23:48:55] <vcg> just not enough to justify such a large capex
[23:50:05] <vcg> but I am evaluating leasing dedicated hardware and building a solution that way
[23:51:44] <kdavy_> what kind of per-seat net profit are you targeting? we may want to shift this conversation to e-mail and see if there is potential for partnership
[23:52:13] <vcg> gchat?
[23:53:04] <kdavy_> sure, that works too
[23:53:19] <vcg> jonas .at. vertexcg.com
[23:53:47] <kdavy_> invite sent
[23:54:43] <[M]ax> any of you guys use vyatta for routing and serving ips assigned to the host box using dhcp to vm mac address?