[00:06:53] *** gnoze5 has joined #Citrix
[00:06:56] <gnoze5> ello
[00:07:07] <gnoze5> I dont like the letter h..
[00:07:08] <gnoze5> lol
[00:11:27] <IcePee> gnoze5, I shorten it even further "LO"
[00:12:02] <gnoze5> I was just giving a cheap excuse for my bad typing skills
[00:12:19] <gnoze5> IcePee, good advice nonetheless!
[00:13:35] <IcePee> hey, do you know much about XenClient?
[00:15:10] *** Meson has quit IRC
[00:15:36] <kdavy_> XenClient is essentially still in beta for all practical uses
[00:15:50] *** OmNomDePlume has quit IRC
[00:15:56] <kdavy_> i havent heard of anyone using it in production
[00:15:57] *** Meson has joined #Citrix
[00:16:07] <gnoze5> kdavy_, really?
[00:16:32] <kdavy_> pretty much
[00:17:18] <kdavy_> i've tried it on a Latitude E6410 - it works for the most part, but i wouldn't give it to a customer
[00:20:24] <gnoze5> anyone tried xendesktop 5 yet btw?
[00:20:28] <IcePee> I've been playing with it. But I feel it's hiding it's light under a bushell
[00:21:56] <gnoze5> considering desktop virtualization for a 100 user environment, not sure yet if citrix is the way to go
[00:22:25] <IcePee> Hmm, do you have an alternative?
[00:23:05] <gnoze5> well everything leads me to believe I dont
[00:23:20] <gnoze5> but then again i feel like im biased
[00:23:41] <gnoze5> i played around with xendesktop 4 a few months ago
[00:23:54] <gnoze5> the whole deployment and configuratio seemed a bit messy tbh
[00:24:07] <gnoze5> im wondering if things really changed with 5
[00:26:31] <IcePee> I don't know about Xendesktop as I'm a newbie to all things xen. But with Xenclient they seem to be taking the Apple stance. Lock it all down.
[00:26:53] <IcePee> There's hidden power there and they are hiding it away.
[00:27:05] <IcePee> A little frustrating.
[00:27:17] <gnoze5> hm
[00:27:31] <gnoze5> I do use XenClient
[00:27:34] <IcePee> I've been mucking around in the terminal.
[00:27:38] <gnoze5> but a lot of times i dont need it
[00:28:23] <IcePee> you mean, you're fully connected?
[00:30:06] <gnoze5> I mean its me being lazy
[00:30:18] <gnoze5> xenclient is just a fancy UI
[00:33:06] <gnoze5> for some reason i thought you wrote xencenter
[00:33:08] <gnoze5> im tired sorry
[00:33:29] <gnoze5> i didnt connect kdavy_'s comment to your comment
[00:33:43] <gnoze5> ive only played around with xenclient
[00:33:46] *** Meson has left #Citrix
[00:43:24] *** rev78 has quit IRC
[00:44:31] <IcePee> they need to make it easier to do more.
[01:13:30] *** The_Machine has joined #Citrix
[01:30:19] *** The_Machine has quit IRC
[02:16:03] *** IcePee has quit IRC
[02:16:25] *** IcePee has joined #Citrix
[02:32:57] *** IcePee has quit IRC
[02:34:12] *** IcePee has joined #Citrix
[02:40:42] *** IcePee has quit IRC
[02:41:54] *** IcePee has joined #Citrix
[02:47:58] *** IcePee has quit IRC
[02:55:10] *** The_Machine has joined #Citrix
[02:57:59] *** IcePee has joined #Citrix
[03:04:17] *** IcePee has quit IRC
[03:04:47] *** IcePee has joined #Citrix
[03:05:54] *** IcePee has quit IRC
[03:10:23] *** IcePee has joined #Citrix
[03:42:20] *** katano has joined #Citrix
[04:05:43] *** The_Machine has quit IRC
[04:20:22] *** The_Machine has joined #Citrix
[04:47:50] *** IcePee has quit IRC
[04:47:51] *** The_Machine has quit IRC
[06:16:00] *** sanket has joined #Citrix
[07:33:19] *** Jenius has quit IRC
[07:45:24] <sanket> hello... I m compiling openmotif 2.2.3 .....required for Citrix ICA client... but I m getting an error of.........../usr/bin/ld: cannot find -lXp
[08:54:11] *** Elias_Rus has joined #Citrix
[09:29:19] *** Gaelfr has joined #Citrix
[09:49:09] *** Jenius has joined #Citrix
[09:51:28] *** sanket has quit IRC
[10:13:28] *** JohnBergoon has joined #Citrix
[10:14:51] *** JohnBergoon has left #Citrix
[10:19:15] *** finnzi has joined #Citrix
[10:27:13] *** Jenius has quit IRC
[10:56:16] *** Trixboxer has joined #Citrix
[11:15:24] *** dimmieh has joined #Citrix
[11:34:09] *** derwayne has joined #Citrix
[11:51:26] *** katano has quit IRC
[11:55:50] *** Gaelfr has quit IRC
[12:26:06] *** sanket has joined #Citrix
[12:35:17] *** sanket has quit IRC
[12:42:04] *** AlasAway is now known as Alasdairrr
[13:32:53] *** kprojects has joined #Citrix
[13:35:21] *** gnoze5 has quit IRC
[13:40:51] *** gnoze5 has joined #Citrix
[13:58:01] *** gnoze5 has quit IRC
[14:36:34] *** The_Machine has joined #Citrix
[14:37:54] *** The_Machine70x7 has joined #Citrix
[14:38:49] *** The_Machine70x7 has quit IRC
[14:41:12] *** The_Machine has quit IRC
[14:54:17] *** extor has quit IRC
[15:20:38] *** Gaelfr has joined #Citrix
[15:26:55] <tabularasa> kdavy_: epic fail on day 1 of helicopter flying.
[15:27:03] <tabularasa> might have helped if i had the transmitter manual... :(
[15:29:22] <jduggan> you fly helicopters
[15:29:22] <jduggan> ?
[15:29:30] <tabularasa> RC helicopters
[15:29:34] <jduggan> oh
[15:29:42] <tabularasa> i wish i flew real helicopters... :)
[15:29:47] <jduggan> hehe
[15:31:33] <jduggan> on a different subject
[15:31:42] <jduggan> having a real problem with a dl385 g6 lockign up
[15:32:10] <tabularasa> xenserver?
[15:32:13] <jduggan> yea
[15:32:16] <jduggan> no errors to console
[15:32:22] <tabularasa> c-states disabled?
[15:32:24] <jduggan> just hard locks up
[15:32:25] <jduggan> nope
[15:32:29] <jduggan> its opteron
[15:32:33] <jduggan> but all those power features are off
[15:32:33] <tabularasa> oh... heh
[15:32:55] <jduggan> i think in HP land dl380 is intel, dl385 is amd
[15:33:28] <tabularasa> yeah, don't konw much about xenserver though... :-/
[15:33:47] <jduggan> no idea whats going on with it
[15:34:00] <jduggan> ive just ordered a new g7
[15:34:15] <jduggan> wont go into teh pool so will have to run vms standalone
[15:34:20] <jduggan> atleast storage is backed up
[15:36:18] <tabularasa> hurtin
[15:36:30] <jduggan> :\
[15:37:33] <tabularasa> did you call H
[15:37:37] <tabularasa> HP? sounds hardware related
[15:38:16] <jduggan> not yet - im running non HP memory in it, i ordered straight from kingston in that box so need to swap back in HP only memory and test
[15:38:27] <jduggan> ive ran memtests on it though, no errors
[15:38:29] <jduggan> but who knows
[15:39:10] <tabularasa> yeah, thats tough
[15:42:29] *** bark0de has joined #Citrix
[15:52:44] *** vhdllvhdll has joined #Citrix
[15:53:04] *** vhdllvhdll has quit IRC
[15:59:03] <derwayne> i do my first tests with 10gbit nics, raid10 citrix and a software iscsi target server atm, do i have to use paravirtualised vms to get full speed out of it ?
[16:00:22] <tabularasa> wow, nice
[16:10:11] <derwayne> tabularasa, here you can read abit about it: http://forums.openfiler.com/viewtopic.php?pid=24460#p24460
[16:51:24] *** Tenju has joined #Citrix
[16:52:03] *** ScottCochran has joined #Citrix
[16:54:20] <kdavy_> derwayne, are you using commercial openfiler or the free/community version?
[16:56:11] <derwayne> kdavy_, i use the free/community version, i didnt know that their is a commercial variant, from my understand you can just get commercial support
[16:59:41] <kdavy_> derwayne, have you taken a look at Nexenta?
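Back to sanket's openmotif link failure from earlier in the log: `/usr/bin/ld: cannot find -lXp` means the linker can't locate the development files for libXp, the X Printing Extension library. A minimal sketch of the usual fix; the distro-to-package mapping and the helper function below are my assumptions about common distros of the era, not something from the log:

```shell
# ld cannot find -lXp => the libXp development files are missing.
# Hypothetical helper mapping a distro family to its usual package name.
libxp_pkg() {
  case "$1" in
    rhel|centos|fedora) echo "libXp-devel" ;;   # yum install libXp-devel
    debian|ubuntu)      echo "libxp-dev"   ;;   # apt-get install libxp-dev
    *)                  echo "unknown"     ;;
  esac
}

libxp_pkg debian

# If the library is installed in a non-default prefix, point the linker
# at it instead, e.g.:
#   LDFLAGS="-L/usr/X11R6/lib" ./configure && make
```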
[17:00:02] <derwayne> yes i had but there license model is just awefull
[17:00:02] <kdavy_> community edition also supports iScsi targets, and it's free up to 18Tb
[17:00:21] <derwayne> but not ha features for free which you can set up your own
[17:00:48] <kdavy_> derwayne, i have two Enterprise Gold deployments of nexenta so far (one for primary site, one for DR site); primary one has clustered head nodes
[17:01:01] <derwayne> its still to expensive for what you get imop
[17:01:06] <kdavy_> it's not free, but well worth it compared to an enterprise san
[17:01:39] <kdavy_> my licensing was $15k for a fully redundant setup
[17:02:03] <derwayne> yeah and then citrix isnt supported so both vendors blame each other when something goes wrong ?
[17:02:31] <kdavy_> citrix is supported. nexenta even has storagelink
[17:03:25] <kdavy_> but anyway, i had different requirements - all my storage needs to be Fibre Channel, and you cant do that with openfiler
[17:03:51] <derwayne> i use sfp+ / ethernet connections atm
[17:04:43] *** Gaelfr has quit IRC
[17:05:49] <kdavy_> yeah, makes sense
[17:06:39] <kdavy_> half the stuff in nexenta is broken anyway, though the stuff i use it for works great
[17:07:04] <derwayne> lol and then these insane license costs ...
[17:07:29] <derwayne> i just hope for you ha-cluster does work when a disk fails on the active node :o
[17:07:46] <kdavy_> CIFS is broken, dedup is FUBARd, reporting on a per-LUN basis has never shown any signs of life, and neither has the SMART plugin :)
[17:08:24] <kdavy_> "when a disk fails on the active node" - huh? you mean the root volume or one of the data drives?
[17:09:14] <derwayne> doesnt matter really, when i do the same with a linux machine and heartbeat v1 clusters the passive machine doesnt take the services over as the active node still answer pings
[17:09:51] <derwayne> another contrapoint for nexenta was the used underlying distro
[17:09:58] * derwayne has to go, i have more time later
[17:10:05] <kdavy_> derwayne, there are 3 types of quorum in HA Cluster - disk, network and serial. disk quorum is primary
[17:10:11] <derwayne> iam allready 10min to late my gf gonna kill me :p
[17:10:14] <kdavy_> hehe
[17:10:48] <derwayne> iam able to continue to read at home, as my system there is logged on
[17:10:59] <derwayne> kdavy_, catch ya later
[17:11:01] *** derwayne has quit IRC
[17:32:27] *** gblfxt has quit IRC
[17:32:48] *** gblfxt has joined #Citrix
[17:47:40] *** Gaelfr has joined #Citrix
[17:56:48] <kdavy_> lol wtf. the new Intel 510 ssd is worse than the old X25M G2 at nearly everything
[17:58:11] <kdavy_> #fail
[17:59:44] <Appiah> less space? less speed? higher cost?
[18:00:26] <kdavy_> less IOPs, higher cost
[18:00:34] <Appiah> is not G2 another series
[18:00:41] <Tenju> more space?
[18:00:54] <kdavy_> the only thing it's good at is streaming workloads - useless for anything enterprise
[18:01:00] <kdavy_> nope, less space too
[18:01:11] <Tenju> zomg
[18:01:43] <Tenju> i would think they would try for more space just to get people to buy it for those who have no idea what IOPs is
[18:01:44] <Tenju> ya thats major fail
[18:01:47] <kdavy_> wait no, it has a 250gb version, so more space. but still useless
[18:02:23] <Appiah> ye ... a SSD disk , so useless
[18:02:27] <Tenju> i don't think i'll be diving into SSD until it gets to $1 per gig
[18:02:39] <kdavy_> and it costs the same per gig as OCZ Vertex 3 Pro, which is 10x faster at least
[18:03:07] <Appiah> What's the diff between the Intel xxx SSD and Intel X## G# ?
[18:03:16] <Appiah> G3 is out soon right
[18:03:42] <kdavy_> no, the 510 Series is technically the G3
[18:03:48] <Appiah> oh
[18:03:53] <Tenju> don't say that davy
[18:04:00] <Tenju> don't crush the dream!
[18:05:54] <Appiah> =(
[18:07:39] <kdavy_> :)
[18:08:51] <Trixboxer> Hi, in xenserver is it possible to have two management IP's ?
[18:08:58] <Trixboxer> one public and one private ?
[18:08:58] <kdavy_> nope
[18:09:26] <kdavy_> the management IP is like Highlander: there can be only one
[18:09:33] <Trixboxer> :)
[18:09:35] <Trixboxer> ok
[18:09:56] <tabularasa> heh
[18:10:20] <Trixboxer> so I must keep a monitoring server in same rack to connect cloud using xencenter
[18:10:30] <kdavy_> or you could NAT it
[18:10:59] <Trixboxer> yeah
[18:11:08] <Trixboxer> I think thats better
[18:11:11] <waynerr> just get a monitoring server, its all opensource and when something goes wrong get a notification :o
[18:11:21] <waynerr> email, sms, instant message
[18:12:06] <Trixboxer> waynerr: my problem is not monitoring server, its both cloud and monitoring server being in same rack to have a pvt connectivity
[18:12:21] <Trixboxer> using NAT I can separate the racks
[18:13:58] <waynerr> why not use site-to-site vpns over different racks/locations ?
[18:14:13] <waynerr> instead of services on public ips
[18:15:33] <Trixboxer> public ips with proper firewall is better for me.. VPN is an alternative
[18:40:41] *** Gaelfr has quit IRC
[19:00:23] *** Tenju has quit IRC
[19:27:46] <waynerr> me was bored: http://img713.imageshack.us/img713/1070/xen50vmsonenode.png
[19:32:47] <kdavy_> waynerr: nice
[19:33:09] <kdavy_> i should try the same with one of my 64Gb hosts
[19:33:28] <kdavy_> see how many Windows 95 instances i can create :)
[19:33:33] <waynerr> :D
[19:35:12] <waynerr> all the vms are on a software iscsi server ( openfiler ) that only has a small raid-10 ( 4x250gb disks )
[19:35:58] <waynerr> and most amazing the offload engine works
[19:36:26] <waynerr> so instead of having high cpu load on the dualcore, i get max around 1.xx 1.xx 1.xx load on the machine
[19:36:39] <waynerr> most work is done by raid-controller and the 10gbit nic
[19:46:40] *** bark0de has quit IRC
[19:47:15] *** gnoze5 has joined #Citrix
[19:47:16] <gnoze5> yellow
[19:48:01] <gnoze5> for a hosted vdi solution with xendesktop can windows 7 professional licenses be used as long as they are through volume licensing?
[19:52:47] <gnoze5> or is it SA that makes the difference?
[19:54:16] <kdavy_> gnoze5, your customer has to own the Windows 7 licenses and they have to be running on dedicated hardware
[19:54:58] <kdavy_> Microsoft does not allow service providers to use the SPLA volume licenses or even SA for that purpose
[19:55:01] <gnoze5> my customer owns windows 7 licenses
[19:55:03] <gnoze5> but its a non profit
[19:55:09] <gnoze5> they were given the licenses
[19:55:19] <gnoze5> im just wondering if they have software assurance or not
[19:56:18] <kdavy_> as long as they own them they don't need software assurance. but they won't be able to update to Windows8 without it once it is released
[19:58:25] <gnoze5> to user xendesktop for vdi they wont need SA?
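Picking up Trixboxer's management-interface question from 18:08: a XenServer host really does have a single management interface, bound to one PIF. If you need to move it onto a different (e.g. NATed private) network, the stock `xe` CLI can do it. A sketch only; the UUID placeholder and addresses are made up, and exact parameters vary by XenServer release:

```shell
# Find the PIF on the network you want management on
xe pif-list host-name-label=myhost params=uuid,device,IP

# Give that PIF a static address (placeholder values)
xe pif-reconfigure-ip uuid=<pif-uuid> mode=static \
   IP=10.0.0.10 netmask=255.255.255.0 gateway=10.0.0.1

# ...then move the management interface onto it
xe host-management-reconfigure pif-uuid=<pif-uuid>
```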
[19:59:24] <gnoze5> the update i dont think is a problem
[19:59:41] <gnoze5> because apparently whenever they ask for new licenses microsoft donates them
[19:59:51] <gnoze5> my question is if we need to talk to MS or not
[20:00:00] <kdavy_> Hed6eH0g
[20:00:00] <gnoze5> or just use the current licenses
[20:00:10] <kdavy_> oops
[20:00:13] <gnoze5> lol
[20:02:26] <kdavy_> no i dont think you need to talk to MS
[20:03:37] <gnoze5> good
[20:03:44] <gnoze5> because thats always an unpleasant experience
[20:03:56] <gnoze5> lolo
[20:05:58] <gnoze5> hm
[20:06:01] <gnoze5> but now im wondering
[20:06:05] <gnoze5> if hosted vdi is the best solution
[20:06:27] <gnoze5> because i need a license for the vm plus a license for the desktop that is accessing that vm
[20:08:04] *** Gaelfr has joined #Citrix
[20:12:01] <gnoze5> they do have windows xp
[20:13:05] <gnoze5> old licenses
[20:13:05] <gnoze5> hm
[20:14:02] <tabularasa> Can't wait to check out Windows Thin PC
[20:30:38] *** OmNomDePlume has joined #Citrix
[20:34:51] <gnoze5> kdavy_ if i want to repurpose a desktop for desktop virtualization, if I want the user to be able to access the same vm from that desktop and from ay other device using the citrix receiver
[20:34:56] <gnoze5> what is my best option?
[20:35:00] <gnoze5> hosted vdi?
[20:37:20] <tabularasa> yeah
[20:38:13] <gnoze5> but then I need an extra license for that desktop, meaning a license for the desktop to be able to access the hosted vm and one for the vm
[20:38:35] <gnoze5> if there was a receiver for linux or something
[20:38:52] *** Trixboxer has quit IRC
[20:38:57] <gnoze5> i could just install linux on all the desktops and lock them to the browser or whatever and user the receiver
[20:39:02] <tabularasa> just get SA on the desktop
[20:39:23] <gnoze5> and then i can use the same license for both?
[20:39:26] <tabularasa> yes
[20:39:33] <gnoze5> hm interesting
[20:39:38] <tabularasa> windows 7 with SA allows you to run as a desktop and a VDI
[20:40:08] <gnoze5> and can I somehow lock windows7 to once the os boots only show the xendesktop xenapp resources?
[20:40:39] <tabularasa> 12:52 < kdavy_> gnoze5, your customer has to own the Windows 7 licenses and they have to be running on dedicated hardware
[20:41:16] <tabularasa> gnoze5: sure, i have mine launch an IE kiosk to the CSG login screen
[20:41:20] <tabularasa> or AGEE screen, if you wish
[20:41:39] <gnoze5> hm
[20:41:46] <gnoze5> that would simplify things greatly
[20:41:56] <gnoze5> do you recycle the hosted vdi each time?
[20:42:00] <gnoze5> or do you keep state?
[20:45:16] <tabularasa> depends on the situation
[20:45:22] <tabularasa> most of my users are 1 to 1 mappings
[20:46:24] <gnoze5> hm
[20:46:41] <gnoze5> im just wondering what i would do with the current fileserver
[20:50:06] <gnoze5> tabularasa are you using high availability?
[20:50:41] <tabularasa> nope
[20:50:59] <gnoze5> do you use network storage?
[20:51:06] <gnoze5> for the vms
[20:51:22] <tabularasa> yeah
[20:51:25] *** e3e3e3e has joined #Citrix
[20:52:02] *** e3e3e3e has quit IRC
[20:52:29] <gnoze5> for a small solution, like 70 users, would you consider using local storage?
[20:52:43] <tabularasa> 70 users is a lot for VDI
[20:53:40] <kdavy_> gnoze5, i wouldnt recommend local storage for anything more than 10-20 users
[20:54:32] <gnoze5> hm i was looking into the dell r710
[20:56:16] <gnoze5> kdavy_, even if im not using HA?
[20:56:38] <jduggan> the problem with local storage is backusp
[20:56:46] <jduggan> backing up lots of vms is slow when on local
[20:56:59] <kdavy_> gnoze5, do you really want 70 angry users to call you if the server dies, and you having no course of action that wouldn't take hours/days to fix?
[20:57:31] <tabularasa> and throughput for 70 VDIs on a 6 drive raid set?
[20:57:53] <gnoze5> yeah i guess the real argument here is the throughput
[20:57:59] <kdavy_> plus for 70 users you'd need at least 700 IOPs in active spindles - that's 8 15k drives in RAID10 or 10 RAID5 drives
[20:58:29] <gnoze5> hm
[20:58:35] <gnoze5> any storage rcommendations?
[21:00:16] <gnoze5> as in
[21:00:19] <gnoze5> brand model
[21:01:41] *** waynerr__ has joined #Citrix
[21:02:00] <tabularasa> you'll get all over the board in here
[21:02:05] <tabularasa> we use EqualLogics..
[21:04:49] *** waynerr has quit IRC
[21:07:24] *** Tenju has joined #Citrix
[21:12:15] <Tenju> kdavy_, I love those IOPs numbers you put up. I know someone who wouldn't listen and is running 500+ users on 28 15kdisk
[21:15:20] <kdavy_> Tenju, heh did that bite them in the ass?
[21:15:41] <kdavy_> well, i bet it did
[21:16:03] <kdavy_> i'm running 500+ users on 120 10k disks and 24 SSDs :)
[21:16:33] <kdavy_> overkill for the current count, but you get the idea
[21:18:19] <kdavy_> what was Splatone running for his SAN - fujitsu? i think that was a pretty cost-effective setup
[21:19:05] <tabularasa> yeah, he has fujitsu
[21:22:25] <Tenju> haha yes, it runs but he occasionally locks up i'm really ruprised it even runs
[21:22:33] <Tenju> suprised.*
[21:22:45] *** kreign has quit IRC
[21:23:24] *** The_Machine has joined #Citrix
[21:26:38] <jduggan> hmmm, i have about 40 vms running fine on 12 drives 7200rpm
[21:26:46] <jduggan> raid 6
[21:27:04] <Tenju> i love walking into situations where they are like....ya we got raid X
[21:27:13] <Tenju> the whole SAN is being used
[21:27:29] <Tenju> with 15k drives
[21:28:33] <Tenju> kdavy_, really 24SSDs
[21:28:41] <kdavy_> ya we Tenju, yep
[21:28:58] <Tenju> you must have talked the crap outta purchasing to get that
[21:29:02] <Tenju> like we need the IOPS!
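kdavy_'s spindle math above can be sketched as a back-of-envelope calculation. The inputs are assumptions of mine, not from the log: ~10 IOPS per VDI user, ~175 IOPS per 15k drive, a 50/50 read/write mix, and the standard RAID write penalties (2 for RAID10, 4 for RAID5):

```shell
#!/bin/sh
# Rough VDI spindle sizing: front-end IOPS, back-end IOPS after the
# RAID write penalty, then a drive count (rounded up).
users=70
iops_per_user=10
drive_iops=175                        # roughly one 15k SAS drive

required=$((users * iops_per_user))   # 700 front-end IOPS
reads=$((required / 2))               # assumed 50/50 read/write mix
writes=$((required / 2))

raid10=$((reads + 2 * writes))        # write penalty 2
raid5=$((reads + 4 * writes))         # write penalty 4

echo "RAID10 drives: $(( (raid10 + drive_iops - 1) / drive_iops ))"
echo "RAID5 drives:  $(( (raid5 + drive_iops - 1) / drive_iops ))"
```

This lands in the same ballpark as kdavy_'s "8 15k drives in RAID10 or 10 RAID5 drives"; real numbers shift with the actual read/write mix, per-drive IOPS, and the need for even drive counts in RAID10.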
[21:29:06] <kdavy_> ssds are used for read+write cache
[21:29:08] <Tenju> what if u users want to run CAD
[21:29:41] <Tenju> cool that must run at crack like speeds
[21:29:44] <Tenju> whats your boot times?
[21:29:48] <kdavy_> Tenju, my entire new SAN (based on Nexenta), SSDs included, cost less than an extra 16-disk shelf for the old SAN
[21:30:03] <jduggan> actually its 14 disks
[21:30:23] <kdavy_> VMs boot in under 15 seconds - Win2003, 2008 R2, anything
[21:30:41] <kdavy_> i havent measured it exactly, but they are flying
[21:31:00] <Tenju> I bet they are you probably have alot of headroom for expansion
[21:31:05] <kdavy_> pretty much the entire working set is cached in SSD; some of it in RAM even
[21:31:20] <kdavy_> yeah, i designed it with headroom in mind
[21:31:38] <jduggan> is it a zfs setup?
[21:31:42] <kdavy_> yep
[21:32:41] <jduggan> how much did it cost you to build ballpark?
[21:32:44] <jduggan> and how many tb?
[21:32:53] <kdavy_> right now i'm seeing 5k read IOPs going to it on average, in real time. out of those ~70% are cache hits from RAM, ~20% are cache hits from SSD, the rest goes to spindles
[21:34:08] <kdavy_> 12Tb usable, ballpark cost $50k
[21:34:34] <kdavy_> but i got really lucky with the FC storage shelves, got them almost for free
[21:34:47] <jduggan> how many vm's do you plan to use on 12TB?
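kdavy_'s cache-hit figures (5k read IOPS, ~70% served from RAM, ~20% from SSD, the rest from spindles) work out as below. A trivial sketch using only the numbers quoted in the channel:

```shell
#!/bin/sh
# Where 5000 read IOPS land in a ZFS-style cache hierarchy,
# using the hit rates kdavy_ quoted.
total=5000
ram_hits=$((total * 70 / 100))    # ARC (RAM)
ssd_hits=$((total * 20 / 100))    # L2ARC (SSD)
disk_iops=$((total - ram_hits - ssd_hits))

echo "RAM: $ram_hits  SSD: $ssd_hits  spindles: $disk_iops"
```

So only ~500 IOPS ever reach the 120 spindles, which is why the VMs are "flying" despite 10k (not 15k) drives.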
[21:35:09] <Tenju> I wonder how much the EMC equivalent cost haha
[21:35:30] <kdavy_> a couple hundred VMs easy
[21:35:48] <kdavy_> plus expanding storage size is trivial
[21:35:54] <jduggan> yea
[21:36:29] <kdavy_> right now i only have ~50 XenApp VMs on there, plus an Exchange 2010 DAG for 800 mailboxes
[21:36:41] *** cimplan has joined #Citrix
[21:38:13] *** ruinah has joined #Citrix
[21:38:23] <kdavy_> and the old SAN (80 spindles, 35Tb usable) didn't go anywhere - i could passthrough LUNs from the old SAN into the Nexenta in order to utilize all zfs goodness
[21:39:02] <jduggan> we spend about 8.5k USD on 11TB useable which we budget 30 virtual machines
[21:39:33] <kdavy_> jduggan, that'd be 7.2k rpm though, correct?
[21:39:38] <jduggan> yea
[21:42:53] *** cimplan has quit IRC
[21:42:59] *** eastz0r has joined #Citrix
[21:46:43] *** GrimdinWork has joined #Citrix
[21:47:16] *** Grimdin has quit IRC
[21:56:15] *** Gaelfr has quit IRC
[22:03:00] *** draygo has quit IRC
[22:03:51] *** kprojects has quit IRC
[22:08:42] *** ruinah has quit IRC
[22:14:56] *** draygo has joined #Citrix
[22:17:54] *** Tenju has quit IRC
[22:20:16] *** GrimdinWork has quit IRC
[22:32:20] *** GrimdinWork has joined #Citrix
[22:40:51] *** The_Machine has quit IRC
[22:47:53] *** Grimdin2 has joined #Citrix
[22:51:13] *** GrimdinWork has quit IRC
[23:05:56] *** GrimdinWork has joined #Citrix
[23:06:35] <kdavy_> has anyone played with RES VDX yet?
[23:06:42] <kdavy_> the reverse seamless thing
[23:08:02] <gnoze5> hm
[23:08:06] <gnoze5> i got scared with the storage bit
[23:08:17] <kdavy_> what storage bit?
[23:08:27] *** Tenju has joined #Citrix
[23:08:32] <gnoze5> i cant spend 50k on storage
[23:08:36] <gnoze5> 50k you mean usd btw right?
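The passthrough-LUN idea kdavy_ mentions maps onto ordinary `zpool` commands on the Nexenta side: LUNs presented from the old SAN just look like disks to ZFS. A hypothetical sketch with made-up Solaris-style device names; the actual pool layout isn't in the log:

```shell
# Build a pool out of LUNs presented from the old SAN
zpool create -f legacy mirror c2t0d0 c2t1d0

# Add an SSD as L2ARC read cache, and mirrored SSDs as the SLOG
# for synchronous writes
zpool add legacy cache c3t0d0
zpool add legacy log mirror c4t0d0 c4t1d0

zpool status legacy
```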
[23:08:37] <kdavy_> hehe
[23:08:39] *** draygo has quit IRC
[23:09:01] <kdavy_> you can got a decent storage system for 70 users for under $10k, $50k is overkill for you
[23:09:04] *** Grimdin2 has quit IRC
[23:09:22] <kdavy_> s/got/get
[23:09:23] *** Grimdin2 has joined #Citrix
[23:09:31] <Tenju> Hey if i want a Thin client to login and automatically launch a specific desktop. I have to join it to the domain and have the Online Plugin point to the DDC's Services Site?
[23:09:37] <Tenju> Trying to configure some Auto Logs
[23:10:03] *** draygo has joined #Citrix
[23:10:24] *** bhodgens has joined #Citrix
[23:10:28] <gnoze5> hm
[23:10:31] <gnoze5> under 10k?
[23:10:32] *** bhodgens is now known as kreign
[23:10:34] <gnoze5> hm
[23:10:36] <kreign> tabularasa, you around?
[23:10:36] <gnoze5> thats doable i guess
[23:10:47] *** GrimdinWork has quit IRC
[23:10:49] *** ScottCochran has quit IRC
[23:10:52] <gnoze5> thats like 140usd per user
[23:10:59] <gnoze5> hm
[23:11:29] <gnoze5> 100 euros
[23:11:51] <gnoze5> what brand has a good quality/cost ratio?
[23:11:53] <gnoze5> emc?
[23:13:01] *** Elias_Rus has quit IRC
[23:13:18] <Tenju> kdavy_, think you could help me out on a XD question?
[23:13:34] <Tenju> or anyone if available ;)
[23:14:25] <gnoze5> Tenju i wish i could, i just started exploring the idea of virtualizing desktops
[23:15:15] <Tenju> NP its one of those i think i got it right but just never had to use it before type questions
[23:15:31] <Tenju> XD is always fun customer to customer
[23:15:53] <kdavy_> Tenju, yeah you need the services site and passthrough authentication
[23:16:12] <Tenju> Gracias senor, no way around the domain join right?
[23:16:55] <kdavy_> you can skip domain join but then user will have to enter credentials twice - first when logging into thin client, then when logging into XD
[23:17:18] <Tenju> nah the whole thing is to have auto logs
[23:17:38] <Tenju> coo i'll just enforce the " you have to join it to the domain there is no other way " status on it :)
[23:17:58] <Tenju> its a hand full of auto logs
[23:17:59] <kreign> my god freebsd sucks.
[23:18:09] <gnoze5> kreign no it does not lol
[23:18:09] <kreign> suuuuuucks.
[23:18:28] <Tenju> no Windows 3.1 sucks but thats another subject
[23:18:41] <gnoze5> kdavy_ EqualLogic series good idea?
[23:18:45] <kreign> gnoze5, sorry - in certain incarnations, partially disenfranchized from the QA process of the freebsd project, freebsd is awesome (freenas, pfsense)
[23:18:55] <kreign> gnoze5, maybe I should have said the fbsd developers suck.
[23:19:09] *** gblfxt has quit IRC
[23:19:34] *** gblfxt has joined #Citrix
[23:19:38] <gnoze5> kreign, lol i know a couple guys who commit often... they dont suck!
[23:19:55] <kreign> gnoze5, dealing with an inherrited fbsd machine that hasn't been rebooted in a year. 7.2-release-p4, using zfs pool 6
[23:20:08] <Tenju> awesome
[23:20:53] <kreign> gnoze5, it hangs directly after "ZFS storage pool 6" and sits there indefinately.
[23:21:25] <gnoze5> kreign zfs, brave, we handle quite a few freebsd servers, BSD in general in fact, excelent.
[23:21:29] <kreign> gnoze5, and the opensolaris module isn't in loader.conf, either (well, it is - it was just commented out by the predecessor)
[23:21:37] <gnoze5> you just need a good sysadmin
[23:21:38] <kreign> gnoze5, yeah, I'm not the 'brave' one.
[23:21:53] <kreign> gnoze5, the previous guy was a 'good' sysadmin
[23:21:55] <gnoze5> in fact you need a very good sysadmin
[23:21:59] <kreign> gnoze5, that's me. :)
[23:22:06] <gnoze5> lolol
[23:22:06] <kreign> gnoze5, just tired of dealing with these cuckold systems.
[23:22:16] <kreign> "fuck it up and leave"
[23:22:24] <gnoze5> but 7.2 hm...
[23:22:33] <gnoze5> if you want i can direct you to a guru
[23:22:37] <gnoze5> im sure he can help
[23:22:56] <kreign> gnoze5, it'd be greatly appreciated. I'm not a fbsd guru, though I like to think I'm at least more competent than my predecessor. :P
[23:23:27] <kreign> been here a year and I've been doing nothing but going from one wtf to the next... really frustrating.
[23:23:45] <kreign> no idea how i've found the time to 'fix it' architectually to the degree that I have. :|
[23:31:27] <kdavy_> hmm this RES VDX thing actually works
[23:31:28] <kdavy_> kind of
[23:31:37] <kdavy_> it seems buggy and raw
[23:31:40] *** Grimdin2 has quit IRC
[23:31:51] <kreign> kdavywhat is it?
[23:33:03] <kdavy_> reverse seamless VDI tool
[23:33:05] *** Grimdin2 has joined #Citrix
[23:33:46] <kdavy_> it redirects your local application windows via a virtual channel in RDP or ICA so they appear in your full-screen published XenApp/XenDesktop session
[23:38:40] <kdavy_> only problem is, it doesnt reverse-redirect your drives
[23:39:12] <kreign> heh
[23:39:15] <kdavy_> it's kinda like a circlejerk actually
[23:39:22] <kreign> yeah
[23:39:28] <kreign> that sounds almost useless. :|
[23:39:31] <kdavy_> indeed
[23:39:41] <kdavy_> they want $15/user for it
[23:39:44] <kreign> rdp has done that since, when?
[23:39:51] <kreign> 2000?
[23:40:08] <kreign> kinda a basic requirement imo
[23:40:09] <kdavy_> kreign, no, this is the opposite
[23:40:12] <kreign> gnoze5, you disappear on me?
[23:40:17] <kdavy_> it's REVERSE SEAMLESS
[23:40:37] <kreign> kdavy_, right, so your local app shows up on the server. but making your local drive available on the server?
[23:40:41] <kreign> that's been around forever.
[23:40:57] <kdavy_> kreign, no, the local drive still appears
[23:41:50] <kdavy_> what i've been complaining is, if you're running local apps seamlessly in a remote session, they're kinda useless if they don't have access to the drives on the server itself (network drives or local server drive)
[23:42:07] <kdavy_> or at least that option would be nice
[23:42:16] <gnoze5> kreign still here, trying to wake the guy up lol
[23:47:40] <kreign> kdavyohhhh
[23:47:43] <kreign> yeah
[23:47:50] <kreign> kinda... sorta... what's the point w/o that?
[23:48:03] <kreign> on the other hand the whole idea seems a bit silly in general
[23:48:16] <kreign> anything you're running locally isn't gong to probably run as well remotely
[23:48:28] <Tenju> The Onion has gotten into Virtualization kinda
[23:51:39] <kdavy_> true, but some apps are a pain to get to even run remotely
[23:52:01] <kdavy_> like ones that use LPT license keys for example
[23:52:54] *** Tenju has quit IRC
[23:53:40] <kreign> kdavy_ yeah. yet another indication that windows isn't yet a real server OS. :)
[23:53:50] <kdavy_> or graphic intensive ones. or ones that *cough* Adobe CS *cough* prohibit running them on a terminal server
[23:55:17] <kreign> gnoze5, don't suppose you know if it's possible for me to override /boot/defaults/loader.conf in any way... even if i do an 'unload ; load kernel ; boot' it still keeps pulling the modules listed in that file, even if there's no indication in the loader's 'lsmod' that they will be.
[23:55:21] <gladier> cad software... autocad in particular
[23:55:27] <gladier> i foudn to be rather nasty
[23:55:30] <kdavy_> gladier: that too
[23:55:54] <kdavy_> gladier: have you played with RemoteFX for MS RDS Session Hosts yet?
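On kreign's loader question: /boot/defaults/loader.conf is read first and isn't meant to be edited; per-variable overrides belong in /boot/loader.conf, which the loader reads afterwards, and if memory serves for 7.x-era FreeBSD a variable can also be overridden one-off at the loader prompt before booting. A sketch; the module name is only an example:

```shell
# In /boot/loader.conf (overrides /boot/defaults/loader.conf,
# variable by variable), e.g. to stop a module from loading at boot:
#   zfs_load="NO"

# Or interactively, at the "OK" loader prompt:
#   set zfs_load="NO"
#   boot
```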
[23:56:00] <kdavy_> in R2 SP1
[23:56:29] <gladier> i dont get time to play with new stuff anymore :(
[23:56:54] <kdavy_> aww
[23:57:06] <gladier> i still haven't had time to touch xa6 properly
[23:57:15] <kdavy_> i've built a new management server on R2 SP1, and using it with RemoteFX now. it's neat
[23:57:46] <gladier> is it a equivelent for HDX? or not up to scratch yet
[23:57:55] <kdavy_> i can almost play flash games in it :) and VNC works much better in vSphere client or XenServer over it
[23:58:23] <kdavy_> no, it's an equivalent for PCoIP - does full screen video capture and encoding instead of operating with GDI sprites