February 14, 2011  

[00:02:12] *** smemp has joined #Citrix
[00:09:21] *** smemp has quit IRC
[00:47:15] *** OmNomSequitur has quit IRC
[00:52:48] *** joshii has joined #Citrix
[00:55:05] <joshii> Hello, we have the same problem with Citrix XenServer http://forums.citrix.com/thread.jspa?threadID=282016&tstart=0 , can you help me please?
[00:59:21] <kdavy> joshii: try this: http://support.citrix.com/article/CTX126986
[01:05:52] <joshii> thx, but when I use xe sr-scan, or rescan in XenCenter, the system removes all my recovered LVs
[01:08:39] <kdavy> joshii: i didnt understand what you just said
[01:09:58] <joshii> ok wait pls
[01:10:13] <joshii> http://forums.citrix.com/message.jspa?messageID=1520700 this
[01:11:02] <kdavy> hm... no idea
[01:11:11] <kdavy> i havent had to recover storage before
[01:11:20] <joshii> :-)
[01:11:59] <joshii> ok, thanks
[01:16:26] *** gladier has quit IRC
[01:29:42] *** gladier has joined #Citrix
[01:44:41] <waynerr__> joshii, first do a backup of your iscsi storage
[01:45:04] <joshii> yes i have.
[01:45:09] <waynerr__> depending on what you're using there should be tools for this
[01:46:12] <waynerr__> then why should the steps from the post be risky ? you can just restore your iscsi storage to the point before you started with that
[01:46:48] <waynerr__> at least it would be a try, don't you think ?
[01:47:54] <waynerr__> i dunno how you can access your iscsi storage, probably you can copy around the lvm devices there even
[01:50:01] <joshii> i first post slovet , maybe but this is the problem http://forums.citrix.com/message.jspa?messageID=1520700
[01:54:19] <waynerr__> i think we really have problems understanding each other
[01:54:34] <waynerr__> you can see your iscsi storage in xencenter or not ?
[01:56:14] <joshii> yes i see it
[01:57:06] <waynerr__> and you miss a vdi on it, i was totally wrong then before :p
[01:57:20] <waynerr__> i thought you cant see your iscsi storage anymore in xencenter
[01:58:06] <waynerr__> and you deleted the single virtual disks on them in xencenter ?
[01:59:30] <joshii> yes, and i lose /etc/lvm/backup
[02:00:28] <waynerr__> i can just make a try and see if the lvm device gets destroyed on the iscsi storage ( i have a testlab here so dont worry :p )
[02:00:38] <waynerr__> but i think so
[02:01:44] <waynerr__> but it's already utterly stupid that i can delete a disk that is attached to a vm ...
[02:05:52] <waynerr__> the devices on the iscsi storage site get destroyed, i have no idea how to restore them really ^^
[02:06:55] <waynerr__> xen creates a volumegroup from the iscsi lun you use and uses logical volumes for the vms inside of it
[02:07:11] <waynerr__> and the logical volumes get removed
[02:07:25] <waynerr__> when you delete a virtual disk in xencenter
[02:09:18] <waynerr__> http://pastebin.com/Re5KsurB
[02:09:36] <waynerr__> http://pastebin.com/u59JyRfg
[02:12:06] <joshii> yes that is very stupid
[02:12:16] <joshii> now i restored LVM
[02:12:32] <waynerr__> i will now try to restore the lvm on the software iscsi server
[02:12:33] <joshii> i found good seqno
[02:12:42] <waynerr__> how you restored it ?
[02:13:25] <joshii> i got the LVM metadata offsets with dd
[02:13:53] <joshii> and i found the best backup
[02:15:16] <joshii> but now i have a problem: when I use xe sr-scan uuid... i find only 3 partitions
[02:15:39] <joshii> the others i see only with lvscan / lvdisplay
[02:20:58] <joshii> i know, my english is very bad :-(
[02:22:46] <waynerr__> try a rescan in xencenter on the storage tab of the iscsi storage would be my only idea atm
[02:23:40] <joshii> that is same xe sr-scan uuid...
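The recovery joshii describes — dumping the start of the physical volume with dd, picking the metadata copy with the best seqno, and restoring it — might look roughly like the sketch below. Device name, VG name, and sizes are hypothetical; LVM keeps plain-text metadata copies near the start of each PV, which is what makes this possible after /etc/lvm/backup is lost.

```shell
# Dump the first MB of the PV; LVM's text-format metadata copies
# live near the start of the device.
dd if=/dev/sdb of=/tmp/pv-header.bin bs=512 count=2048

# Inspect the dump and pick the metadata copy with the best seqno.
strings /tmp/pv-header.bin | grep -B2 -A2 'seqno'

# Paste the chosen metadata into a file (/tmp/vg-restore.conf),
# then restore it and reactivate the volume group:
vgcfgrestore -f /tmp/vg-restore.conf VG_XenStorage-<uuid>
vgchange -ay VG_XenStorage-<uuid>
```

Note that joshii's follow-up problem still applies: xe sr-scan only surfaces VDIs whose metadata XenServer itself recognizes, so LVs visible to lvscan may not all reappear in XenCenter.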
[02:35:01] *** Meson has joined #Citrix
[02:50:19] *** jamesd2 has joined #Citrix
[03:05:22] *** smemp has joined #Citrix
[03:13:15] *** smemp has quit IRC
[03:17:49] *** smemp has joined #Citrix
[03:29:03] *** smemp has quit IRC
[04:16:07] *** lesrar has joined #Citrix
[04:18:26] *** waynerr__ has quit IRC
[04:18:52] *** waynerr__ has joined #Citrix
[04:22:33] *** lesrar has quit IRC
[05:18:26] *** lesrar has joined #Citrix
[05:22:11] *** waynerr__ has quit IRC
[05:35:49] <joshii> hello, i have a big problem, local LVM is crashed http://forums.citrix.com/thread.jspa?threadID=282145&tstart=0
[06:10:30] *** waynerr__ has joined #Citrix
[06:12:02] *** lesrar has quit IRC
[06:45:13] *** lesrar has joined #Citrix
[06:47:59] *** waynerr__ has quit IRC
[07:23:51] *** jamesd2 has quit IRC
[07:35:12] *** _bradk has quit IRC
[07:47:50] *** waynerr__ has joined #Citrix
[07:49:36] *** lesrar has quit IRC
[08:19:22] *** lesrar has joined #Citrix
[08:22:33] *** waynerr__ has quit IRC
[08:53:57] *** Patric has joined #Citrix
[09:57:14] *** echelog-2` has joined #Citrix
[10:09:54] *** Trixboxer has joined #Citrix
[10:36:08] *** tang^ has quit IRC
[12:06:43] *** jamesd2 has joined #Citrix
[13:12:07] *** denon_ has joined #Citrix
[13:21:32] *** Patric has quit IRC
[13:47:40] *** MSilva01 has joined #Citrix
[13:48:05] *** kprojects has joined #Citrix
[14:08:11] *** Faithful has joined #Citrix
[14:47:40] *** cathederal_ has joined #Citrix
[14:58:41] *** Gio^ has quit IRC
[14:59:46] *** Faithful has quit IRC
[15:21:06] *** denon_ is now known as denon
[15:21:06] *** denon has joined #Citrix
[15:28:58] <tabularasa> morning peeps
[15:34:25] <Meson> Morning
[15:35:05] *** Jenius has quit IRC
[15:35:43] <tabularasa> Missed you last week... sucks
[15:35:59] <Meson> Yeah. I was down with a bad cold last week.
[15:40:26] <tabularasa> Yeah, makson told me.  Sorry man
[15:50:39] *** guest___ has joined #Citrix
[15:52:17] *** guest___ has quit IRC
[15:57:56] *** lloyja01 has joined #Citrix
[15:59:33] *** lloyja01 has quit IRC
[15:59:57] *** guest2 has joined #Citrix
[16:00:17] *** guest2 has quit IRC
[16:04:02] *** guest2 has joined #Citrix
[16:04:35] *** guest2 has quit IRC
[16:05:22] *** guest2 has joined #Citrix
[16:05:38] *** guest2 has quit IRC
[16:20:19] <gladier> evening folks
[16:20:35] *** gazzo has quit IRC
[16:20:56] *** Jenius has joined #Citrix
[16:22:45] *** Jenius has left #Citrix
[16:23:46] <gladier> i may or may not be slightly loopy
[16:23:53] *** tang^ has joined #Citrix
[16:25:56] *** gazzo has joined #Citrix
[16:33:21] *** blood has joined #Citrix
[16:33:52] <blood> Having issues getting XenTools working with CentOS 5.5. After I install XenTools and reboot XenCenter still says "Tools not installed". Any ideas?
[16:51:26] *** rev78 has joined #Citrix
[16:57:49] *** deshantm has quit IRC
[17:01:24] <kreignj> morning folks
[17:01:36] <kreignj> have a couple 'pool upgrade' questions for people
[17:03:35] <tabularasa> gladier: whats up?
[17:07:17] *** tom_wurm has joined #Citrix
[17:08:12] <kreignj> blood, are the tools showing installed on the guests?
[17:08:25] <kreignj> blood, I'd try completely removing the tools, rebooting guest, and then trying again.
[17:08:30] <kreignj> oh
[17:08:34] <kreignj> sorry, I misread what you'd said
[17:09:21] <blood> I even checked 'rpm -qa \*xen\*'
[17:09:25] <blood> it shows up
[17:09:30] <kreignj> blood, I suspect that whichever version of the centos kernel you've got does not have the xenserver/xen kernel modules. can't recall the exact name of the package, but you're looking to install the kernel image for Xen.
[17:09:32] <kreignj> oh
[17:09:33] <kreignj> hmm
[17:09:42] <blood> im using CentOS 5.5
[17:09:44] <kreignj> yeah
[17:09:49] <blood> i thought 5.4 included xen requirements
[17:09:50] <kreignj> I got that part moments ago ;)
[17:09:56] <blood> 5.4+
[17:10:05] *** deshantm has joined #Citrix
[17:10:47] <kreignj> blood, hmm well I've got a xs + cent 5.5 server setup here, I just can't check it for you right now.
[17:10:54] <kreignj> blood, but I'm pretty sure the tools 'installed'
[17:11:02] <blood> # rpm -qa \*xen\* xe-guest-utilities-xenstore-5.6.100-647 , # uname -r 2.6.18-194.32.1.el5
[17:11:07] <blood> that's what I have
[17:11:12] <kreignj> how'd you install?
[17:11:18] <blood> used the install.sh
[17:11:21] <blood> on the CD
[17:11:37] <kreignj> what's uname -a say?
[17:11:43] <blood> sec
[17:11:58] <blood>  2.6.18-194.32.1.el5 #1 SMP Wed Jan 5 17:52:25 EST 2011 x86_64 x86_64 x86_64 GNU/Linux
[17:12:10] <kreignj> hmm
[17:12:13] <kreignj> yeah that's not a xen kernel
[17:12:40] <blood> got a link for the xen kernel?
[17:12:43] <kreignj> 2.6.18-194.32.1.EL.xs5.5.0.43xenU
[17:12:45] <kreignj> no, not off hand
[17:13:03] <blood> ah
[17:13:07] <blood> so I need to upgrade to that
[17:13:13] <kreignj> something like it, sure
[17:13:24] <kreignj> I'm not sure how I built that one w/o digging up docs
[17:13:34] <kreignj> don't think I used 'the template'
[17:13:47] <blood> yea template wasnt used here either
[17:13:50] <blood> since it was a VM appliance
[17:14:46] <kreignj> huh
[17:14:53] <kreignj> blood, let me guess, you converted a vmware appliance?
[17:14:59] <blood> no
[17:15:11] <blood> booted from the ISO and let it install
[17:15:23] <kreignj> ah
[17:15:29] <kreignj> gotcha
[17:15:39] <blood> I may just install CentOS 5.5 fresh
[17:15:45] <blood> then install the software myself
[17:16:15] <blood> who knows what other issues this appliance may have due to custom changes
[17:16:50] <kreignj> yeah
[17:16:51] <kreignj> not a fan
[17:23:44] <blood> yea just called support and had them send me the instructions to just install the software on CentOS without using their prebuilt VM appliance
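For anyone hitting the same tools-not-installed symptom, the diagnosis kreignj walks blood through can be sketched as follows. The package name is the stock CentOS 5 one; the grub step is an assumption about a default install, not something from the log.

```shell
# A kernel without a "xen" suffix in its release string is not
# paravirt-aware, so the guest agent can never report in.
uname -r                  # e.g. 2.6.18-194.32.1.el5 -> not a xen kernel

# CentOS 5.x ships its Xen-enabled kernel as a separate package:
yum install -y kernel-xen

# Make the new kernel the default entry in /boot/grub/menu.lst,
# reboot, then re-run install.sh from the xs-tools ISO.
reboot
```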
[17:24:20] <cathederal> morning all
[17:25:57] <kreignj> hi
[17:26:02] <tabularasa> howdy
[17:27:06] <kreignj> tabularasa, I've got some XS hosts in a pool, which I need to upgrade and no longer want in a pool. you familiar with what might happen if I ran the installer on these hosts? ie would I get the 'upgrade' option, would it give me the option to save local + raw storage, etc.?
[17:27:16] <tabularasa> i know nothing of XS
[17:27:23] <kreignj> tabularasa, huh what are you doing in here? :P
[17:28:46] <tabularasa> don't make me ban you
[17:28:48] <tabularasa> :D
[17:29:08] <tabularasa> I started this channel before there WAS a XenServer.  :p
[17:29:34] <kreignj> hah
[17:29:44] <kreignj> tabularasa, so which citrix stuff do you use?
[17:29:59] <tabularasa> XenApp / XenDesktop
[17:30:19] <tabularasa> I started this channel around the XP FR3 days
[17:31:26] <kreignj> huh
[17:31:32] <kreignj> that was a while ago.
[17:31:59] <tabularasa> Yeah, i think i started it like 8 years ago or something
[17:32:26] <tabularasa> though, i only registered it 3 years ago
[17:32:39] <tabularasa> .. /msg Chanserv info #Citrix
[17:34:16] <rev78> if you no longer want them in the pool i would recommend doing an export of all your VMs on each host prior.
[17:35:42] <rev78> you might be able to upgrade them no problem, but i'm pretty sure adding to a pool and removing from a pool destroys the data. i had to export all my vms last week when i combined 3 hosts into an existing pool, then re-import them once the hosts were brought in
[17:36:16] <rev78> i believe it even details that in the message you receive when you try to add a machine to a pool or pull one out.
[17:44:41] <kreignj> rev78, thanks.
[17:44:57] <kreignj> rev78, i'm looking for accurate info on shared storage in XS, only finding the KB articles from 2008 :|
[18:05:41] <kreignj> anyone know of a matrix of 'features supported' under the different licensing options for XS?
[18:07:06] <kreignj> trying to compare/contrast what's needed for shared storage
[18:09:01] *** joshii has left #Citrix
[18:20:42] <rev78> what are you looking at? i'm extremely green but have had a large chunk of learning experience over the past week
[18:21:14] <rev78> i have shared storage running as well as direct attached
[18:22:22] <kreignj> rev78, just wondering what the limits are wrt shared storage + migration + etc. in xenserver free. the docs are all seemingly a bit out of date/ conflicting data between versions.
[18:22:56] <rev78> agreed, i've been playing around to see what i can do and what has caused problems. i don't have a whole lot of documentation i went off of
[18:23:27] <rev78> i know one of the nice things with the shared storage in a pool is the easy migration of vms from one host to another
[18:24:38] <rev78> as far as limitations i think it depends on the connection you have going to the shared storage and how many hosts you're planning on running per chunk of luns
[18:24:55] <kreignj> rev78, right, was thinking in terms of licensing.
[18:25:33] <kreignj> rev78, you using the free version?
[18:25:35] <rev78> oh i never found any constraints, the licensing for the paid version is mostly for things like thin-provisioning of ram and storage
[18:25:36] <rev78> yes
[18:25:52] <kreignj> gotcha.
[18:27:00] <rev78> on my shared storage model i have 4 blades connecting via 10gb iscsi to an equallogic 6510 sharing storage right now for just 4 xs hosts with a  total of 10 VMs but i'm adding more storage to split other VMs off onto
[18:27:11] <rev78> and using jumbo frames
[18:27:19] <kreignj> rev78, have you messed w/ the performance of iscsi vs. nfs?
[18:27:47] *** Gio^ has joined #Citrix
[18:28:01] <rev78> not a lot, i kind of snuck the 10gb into the deal in order to slip it by the higher ups because we seem to always fail at qualifying for the hardware otherwise.
[18:28:36] <rev78> so the 10gb fabric for the blade chassis was "included"
[18:28:44] <rev78> if you know what i mean
[18:29:14] <kreignj> yeah, I can relate
[18:30:30] <rev78> we have an older emc clarion system that has older hardware attached to it for 1 box, versus 1GB iscsi to multiple hosts though and honestly we're happy with iscsi
[18:30:55] <rev78> but it's not in use for this environment, only for our old email archiving system
[18:31:01] <jduggan> in my tests NFS was faster on the same hardware, but i think thats appliance specific
[18:31:15] <jduggan> there are some really shit iscsi implementations out there, specifically if theyre based on linux
[18:31:18] <jduggan> :P
[18:31:22] <rev78> lol
[18:32:11] <Trixboxer> iSCSI is much better than NFS.. NFS is strongly discouraged for a 15+ VM cloud
[18:32:38] <Trixboxer> non idle VM*
[18:32:47] <jduggan> i think it's pretty much swings and roundabouts - i have about 40 vms on NFS without issue at the moment
[18:33:37] <kreignj> jduggan, swings and roundabouts? those are technical terms I'm not familiar with.
[18:33:48] <rev78> lol
[18:34:01] <jduggan> i like being able to do filesystem snapshots on my SAN and importing the snapshotted vhd into windows to pull out old data if needed, you dont get that with iscsi - youd have to have inhost backups or revert to snapshot or export a snapshot with iscsi
[18:34:06] <tabularasa> 6 of one.. half a dozen of the other
[18:34:12] <rev78> i think he just means it depends on the implementation and how it works for each
[18:34:14] <jduggan> kreignj: its a brit term i think, it means what you win with one you lose with another
[18:34:17] <jduggan> each to their own
[18:35:39] <kreignj> jduggan, what's your backend storage?
[18:36:15] <kreignj> jduggan, trying to find the best 'shared storage' for flexibility and it's pretty much either iscsi or NFS...
[18:36:32] <Trixboxer> I had 15 (heavy usage) VMs but once the storage got stuck and all the VDI's lost over NFS... iSCSI running quite stable.. and yeah I miss my NFS folder backup and now have to maintain snapshots
[18:36:51] <kreignj> I'm confident that NFS is probably a better option in some regards, but I'm not comfortable with the misc. online "iscsi is better"
[18:37:10] <jduggan> its basically a linux box with adaptec card, ssd expansion card for cache and 12x 2tb disks in raid 6 with hot spare
[18:37:48] <jduggan> i max a gig for sequential read/write and havent yet hit problems with iops but im willing to accept that i probably will
[18:38:11] <kreignj> jduggan, ah.
[18:38:12] <jduggan> i'll hit iops problems before i fill space, but then i'll just add another san - cheap storage, performance is fine
[18:38:24] <kreignj> yeah.
[18:38:34] <jduggan> san/nas
[18:39:03] <jduggan> i then snapshot replicate nightly to an onsite backup which also offsite backups nightly
[18:39:47] <rev78> so you did all that with linux?
[18:40:03] <rev78> i envy you, i am willing to bet you spent a lot less than me :(
[18:40:25] <kreignj> jduggan, huh you doing zfs or something else?
[18:40:38] <jduggan> yea lvm volumes with XFS filesystem, use lvm volume for snapshot and mount snapshot+rsync nightly
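jduggan's nightly snapshot+rsync cycle might look roughly like this. VG/LV names, sizes, mount points, and the backup host are all hypothetical; the nouuid mount option is needed because an XFS snapshot carries the origin filesystem's UUID.

```shell
# Snapshot the live LV backing the NFS export; reserve enough
# copy-on-write space for writes during the backup window.
lvcreate -s -L 20G -n nfs_snap /dev/vg_san/nfs_store

# Mount the snapshot read-only (nouuid avoids the duplicate-UUID error).
mkdir -p /mnt/snap
mount -o ro,nouuid /dev/vg_san/nfs_snap /mnt/snap

# Ship a crash-consistent copy to the onsite backup box, then clean up.
rsync -a /mnt/snap/ backuphost:/backups/nfs_store/
umount /mnt/snap
lvremove -f /dev/vg_san/nfs_snap
```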
[18:40:42] <kreignj> ah
[18:40:44] <jduggan> kreignj: zfs is next project
[18:40:50] <kreignj> jduggan, I've got that working, fwiw
[18:40:54] *** scsinutz has joined #Citrix
[18:41:12] <jduggan> i want to look at saving the money we spend on adaptec card with ssd expansion and just do it with zfs...
[18:41:16] <kreignj> jduggan, I've got a couple boxes with a raidz1 pool + zil + virtualbox
[18:41:37] <kreignj> jduggan, which, IMO, beats the hell out of XenServer in terms of cost-to-scale at the low end
[18:41:56] <jduggan> the solution i work with bonnie++ sequential reads are like 700MB/s with writes of over 500MB/s
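Throughput numbers like those are typically produced with a run along these lines (mount point hypothetical; the -s size should be well above RAM so the page cache can't absorb the test):

```shell
# Sequential read/write throughput of the array, skipping the
# small-file creation phase (-n 0); run as an unprivileged user.
bonnie++ -d /mnt/san-test -s 32g -n 0 -u nobody
```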
[18:42:35] <kreignj> jduggan, (fwiw that's zfs on linux... )
[18:42:50] <kreignj> perf is ~ BSDs, a little better
[18:42:54] <jduggan> kreignj: oh right - interesting, any reason you didnt just do osol or fbsd?
[18:44:09] <kreignj> jduggan, 1) osol is dead, 2) nexenta package management/change control isn't as mature as any one linux distro/virtualbox isn't natively packaged and I didn't want to mess with that archaic toolset 3) I hate freebsd; I've had very hit/miss stability with it.
[18:44:51] <jduggan> osol is dead? :)
[18:44:58] <kreignj> jduggan, well, it's old.
[18:45:00] <kreignj> ;P
[18:45:11] <jduggan> it has the latest zfs implementation
[18:45:32] <jduggan> im just eager for btrfs to get in a useable state
[18:45:41] <kreignj> jduggan, the single-sentence answer is "as a platform for what I'm trying to do, osol and osol derived are significantly more time- and resource- intensive to implement"
[18:45:54] <jduggan> ive seen some benchmarks and looks like it'll be sweet once the features and stability are there
[18:46:35] <kreignj> jduggan, eh the kqinfotech zfs implementation has a fairly recent and complete zfs implementation - much better than freebsds.
[18:48:27] <kreignj> jduggan, when/if virtualbox ever gets something similar to xencenter, I suspect it'll take off in a major way.
[18:48:48] <kreignj> jduggan, it's much more adaptable than xenserver, with a better native command interface.
[18:50:17] <jduggan> i'll take a look at it
[18:50:48] <kreignj> jduggan, shared storage should work on non-pooled VM hosts, correct?
[18:51:16] <kreignj> jduggan, aside from 'manual management' there shouldn't be any 'gotchas'?
[18:51:29] <kreignj> (eg. not starting it on multiple hosts)
[18:51:43] <jduggan> yep, it just creates its own SR on the NFS mount
[18:52:06] <jduggan> so you have /path/to/NFS/UUID-OF-POOL-SR/
[18:52:22] <jduggan> then you have /path/to/NFS/UUID-OF-VM-HOST-SR/
[18:52:28] <jduggan> it just creates its own folder
[18:52:30] <kreignj> jduggan, so how do you convert those over to VHDs?
[18:52:33] <jduggan> folder/directory
[18:52:37] <kreignj> or are those VHDs?
[18:52:46] <jduggan> they are vhds within the SR
[18:52:51] <kreignj> ah ok.
[18:52:52] <kreignj> cool.
[18:53:03] <kreignj> only caveat there sounds like it'd be on something like ZFS
[18:53:03] <jduggan> you have a unique SR per pool or per host, you can have multiple SR's on a single NFS share
[18:53:26] <jduggan> or you can just create multiple nfs shares one per SR
[18:53:33] <jduggan> makes no difference really
[18:53:48] <kreignj> jduggan, what about moving an SR/VMs from one host to another?
[18:53:59] <kreignj> just remove from the original and add ?
[18:54:15] <jduggan> well there's two main ways
[18:54:23] <jduggan> export to .xva and import on new host...
[18:54:25] <kreignj> just trying to figure out how that'd work in a non-pooled situation.
[18:54:27] <jduggan> or do what i do :)
[18:54:49] <jduggan> stop the VM, rsync/copy the VHD to your new hosts storage
[18:54:50] <kreignj> copy the snapshot + start on new host?
[18:54:57] <kreignj> ah
[18:54:59] <jduggan> create identical vm
[18:55:02] <kreignj> cool.
[18:55:03] <jduggan> attach the virtual disk (VHD)
[18:55:10] <jduggan> its much quicker than the export process
[18:55:13] <kreignj> right
[18:55:22] <jduggan> ive never had issues doing it this way
[18:55:26] <kreignj> pigeons with magnets and specialized training are faster than export/import.
[18:55:46] <jduggan> yea
[18:56:08] <jduggan> thats something they need to work on...
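jduggan's copy-the-VHD move, end to end, might be sketched like this. All UUIDs and the hostname are hypothetical placeholders; /var/run/sr-mount/<sr-uuid> is where XenServer mounts NFS SRs, which is what makes the VHDs directly reachable.

```shell
# 1. Stop the VM so the VHD is quiescent.
xe vm-shutdown uuid=<vm-uuid>

# 2. Copy the VHD straight out of the source host's NFS SR mount
#    into the destination host's SR directory.
rsync -av /var/run/sr-mount/<src-sr-uuid>/<vdi-uuid>.vhd \
      desthost:/var/run/sr-mount/<dst-sr-uuid>/

# 3. On the destination: rescan so the copied VHD appears as a VDI,
#    create an identically configured VM, and attach the disk.
xe sr-scan uuid=<dst-sr-uuid>
xe vbd-create vm-uuid=<new-vm-uuid> vdi-uuid=<vdi-uuid> \
      device=0 bootable=true
xe vm-start uuid=<new-vm-uuid>
```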
[18:57:13] <kreignj> jduggan, ever do local -> NFS VHD storage moves?
[18:58:52] <jduggan> kreignj: nope, the local storage i did have were all lvm based, i never converted local into ext3
[18:59:40] <kreignj> I'm confused.
[19:00:04] <kreignj> 'into ext3' -> SR based VHDs on NFS?
[19:01:23] <jduggan> sorry - in xenserver, 'local storage' on the physical host uses LVM underneath, so you dont get access to the VHD - means i cant just rsync them, so your only option is either export and import or use the 'vm move' or 'vm copy' option which is a bit slower
[19:02:06] <jduggan> on command line its xe vm-copy
[19:02:21] *** OmNomSequitur has joined #Citrix
[19:02:41] <jduggan> which is a bit slower than just rsync, i mean :)
[19:05:56] <kreignj> jduggan, ahh yes.
[19:06:08] <kreignj> jduggan, yeah, I am not a fan of LVM, personally.
[19:06:42] <kreignj> particularly when used in conjunction with xenserver
[19:06:50] <kreignj> but in general I've not liked it so much.
[19:07:32] <jduggan> it has useful features... some downsides
[19:07:37] <jduggan> again - swings and roundabouts :)
[19:10:03] <kreignj> yeah
[19:10:25] <kreignj> i suppose that terminology wouldn't exist if it wasn't for poor british vehicle engineering :P
[19:17:02] <jduggan> poor? have you looked at american cars? :|
[19:17:09] <rev78> lol
[19:17:44] <rev78> can't knock ford a whole lot, them teaming with ms on sync has put the japanese in catchup mode
[19:18:01] <rev78> chrysler just might be gone though, they can't do anything but bleed financially
[19:18:43] <kreignj> jduggan, thinking 50+ years ago.
[19:19:02] <kreignj> but yeah
[19:19:06] <kreignj> US autos are in a sad state.
[19:19:17] <kreignj> I'm honestly not liking much of anything produced modernly, tbh
[19:19:36] <jduggan> buy german
[19:19:37] <jduggan> :)
[19:19:47] <kreignj> eh
[19:20:26] <kreignj> I'd rather have a minimalist, functional vehicle (eg. no 'extras' which are now 'standard') with fewer parts to break than have a 5-10-year old vehicle with a dozen 'little problems'
[19:20:36] <jduggan> heh
[19:20:42] <jduggan> i used to say the same thing
[19:20:45] <kreignj> I wish BMW made small pickups in the '70s :P
[19:20:49] <jduggan> would prefer an old carb engine
[19:20:54] <jduggan> with no electronics
[19:20:59] <jduggan> but these days...
[19:21:00] <kreignj> jduggan, used to?
[19:21:03] <rev78> agreed
[19:21:12] <kreignj> jduggan, what makes 'these days' special?
[19:21:23] <jduggan> electronic keys... electronic handbrakes
[19:21:29] <jduggan> everything is electric and breaks
[19:21:40] <jduggan> :)
[19:22:17] <jduggan> i have a vw and if i didnt have warranty on it id be skint with all the little things ive had to take it in for... latest one is airbag fault displaying on dash
[19:22:21] <jduggan> atleast i dont pay for it :)
[19:30:04] *** draygo has quit IRC
[19:30:24] *** draygo has joined #Citrix
[19:31:19] *** draygo has joined #Citrix
[19:31:30] <kreignj> jduggan, no, I mean, why did you "used to" say the same thing?
[19:32:03] <kreignj> jduggan, the newest vehicle I've owned is a 2000 model year, which has had more problems than any other vehicle i've owned.
[19:32:34] <kreignj> jduggan, little electronic things...thankfully it's the 'base' model of the 'economy' car (ford focus) so there isn't much in that regard.
[19:33:08] * kreignj drives an '84 diesel van.. MFI baby
[19:33:37] <kreignj> kdavy, the only thing it's needed done to it in the past 5 years is brakes + filters + oil
[19:34:31] <jduggan> kreignj: i used to say the same thing until it became hard to not buy anything else - and despite the faults, they do make safer and more comfortable driving
[19:38:02] <kreignj> jduggan, true. :|
[19:38:15] <kreignj> jduggan, california is bleeding classic vehicles in good condition right now...
[19:38:34] <kreignj> jduggan, but being in the UK I suspect you don't have that 'problem'
[19:38:39] <kreignj> all your stuff is probably rusted :)
[19:47:17] *** Elias_Rus has joined #Citrix
[19:52:38] <tabularasa> How many users do you think you could get on a Citrix server that just uses IE ?
[19:52:46] <tabularasa> think i could get 50 on a system with 8 gigs of RAM ?
[19:52:56] <tabularasa> maybe 50 with 16 gigs of ram ?
[19:53:51] <ele> depends on what they are using
[19:54:10] <tabularasa> you mean, inside of IE ?
[19:54:22] <tabularasa> just some intranet web app
[19:54:35] <tabularasa> New World Systems, Logos...  its a commercial web app
[19:54:51] *** kaffien has joined #Citrix
[19:55:05] <kaffien> is there a seperate channel for xenserver?
[19:55:05] <ele> i'd test it basically :)
[19:57:15] <Ownage> kaffien: what do you want to know
[19:57:42] <kaffien> Can you make a VM  or a ISO library on local storage?
[19:57:52] <Ownage> yes
[19:57:57] <Ownage> both
[19:58:05] <kaffien> Im having issues figuring it out
[19:58:13] <kaffien> all the options seem to point at nfs
[19:58:17] <kaffien> or cifs
[19:58:49] <rev78> i haven't been able to do local iso storage personally
[19:58:49] <Ownage> you will do it from the cli
[19:58:59] <rev78> oh that would be why
[19:58:59] <Ownage> I've successfully done both
[19:59:01] <Ownage> http://greg.porter.name/wiki/HowTo:XenServer#Add_a_new_storage_repository_on_local_disk
[19:59:08] <Ownage> that shows you the basic idea there
[19:59:17] <Ownage> xe sr-create is what you want
[19:59:18] <rev78> thanks for that one
[19:59:37] <Ownage> you can also find extensive docs in the pdfs available free at citrix.com
[19:59:54] <Ownage> long story short the xencenter interface doesn't give you all the available options
[20:00:01] <Ownage> only the most common really
[20:00:39] <kaffien> ahhh ok  nm i was going about it all wrong my issue was the iso library
[20:01:00] <kaffien> if i get past that i can put the server on local storage no problemo
[20:01:09] <Ownage> you can also mount yourself of course with nfs/cifs but this is relatively pointless
[20:02:54] <kaffien> yeah i just wanted a local storage area to put some iso's up
[20:11:37] <tabularasa> you can't do local ISO storage.. you have to make VM, on local storage, and then share it out over CIFS or what not
[20:11:40] <tabularasa> annyoing...
[20:13:24] *** smemp has joined #Citrix
[20:15:23] <draygo> not really
[20:15:30] <draygo> you can created a local iso sr
[20:15:38] <draygo> there just isn't much space on dom0 to do it
[20:15:46] <tabularasa> draygo: how?
[20:15:55] <tabularasa> cli command?
[20:15:59] <draygo> yep
[20:16:03] <tabularasa> gotcha
[20:16:09] <draygo> there was an old article for 4.1 that showed you how to do it
[20:16:15] <draygo> but it still applies to 5.x
[20:16:17] *** echelog-2` is now known as echelog-2
[20:17:00] <draygo> might just be able to use xe-mount-iso-sr
[20:17:05] <draygo> with the local path in dom0 it looks like
[20:18:40] <draygo> http://www.tillett.info/2009/09/23/adding-iso-repository-under-xenserver-5-5/
[20:18:50] <draygo> that article explains it in more detail
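The CLI route draygo is pointing at boils down to one xe sr-create call against a local directory (the path is hypothetical; legacy_mode tells the ISO SR driver to read a plain local directory instead of a CIFS/NFS share):

```shell
# Carve out a directory for the ISOs, then register it as an ISO SR.
mkdir -p /opt/iso_storage
xe sr-create name-label="Local ISOs" type=iso content-type=iso \
    device-config:location=/opt/iso_storage \
    device-config:legacy_mode=true
```

As draygo notes, dom0's root partition is small, so this only makes sense if the directory sits on storage with real free space.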
[20:23:37] <kaffien> can't seem to make another sr via the prompt says its in use by the hose
[20:23:40] <kaffien> host even
[20:24:29] <draygo> a new sr?
[20:24:35] <draygo> or are you re-using the same uuid?
[20:32:12] *** kdavy_ has joined #Citrix
[20:32:20] <kdavy_> afternoon all
[20:33:04] <kaffien> you also have to have more room for that repository
[20:33:29] <kaffien> i remade my local storage partition and made it smaller. now there can be a /dev/sda4 for ISO's
[20:34:04] <kaffien> It seems rather silly they left this feature out  ... especially in the free server xencenter version
[20:36:20] *** MSilva01 has quit IRC
[20:38:03] <draygo> kaffien: the reason for this is that nothing should be running in dom0 unless it absolutely has to
[20:38:56] <kaffien> well it has to ...  not all small businesses have a NFS server kicking around.
[20:39:24] <draygo> most should have some kind of fileserver though...even a windows one
[20:39:24] <kaffien> well i do but its currently out of commission. waiting on a raid expander.
[20:39:31] <draygo> there is support for CIFS iso
[20:39:45] <Ownage> this is a bit ironic I think
[20:39:46] <kaffien> i suppose that would be a viable alternative.
[20:39:52] <Ownage> if you have xenserver, you have an nfs server
[20:39:57] <Ownage> spin up a vm if you want to
[20:40:04] <Ownage> and have it serve via nfs
[20:40:09] <draygo> Ownage: i've never liked doing that
[20:40:10] <kaffien> lol
[20:40:31] <draygo> you end up getting errors like "substrate not available" if you start a vm with an iso loaded and your iso vm is offline
[20:40:32] <kaffien> its monday nothing above grade 3 math   please.
[20:40:39] *** krisnijs has joined #Citrix
[20:40:59] <kaffien> you know ..... my dns-323 does support nfs .. hrrrrm
[20:41:02] <Ownage> who cares? you have problems if you can't get around that
[20:41:13] <Ownage> if you can't figure out to start up your vm server
[20:41:18] <Ownage> or click off that iso
[20:41:21] <draygo> true, but the error it throws out is cryptic at best
[20:41:26] <Ownage> you've got HUGE problems in your office
[20:43:03] *** krisnijs has quit IRC
[20:44:27] <kaffien> well it didnt like that
[20:44:38] <kaffien> storage won't replug haha
[20:45:29] <Ownage> pastebin your session
[20:45:53] <Ownage> it's not something that 'doesnt work', it works for sure. so you're missing something most likely
[20:48:19] <kaffien> i deleted the partition /dev/sda3   and created  /dev/sda3  and sda4,  wrote to disk reboot xenserver.   mkfs.ext3  for each of them reboot again.
[20:50:32] <kaffien> using storage repair gives me the message logical volume mount /activate error.
[20:51:05] <kaffien> ah well, lesson learned .. nuking and using CIFS / NFS
[20:51:47] *** Elias_Rus has quit IRC
[20:52:54] <Ownage> what the hell?
[20:53:00] <Ownage> why are you rebooting xenserver
[20:53:14] <jduggan> i actually serve NFS via a virtual machine
[20:53:16] <jduggan> :S
[20:53:21] <jduggan> er, isos on NFS
[20:55:49] <kaffien> Ownage: because it specified i must do so for the newly written partition table to be accessible.
[20:57:53] <Ownage> hard to tell without the pastebin
[20:57:59] <Ownage> but sounds like you're mucking about
[20:58:45] <tabularasa> kaffien: i've seen you in another channel... ##windows-server ?
[20:59:08] <kaffien> i'm currently there
[20:59:18] <tabularasa> i just recall talking to you before
[20:59:20] <kaffien> Ownage:  you are correct I'm mucking about
[20:59:30] <kaffien> but what exactly should i be pasting.
[20:59:40] <kaffien> the logs only tell me the drive failed to plugin
[21:00:01] <kaffien> no error numbers etc
[21:01:26] <Ownage> your session
[21:01:30] <Ownage> all the commands, your history
[21:01:32] <Ownage> what you are doing
[21:03:01] <Ownage> your fdisk -l
[21:03:04] <Ownage> your xe sr-list
[21:03:07] <Ownage> DATA
[21:03:12] <kaffien> i just told you   what i did.   i fdisk 'd  the drive  with local storage,  removed the local storage partition.   recreated it smaller and added a 4th.
[21:04:02] <Ownage> why wouldn't you just resize the lvm and add another
[21:04:37] <kaffien> makes sense but i do not know how to do that.  if i did I would have.  and that is what I will try this time.
[21:04:54] <kaffien> i find it odd the install doesn't have an option to choose how much of the drive to use.
[21:05:07] <Ownage> it assumes you want all of it, which is the supported configuration
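Ownage's "just resize the lvm" suggestion, in sketch form: the local SR is an LVM volume group, so if the VG has (or can be given) free extents, a new logical volume for ISOs avoids repartitioning the disk entirely. VG/LV names and sizes are hypothetical, and this is just as unsupported as the fdisk route.

```shell
# Check the local-storage VG for free extents.
vgs

# Carve out a small LV for ISOs and put a filesystem on it.
lvcreate -L 10G -n iso_store VG_XenStorage-<uuid>
mkfs.ext3 /dev/VG_XenStorage-<uuid>/iso_store

# Mount it where the ISO SR will point.
mkdir -p /mnt/iso_store
mount /dev/VG_XenStorage-<uuid>/iso_store /mnt/iso_store
```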
[21:05:32] <Ownage> you won't be supported by having xenserver master being your nfs server, or iso repository or anything else off the books like that
[21:06:00] <kaffien> i'm using the free version the only support i would expect to find is irc and google.
[21:06:06] <kaffien> they offer support on the free version?
[21:06:08] <Ownage> yes
[21:06:24] <Ownage> just like every other software company in the world
[21:06:33] <Ownage> you pay for support, they don't care what product you're using
[21:06:42] <kaffien> good point
[21:06:58] <Ownage> but in your example if you had an issue related to this, they would basically say you gotta undo that first
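Ownage's "resize the lvm and add another" suggestion is never spelled out in the log; one low-risk reading of it is to carve a new logical volume out of free space in the local SR's volume group instead of repartitioning the disk. This is a hedged sketch only: the VG naming follows XenServer's `VG_XenStorage-<sr-uuid>` convention, and the size and LV name are placeholders, not anything from the conversation.

```shell
# Sketch of the LVM route instead of deleting/recreating partitions.
# Run on the XenServer host console; <sr-uuid> is a placeholder.
vgs                                                   # list volume groups; find the local SR's VG
vgdisplay VG_XenStorage-<sr-uuid> | grep Free         # check free extents before carving anything
lvcreate -n scratch -L 50G VG_XenStorage-<sr-uuid>    # new LV from free space, no repartitioning
mkfs.ext3 /dev/VG_XenStorage-<sr-uuid>/scratch        # format it for local (unsupported) use
```

As Ownage notes, anything like this is off the books as far as Citrix support is concerned.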
[21:07:22] <blood> So i'm trying to create a new CentOS 5.5 VM using the templates yet XenServer 5.6 FP1 only goes up to CentOS 5.3. Anyone know how to get 5.5?
[21:07:26] <Ownage> and that's why you don't get the option during install, it's not a typical usage and they don't want to support every config imaginable
[21:07:37] <Ownage> blood: that's a lie and you know it
[21:07:50] <blood> ?
[21:07:52] <Ownage> XenServer 5.6 FP1 doesn't have centos 5.3 at all
[21:08:08] <Ownage> or centos 5.5
[21:08:10] <Ownage> or 5.4
[21:08:12] <Ownage> or 5.2
[21:08:22] <blood> http://community.citrix.com/display/xs/Configuring+CentOS+5.5+Guests+on+XenServer+5.5+and+5.6
[21:08:28] <blood> read that, it says they do have 5.3
[21:08:30] <Ownage> I don't need to click your link
[21:08:35] <Ownage> YOU read it
[21:08:40] <blood> already did
[21:08:42] <Ownage> you see FP1 in that title?
[21:08:43] <Ownage> NO
[21:08:51] <Ownage> read the release notes for FP1
[21:09:07] <Ownage> hell I'm looking right at several FP1 machines: NO CENTOS 5.3
[21:09:17] <Ownage> so why don't you start over
[21:09:19] <blood> odd my admin sent an email saying he saw it
[21:09:19] <blood> =)
[21:09:37] <blood> 5.3X64
[21:09:44] <Ownage> doesn't exist in 5.6 FP1
[21:09:53] <kaffien> what is fp1?
[21:09:56] <Ownage> communication breakdown
[21:10:08] <Ownage> 5.6 FP1 is the current release of xenserver
[21:10:13] <kaffien> ah
[21:10:17] <Ownage> and it does not have centos 5.3 _anything_
[21:10:47] <Ownage> tell your admin I said thanks for making all my freelance clients so happy when I help them, since they are used to people like him
[21:11:05] * Ownage nicotine raging
[21:11:15] <blood> Generic Red Hat Enterprise Linux (RHEL) 5.x support. RHEL / CentOS / Oracle Enterprise Linux versions 5.0 to 5.5 support with a generic "RHEL 5" template.
[21:11:21] <blood> just saw that in Release Notes
[21:11:24] <blood> guess I need to use that
[21:11:29] <blood> I don't admin the Xen Server
[21:11:36] <Ownage> there is no centos 5.3 template in fp1
[21:11:41] <blood> yea i know
[21:11:42] <Ownage> only 5.6 and older
[21:11:48] <blood> RHEL 5 template then?
[21:11:58] <Ownage> CentOS 5(64-bit)
[21:12:04] <Ownage> that's the one you want if you're on centos
[21:12:17] <Ownage> they have similar for rhel
[21:12:31] <Ownage> but to answer your original question
[21:12:35] <blood> I wonder if FP1 was even installed
[21:12:46] <Ownage> how you would do this with an older version of xs, like 5.6 and before
[21:12:53] <Ownage> is you get the iso for the version listed
[21:13:01] <Ownage> for example 5.3 64 and install with that template
[21:13:07] <blood> gotcha
[21:13:07] <Ownage> then you yum update the vm itself
[21:13:21] <blood> but with 5.6 FP1 I just use the CentOS 5(x64) template
[21:13:21] <Ownage> centos/rhel updates are good for the major point release
[21:13:27] <Ownage> exactly
[21:13:37] <blood> so it looks like his FP1 upgrade failed then
[21:13:40] <blood> or didn't upgrade correctly
[21:13:49] <Ownage> that's why it's pimp.. you don't need stupid old centos isos laying around anymore
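The workflow blood and Ownage settle on, generic "CentOS 5" template on FP1, then update inside the guest, can be sketched as follows. The VM name label is a placeholder; the template name matches what Ownage quotes above.

```shell
# Host side (XenServer 5.6 FP1): create the VM from the generic template.
xe vm-install template="CentOS 5 (64-bit)" new-name-label=centos55
# ...attach your install media, boot, and complete the OS install, then
# inside the guest bring it up to the current 5.x point release:
yum -y update
```

On pre-FP1 releases, per Ownage, you would instead install from the versioned template (e.g. 5.3 64-bit) with the matching ISO and yum update from there.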
[21:14:02] <blood> You know if they added Ubuntu support in FP1?
[21:14:11] <Ownage> they did
[21:14:17] <Ownage> "experimental" support
[21:14:24] <blood> ah yea
[21:14:30] <blood> was it in 5l.6 too?
[21:14:35] <blood> 5.6*
[21:14:36] <Ownage> however, I've successfully gotten ubuntu working just fine back in the days of 5.0
[21:14:46] <Ownage> with PV even ;P
[21:15:00] <blood> yea I was told to choose another distro since they told me Citrix doesn't support it
[21:15:05] <blood> so i'm going with CentOS now
[21:15:21] <Ownage> ubuntu is garbage anyways so it's a good move for you
[21:15:31] <Ownage> keep ubuntu on the desktop where you can enjoy its ease of use
[21:15:36] <blood> yea it's looking that way=)
[21:15:43] <Ownage> let the enterprise software do the enterprise tasks
[21:16:26] <Ownage> centos is garbage for desktop
[21:16:31] <Ownage> so it balanced out I guess
[21:21:56] <blood> Ownage: just asked him again and he says we are at FP1 yet it still shows CentOS 5.3
[21:22:07] <blood> what should I tell him to check now lol
[21:24:29] *** Trixboxer has quit IRC
[21:26:26] <kaffien> apparently HVM is required for this operation  (to start a vm)
[21:30:42] <Ownage> blood: cat /etc/redhat-release
[21:30:51] <Ownage> kaffien: for what operation
[21:31:03] <kaffien> i just said for powering up a vm.
[21:31:20] <kaffien> anyhow ... tis my own fault .... i was hoping to tinker with xenserver via vmware.
[21:32:08] <Ownage> you need hvm support to power up an hvm vm yes
[21:38:19] *** smemp has quit IRC
[22:10:20] *** kprojects has quit IRC
[22:13:49] <jduggan> god, vm host crash
[22:13:55] <jduggan> s/crash/freeze/
[22:22:47] <draygo> c-state bug?
[22:23:01] <tabularasa> seriously
[22:32:07] <jduggan> theyre amd opterons
[22:32:16] <jduggan> is there such issues with amds version?
[22:32:30] <jduggan> i think actually i disabled all the power saving stuff
[22:50:56] *** gm1959 has joined #Citrix
[22:52:56] <gm1959> anyone around?  I'm trying to get multipath iscsi working with multiple session per target.  The target host is a solaris zfs box which supports mc/s and mpxio.  All I see is one session, so do I have to switch the kernels over to MPP from DMP?
[23:00:54] *** blood has quit IRC
[23:04:11] <jduggan> is there any kind of support with the free version of xenserver?
[23:14:49] <gm1959> jduggan - only on the free xenserver forums at citrix.
[23:18:14] <gladier> tabularasa: got up at 4am to drop gf at the airport.... went to bed at 3am after working all night on our hp d2d
[23:21:34] <kaffien> ug ... why did you bother sleeping
[23:21:49] <kaffien> between 3am and 4am that is
[23:23:09] *** kdavy has quit IRC
[23:26:51] *** OmNomSequitur has quit IRC
[23:29:58] *** kdavy has joined #Citrix
[23:32:38] <gladier> as in got up at 4am .. worked all day and went to bed at 3am
[23:32:46] <gladier> the next night
[23:39:43] *** kdavy has quit IRC
[23:42:10] *** mete has joined #Citrix
[23:42:11] <mete> hi
[23:42:27] <mete> is it possible to map a raid card direct into a vm?
[23:42:51] <kaffien> is there an educational version of  xenserver enterprise?
[23:42:57] <kaffien> or a demo of enterprise features?
[23:43:02] <mete> on xen I mean :)
[23:43:57] <kaffien> blah .. found it ... silly me.
[23:45:59] <mete> is it possible to map a raid card direct into a vm on xenserver?
[23:46:27] <jduggan> mete: not really
[23:46:30] <jduggan> i know of no way
[23:46:42] <mete> thats bad :(
[23:47:00] <mete> so I think I need to get an esx compatible board -.- shit
[23:47:30] <jduggan> :(
[23:47:41] <mete> yep xD
[23:47:48] <mete> don't want to create a 8TB vhd file :P
[23:48:17] <mete> [23:42:11] <swente> mete:  i think this should be possible. xen calls this 'passthrough' of devices. [xen, xenserver's  base, is capable of this. but i've never touched xenserver i have to admit..]
[23:48:19] <jduggan> i dont think xen lets  you create more than 2tb anyway
[23:48:21] <mete> :)
[23:48:25] <kreignj> mete, it is, but the disks aren't persistent across reboots, as near as I can see.
[23:48:45] <mete> what? sorry, my english isn't that good... kreignj
[23:49:17] <kreignj> mete, you can attach "raw storage" to xenserver. as far as I know/can tell, the 'configuration' does not survive a reboot of the xenserver host.
[23:49:29] <kreignj> mete, eg. an NTFS formatted RAID5
[23:49:46] <mete> ok
[23:49:52] <mete> yep, this is just what I want :)
[23:50:00] <mete> 8TB raid5 xD
[23:50:09] <kreignj> cd /dev/xapi/block && ln -s /dev/<your_array>
[23:50:12] <kreignj> then scan 'removable devices'
[23:50:17] *** kdavy has joined #Citrix
[23:50:22] <kreignj> can't recall the exact command to do that.
[23:50:37] <kreignj> but then you can use that for the whatever disk on a VM, if you want.
[23:50:40] <gm1959> I'm trying to get multipath iscsi working with multiple session per target.  The target host is a solaris zfs box which supports mc/s and mpxio.  All I see is one session, so do I have to switch the kernels over to MPP from DMP?
[23:50:46] <kreignj> (I advise against this approach.)
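kreignj's raw-storage trick, expanded into a sketch. He says himself he can't recall the exact rescan command; `xe sr-scan` against the host's udev/"Removable storage" SR is an assumption on my part, and `/dev/sdb` and the SR UUID are placeholders. He also advises against the whole approach, and the symlink will not survive a host reboot.

```shell
# On the XenServer host: expose a raw block device to the removable-storage SR.
cd /dev/xapi/block
ln -s /dev/sdb .                 # placeholder device; point at your array
xe sr-list type=udev             # find the "Removable storage" SR's uuid (assumed step)
xe sr-scan uuid=<udev-sr-uuid>   # rescan so the device shows up as a VDI (assumed step)
```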
[23:51:02] <mete> kreignj: haven't installed xenserver on this host atm :)
[23:52:37] <kreignj> mete, aha. so why are you?
[23:53:05] <kreignj> kdavy, hey, question for you. what's a realistic transfer speed for xe vm-[export|import] over gigE, in your experience?
[23:53:12] <kreignj> to NFS or local storage
[23:53:23] <jduggan> ive never had over 50MB/s
[23:53:32] <kreignj> jduggan, :| at all? shit
[23:53:32] <jduggan> its SLOOOOOW
[23:53:47] <mete> I've a test machine :) xenserver is OK for my needs, only want to know about the raid "issue", so I will do a test install on the "big" host :)
[23:54:56] <kreignj> mete, again, any 'added disks' will be 'forgotten' on xenserver host reboots.
[23:55:14] <mete> I think this could be done with a simple script on boot :)
[23:55:34] <mete> I'm familiar with linux, so this should be an option (I hope) :)
[23:56:16] <kreignj> mete, I'm not intimate with the init process for xenserver, but I'd recommend against it, personally.
[23:56:49] <kreignj> http://i.imgur.com/p3XBH.png
[23:57:52] <mete> if it's not possible to do that "clean", so I will use ESXi for that...
[23:58:49] <kreignj> mete, ... I don't know if ESXi is any better.
[23:59:08] <kreignj> mete, I've nfi why you'd even want to do that. it defeats the purpose behind virtualizing your shit in the first place (at least half of it)
[23:59:11] <mete> kreignj: in esxi it works fine (tested with attached sas tape)
[23:59:52] <mete> kreignj: the point is, I only want to run ONE physical server at home... and I need some VM's for some test things...
