[00:34:56] *** newlix_ has joined #smartos
[00:37:00] *** ins0mnia has quit IRC
[00:46:17] *** newlix_ has quit IRC
[01:36:11] *** wolstena has quit IRC
[01:59:01] *** masked has quit IRC
[01:59:05] *** CapP1CARD has quit IRC
[01:59:19] *** CapP1CARD has joined #smartos
[02:05:32] *** masked has joined #smartos
[02:05:35] *** masked has joined #smartos
[02:54:06] *** masked has quit IRC
[02:56:29] *** daniel_wu has quit IRC
[03:19:54] *** miine__ has joined #smartos
[03:20:54] *** miine has quit IRC
[03:20:54] *** miine__ is now known as miine
[03:23:44] *** darjeeli_ has quit IRC
[03:30:20] *** Vod has quit IRC
[03:52:25] *** scubasteve has joined #smartos
[04:09:40] *** rodgort has quit IRC
[04:10:52] *** rodgort has joined #smartos
[04:21:27] *** masked has joined #smartos
[04:26:09] *** masked has quit IRC
[04:26:09] *** masked has joined #smartos
[04:30:00] *** masked has quit IRC
[04:30:25] *** masked has joined #smartos
[04:31:53] *** jamesd has quit IRC
[04:42:58] *** masked has quit IRC
[04:43:21] *** masked has joined #smartos
[04:44:20] *** sachinsharma has joined #smartos
[04:46:54] *** newlix_ has joined #smartos
[04:51:48] *** newlix_ has quit IRC
[04:57:47] *** darjeeling has joined #smartos
[05:04:27] *** Tabrenus has joined #smartos
[05:11:43] *** wolfeidau has quit IRC
[05:56:09] *** darjeeli_ has joined #smartos
[05:56:42] *** sachinsharma has quit IRC
[05:59:07] *** darjeeling has quit IRC
[06:23:33] *** sachinsharma has joined #smartos
[06:47:14] *** newlix_ has joined #smartos
[06:51:41] *** newlix_ has quit IRC
[06:55:02] *** spray has quit IRC
[06:57:44] *** Tabrenus has quit IRC
[07:50:49] *** Cpt-Oblivious has joined #smartos
[07:51:25] *** scubasteve has quit IRC
[08:05:22] *** Livid has joined #smartos
[08:05:42] <Livid> How can I check current available memory on SmartOS?
[08:12:38] *** alucardX has joined #smartos
[08:22:52] *** Livid has quit IRC
[08:23:59] *** Livid has joined #smartos
[08:34:02] *** Livid has quit IRC
[08:44:16] *** zr0 has quit IRC
[09:12:10] *** cjones_ has quit IRC
[09:23:22] *** texarcana has quit IRC
[09:24:37] *** texarcana has joined #smartos
[09:33:39] *** darjeeli_ has quit IRC
[09:53:51] <bsdguru> livid: you mean in a zone or from the global zone?
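(For reference, a sketch of the usual memory checks, assuming stock SmartOS/illumos tools:)

    # global zone: kernel-wide memory breakdown
    echo ::memstat | mdb -k
    # free memory in pages (multiply by the page size, typically 4 KB)
    kstat -p unix:0:system_pages:freemem
    # per-zone usage, one sample
    prstat -Z 1 1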
[10:02:24] *** bens1 has joined #smartos
[10:11:00] *** sjorge has quit IRC
[10:13:06] *** sjorge has joined #smartos
[10:47:50] *** newlix_ has joined #smartos
[10:52:09] *** newlix_ has quit IRC
[10:52:34] *** dysinger has quit IRC
[10:56:35] *** masked has quit IRC
[10:58:26] *** masked has joined #smartos
[10:58:27] *** masked has joined #smartos
[11:00:45] *** cjones_ has joined #smartos
[11:02:09] *** cjones__ has joined #smartos
[11:02:09] *** cjones_ has quit IRC
[11:06:29] <lundh> I'm trying to set up a data area/zfs volume that is accessible from two different VMs, one SmartOS Zone and one Ubuntu installation. what is the best way to do that?
[11:07:01] <linuxprofessor> probably nfs
[11:07:09] <lundh> nfs from the global zone?
[11:07:19] <lundh> or from the smartos zone?
[11:07:30] <linuxprofessor> i'd do it from the zone
[11:07:43] <linuxprofessor> keep as much as possible out of the gz, as intended =)
[11:07:48] <lundh> read somewhere that nfs doesnt work from a zone
[11:08:06] <linuxprofessor> right, i've read something about that too
[11:08:08] <linuxprofessor> hmm
[11:09:22] <jperkin> yes, you cannot currently serve NFS from a zone.
[11:09:28] <jperkin> you can serve samba though
[11:11:28] <lundh> what I'm trying to do is to have a zone that will share the area through AFP (using netatalk) and have another VM populate the share with data
[11:14:32] <lundh> would it be a very bad idea to share nfs from the global zone and let the VMs connect to it that way?
[11:14:47] *** khushildep has joined #smartos
[11:15:43] <linuxprofessor> if you werent using linux you could use lofs
[11:16:40] <lundh> does that really work smartos to linux?
[11:21:54] <linuxprofessor> no, like i said. if you werent using linux =)
[11:22:14] <lundh> yeah, it works smartos to smartos as well :)
[11:39:46] *** bens1 has quit IRC
[11:48:11] *** wega3k has joined #smartos
[11:50:40] *** KermitTheFragger has joined #smartos
[11:57:04] *** khushildep has quit IRC
[12:18:39] <lundh> If i create two vnics, each one on a separate VM, and want those to communicate (within the same subnet), does that happen locally? at what speeds?
[12:21:51] <KermitTheFragger> lundh: you can use dladm show-vnic to see the speed of the virtual nics
[12:22:43] <lundh> what if it says 0?
[12:23:01] <lundh> (It does for the vm vnic)
[12:24:51] *** sachinsharma has quit IRC
[12:36:26] <KermitTheFragger> lundh: it always says that for physical NICs iirc
[12:37:04] <KermitTheFragger> lundh: i guess that means that crossbow doesn't limit it and the speed is just what the real link speed is
[12:37:15] <lundh> ok
[12:37:59] <KermitTheFragger> lundh: you should take a look at the crossbow docs if you want to know more about this. Most of it is for Solaris / OpenIndiana but it's the same for SmartOS
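(A minimal Crossbow sketch to go with the above; the link and vnic names are hypothetical:)

    # create a vnic over physical link e1000g0, optionally capped at 1000 Mbps
    dladm create-vnic -l e1000g0 -p maxbw=1000 vnic0
    # list vnics with their configured speed/cap
    dladm show-vnic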
[12:38:21] <lundh> The core issue is that I have data that I want to share between two VMs but I dont know how to do that
[12:38:44] <KermitTheFragger> they should just be able to reach each other if they have an IP in the same subnet?
[12:39:09] <lundh> sure but a VM cant be an NFS server
[12:39:22] <lundh> which would mean that I have to use the Global Zone as NFS server
[12:39:27] <lundh> not sure thats clean
[12:39:56] <KermitTheFragger> lundh: i assume we're talking about a joyent branded zone?
[12:40:23] <lundh> both joyent and kvm unfortunately, lofs would have worked great otherwise
[12:40:55] <lundh> the real issue is that I have to process the data using tools that won't compile in smartos; they only work on linux
[12:41:42] <lundh> so I have to keep a linux VM just for that
[12:41:48] <KermitTheFragger> lucky you :)
[12:42:12] <lundh> yeah...
[12:42:35] <KermitTheFragger> it allows you to have an ipkg branded zone (ie. openindiana)
[12:42:44] <KermitTheFragger> i guess that would allow you to run a NFS server in a zone
[12:43:29] <KermitTheFragger> i don't know what the quality of it is, but so far i like it and haven't bumped into any major problems
[12:43:37] <KermitTheFragger> so the quality seems good
[12:43:56] <KermitTheFragger> but ymmv
[12:44:17] <lundh> lots of new terminology there, might have to read up a little before I understand what it is
[12:44:37] <lundh> openindiana without kvm?
[12:45:02] <KermitTheFragger> yes
[12:45:07] <KermitTheFragger> so really in a solaris zone
[12:45:19] <lundh> mm
[12:45:20] <KermitTheFragger> = no performance overhead
[12:46:37] <lundh> that might be a bit too complicated for me
[12:48:04] *** newlix_ has joined #smartos
[12:49:13] <KermitTheFragger> only you can be the judge of that :-) But it will require some manual tweaking, etc. so it might not be the right solution for you
[12:50:14] <lundh> It could be an option later on. I just got my smartos instance running yesterday. I have a lot to learn!
[12:51:21] <lundh> will I end up with locking issues if I use lofs to the joyent zone and nfs to the linux zone?
[12:52:16] *** alucardX has quit IRC
[12:52:35] *** newlix_ has quit IRC
[12:53:30] <KermitTheFragger> im no expert but most filesystems won't like it if they are mounted on different systems at the same time
[12:53:35] <KermitTheFragger> same goes for ZFS over iscsi
[12:54:01] <lundh> iscsi is block level, I can understand that that wont work :)
[12:54:45] <KermitTheFragger> it does if the filesystem on top of iscsi can handle it; like ocfs2
[12:55:01] <lundh> oh
[12:55:55] <KermitTheFragger> but im really not an expert on this so i dont know if i should be handing out advice on this :)
[12:56:37] <lundh> I wasn't taking it that seriously. not like I would try it on a production environment :)
[12:57:02] <lundh> ...or at all, I have no need to make this that complicated
[12:58:21] <KermitTheFragger> i think most people handle communication between zones as nothing special
[12:58:37] <lundh> what do you mean?
[12:59:02] <KermitTheFragger> in the sense they handle it the same as two physical separate boxes would need to communicate
[13:00:31] <lundh> I would like to do that too. guess I could try by exporting from the global zone
[13:02:40] <KermitTheFragger> yeah i would give that a shot; it seems the easiest
[13:02:54] <lundh> it probably is
[13:02:56] <KermitTheFragger> i guess you could also turn it around; have the linux KVM be the NFS server and have the zone be the client
[13:03:04] <KermitTheFragger> if the global thing doesnt work
[13:03:27] <lundh> problem with that is that I hide the data within the linux file system instead of keeping it exposed on zfs
[13:04:35] <KermitTheFragger> you're not making this easy, are you? ;-)
[13:05:19] <lundh> maybe I should go back to running everything on a single FreeBSD instance ;)
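(A sketch of the export-from-the-global-zone idea discussed above; the dataset name and subnet are hypothetical:)

    # global zone: create a dataset and share it read/write to the local subnet
    zfs create zones/shared
    zfs set sharenfs='rw=@192.168.1.0/24' zones/shared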
[13:42:50] *** sachinsharma has joined #smartos
[13:46:40] *** darjeeling has joined #smartos
[14:01:16] *** sachinsharma has quit IRC
[14:38:51] *** ins0mnia has joined #smartos
[14:42:26] *** wega3k has quit IRC
[14:46:52] <lundh> nfs between the zone and the global zone is incredibly slow. I get 148 MB/s write locally on the global zone and less than 9 MB/s through nfs from the vm
[14:46:56] <lundh> any idea why?
[14:48:24] *** newlix_ has joined #smartos
[14:49:23] *** jelmd has quit IRC
[14:53:04] *** newlix_ has quit IRC
[15:01:06] <nahamu> if it's a zone, why not just lofs mount it into the zone?
[15:01:24] <nahamu> there's even a setting that vmadm recognizes for doing that.
[15:02:08] <nahamu> (though the zfs being slow is still concerning...)
[15:02:10] <KermitTheFragger> nahamu: can lofs handle being mounted/accessed in multiple zones?
[15:02:13] <nahamu> s/zfs/nfs)
[15:02:37] <nahamu> KermitTheFragger: I haven't tried it myself, but I don't see why not.
[15:02:49] <nahamu> a linux VM wouldn't be able to do that, of course.
[15:03:33] <nahamu> ah, is it writes that are slow? and is your pool made of spinning platters with no SLOG device?
[15:04:38] <nahamu> NFS does indeed suck because all writes are synchronous writes which is painful if your pool can't provide lots of low-latency synchronous IOPS...
[15:05:10] <nahamu> definitely worth trying the lofs method.
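(The vmadm setting nahamu mentions is the zone's filesystems list; a sketch, with hypothetical paths, of the relevant fragment of the zone's JSON payload:)

    "filesystems": [
      {
        "type": "lofs",
        "source": "/zones/shared",
        "target": "/data"
      }
    ]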
[15:05:47] <KermitTheFragger> nahamu: since zones share the same kernel does that mean that locking etc would work? (in contrast to for example sharing a ZFS file system over iSCSI to multiple hosts)?
[15:06:12] <nahamu> KermitTheFragger: that's a good question. I don't know the answer.
[15:06:46] <nahamu> if you have an easy test case you could run I'd love to know what you find out, but otherwise we'd have to wait until the smart people wake up...
[15:07:15] <KermitTheFragger> the problem with locking and all other concurrency issues is that they are hard to spot
[15:07:26] <KermitTheFragger> a test case might work, but that might just be a fluke
[15:07:56] <nahamu> right. I meant that if you had some code that could lock a file for 10 minutes in one place, and demonstrate that another system can't access the file in the dangerous way
[15:08:26] <nahamu> verify that the locking is working over NFS, then test it with the lofs mounts
[15:09:15] <KermitTheFragger> ah yes, sorry i misunderstood; that would work as a test
[15:44:21] <lundh> nahamu: cause i want the same data on a kvm vm
[15:47:33] <nahamu> lundh: for the KVM vms you'd have to use NFS or CIFS.
[15:47:53] <lundh> yeah :/
[15:48:18] <lundh> I'll try lofs first and benchmark that
[15:48:24] <nahamu> and for NFS to not suck for writes you need low-latency writes on the pool.
[15:49:02] <nahamu> either a pool that's made of SSDs, or a pool that has SSD for SLOG(s).
[15:50:04] <lundh> right now I only have a l2arc on ssd
[15:50:29] <lundh> this will be a read heavy system
[15:50:42] <lundh> but still, 148 MB write compared to 9 MB...
[15:52:41] <nahamu> what were you seeing for reads over NFS?
[15:52:48] <lundh> havent tried
[15:55:24] <lundh> 289 MB/s
[15:55:37] <lundh> so pretty good ;)
[15:57:47] <lundh> hey! might have had another issue there, got 120 MB/s write now
[15:58:43] <lundh> I ran a crude benchmark last time "time yes > testfile" might have had throughput issues with yes...
[15:59:06] <lundh> this time with dd turned out much better
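(The dd variant, for reference; a crude sequential-write test, sizes arbitrary:)

    # write 10 GB of zeroes in 1 MB blocks; time it and divide if your dd prints no stats
    time dd if=/dev/zero of=testfile bs=1024k count=10240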
[16:01:47] *** wega3k has joined #smartos
[16:16:54] *** bsdguru has quit IRC
[16:32:04] *** Sachiru has joined #smartos
[16:48:40] *** newlix_ has joined #smartos
[16:49:40] *** cjones__ has quit IRC
[16:49:53] *** cjones_ has joined #smartos
[16:53:02] *** newlix_ has quit IRC
[16:56:45] *** neophenix has joined #smartos
[16:56:59] *** daleg has joined #smartos
[17:12:15] <konobi> morning
[17:20:49] *** tonyarkles has joined #smartos
[17:20:57] <nahamu> morning, konobi
[17:33:01] *** daleg has quit IRC
[17:34:46] *** ins0mnia has quit IRC
[17:47:27] *** daleg has joined #smartos
[17:50:10] *** Tabrenus has joined #smartos
[17:55:24] *** enmand_ has quit IRC
[17:57:22] *** des2 has quit IRC
[17:58:52] *** KermitTheFragger has quit IRC
[18:04:33] *** newlix_ has joined #smartos
[18:08:01] *** masked has quit IRC
[18:08:08] *** newlix has quit IRC
[18:14:44] *** wega3k has quit IRC
[18:33:38] *** masked has joined #smartos
[18:33:39] *** masked has joined #smartos
[18:36:28] *** rmustacc has quit IRC
[18:39:09] *** Tabrenus has quit IRC
[18:39:25] *** Seze has joined #smartos
[18:43:20] <Seze> I'm really confused about which Xeon CPUs support KVM functionality. According to the wiki, the Xeon 54XX, 55XX, 56XX, 74XX, 75XX, 76XX are supported, but on Intel's page, the vast majority of those show EPT not supported http://ark.intel.com/Products/VirtualizationTechnology
[18:44:05] *** rmustacc has joined #smartos
[18:49:20] <Zigara> Seze: get something with EPT then
[18:49:37] <Seze> thats what I'm trying to do
[18:49:39] <Zigara> I booted up smartos without EPT and it seemed to run fine but it gave me a lot of warnings
[18:50:07] <Seze> the issue is I see "works for" listings with CPUs that are marked as not having EPT support
[18:50:10] *** Red_Devil has quit IRC
[18:50:26] <konobi> ark lies
[18:50:36] <konobi> check the wikipedia page instead
[18:50:47] <Seze> which wiki page?
[18:53:16] *** ins0mnia_ has joined #smartos
[19:00:53] *** tonyarkles has quit IRC
[19:06:12] *** Red_Devil has joined #smartos
[19:06:22] *** xmerlin has joined #smartos
[19:06:35] *** Red_Devil is now known as Guest40178
[19:14:27] *** Guest40178 has quit IRC
[19:17:02] *** jamesd has joined #smartos
[19:19:13] <xmerlin> hi to all ...just a 1k question ;) ...zfs has end-to-end integrity ...so ...it's safe to have a virtual machine with linux and an xfs filesystem mounted with nobarrier ...is that right?
[19:19:56] *** Red_Devi1 has joined #smartos
[19:32:30] <konobi> 1000 questions!? you're out of luck for that
[19:33:32] <xmerlin> 1k euros :)
[19:37:56] <konobi> no idea what nobarrier is
[19:40:48] <xmerlin> typically you can disable barriers if you have a hw controller with a battery-backed cache
[19:41:59] <konobi> the blocks are checksummed to disk
[19:42:33] <xmerlin> I know
[19:44:04] <konobi> so that's the world view as far as linux is concerned
[19:46:16] <xmerlin> ok
[19:46:28] <Seze> I thought with the copy on write aspect of ZFS, it was impossible to get your FS into a corrupted state if the server crashes mid write
[19:46:34] <Seze> not sure if that helps you or not
[19:50:42] <nahamu> xmerlin: I'd ask rmustacc when he's around...
[19:50:57] <xmerlin> nahamu, thank you very much
[19:52:49] <nahamu> but my gut says that if the global zone crashes, having not used barriers could lead to data loss.
[19:53:10] <nahamu> I'd imagine that the granularity of guarantee that ZVOLs provide is at the block level.
[19:53:14] <Triskelios> Seze, xmerlin: an application on top of ZFS is responsible for maintaining its own transactions
[19:53:40] <nahamu> right. what Triskelios said. :)
[19:54:07] <nahamu> (forgot you were in here...)
[19:54:15] <Triskelios> (as long as all writes hit the log and not some cache in the VM layer, you should be fine)
[19:54:26] *** dysinger has joined #smartos
[19:55:23] <nahamu> "the log" being the ZFS intent log, right?
[19:56:04] <Triskelios> right. which entails a sync. write or an explicit cache flush
[19:57:28] <xmerlin> Triskelios, to sum up ...it's not safe to use nobarrier ...because zil cannot "manage" it ...because zfs is not informed about this transaction ...right?
[20:00:03] <nahamu> hmmmm. on the other hand, in theory the underlying ZVOL shouldn't report back to the VM that the block was written unless it's safe to do so...
[20:00:28] <nahamu> I guess I don't fully understand the difference in behavior by XFS when barriers are or aren't enabled...
[20:00:47] <Triskelios> whether writes are treated as synchronous depends on whether the zvol has write caching enabled, I think
[20:01:39] <Triskelios> zvols default to synchronous, but I don't know if KVM or the VM itself can change that
[20:02:19] <nahamu> I know that QEMU can theoretically provide a write cache. I don't think SmartOS uses that though.
[20:06:13] <xmerlin> some minutes ago I also saw another interesting value from a vmadm get ... the default blocksize is 8k ...if I remember correctly, in a nexenta course one of the trainers pointed out that typically they use 64k for storing VMs
[20:06:24] <xmerlin> what do you think about it?
[20:07:58] *** dysinger has quit IRC
[20:10:28] <konobi> xmerlin: that may be something more about the fact it's a SAN than anything to do with VMs
[20:11:50] <wesolows> bigger blocks = more read-modify-writes. whether something is a VM is irrelevant; what matters is what size writes are being done.
[20:12:10] <wesolows> if your guest writes in 64k blocks, 64k is perfect
[20:13:41] <nahamu> wesolows: any thoughts on the barriers question?
[20:14:47] <xmerlin> wesolows, ok
[20:16:01] <xmerlin> wesolows, probably they suggested 64k blocksize because they were talking about vmware images ...and vmdk has 64KB blocks
[20:16:06] <wesolows> not any that are specifically informed. I do know that Linux people usually say "barrier" when they mean "correctness" in a filesystem, so I strongly suspect that turning them off results in data corruption. Certainly that's true for extX filesystems.
[20:16:39] <xmerlin> barrier are disabled in extX by default
[20:16:47] <wesolows> yes I know. awesome, isn't it?
[20:16:53] <wesolows> "it's faster!"
[20:17:03] * wesolows really wishes he were joking
[20:17:11] <xmerlin> :)
[20:17:31] <konobi> iirc lots of databases are 8k record sizes
[20:18:06] <nahamu> In some ways I find that a feature. Most of my extX filesystems are just the root filesystems of VMs I can recreate. Critical data goes on the ZFS based NFS servers...
[20:18:30] <jaakkos> ext4 enables barriers by default
[20:18:57] <wesolows> nahamu: data loss is never a feature, sorry. if you want data loss on a correct filesystem, just do async writes.
[20:19:23] <wesolows> the kernel should never cause data loss unless it's requested by userland
[20:19:51] <Triskelios> filesystems on zvols with write cache enabled must use write barriers; no need for extra sync if write cache is disabled. although I believe XFS can use a separate log device as well, which would move the synchronisation there instead
[20:19:53] <nahamu> wesolows: I didn't mean the data loss was a feature. I meant the speed.
[20:20:13] <wesolows> they are a package deal though
[20:20:42] <wesolows> you can have fast and correct, but then you have to spend more.
[20:20:54] <wesolows> fast, cheap, reliable; choose 2
[20:21:06] <xmerlin> lol
[20:21:34] <nahamu> Triskelios: when you say write cache, if I "zfs get all <zvol>" am I looking at the "sync" property?
[20:22:04] <xmerlin> someday have to create an inexpensive zil ;)
[20:22:08] <xmerlin> someone
[20:22:10] <xmerlin> sorry
[20:22:33] <wesolows> intel's new devices might qualify. we'll see.
[20:22:39] <Triskelios> nahamu: it's a toggle by the application through an ioctl (same as for real disks)
[20:22:52] <Triskelios> I don't know if there's an easy way to check (format?)
[20:23:13] <xmerlin> wesolows, what model?
[20:23:32] <wesolows> not released yet
[20:23:48] <wesolows> the successor to the 710
[20:24:21] <xmerlin> mmm interesting
[20:25:28] <xmerlin> wesolows, slc with supercap inside?
[20:25:52] <wesolows> not sure slc vs mlc, but yes on power failure protection. and supposedly fast and much cheaper than the 710
[20:26:13] <wesolows> I think I have one somewhere but haven't tested it yet
[20:26:24] <wesolows> (710 == SLOWWWWWW)
[20:26:33] <xmerlin> heheh ;)
[20:32:49] *** Sachiru has quit IRC
[20:34:28] <nahamu> Triskelios: so if QEMU never does the ioctl, then in theory it should be safe for the VM to no use barriers??
[20:34:39] <nahamu> s/no /not /
[20:34:44] <konobi> the ocz revodrive seems like it would be interesting to try
[20:36:37] <Triskelios> nahamu: yes
[20:36:37] <AlainODea> Has anyone had any luck getting networking to work on an R720xd? I have an Intel(R) Ethernet 10G 2P X520 Adapter. I haven't tried onboard. I don't have physical access and I'm trying to avoid running Hands & Feet for non-critical stuff.
[20:36:58] <xmerlin> konobi, it doesn't have supercaps
[20:38:05] <rmustacc> Every write that qemu handles as part of a write is not returned until an fsync has been done.
[20:38:41] <rmustacc> However, if you don't enable barriers, or just do async writes in a guest, then the guest kernel will not write to the virtual disk and thus on crash have data loss or corruption.
[20:38:45] <konobi> xmerlin: the enterprise models do
[20:39:04] <rmustacc> AlainODea: Do you have the right optics?
[20:39:59] <rmustacc> We never emulate a write cache because it's just too risky with mmost guest filesystems. They'll probably do the wrong thing.
[20:40:07] <xmerlin> konobi, how much does it cost?
[20:40:29] <rmustacc> So we might as well not make corruption easier.
[20:41:20] <xmerlin> rmustacc, ok
[20:41:37] <konobi> xmerlin: not sure, not available at my supplier
[20:41:54] <rmustacc> nahamu: QEMU opens the files O_DSYNC.
[20:42:21] <konobi> rmustacc: some of the lab machines have intel 10/40g nics... not sure what version though
[20:42:37] <rmustacc> There is no Intel 40 gig nic
[20:42:51] <rmustacc> I'm familiar with the nics, hence the questino of optics.
[20:42:56] <konobi> well, 10g
[20:43:56] <AlainODea> rmustacc: using Cisco twinax. Switch and iDRAC both show up 10Gbps at their ends.
[20:44:19] <nahamu> rmustacc: thanks. it's a bit confusing given how many layers there are.
[20:44:29] <wesolows> we've never seen a -DA2 card
[20:44:55] <wesolows> we almost accidentally ordered some, but thankfully didn't. maybe danmcd has though.
[20:45:04] *** masked has quit IRC
[20:45:07] <AlainODea> rmustacc: iDRAC shows driver not loaded on the NICs. I've never seen this before.
[20:45:31] *** khushildep has joined #smartos
[20:45:37] <AlainODea> rmustacc: both NIC ports I should say
[20:46:14] <nahamu> I once bought a NIC that didn't have the PCI IDs in the illumos drivers... could that be the case here?
[20:46:29] *** khushildep has quit IRC
[20:46:40] <rmustacc> AlainODea: Ignore the drac out of band management. What does the OS say?
[20:46:52] <konobi> dladm show-phys shows them, iirc
[20:46:57] <xmerlin> is it possible to add many ips inside a VM nic configuration? ...and how many vnics are supported? ;)
[20:47:25] <xmerlin> "vm nic configuration" ---> JSON
[20:47:36] <konobi> only one ip
[20:47:44] <konobi> since it's served via dhcp
[20:47:56] <konobi> but you can support more, by adding to allowed-ips
[20:48:16] <xmerlin> ah ok ...I've missed allowed-ips
[20:48:19] <xmerlin> thank you
[20:48:46] <konobi> iirc you can support up to 32 vnics per vm
[20:49:01] <xmerlin> ok
[20:49:09] <xmerlin> better than xen
[20:49:12] <xmerlin> :)
[20:50:32] <rmustacc> No, you're probably going to cap out around 8-10 because of QEMU.
[20:50:51] <konobi> ah, maybe that was just zones
[20:51:01] <AlainODea> rmustacc: rebooting now. I'll check shortly.
[20:51:59] <xmerlin> this one is a linux vm
[20:52:36] <xmerlin> konobi, what's the syntax of the field allowed-ips? ...comma separated / space separated..?
[20:52:47] <xmerlin> I cannot find any information on the man page
[20:52:47] <rmustacc> It should all be in vmadm(1)
[20:53:09] <rmustacc> You can't specify the allowed ips. You have to enable ip-spoofing
[20:53:29] <rmustacc> If you want multiple IP addresses on one mac.
[20:53:36] <xmerlin> :(
[20:54:10] <konobi> rmustacc: you can on the vnic
[20:54:23] <rmustacc> Sure you can do lots of things to a vnic with dladm.
[20:54:33] <rmustacc> But that's not through vmadm.
[20:54:44] <konobi> true
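(The dladm route konobi refers to is Crossbow link protection on the vnic; a sketch from the global zone, with hypothetical vnic name and addresses:)

    # pin the vnic to a fixed set of addresses instead of enabling full ip-spoofing
    dladm set-linkprop -p protection=ip-nospoof vnic0
    dladm set-linkprop -p allowed-ips=10.0.0.10,10.0.0.11 vnic0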
[20:54:44] <xmerlin> so if I have a web server with many ssl websites ...the only way is to enable ip-spoofing ....or I can add 7/9 additional vnics
[20:55:11] <konobi> or SNI
[20:55:42] <rmustacc> Well, I'm off, take care folks.
[20:55:47] <AlainODea> xmerlin: a single web server with many SSL websites doesn't need multiple IPs
[20:55:47] <konobi> ttfn
[20:55:48] <nahamu> later rmustacc
[20:55:57] <AlainODea> rmustacc: take care :)
[20:56:07] <konobi> "As of November 2012, the only major user base whose browsers do not support SNI appear to be users of Internet Explorer 8 or below on Windows XP."
[20:56:46] <xmerlin> AlainODea, without SNI ...only the first certificate is served ...because the SSL is done before the virtual hosting
[20:57:27] <xmerlin> konobi, and that's the main problem
[20:57:59] <konobi> most people don't use IE
[20:58:14] <AlainODea> xmerlin: ah okay. That changes things completely. I'm currently doing Apache httpd virtual hosting and SSL within the NameVirtualHosts
[20:58:37] <AlainODea> konobi: depends on your market
[20:58:46] <xmerlin> AlainODea, the NameVirtualHosts is not enough
[20:59:25] <lundh> could someone explain to me why I get roughly 62 MB/s write speed to a delegated zfs area in a zone while I get 116MB/s through NFS and 148MB/s in the global zone?
[20:59:38] *** dysinger has joined #smartos
[20:59:38] <lundh> the delegated dataset seems extremely slow to me
[20:59:40] <konobi> AlainODea: i mean worldwide... it's < 25% iirc
[21:00:59] *** masked has joined #smartos
[21:01:31] <AlainODea> konobi: Indeed. I made the same argument and got into an interesting (read: heated) discussion at work a while back. Our users are primarily middle-aged non-technical credit union and bank staff. Sometimes they use IE because they have to, other times because it's what's on the machine. A surprising number switch to Firefox/Chrome upon being shown the performance numbers though :)
[21:02:25] <konobi> AlainODea: plus there's chrome frame
[21:02:26] <konobi> =0)
[21:03:34] <AlainODea> konobi: Chrome Frame is quite a nice solution. We have a few keen users using that as a workaround for backward IT depts.
[21:07:35] <wesolows> lundh: delegation doesn't affect performance. are you sure you're doing the same exact workload? have you set the throttle parameter, and does vfsstat show you getting throttled?
[21:08:22] <lundh> how do I check all that? I only followed what the wiki said I should do to get a basic zone up and running
[21:09:13] <AlainODea> konobi: I got the BIOS on the R720xd upgraded to 3.1.6 (latest) and still no love for those 10GbE NICs. Dell has drivers for the NICs, but no Solaris drivers. I've never needed NIC drivers before except with wifi on Linux which was a huge pain back in the day.
[21:10:08] <nahamu> AlainODea: what are the PCI id's for the NICs?
[21:11:48] <wesolows> you can check throttling setting with vmadm; it's a property called zfs_io_priority
[21:12:06] <wesolows> vfsstat 1 will show you d/s (delays per second) as well as useful performance data
[21:12:23] <wesolows> you can use dtrace and/or vfsstat to see what you're writing -- size matters, a lot
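(A sketch of those checks; the uuid is a placeholder:)

    # per-zone I/O throttle priority
    vmadm get <uuid> | json zfs_io_priority
    # VFS-level throughput, latency and d/s (delays per second), refreshed each second
    vfsstat 1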
[21:12:35] <lundh> zfs_io_priority is set to 100
[21:12:54] <wesolows> that's the default. so, next question is whether you're getting throttled. you may, if any other zone wants to do I/O
[21:13:18] <AlainODea> nahamu: do I get the PCI id's from dladm show-linkprop?
[21:13:21] <lundh> Only got two and the other one is not doing anything
[21:13:44] <nahamu> AlainODea: I forgot that you could see them with dladm... ignore me.
[21:14:27] <Triskelios> I didn't know that...
[21:15:00] <lundh> wesolows: d/s is 0 all the time except right at the start where it was 0.1
[21:16:15] <lundh> what i'm doing is dd if=/dev/zero of=zeroes count=10240 bs=1024k
[21:16:29] <AlainODea> nahamu: I'm not seeing them in show-linkprop. It was more of a theory. It turned out to be wrong. How do you normally do this?
[21:16:41] <wesolows> and you're getting 0 d/s while writing?
[21:16:46] <lundh> yes
[21:16:50] <nahamu> oh. I think with "prtconf -d" or something like that...
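(i.e., assuming prtconf's -d flag does print PCI vendor/device IDs as nahamu suggests, the lookup would be something like:)

    prtconf -d | grep -i ethernet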
[21:16:52] <wesolows> and those writes are going to disk at 65 MB/s or whatever?
[21:17:00] <lundh> yeah
[21:17:07] <wesolows> sounds like it's dtrace time
[21:17:41] <AlainODea> nahamu: I'll give that a shot. Thanks :)
[21:17:44] <lundh> by the way, this is in the only vm on the system, I destroyed the other one
[21:17:52] <wesolows> wait, this is in a VM?
[21:17:56] <lundh> yes
[21:17:57] <wesolows> I thought it was a zone.
[21:18:03] <lundh> sorry, yeah a zone
[21:18:05] <lundh> yeah
[21:18:08] <wesolows> ok
[21:18:19] <wesolows> then it will at least be possible to figure out what's happening here.
[21:18:19] <konobi> AlainODea: they're showing up in dladm show-phys though, right?
[21:18:35] <lundh> the system is new, all I have on it is the global zone and this zone
[21:19:12] <wesolows> and when you do dd if=... (same command) onto an NFS mount of this same fs on a remote system that respects NFS default-sync semantics, you see 2x throughput. Correct?
[21:19:27] <lundh> yes
[21:19:55] <lundh> I mounted the nfs in the same zone as I have the delegated dataset in
[21:20:25] <AlainODea> konobi: the NICs show up in dladm show-phys. The duplex is "unknown" though while duplex on the gigabit shows half. Only the 10gigabit NICs are cabled.
[21:20:35] <konobi> AlainODea: have you tried running snoop on the interface?
[21:21:01] <wesolows> I'm sure the problem is stupid and obvious, but it's not jumping out at me. So warm up dtrace(1M) and go to town. You have GZ access so it should be easy.
[21:21:26] <AlainODea> konobi: I haven't. Forgive my lack of experience here. Is snoop a util on SmartOS itself?
[21:21:33] <nahamu> AlainODea: silly question, but does the number of NICs in "dladm show-phys" match the number of NICs you believe are in the system?
[21:21:44] <konobi> AlainODea: yup
[21:22:03] <lundh> wesolows: I'll start from scratch. killing the zone and creating a new one
[21:22:05] <AlainODea> nahamu: It does match. Four onboard gigabit. Two 10gigabit.
[21:22:06] <wesolows> You could start by asking whether the same I/O sizes are resulting from each workload, or what the write(2) and/or fsync(2) latency is for dd in each case, or any of a number of other starting points.
[21:22:41] <wesolows> ... or you could take the Microsoft Windows approach and randomly reboot stuff in the hopes that it will somehow solve the problem.
[21:23:17] <lundh> hehe, its just that I have no idea how to do what you just said :) dtrace is an unknown world
[21:23:29] <AlainODea> konobi: k, I'll give snoop a try. I'm trying my luck now since I'm supposed to be at my parents having eggnog, mulled wine and copious quantities of ham :) I may be dragged away shortly.
[21:23:30] <wesolows> it's worth your time to learn how to use it.
[21:23:34] <lundh> but sure, I'll give it a try
[21:23:42] <wesolows> the documentation is superb.
[21:24:01] <wesolows> google "dynamic tracing guide" or start at dtrace.org.
[21:25:03] <wesolows> on the face of it, this problem sounds pretty interesting.
[21:26:02] <lundh> by the way, I can't measure raw performance like I did in the global zone, dd doesn't show me the stats. is there a different version of dd in the global zone?
[21:26:12] <AlainODea> konobi: sweet. It's like a built-in wireshark. I'm getting broadcasts. I see the significant noise of ActiveDirectory and other Windows stuff coming across.
[21:26:20] <lundh> I had to time it and divide
[21:26:27] <wesolows> you're probably getting GNU dd by default in the zone, if you haven't fixed your path
[21:26:33] <lundh> ah
[21:26:37] <wesolows> use time(1) or similar instead
[21:26:42] <konobi> AlainODea: so, seems like it might be a network problem, not smartos =0)
[21:26:53] <wesolows> but dtrace will give you much more detailed data anyway
[21:27:04] <wesolows> I never completely trust userland tools
[21:27:10] <AlainODea> konobi: It might be a PEBKAC :) I'm going to kick myself soon. I wonder if setting a static IP will work.
[21:27:13] <lundh> thats what I did (used time)
[21:27:25] <lundh> dtrace looks useful though :)
[21:27:48] <wesolows> there's also the dtrace toolkit, which I hope we're shipping...
[21:28:10] <wesolows> it has examples and a bunch of starting points that may be useful
[21:28:34] <lundh> any hint on how to trace this issue with dtrace?
[21:29:00] <wesolows> well, I suggested 2 starting points. I'm not sure which is more interesting, because I don't know what the cause is.
[21:29:12] <wesolows> you're basically looking for differences in the 2 cases as seen by ZFS and/or the HW.
[21:29:23] <lundh> I'm in the Dynamic Tracing Guide
[21:29:29] <wesolows> the problem could literally be anywhere in the stack -- from dd(1) to the disk driver.
[21:29:37] <AlainODea> konobi: I am a total idiot. It works now. The problem is indeed not SmartOS, it's the switch ACLs eating the packets before they get logged by the firewall.
[21:30:21] <wesolows> well, you could start at the top of the stack with the syscall provider. see what syscalls dd is doing in each case, perhaps aggregate writes by size and see if it's doing fsyncs and when.
[21:30:33] <wesolows> you can also look at syscall latency for write and/or fsync that way
[21:30:48] <wesolows> I kind of doubt that will show anything interesting, but it's a place you could start.
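(A starting-point one-liner along those lines, run from the GZ; it aggregates dd's write(2) sizes as a power-of-two histogram:)

    dtrace -n 'syscall::write:entry /execname == "dd"/ { @sizes = quantize(arg2); }'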
[21:31:09] *** viridari has joined #smartos
[21:31:10] <lundh> I have a lot to read... :)
[21:31:38] <konobi> intel++ # just works
[21:32:01] <konobi> there's even a book you should buy!
[21:33:11] <wesolows> if you pkgin in dtracetools, you will get /opt/local/bin/*.d which is (I think) at least part of Brendan Gregg's DTrace Toolkit.
[21:33:56] <wesolows> I wish we shipped this in the GZ though; it's more useful there. Need to figure out why that's not the case.
[21:35:09] <lundh> just realised that the root file system is 100% full
[21:35:10] <nahamu> wesolows: since it doesn't need compiling it could all get dropped into an overlay, no?
[21:35:11] <wesolows> You can read those, in addition to the examples in the Guide, to learn how things work. You can also buy Brendan's book.
[21:35:38] <wesolows> nahamu: sure. lots of ways it could get into the platform.
[21:35:46] <lundh> and that it only has 421 MB allocated
[21:35:49] <lundh> weird
[21:36:07] <nahamu> lundh: the "root" filesystem is just a ramdisk.
[21:36:16] <lundh> nahamu: root in the zone
[21:36:21] <nahamu> lundh: oh
[21:36:24] <nahamu> hrm
[21:36:40] <konobi> there's a default quota
[21:36:46] <lundh> yeah, 10 G
[21:36:59] <wesolows> is the pool itself full? zpool list will tell you.
[21:37:02] <wesolows> (in the GZ)
[21:37:30] <lundh> nowhere near
[21:38:15] <nahamu> "zfs get all zones/<uuid>"?
[21:38:47] <nahamu> (don't paste it here, but it might tell you what's going on...)
[21:39:31] <lundh> used 10G, available 0, referenced 420
[21:39:43] <nahamu> do you have lots of snapshots?
[21:40:03] <lundh> none but I realised that my benchmarking file is larger than the allocated space
[21:40:13] <wesolows> that'll do it
[21:40:26] <wesolows> you might want to give yourself some more quota...
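(e.g., from the global zone; the uuid is a placeholder:)

    zfs set quota=20g zones/<uuid>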
[21:40:36] <lundh> maybe
[21:43:06] <lundh> lets see if that speeds things up too
[21:44:26] <lundh> that did the trick. 145MB/s
[21:45:32] <lundh> and read is now 7.8 GB/s
[21:45:50] <lundh> the data _might_ have been cached
[21:47:32] <wesolows> you think? ;-)
[21:48:00] <lundh> let's try with something larger than the RAM this time
[21:48:11] <wesolows> there are very few HPC/enterprise storage systems that can do 7.8 GB/s uncached
[21:48:24] <wesolows> they all come with 8-9 figure price tags
[21:48:54] <lundh> I guarantee you this one didnt
[21:49:00] <LeftWing> And would that just be streaming?
[21:49:21] <LeftWing> (That is ... sequential)
[21:49:33] <wesolows> it's dd. trivial large block sequential.
[21:50:08] <LeftWing> Ah. I should probably open my eyes properly before IRC.
[21:50:28] <lundh> yeah, cause IRC is life and death :)
[21:51:03] <wesolows> so now I'm curious why a smaller quota resulted in different performance between direct mount and NFS
[21:51:11] <wesolows> that isn't behaviour I would expect
[21:51:34] <lundh> what I did was that I tried to write a larger file than there was room for
[21:51:50] <lundh> dd did not fail, it just slowed down
[21:51:53] <wesolows> well, that should just fail.
[21:51:53] <lundh> from what I can see
[21:51:58] <lundh> exactly
[21:52:07] <lundh> thats why I didnt even think about that
[21:52:34] <wesolows> maybe it was failing and GNU dd wasn't bothering to tell you.
[21:52:43] <lundh> could be
[21:52:54] <wesolows> that would also explain apparently reduced performance -- it wrote 10 GB but spent a lot more time failing.
[21:53:08] <wesolows> seems like a bug that bad even the GNU people would fix, but who knows.
[21:54:09] <lundh> I dont know but I guess that I could use that as a test case for studying dtrace later on
[21:54:29] <lundh> are there any issues with sharing a delegated zfs volume using nfs from the global zone? (locks etc?)
[21:55:07] <wesolows> probably not
[21:55:17] <wesolows> delegation isn't as interesting as you think.
[21:55:20] <lundh> reads are still slow
[21:55:28] <lundh> 182 MB/s
[21:55:42] <wesolows> what's the pool topology?
[21:56:05] <lundh> 2x4TB drives in raid1 and a 128 GB SSD as l2arc
[21:56:14] <wesolows> 182 MB/s isn't slow, that's blazing
[21:56:40] <wesolows> for a single mirrored pair, that's great
[21:57:08] <lundh> it is? I thought I would have ended up at ~250
[21:57:22] <wesolows> most disks are rated for 150 or so
[21:57:38] <wesolows> you're actually getting more, which means prefetch and/or caching is working for you somehow
[21:58:00] <lundh> exactly and the zfs mirror spread the reads over both disks
[21:58:22] <lundh> every second read on disk 1 and the other half on disk 2
[21:58:24] *** ins0mnia_ has quit IRC
[21:58:34] <wesolows> that's not the way I've always modeled it, but I'm not the expert on that.
[21:58:55] <wesolows> even if that were true, that would actually take them both out of sequential mode and result in greatly reduced throughput
[21:59:07] <lundh> oh
[21:59:19] <lundh> got 206 MB/s on the second try
[21:59:21] <wesolows> there is a HUGE premium on issuing sequential reads to disks
[21:59:33] <lundh> so I guess the cache kicked in there
[21:59:43] <wesolows> yes, probably
[21:59:57] <wesolows> anyway, these figures are very good. you should be happy.
[22:00:03] <lundh> I am
[22:01:26] <lundh> With that sorted out, how do I get this data exposed to a VM with Ubuntu?
[22:01:49] <konobi> cifs?
[22:02:17] <lundh> cifs between two *nix systems?
[22:02:35] <lundh> I have done that before and I dont like it
[22:02:39] <wesolows> tar | ssh
[22:02:47] <wesolows> rsync | ssh
[22:02:47] <wesolows> etc
[22:02:48] <konobi> nc!
[22:03:28] <lundh> none of these options sound that attractive to be honest
[22:03:58] <konobi> well you have to pick one
[22:04:34] <lundh> yeah
[22:04:44] <lundh> oh, before that
[22:04:56] <konobi> i'm using cifs between ubuntu and a smartos zone just fine
[22:05:04] <lundh> the delegated area has ended up in /zones/f5bb9da6-3f82-4599-b700-beb9236b93e0/data
[22:05:33] <lundh> is it safe to change the mountpoint using the zfs-command or should I do that in some other way?
[22:05:53] <wesolows> should be safe to change; that's the point of delegation
[22:06:47] <lundh> it looks weird in the GZ, zfs list shows the zone mountpoint, which doesn't exist. that's why I asked :)
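(A sketch of the mountpoint change, run from inside the zone using the dataset path mentioned above:)

    zfs set mountpoint=/data zones/f5bb9da6-3f82-4599-b700-beb9236b93e0/data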
[22:11:02] *** masked has quit IRC
[22:11:39] *** Cpt-Oblivious has quit IRC
[22:14:20] *** neophenix has quit IRC
[22:18:53] *** masked has joined #smartos
[22:18:54] *** masked has joined #smartos
[22:30:41] *** ryancnelson has joined #smartos
[22:33:05] *** ins0mnia has joined #smartos
[22:34:12] *** des2 has joined #smartos
[22:41:51] <lundh> is there any torrent software that works on smartos?
[22:54:31] <wesolows> don't personally know, but I would expect so. If you've already checked pkgsrc, why not try building your favourite?
[22:58:10] <lundh> lots of dependencies
[23:09:36] *** newlix has joined #smartos
[23:09:37] *** newlix_ has quit IRC
[23:10:59] *** masked has quit IRC
[23:12:25] <nahamu> lundh: pkgsrc should have one.
[23:13:17] <nahamu> "pkgin se bittorrent" shows Transmission is available
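(The pkgin flow inside a zone, for reference; the exact package name comes from the search output:)

    pkgin se bittorrent       # search
    pkgin -y in transmission  # install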
[23:14:25] <lundh> installing what I need to get pkgsrc working at the moment
[23:14:37] <lundh> I'll check in a few minutes :)
[23:14:40] <nahamu> oh, you're in the GZ...
[23:14:47] <lundh> no
[23:15:01] <nahamu> use pkgin
[23:15:31] <lundh> running a git clone at the moment, wrong decision?
[23:15:37] <LeftWing> pkgin manages the existing pkgsrc binaries you get from Joyent, and is probably what you want.
[23:15:54] <lundh> oh
[23:17:20] <lundh> its a very old version
[23:17:46] <nahamu> perhaps, but it does answer "is there any torrent software that works on smartos?" :-P
[23:17:55] <lundh> it does :)
[23:18:05] <lundh> I should have been more specific
[23:18:07] <nahamu> it's deps might be enough to satisfy building the latest version...
[23:18:14] <nahamu> s/it's/its/
[23:18:20] <nahamu> I really do speak English...
[23:18:21] <lundh> maybe
[23:18:37] <lundh> hehe, I dont
[23:18:47] <lundh> well, I can but I usually dont
[23:18:50] <LeftWing> Transmission-2.42nb3 may be a year old, but it certainly still works.
[23:19:08] <lundh> Transmission-2.61 is the one that I can see
[23:19:24] <LeftWing> ha, well that's like a couple of months old at best.
[23:19:36] <LeftWing> I guess I'm on an older dataset.
[23:19:38] <wesolows> he has 1.8 I'm sure, which is completely different from our shitty old 1.6.3
[23:19:58] <wesolows> I have a 1.8.4 zone I've been playing with. It's a big change.
[23:20:29] <lundh> how do you upgrade zones?
[23:20:31] <LeftWing> I'm on base:1.7.2 apparently.
[23:20:36] <wesolows> you don't
[23:20:43] <wesolows> create new ones
[23:20:52] <lundh> ok
[23:21:01] <wesolows> it is possible to switch pkgsrc repos and do it that way, but it's not recommended
[23:21:16] <nahamu> I seem to be on 1.7.1
[23:21:33] <wesolows> because unfortunately the stuff in pkgsrc mostly comes from people who aren't disciplined in their compatibility and upgrade guarantees
[23:21:36] <lundh> wesolows: so there is no upgrade path?
[23:21:39] <LeftWing> Yeah, it has all of the pathologies of regular horrible OS upgrades, plus less use and testing.
[23:21:54] <wesolows> not really. I mean, it's possible, depending on what you're doing.
[23:22:16] <LeftWing> wesolows: Do we do security fixes within a micro?
[23:22:25] <wesolows> yes, bug fixes generally
[23:22:29] <LeftWing> cool
[23:22:52] <lundh> so if I want to upgrade the software I have to create a new zone and configure it?
[23:23:03] <wesolows> that's the recommended way, yes
[23:23:09] <wesolows> (if by "software" you mean the image)
[23:23:24] <wesolows> the platform is upgraded independently of any images
[23:23:28] <lundh> applications installed
[23:23:48] <lundh> like transmission
[23:23:54] <wesolows> also, for things you build yourself and put in /usr/local or whatever, you can obviously upgrade at any time
[23:24:04] <lundh> yeah
[23:24:12] <LeftWing> lundh: To get a newer transmission you'll likely need to switch to a new dataset in the future, yes.
[23:24:25] <LeftWing> We encourage (and people are mostly already anyway) the use of config management
[23:24:35] <LeftWing> like Chef or Puppet or even Makefiles or something.
[23:24:47] <wesolows> rdist!
[23:24:53] <LeftWing> Oh lord. :P
[23:24:59] <wesolows> no Ruby required.
[23:25:03] <LeftWing> I mostly manage my config files via UUC... oh.
[23:25:24] <lundh> sure but if I want to store large amounts of data in a zone, migration could be a problem
[23:25:25] <wesolows> rdist is a single binary written in C. the value of that can't be overstated.
[23:25:52] <wesolows> lundh: correct. most customers who work with lots of data have distributed or replicated databases to store it. mysql, riak, whatever.
[23:26:03] <LeftWing> or they stick it out on a NAS.
[23:26:27] <wesolows> yes, because everyone loves a SPOF
[23:26:55] <LeftWing> I'll say.
[23:26:57] <wesolows> they'll spend millions in developing a super duper application that's 13 kinds of awesome, then stick a NAS box behind it.
[23:27:14] <wesolows> a firmware bug bites them and down it all goes
[23:27:17] <LeftWing> Simultaneously hilarious and depressing. =\
[23:27:21] <nahamu> I remember rdist!
[23:27:54] <lundh> so I might have to reconsider a few things then...
[23:28:30] <LeftWing> lundh: So if your data is in a child dataset of the zone
[23:29:11] <LeftWing> I would say you can create a new zone with a delegated dataset, shut it down, remove the new zone's delegated dataset and then do a ZFS clone in its place.
[23:29:32] *** masked has joined #smartos
[23:29:33] *** masked has joined #smartos
[23:29:34] <LeftWing> so the /data filesystem from the old zone will be duplicated 'instantly' into the new one.
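(A sketch of that clone-based migration, run from the global zone with both zones halted; the uuids are placeholders:)

    # snapshot the old zone's delegated dataset
    zfs snapshot zones/<old-uuid>/data@migrate
    # drop the new zone's empty delegated dataset and clone into its place
    zfs destroy zones/<new-uuid>/data
    zfs clone zones/<old-uuid>/data@migrate zones/<new-uuid>/data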
[23:29:56] <lundh> that sounds like a workable solution
[23:30:17] <wesolows> or just stick with software you have, once it's working the way you want.
[23:30:28] <wesolows> "upgrade" = risk
[23:30:49] <lundh> that is what I usually do but sometimes I have to upgrade due to external circumstances
[23:30:53] <LeftWing> Building something with no upgrade path also = risk
[23:31:09] <wesolows> well, there's always an upgrade path
[23:31:27] <LeftWing> True.
[23:32:03] <LeftWing> If you throw enough data into the dataset that it fills more than half of the pool, though, it's going to be a pain. :P
[23:32:05] <wesolows> as long as you have the source to your entire software stack, you can never be fucked that way
[23:32:26] <lundh> Let's start from scratch: I'm building a home server to replace my old broken one. The old one ran FreeBSD without any VMs. What I want is to store my media library, run it as a time machine server, keep backups of other servers, run a few web servers and learn about virtualisation. maybe smartos is the wrong choice?
[23:32:52] <wesolows> let's take the easy ones first :-)
[23:32:58] <wesolows> web servers = 1 per zone, trivial
[23:33:10] <wesolows> TM server = 1 zone, install netatalk from pkgsrc
[23:33:13] <lundh> exactly
[23:33:24] <lundh> did that already, trivial :)
[23:34:00] <wesolows> storing files, also trivial; just put them in a zone
[23:34:36] <lundh> delegated dataset or not?
[23:34:42] <wesolows> the missing piece, where what you're trying to do doesn't really fit SmartOS or any other "cloud" deployment system, is the web app you use to manage those files -- put, get, delete, etc.
[23:35:08] <wesolows> and the database in which you store them, which ought to be replicated to ease transitions and provide redundancy
[23:35:27] <wesolows> it would also be fine if you didn't plan to run a bunch of applications in that zone
[23:35:41] <wesolows> i.e., just use it to store stuff and retrieve it via http or scp or whatever
[23:36:15] <wesolows> then you can have a separate zone in which you install applications but don't persist data; you can replace that zone with the new hotness every 5 minutes if you want and it won't hurt any
[23:36:34] <lundh> that's where the torrent app comes into the picture. some of it originates from torrents
[23:36:49] <wesolows> the problem comes from using a single zone to store persistent (software-independent) data, AND to run a bunch of applications you frequently want to upgrade but don't manage yourself
[23:37:30] <lundh> but if the applications need to access the files?
[23:37:41] <wesolows> HTTP
[23:37:44] <LeftWing> Transmission provides for post-download hooks -- you could punt completed files into Riak! ha ha.
[23:37:51] <wesolows> like I said, that's the missing piece.
[23:38:05] <wesolows> HTTP is the lingua franca of inter-system (and therefore inter-zone) communication
[23:38:38] <wesolows> LeftWing: I'd probably hack up something like that, yeah.
[23:39:33] <wesolows> the other thing you can do here is just manage your download software yourself instead of in pkgsrc
[23:40:25] <wesolows> i.e., have 2 zones of the same image. one you use only for building new copies of transmission, the other of which you use for downloading stuff. when you want to upgrade, you build a new copy, test it, and then just replace the one in your production zone.
[23:40:32] <wesolows> this is actually what a lot of customers do.
[23:41:04] <lundh> sounds like a good idea
[23:41:07] <wesolows> pkgsrc provides basic tools and simple dependencies, but all of the software their apps rely on is hand-rolled and upgraded in lockstep as a single unit, so it can be thoroughly tested.
[23:41:18] <wesolows> the drawback of course is that you have to roll all that yourself.
[23:42:17] <wesolows> so your data is bound to your zone, but the application can be upgraded arbitrarily from a known good build.
[23:42:23] <lundh> could be done. I don't plan on upgrading that often
[23:42:52] <lundh> and I could probably learn alot by doing it this way
[23:43:20] <wesolows> you'd learn about building software, which is tedious and a pain in the ass.
[23:43:57] <lundh> I could also stick with smartos which at least gives me the option to learn more :)
[23:44:39] <wesolows> I don't know what else I'd use, if I had only 1 machine.
[23:45:06] <lundh> I ran everything in the same FBSD instance before. it works but upgrading is... risky
[23:45:16] <wesolows> more likely I'd do what I did, which is buy 1 small system to provide shared file storage over NFS/ssh/whatever, and 1 to provide services
[23:45:18] <lundh> and testing software is a pain
[23:45:25] <wesolows> yes
[23:45:41] <wesolows> gotta say, I love ejecting the DVD, putting in a new one, rebooting, and there's my upgrade!
[23:45:57] <wesolows> takes 2 minutes and requires zero work.
[23:46:17] <lundh> that must be a nice feeling, my upgrades are... a bit more complicated
[23:46:44] <wesolows> well I'm actually doing it the hard way, because my server is uniquely shitty.
[23:47:06] <wesolows> it can't boot from USB... most modern systems can skip the eject step, since they'll just take out the USB key and put a new one in.
[23:47:26] <lundh> thats what I do
[23:51:12] *** ins0mnia has quit IRC
[23:51:13] <lundh> My idea was to cheat and let ubuntu do the hard work with the apps and let a zone handle the data storage and netatalk part
[23:51:24] <lundh> but I cant figure out how to connect the two
[23:51:45] <wesolows> that's your missing web app
[23:52:43] <lundh> nfs would have been great
[23:53:02] <wesolows> yep. no NFS in zones is the #1 annoyance among SmartOS users
[23:53:16] <wesolows> most of whom are trying to do various things that end up looking like what you're doing
[23:53:30] <wesolows> but it doesn't even crack the top 20 list among paying customers
[23:53:52] <lundh> I guess home users are a different kind of user
[23:54:35] <wesolows> yes. there's more demand for shared storage, and less willingness to just use an external system for it if it's really needed.
[23:54:46] <lundh> would it be an option to keep the data on a zfs volume on the GZ and share that, using nfs to both ubuntu and a zone?
[23:55:12] <wesolows> sharing zvols is a very bad idea
[23:55:13] <lundh> and then redistribute using netatalk from the zone
[23:55:30] <lundh> ok, Ill take your word for it by why?
[23:55:34] <wesolows> oh, via nfs... I guess that's possible.
[23:56:05] <wesolows> I'm not sure what ubuntu is for though.
[23:56:23] <lundh> mostly rtorrent
[23:56:26] <ryancnelson> home users can say "make a zone with 2TB of storage i bought at best buy, and these 4gb of ram i have" … customers in the public cloud on server-grade gear can't, and they get something more like 50G with their 2gb of ram. dimensions of things you buy are very different
[23:57:24] <lundh> ryancnelson: you are right about that, my hosted server has 2 GB RAM and 20 GB disk and thats huge
[23:58:05] <lundh> would have been even smaller if atlassian would have allowed it
[23:58:06] <wesolows> with the quantum improvement in bandwidth available at home, I'm quite sure that my current servers will be my last. When they fail, I'll move my stuff onto zones on the Internet.
[23:58:31] <wesolows> 1/8 of a real computer is a lot better than anything I can afford to buy and run here.
[23:59:17] <lundh> thats fine for compute heavy machines, not storage heavy ones
[23:59:52] <ryancnelson> drives up "demand" for shared storage someplace, when "i want 2TB with my 1gb ram" means "give me half the storage on this node, and 'maroon' most of the available ram" … to which any business will say "no" … but a home user can just buy another 3tb western digital sata spindle