September 9, 2011  

[00:00:18] *** patcito has joined #chromium-os
[00:02:21] *** achuith has quit IRC
[00:03:53] *** achuith has joined #chromium-os
[00:03:53] *** ChanServ sets mode: +v achuith
[00:04:17] *** chocobo__ has joined #chromium-os
[00:04:17] *** ChanServ sets mode: +v chocobo__
[00:10:34] *** saintlou has joined #chromium-os
[00:10:34] *** ChanServ sets mode: +v saintlou
[00:11:05] <kliegs> crosbot: sheriffs?
[00:11:05] <crosbot> kliegs: sheriffs: benchan, adlr, vpalatin, yjlou
[00:11:39] <kliegs> benchan, adlr, vpalatin: can you help with http://code.google.com/p/chromium-os/issues/detail?id=20207 please?  I think the toolchain is somehow out of sync somewhere along the chain
[00:12:31] <benchan> kliegs: let me try locally
[00:15:33] <vpalatin> kliegs: did you try gcc-4.6 at some point ?
[00:15:39] <kliegs> vpalatin: I haven't
[00:15:52] <kliegs> at least not intentionally
[00:16:07] <vpalatin> I don't think this attribute (ie EABI attr 44 == idiv/div stuff) was present in our toolchain before that one
[00:16:11] <kliegs> we've had multiple people here all hit the issue - one of which is a chromium developer so wouldn't try any funky toolchain stuff that I can think of
[00:16:27] <kliegs> vpalatin: I should just be using whatever toolchain is installed by default
[00:18:33] <vpalatin> kliegs: can you try "armv7a-cros-linux-gnueabi-readelf -A /build/tegra2_kaen/usr/lib/libcrosapi.a" on your own workspace ?
[00:18:56] <kliegs> sure. against the locally built libcros?
[00:19:24] <kliegs> vpalatin: you want all the output or just one line?
[00:20:12] <vpalatin> kliegs: send me all the output if you have it for the wrong case
[00:20:44] <kliegs> ok. refetching from upstream
[00:21:40] <kliegs> attached to bug
[00:23:36] <vpalatin> kliegs: It really looks like the faulty libcrosapi.a has been compiled with another toolchain which is pretty recent (gcc 4.6 ?)
[00:24:09] <vpalatin> how this stuff ends up in an official package would be interesting to know ...
[00:25:30] <kliegs> vpalatin: tell me about it
[00:28:39] *** Adys has quit IRC
[00:29:04] <kliegs> vpalatin: if a buildbot is misconfigured nsylvain is trooper on duty and can help. i need to leave soon so not sure I have time to dig into that
[00:29:49] <vpalatin> kliegs: I have read a bit further into the binutils source; it seems this bit is set by the compiler depending on the type of CPU
[00:29:58] <vpalatin> and should not be here for cortex A9
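The check vpalatin is running boils down to grepping the ARM build attributes for attribute 44. A minimal sketch, not the real session: the `check_div_attr` helper and the sample attribute text are made up for illustration; the actual input came from `armv7a-cros-linux-gnueabi-readelf -A /build/tegra2_kaen/usr/lib/libcrosapi.a`.

```shell
# Classify readelf -A output by whether it contains the Tag_DIV_use build
# attribute (EABI attribute 44), which binutils 2.21-era tools emit for
# ARM objects and binutils 2.20 does not.
check_div_attr() {
  if printf '%s\n' "$1" | grep -q 'Tag_DIV_use'; then
    echo "Tag_DIV_use present: built with a newer toolchain (binutils 2.21?)"
  else
    echo "Tag_DIV_use absent: consistent with binutils 2.20"
  fi
}
```

Usage would be along the lines of `check_div_attr "$(armv7a-cros-linux-gnueabi-readelf -A libcrosapi.a)"`.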
[00:30:48] <davidjames> vpalatin: asharif just upgraded binutils a couple days ago, then downgraded
[00:31:02] *** Adys has joined #chromium-os
[00:31:16] <davidjames> vpalatin: not sure which version is deployed everywhere right now
[00:31:36] <vpalatin> davidjames: so, might be that
[00:31:53] <kliegs> davidjames, vpalatin: that seems likely. guessing buildbots have bad version
[00:32:12] <benchan> davidjames: is there a way to force a rebuild of the binary package on the buildbots?
[00:33:50] <kliegs> benchan: we forced a rebuild of libcros earlier today - the package was rev'd ~2-3 hours ago and the new rev of it is still bad
[00:33:50] <davidjames> benchan: Yes just uprev the package
[00:34:08] <kliegs> davidjames: (in case you haven't been following) the newly rev'd libcros is still bad
[00:34:37] <vpalatin> kliegs: do we have the logs of that build ?
[00:35:56] <kliegs> vpalatin: should be, although I suspect the actual compiler command line is lost
[00:36:40] <benchan> kliegs: could it be some libs that libcros depends on also have the same problem? so the tag gets propagated to libcros during compilation?
[00:37:06] <davidjames> benchan: Looks like tegra2 bot has binutils-2.21
[00:37:35] <davidjames> benchan: Raymes thinks that maybe binutils-2.21 introduces this bug
[00:37:41] <kliegs> benchan: that is possible
[00:38:22] <kliegs> benchan: but I don't know enough about it
[00:38:39] <benchan> davidjames: is binutils-2.21 part of chroot or a output of build_package?
[00:39:25] <davidjames> benchan: part of chroot
[00:39:53] <kliegs> any idea how the buildbot ended up with it
[00:41:05] <davidjames> kliegs: binutils-2.21 was rolled out by toolchain team and then reverted, but setup_board has no logic for downgrading toolchains
[00:41:20] <benchan> davidjames: I encountered a similar issue before that I needed to blow away my chroot to fix it
[00:41:22] <davidjames> kliegs: So until bot is clobbered it sticks with new binutils, same for developer chroots
[00:41:49] <kliegs> ahh
[00:41:54] <kliegs> so basically need to clobber all the bots
[00:42:01] <davidjames> Not all, just incremental bots
[00:42:03] <kliegs> then the full builders will rebuild with the old toolchain?
[00:42:14] <davidjames> full builders already use right toolchain (2.20)
[00:42:15] <kliegs> don't we also need to force rebuilds of all packages?
[00:42:27] <davidjames> yes :(
[00:42:41] <kliegs> ahh ok. so full builders will update on their next cycles. and clobbering incrementals prevent any updates from cloberring the fulls?
[00:42:42] <benchan> sosa: did you suggest emerge --nousepkg would fix that issue?
[00:43:00] <davidjames> kliegs: Hmm no that won't work
[00:43:10] <sosa> clobber the incrementals?
[00:43:29] <vpalatin> davidjames, kliegs: I confirm this flag is never set by the current binutils-2.20 and is set by binutils 2.21
[00:43:33] *** quannnum has quit IRC
[00:43:34] <kliegs> benchan: --nousepkg will work but be slow
[00:43:36] <davidjames> kliegs: Well, incrementals will download their own old binaries after being clobbered
[00:43:54] <kliegs> davidjames: right. but at least when they uprev they'll be upreving new things
[00:43:55] <davidjames> kliegs: I could blow away the binhosts though, that'd teach the builders a lesson
[00:44:08] <kliegs> davidjames: I thought the full builders also uploaded their packages after each run?
[00:44:15] *** zmedico has quit IRC
[00:44:41] <kliegs> but I could easily be mistaken
[00:45:21] <davidjames> kliegs: They do, but binaries from preflights are preferred, so that we can tease out any issues with incremental builds
[00:45:45] <kliegs> davidjames: ucch. so we do need to clobber the binhost?
[00:45:49] <davidjames> (which, obviously, we are detecting now)
[00:45:53] <kliegs> davidjames: true
[00:46:35] <davidjames> kliegs: I think I'll adjust PFQ to be nousepkg for now
[00:47:30] <davidjames> Or maybe, just ignore any preflight or full binhosts... keep the chrome binary since that one is good
[00:51:40] <kliegs> davidjames: since i've got to run I'll trust you to pick the least painful path :)  good luck - don't envy you
[00:51:52] <kliegs> i'll pester you tomorrow about my chrome ebuild CL that rcui punted over :)
[00:52:37] *** gspencer has joined #chromium-os
[00:52:37] *** ChanServ sets mode: +v gspencer
[00:53:22] <kliegs> might be worth sending out a note to list as well so people stuck can workaround until everything catches up.
[00:53:24] <kliegs> anyways. 'night
[00:55:57] <davidjames> kliegs: Yeah good idea
[00:56:07] <davidjames> kliegs: I'm updating setup_board to do automatic downgrades now anyway
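The gap davidjames is closing is that setup_board only ever moved toolchain packages forward, so the reverted binutils (2.21 back to 2.20) stuck around until a chroot or bot was clobbered. A hypothetical sketch of the missing check, not the actual setup_board code; the function and argument names are made up for illustration:

```shell
# Return success when the installed toolchain version sorts *after* the
# pinned one, i.e. when a downgrade (not an upgrade) is needed.
needs_downgrade() {
  installed=$1 pinned=$2
  [ "$installed" != "$pinned" ] &&
    [ "$(printf '%s\n%s\n' "$installed" "$pinned" | sort -V | head -n1)" = "$pinned" ]
}
```

So `needs_downgrade 2.21 2.20` succeeds (force the 2.20 package back in), while an ordinary upgrade case like `needs_downgrade 2.19 2.20` does not.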
[00:56:46] * vpalatin is giving back his sheriff star. It's the evening on my virtual EST timezone.
[00:59:44] * adlr picks it up
[01:00:06] *** espeed has joined #chromium-os
[01:16:07] <adlr> puneet is fixing the file perms issue. there won't be a CL for it, since it's just a tarball that'll be fixed
[01:22:33] <benchan> adlr: crosbug.com/17798
[01:22:57] *** Solet has quit IRC
[01:23:09] <benchan> adlr: it may be a recurring issue for different platforms
[01:23:24] <adlr> looking
[01:24:19] <adlr> yeah, looks like the same issue
[01:24:56] <benchan> adlr: as dparker@ suggested in crosbug.com/20203, perhaps it's a good idea to add some sanity check in the eclass
[01:26:19] <davidjames> Hey folks, binutils-2.20 downgrade is rolling out now
[01:26:38] <davidjames> Now I'm looking to update preflights to not use their old binhosts so they can be clobbered
[01:26:48] <adlr> benchan: good idea
[01:29:04] *** espeed has quit IRC
[01:31:17] <benchan> adlr: working on it
[01:40:50] *** stevenjb has quit IRC
[01:42:33] *** Solet has joined #chromium-os
[01:49:54] <davidjames> Ok folks, tegra2 preflight is clobbered, you should have your fresh binaries soon
[01:51:07] <adlr> and by fresh, you mean old
[01:51:15] <adlr> ;)
[01:54:36] <crosbot> tree became 'Tree is closed (Automatic: "cbuildbot" on "tegra2_kaen-binary" from d9b54277b1c2239aa1c67317dacb6874529612e6: abodenha at chromium dot org <abodenha at chromium dot org@0039d316-1c4b-4281-b951-d872f2087c98>)'
[01:55:23] * adlr looks
[02:00:10] <adlr> davidjames: seems this is a binutils failure?
[02:01:04] <davidjames> adlr: Yeah I guess they need to be clobbered, clobbering them now
[02:01:52] <crosbot> tree became 'Tree is open (tegra2 fails -> crosbug.com/20207, stumpy fail -> sosa reverted bad CL already)'
[02:02:30] <benchan> adlr: chromeos-bootimage-0.0.2-r18: Error reading bct file /build/tegra2_kaen/firmware/bct/board.bct
[02:03:29] <crosbot> tree became 'Tree is closed (tegra2 fails -> crosbug.com/20207, stumpy fail -> sosa reverted bad CL already)'
[02:03:37] *** wfrichar has quit IRC
[02:03:44] <crosbot> tree became 'Tree is open (tegra2 fails -> crosbug.com/20207, stumpy fail -> sosa reverted bad CL already)'
[02:08:21] <crosbot> tree became 'Tree is closed (tegra2 fails -> crosbug.com/20207, stumpy fail -> sosa reverted bad CL already)'
[02:08:52] <crosbot> tree became 'Tree is open (tegra2 fails -> crosbug.com/20207, stumpy fail -> sosa reverted bad CL already)'
[02:14:35] *** Gireds has joined #chromium-os
[02:15:12] *** Gireds has left #chromium-os
[02:17:19] <crosbot> tree became 'Tree is closed (cycling PFQ for binutils downrev )'
[02:18:21] <crosbot> tree became 'Tree is open (tegra2 fails -> crosbug.com/20207, stumpy fail -> sosa reverted bad CL already)'
[02:41:06] *** stevenjb has joined #chromium-os
[02:48:58] *** stevenjb has quit IRC
[02:50:06] <davidjames> adlr: Ok, so good news is that this is solved by everybody upgrading to new binutils
[02:52:05] <adlr> davidjames: okay
[02:52:26] <davidjames> adlr: So asharif just rolled out new binutils, prebuilts should be available soon
[02:56:51] <adlr> i'm going to head out, thus bringing an end to my sheriff duties
[03:03:24] <crosbot> tree became 'Tree is closed (cycling chroot builder)'
[03:03:39] <crosbot> tree became 'Tree is open (tegra2 fails -> crosbug.com/20207, stumpy fail -> sosa reverted bad CL already)'
[03:12:50] *** sosa has quit IRC
[03:17:22] * benchan heads out
[03:17:31] *** benchan has quit IRC
[03:29:33] *** SySfS has joined #chromium-os
[03:33:23] *** Clark008 has joined #chromium-os
[03:34:54] *** saintlou has quit IRC
[03:48:02] <SySfS> I have a technical question that I asked in the user channel and didn't get a response, could I ask it here as well?
[03:51:30] <crosbot> tree became 'Tree is open'
[03:52:47] <crosbot> tree became 'Tree is closed (cycling canaries to pick up sosa's revert)'
[03:53:13] *** SySfS has quit IRC
[03:53:48] <crosbot> tree became 'Tree is open (failing canaries fixed with sosa's revert)'
[03:54:44] *** SySfS has joined #chromium-os
[03:55:11] *** m1k3l_ has quit IRC
[03:55:23] *** SySfS has quit IRC
[04:00:43] <crosbot> tree became 'Tree is closed (Automatic: "cbuildbot" on "tegra2_arthur-binary" from None: )'
[04:02:12] *** gfrog has joined #chromium-os
[04:04:22] *** gfrog has joined #chromium-os
[04:09:29] *** gfrog has quit IRC
[04:10:38] *** gfrog has joined #chromium-os
[04:12:51] <crosbot> tree became 'Tree is open (tegra2_arthur fail -> reinauer, crosbug.com/20224)'
[04:20:00] *** Daxvex has joined #chromium-os
[04:21:54] *** gfrog is now known as qfrog
[04:26:05] *** Space_Core has joined #chromium-os
[04:38:49] *** Space_Core is now known as TheBrokenGLaDOS
[04:56:40] *** qfrog has quit IRC
[04:57:50] *** gfrog has joined #chromium-os
[04:59:09] *** gfrog has joined #chromium-os
[05:00:24] *** stevenjb has joined #chromium-os
[05:14:39] *** vapier has quit IRC
[05:15:03] *** vapier has joined #chromium-os
[05:17:29] *** zmedico has joined #chromium-os
[05:40:58] *** corburn has joined #chromium-os
[05:43:51] *** stalled has quit IRC
[05:58:18] *** petermayo has quit IRC
[05:59:36] *** Daxvex has quit IRC
[06:09:46] *** stalled has joined #chromium-os
[06:10:53] *** stevenjb has quit IRC
[06:29:25] *** stevenjb has joined #chromium-os
[06:33:00] <crosbot> tree became 'Tree is closed (Automatic: "cbuildbot_master" on "x86 generic PFQ" from bbc94b1e5f587474b0259103828102047c6c2f1e: Chris Sosa <sosa at chromium dot org>)'
[06:38:22] <crosbot> tree became 'Tree is open (flaky race condition in creating download folder)'
[06:40:55] <crosbot> tree became 'Tree is closed (Automatic: "cbuildbot" on "x86-alex_he canary" from None: )'
[06:41:19] *** stevenjb has quit IRC
[06:48:36] <crosbot> tree became 'Tree is open (flaky rsync on autotest)'
[06:49:22] <crosbot> tree became 'Tree is open (flaky rsync on autotest crosbug.com/8337)'
[07:06:15] <crosbot> tree became 'Tree is closed (Automatic: "cbuildbot" on "x86-alex-binary" from a19e05e2bd4165d5c52510ab6c89d06acdf130f4: estade at chromium dot org <estade at chromium dot org@0039d316-1c4b-4281-b951-d872f2087c98>, pdox at google dot com <pdox at google dot com@fcba33aa-ac0c-11dd-b9e7-8d5594d729c2>, zmo at google dot com <zmo at google dot com@736b8ea6-26fd-11df-bfd4-992fa37f6226>)'
[07:07:47] <crosbot> tree became 'Tree is open (Flaky vm timeout, watching)'
[07:12:30] *** sergiu has quit IRC
[07:20:27] *** vmil86 has joined #chromium-os
[07:48:03] <crosbot> tree became 'Tree is closed (Automatic: "cbuildbot" on "x86-zgb-binary" from 89c5be633f6c8533d6a51a98210182038a7639ea: Chris Sosa <sosa at chromium dot org>, Hung-Te Lin <hungte at chromium dot org>, _third_party_ at chromium dot org, estade at chromium dot org <estade at chromium dot org@0039d316-1c4b-4281-b951-d872f2087c98>)'
[07:48:34] <crosbot> tree became 'Tree is open (ctest fix pushed.)'
[08:36:22] *** FusionX has quit IRC
[08:37:51] *** saggu has joined #chromium-os
[08:39:20] *** FusionX has joined #chromium-os
[08:40:04] <saggu> Hi.. does any one know how to port chromium os image to armv7 board using qemu emulator?
[08:53:30] *** sosa has joined #chromium-os
[08:53:31] *** ChanServ sets mode: +v sosa
[09:02:06] *** Styx has joined #chromium-os
[09:08:51] *** gfrog is now known as kfrog
[09:10:24] *** xc0ffee has joined #chromium-os
[09:13:07] *** magn3ts has quit IRC
[09:21:57] *** saggu has quit IRC
[09:29:54] *** petermayo has joined #chromium-os
[09:30:53] <crosbot> tree became 'Tree is open (ctest fix pushed for OSError: [Errno 17] File exists.  Some canaries haven't gotten the change yet and may go red.  Just re-open for those.)'
[09:39:13] *** petermayo_ has joined #chromium-os
[09:40:34] *** petermayo_ has left #chromium-os
[09:41:50] *** petermayo has quit IRC
[09:41:52] *** Honoome is now known as Flameeyes
[09:44:48] *** patcito has quit IRC
[09:48:09] *** corburn has quit IRC
[09:59:59] <crosbot> tree became 'Tree is closed (Automatic: "cbuildbot" on "lumpy canary" from None: )'
[10:00:34] <sosa> any sheriffs around?
[10:01:35] *** pastarmovj has joined #chromium-os
[10:02:11] *** jujugre has joined #chromium-os
[10:03:04] <crosbot> tree became 'Tree is open (ctest fix pushed for OSError: [Errno 17] File exists. Some canaries haven't gotten the change yet and may go red. Just re-open for those.)'
[10:06:43] *** patcito has joined #chromium-os
[10:11:40] *** sosa has quit IRC
[10:27:07] <crosbot> tree became 'Tree is closed (Automatic: "cbuildbot" on "x86-mario canary" from None: )'
[10:28:33] *** BladeFreak has quit IRC
[10:28:56] *** BladeFreak has joined #chromium-os
[10:32:21] *** srikanth has quit IRC
[10:32:47] <crosbot> tree became 'Tree is open ( canaries are all passed test issue)'
[10:34:12] *** patcito has quit IRC
[10:36:58] <pastarmovj> Hi there :) is everyone seeing the 500 error when loading the chromeos waterfall?
[11:21:48] *** unreal has quit IRC
[11:27:24] <crosbot> tree became 'Tree is closed (Automatic: "cbuildbot" on "x86-zgb-binary" from 86e6d512c8e48689da6aee19cc15a1d0a28cc0d7: loislo at chromium dot org <loislo at chromium dot org@0039d316-1c4b-4281-b951-d872f2087c98>, nirnimesh at chromium dot org <nirnimesh at chromium dot org@0039d316-1c4b-4281-b951-d872f2087c98>, zelidrag at chromium dot org <zelidrag at chromium dot org@0039d316-1c4b-4281-b951-d872f2087c98>)'
[12:04:06] *** unreal has joined #chromium-os
[12:45:03] *** kfrog has quit IRC
[13:11:11] *** Pedro has joined #chromium-os
[13:13:25] *** Pedro has quit IRC
[14:51:52] *** Sarten-X2 has quit IRC
[15:01:43] *** Sarten-X has joined #chromium-os
[15:15:52] *** BladeFreak has quit IRC
[15:28:19] *** jglasgow has joined #chromium-os
[15:28:20] *** ChanServ sets mode: +v jglasgow
[15:36:40] <jglasgow> I am the on-duty sheriff, reporting for duty, and now wide awake.
[15:37:25] <jglasgow> Looks like the ZGB failure has fixed itself.  I'll open as soon as I make sure all the tree-closing builders are green, but first I need to figure out which is the up-to-date list of tree closers.
[15:41:12] <crosbot> tree became 'Tree is close (jglasgow researching x86-alex-binary FAIL VMTest)'
[15:41:27] <crosbot> tree became 'Tree is closed (jglasgow researching x86-alex-binary FAIL VMTest)'
[15:41:58] <jglasgow> I am looking into why the x86-alex-binary is failing the tests.
[15:47:55] <bryeung> I'm here for sheriff duty.
[15:48:01] <bryeung> jglasgow: how can I help?
[15:49:22] *** hru has left #chromium-os
[15:53:17] <jglasgow> bryeung: Lets see.  I do not understand why the x86-alex-binary tree is failing the VMTests.
[15:53:28] <jglasgow> It looks like the tests are timing out after 40 minutes.
[15:54:13] <jglasgow> I've gotten as far as grabbing the test results from http://chromeosbuild9.mtv.corp.google.com/archive/x86-alex-private-bin/0.16.1014.0-a1-b1230/ (test_results.tgz) -- but it isn't clear to me which test is not completing.
[15:54:52] <jglasgow> The following:
[15:54:52] <jglasgow> ./testUpdateWipeStateful/4_verify/suite_Smoke/job_report.html
[15:54:52] <jglasgow> ./testUpdateWipeStateful/2_verify/suite_Smoke/job_report.html
[15:54:52] <jglasgow> ./testUpdateKeepStateful/4_verify/suite_Smoke/job_report.html
[15:54:52] <jglasgow> ./testUpdateKeepStateful/2_verify/suite_Smoke/job_report.html
[15:54:52] <jglasgow> All show all tests passing.
[15:56:26] <bryeung> hrm.
[15:56:57] <jglasgow> Actually that was with build 1230.  Looking at build 1232 I see suite_Smoke/login_CryptohomeUnmounted              FAIL
[15:58:03] <bryeung> jglasgow: yes, just noticed that
[15:58:23] <jglasgow> Since that was the most recent failure of x86-alex-binary, lets focus on that.  I seem to recall that Cryptohome test frequently fails.  I'm going to search gmail to see if there is any history or explanation
[15:59:48] <bryeung> jglasgow: I'll search bugs.
[16:01:56] <bryeung> jglasgow: I don't see any related bugs
[16:02:36] <jglasgow> I am grabbing the test_results file to see what the specific failure is.
[16:04:43] <jglasgow> the change logs from 1230, 1231, 1232 do not point to anything related to Cryptohome.  Or at least I don't see it.  1229 was running fine. (http://chromeos-botmaster.mtv.corp.google.com:8026/builders/x86-alex-binary)
[16:05:30] <bryeung> jglasgow: agreed
[16:05:55] <bryeung> any luck with the test_results file?
[16:06:28] <jglasgow> not yet, but I just noticed that 1233 finished and here the failure seems to be a timeout (http://chromeos-botmaster.mtv.corp.google.com:8026/builders/x86-alex-binary/builds/1233/steps/VMTest/logs/stdio)
[16:08:53] <bryeung> so 1230 and 1233 are both timeouts
[16:09:15] <bryeung> 1231 and 1232 have this cryptohome failure
[16:09:29] <bryeung> and we have no suspicious looking CLs
[16:10:22] <jglasgow> file:///home/jglasgow/Downloads/test_harness_1232/testUpdateKeepStateful/2_verify/suite_Smoke/job_report.html
[16:10:42] <jglasgow> STATUS:
[16:13:43] <jglasgow> bryeung: I'm at a loss.  I don't know why the login is timing out.  The debug messages for this test are not particularly useful either.
[16:13:52] <bryeung> hmm...and this is the only machine where this is happening
[16:14:01] <bryeung> maybe it's a network flake/issue on the machine?
[16:15:21] <jglasgow> I'm not ready to say that -- but look at the login_CryptohomeUnmounted test.  There is not much to it.
[16:15:49] <jglasgow> Seems more like a ChromeLogin problem than a cryptohome problem.
[16:17:01] *** xc0ffee has quit IRC
[16:18:31] <jglasgow> I'm willing to put the tree in a throttled state.  Thoughts?
[16:18:31] <jglasgow> I'll file a bug, but we need to figure out who to assign it to.
[16:20:06] <bryeung> sgtm
[16:22:04] *** xc0ffee has joined #chromium-os
[16:29:45] <crosbot> tree became 'Tree is closed (jglasgow researching x86-alex-binary FAIL VMTest, filed bug http://crosbug.com/20234 )'
[16:30:57] <bryeung> jglasgow: was just talking to kliegs
[16:31:18] <bryeung> apparently there were issues with the cryptohome tests, which caused webui login to get rolled back
[16:32:46] *** xc0ffee has left #chromium-os
[16:32:50] <jglasgow> so where does that leave us now?
[16:33:51] <bryeung> jglasgow: I'm trying to figure out if it has been turned back on
[16:35:05] <bryeung> though I'm not sure why that would be causing failures on only one bot
[16:37:44] <bryeung> jglasgow: and webui is still turned off by default, so this leaves us no better off
[16:38:18] <kliegs> this may not be related, but I recall permissions errors on some of the cryptohome stuff - it's had some changes - ellyjones I think?
[16:38:32] <kliegs> but earlier this week the cryptohome had errors about the wrong user owning directories.
[16:38:50] <kliegs> so its probably worth looking at the cryptohome changelogs for the tests and such in case some flake got introduced
[16:39:06] <kliegs> (I could also be wrong - could be a bug introduced in chrome)
[16:39:06] <ellyjones> hi
[16:39:14] <ellyjones> that all got reverted yesterday morning
[16:39:23] <ellyjones> after the unit tests turned out not to be so unit
[16:39:30] <ellyjones> it is unlikely that those changes caused this problem
[16:39:49] *** TW1920 has joined #chromium-os
[16:50:54] <kliegs> ok. this is really odd
[16:51:08] <kliegs> bryeung and I are seeing different log files for build 1231 of alex-binary
[16:51:16] <kliegs> refreshing enough times caused his to change
[16:51:23] <jglasgow> That is strange.
[16:51:33] <kliegs> jglasgow, ellyjones: Do you mind looking at vmtests for x86-alex-binary build 1231?
[16:51:41] <bryeung> jglasgow: in one of mine, it is cryptohomeUnmounted that is failing
[16:51:52] <bryeung> in the other, it is a Login failure
[16:52:21] <jglasgow> I will pull down the logs for 1231.  Do you also want me to try to run the vmtests?
[16:52:48] <kliegs> jglasgow: partly curious on what error you're actually seeing
[16:53:40] <ellyjones> VMTest passed in 1231, it looks like
[16:53:54] <jglasgow> Not really.
[16:53:59] <kliegs> ellyjones: I was showing passed when looking at the list, but clicking through it i see a failure
[16:54:00] <jglasgow> http://chromeos-botmaster.mtv.corp.google.com:8026/builders/x86-alex-binary/builds/1231/steps/VMTest/logs/stdio
[16:54:18] <ellyjones> _that's_ fucked up
[16:54:38] <ellyjones> it does two test runs, and Unmounted fails on the first and LoginSuccess on the second
[16:56:49] *** ers has quit IRC
[16:56:56] <kliegs> oh. yah. its just timestamps overlapping
[16:57:01] <kliegs> that's what confused us
[16:57:11] <kliegs> I always scroll up from the bottom.  guessing bryeung scrolled down from the top
[16:57:16] <kliegs> so we looked at different runs
[16:57:24] <kliegs> kinda embarrassed now
[16:57:48] <bryeung> face palm!
[16:57:48] <kliegs> can those tests run safely in parallel?
[16:58:14] <ellyjones> did you file a bug against sosa or someone for the false pass report? I do not see an @@@STEP_FAILURE@@@ in the log, so that may be why
[16:58:33] <jglasgow> I did not file a bug report for the false pass report.
[16:58:55] <kliegs> Is this just a timing issue with two tests running on the vm in parallel?
[16:59:09] <kliegs> that's my first hunch but I don't know the test infrastructure as well as I should
[17:00:42] <jglasgow> I don't understand -- do we run the VMTests from the same disk image simultaneously?  I would assume that we could only do that with a "read-only" image.  But if that's what we do, then there should be a problem running in parallel.
[17:02:20] <jglasgow> rochberg points out there could be CPU contention
[17:06:26] <bryeung> jglasgow: kliegs and I are looking at the tegra2_arthur failure now
[17:07:51] <jglasgow> zgb just failed VMTest.  suite_Smoke/security_ProfilePermissions.login FAIL
[17:08:59] <ellyjones> the zgb vmtest failure is a dbus timeout during the test (!?)
[17:09:16] <kliegs> did we increase parallelization of build tests recently?
[17:10:10] <rochberg_> I just tried to build chrome from SERVER_SOURCE (with gmerge), and my checkout failed 15 minutes in with no error message, just an svn A line (for redirect-cross-origin-tripmine.html)
[17:10:16] <jglasgow> kliegs: I do not know.
[17:10:23] <rochberg_> Any debug tips?
[17:10:42] <kliegs> rochberg_: inside chroot cd /var/log/portage/distfiles-target/chrome-src
[17:10:48] <kliegs> (or chrome-src-internal depending)
[17:10:55] <kliegs> run `which gclient` sync
[17:11:08] <kliegs> that will let you debug/run the sync command outside the ebuild.  and adjust if needed
[17:11:17] <kliegs> (you need the full path to gclient or it won't run)
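kliegs' recipe above can be sketched as one helper function. This is an illustrative wrapper, not an official tool: the directory default is the path he names (with `chrome-src-internal` for internal checkouts), and resolving the full path to gclient is what lets it run outside the ebuild environment.

```shell
# Re-run the chrome source sync by hand, outside the ebuild, so the real
# error is visible. Pass an alternate directory as $1 if needed.
debug_chrome_sync() {
  srcdir=${1:-/var/log/portage/distfiles-target/chrome-src}
  cd "$srcdir" || return 1
  gclient=$(command -v gclient) || { echo "gclient not in PATH" >&2; return 1; }
  "$gclient" sync
}
```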
[17:14:57] <jglasgow> zgb 1159 failures show a chrome failure (which should not be related to the dbus timeout, but is suspicious).
[17:15:02] <jglasgow> 2011-09-09T14:19:21.840036+00:00 localhost kernel: [  469.839565] chrome[16494]: segfault at 2a ip 729a2fcd sp 706ee694 error 4 in libpthread-2.10.1.so[7299b000+15000]
[17:15:02] <jglasgow> 2011-09-09T14:19:21.885459+00:00 localhost crash_reporter[16885]: Received crash notification for chrome[16478] sig 11 (developer build - not testing - always dumping)
[17:15:02] <jglasgow> 2011-09-09T14:19:23.632481+00:00 localhost crash_reporter[16885]: Stored minidump to /var/spool/crash/chrome.20110909.071921.16478.dmp
[17:15:59] <bryeung> jglasgow: I think I have the tegra2_arthur problem under control (with lots of help from kliegs and cwolfe)
[17:16:05] <bryeung> fixing now...
[17:16:44] <rochberg_> Sync appears to be running happily, or at least making further progress.   Too bad that rm -rf "${ECHROME_STORE_DIR}" is going to blow it away before I rebuild.
[17:17:19] <rochberg_> Maybe it won't, now that I look at it.
[17:18:31] <kliegs> it only does the rm -rf on version mismatch following sync errors
[17:27:36] *** benchan has joined #chromium-os
[17:27:36] *** ChanServ sets mode: +v benchan
[17:29:21] *** stevenjb has joined #chromium-os
[17:29:36] *** lipsinV2 has quit IRC
[17:37:04] <ellyjones> is it chromium-gitmaster at googlegroups dot com?
[17:39:51] *** lipsinV2 has joined #chromium-os
[17:40:16] <ellyjones> (that I mail for a new repo :P)
[17:41:34] <jglasgow> kliegs,bryeung: so magically x86-alex-binary just passed all the vmtests.
[17:41:49] <kliegs> jglasgow: if it's a parallel timing issue, could be
[17:41:55] <bryeung> yay for magic :-/
[17:42:02] <jglasgow> we still have a failure in the x86-zgb-binary.  I'm tempted to open the tree.
[17:42:09] <jglasgow> Who should research the timing issue?
[17:42:24] <jglasgow> Is that my job as sheriff?
[17:46:09] <bryeung> jglasgow: okay, so at least the tegra2_arthur issue should be fixed
[17:46:12] <bryeung> we're getting closer
[17:46:14] <kliegs> jglasgow: sort of
[17:46:30] <kliegs> jglasgow: although more you should track down someone who can
[17:50:25] *** jujugre has left #chromium-os
[17:52:55] <jglasgow> kliegs: any suggestion on who would be most qualified?  Otherwise I'll try to run the test locally and see if I can sort anything out -- determine if it is just general flakiness or random chrome crashes.
[17:53:27] <kliegs> jglasgow: while I hate always punting to him, I seem to recall davidjames recently talking about test parallelization
[17:53:32] <kliegs> nirnimesh may also know things about test
[17:53:41] <jglasgow> okay.  I'll try to talk to them.
[17:55:01] <jglasgow> So does anybody object to opening the tree?
[17:55:21] *** wfrichar has joined #chromium-os
[17:55:22] *** ChanServ sets mode: +v wfrichar
[17:55:23] <bryeung> jglasgow: sounds okay to me
[17:59:55] <crosbot> tree became 'Tree is open (zgb is flaky failure)'
[18:00:43] <jglasgow> I am going to lunch.
[18:00:47] <jglasgow> Be back in 15 minutes.
[18:02:59] *** behdad has joined #chromium-os
[18:05:11] *** ttuttle|work has quit IRC
[18:05:11] *** ttuttle|work has joined #chromium-os
[18:05:11] *** pratchett.freenode.net sets mode: +v ttuttle|work
[18:14:57] *** saintlou has joined #chromium-os
[18:14:58] *** ChanServ sets mode: +v saintlou
[18:24:27] *** Solet has quit IRC
[18:29:05] *** petermayo has joined #chromium-os
[18:29:06] *** ChanServ sets mode: +v petermayo
[18:32:29] <jglasgow> Back.
[18:32:32] <jglasgow> Yes.
[18:32:44] <jglasgow> Is there something I should do?  I was typing up some notes.
[18:32:51] <bryeung> jglasgow: thanks!  I'm going to grab lunch.  Back soon.
[18:32:56] <jglasgow> SGTM
[18:33:09] *** lipsinV2 has quit IRC
[18:33:37] *** Solet has joined #chromium-os
[18:39:22] *** m1k3l has joined #chromium-os
[18:40:13] <davidjames> kliegs: alex binary machine is short on RAM, that's why it's failing
[18:40:33] <davidjames> kliegs: as for zgb, don't know, looks like a different issue
[18:43:25] *** rcui has joined #chromium-os
[18:43:25] *** ChanServ sets mode: +v rcui
[18:44:01] <crosbot> tree became 'Tree is open (alex-binary: crosbug.com/19200; zgb: flaky? need to file bug)'
[18:44:47] <crosbot> tree became 'Tree is open (alex-binary: crosbug.com/19200; zgb: filing bug -> rcui)'
[18:50:20] <rochberg_> Does chromeos-chrome use gold on ARM?
[18:51:04] <rochberg_> (Do not look this up, I am just going to build chrome and see)
[18:53:54] <kliegs> rochberg_: I think it does.  I think everything uses gold.  but wouldn't swear to it
[18:55:26] <ihf_> Chromeos-chrome uses gold on ARM.
[18:56:06] <kliegs> davidjames: how did you figure out it's short on RAM?  disappointed I didn't notice
[18:57:19] <bryeung> back
[18:58:19] <kliegs> davidjames: on the chrome patch CL - is it ok if I leave the definitions at the top and move the conditional into the function?  I like the notion of all the patch files being listed at the top and easy to see
[18:59:50] <davidjames> kliegs: Hmm, I usually look at src_prepare for patches
[19:00:12] <bryeung> passed!  the tree is vibrantly green!
[19:00:26] <davidjames> kliegs: So if they're listed under src_prepare it's easier there. I would either put them all in src_prepare or all at the top (the way you have it now)
[19:00:34] <kliegs> davidjames: I guess all the ebuilds I looked at for example had patches all declared at the top. even if its src_prepare that applies
[19:01:22] <davidjames> kliegs: Aha, there are also lots that just 'epatch patch' inside src_prepare
[19:01:46] <davidjames> kliegs: No need to have a list of patches, just do 'if use foo; do epatch patch; fi'
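The pattern davidjames describes (with `then` rather than `do`) looks roughly like this. The `use()`/`epatch()` stubs exist only so the sketch runs outside portage; a real ebuild gets both from the package manager, and the USE flag and patch filename here are made up for illustration.

```shell
# Stubs standing in for portage's built-ins, for illustration only.
use()    { [ "${USE_FLAGS:-}" = "$1" ]; }
epatch() { echo "applying $1"; }
FILESDIR=files

# Conditionally apply a single patch directly in src_prepare, instead of
# maintaining a declared list of patches at the top of the ebuild.
src_prepare() {
  if use internal_build; then
    epatch "${FILESDIR}/chrome-internal.patch"
  fi
}
```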
[19:02:25] <kliegs> davidjames: yah, I was thinking of doing that initially. then figured since I was doing this I might as well make it more flexible while i'm writing it
[19:02:39] <davidjames> kliegs: Maybe overkill? Just epatch whatever patch you want :)
[19:02:51] <kliegs> davidjames: probably overkill.
[19:02:57] <davidjames> You don't even have any patches yet, building a conditional list of patches seems strange when there are no patches yet :)
[19:03:21] <davidjames> Let's just get in the 1 patch you need :)
[19:03:55] <kliegs> davidjames: the dependent CL has the patch :)  Given its contents are changing over time I didn't want it tied to the framework
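[Editor's note: for readers following along, the two ebuild styles davidjames and kliegs are comparing could be sketched roughly as below. The USE flag "foo" and the patch filename are hypothetical, not taken from an actual CL; `use`, `epatch`, `FILESDIR`, and `P` are provided by Portage, so this is a fragment, not a standalone script.]

```shell
# Style A (hypothetical sketch): patch file declared at the top for
# visibility, with the USE conditional kept inside src_prepare
# (an ebuild must not call `use` at global scope):
FOO_PATCH="${FILESDIR}/${P}-fix-foo.patch"

src_prepare() {
	use foo && epatch "${FOO_PATCH}"
}

# Style B (hypothetical sketch): no list at all; just epatch inline
# inside src_prepare, as davidjames suggests:
# src_prepare() {
#	if use foo; then
#		epatch "${FILESDIR}/${P}-fix-foo.patch"
#	fi
# }
```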
[19:05:44] <xiyuan> sheriffs, just turned on webui login again. Please watch for VMTest failures.
[19:06:14] <xiyuan> I'll also watch the tree. Hopefully this time it runs fine.
[19:06:31] <ihf_> ok
[19:10:54] *** jochen__ has quit IRC
[19:10:58] <davidjames> xiyuan: Did you run trybot?
[19:10:59] *** jochen__ has joined #chromium-os
[19:11:00] *** ChanServ sets mode: +v jochen__
[19:11:23] <davidjames> xiyuan: Please run trybot on both internal and external trees before turning on webui login again, since it failed last time
[19:12:32] <kliegs> davidjames: i uploaded a new cl as I described above - if it's not ok with you I'll redo in the simple way.
[19:13:12] *** behdad has quit IRC
[19:14:13] *** behdad has joined #chromium-os
[19:14:14] *** ChanServ sets mode: +v behdad
[19:16:02] <xiyuan> davidjames: yes. nirnimesh and I have run VMTests yesterday and it should be fine this time.
[19:17:59] *** saintlou_ has joined #chromium-os
[19:17:59] *** ChanServ sets mode: +v saintlou_
[19:21:02] *** saintlou has quit IRC
[19:23:32] <crosbot> tree became 'Tree is closed (Automatic: "cbuildbot" on "tegra2 full" from 4007eff5290698cfca91cf3ecd1592bd56f7f48f: Sam Leffler <sleffler at chromium dot org>)'
[19:28:09] <jglasgow> The latest failure "Problem in 'virtual/libc' dependencies" seems to have no relation to sleffler's commit.
[19:28:10] <crosbot> tree became 'Tree is closed (raymes->reverting virtual/libc change)'
[19:30:43] <crosbot> tree became 'Tree is open (virtual/libc change reverted, should cycle green)'
[19:31:48] <gsam_> ooh, my luck to catch the tree broken
[19:38:30] *** McMAGIC--Copy has quit IRC
[19:43:32] <crosbot> tree became 'Tree is open'
[19:48:20] *** sergiu has joined #chromium-os
[19:52:30] *** eggy is now known as mrichards
[19:54:09] <xiyuan> yay. webui login survived first pass on x86-generic-pfq. :)
[19:55:41] <ellyjones> woo
[19:57:30] *** mnissler_ has quit IRC
[19:57:44] *** mnissler_ has joined #chromium-os
[19:57:44] *** ChanServ sets mode: +v mnissler_
[19:57:48] <gsam_> hey
[19:58:14] <gsam_> did my change hit a bad tree?
[19:58:22] * gsam_ goes to check logs
[19:59:50] <gsam_> looks like virtual/libc
[20:01:16] *** elly has quit IRC
[20:02:40] *** elly has joined #chromium-os
[20:02:46] <ellyjones> yeah, gsam_ :P
[20:13:54] <bryeung> x86-alex-binary looks like the same timeout flake caused by lack of RAM
[20:29:16] <ellyjones> has anyone seen this message from a local trybot before?
[20:29:17] <ellyjones> ERROR: No update cache found. Please run cros_generate_update_payloads before running this harness.
[20:30:01] <m1k3l> hi, what is the last good chromiumos version that works with qemu?
[20:31:10] <crosbot> tree became 'Tree is closed (Automatic: "cbuildbot" on "x86-alex-binary" from cec0eb909645c8020a6c27156ad3b276ecbded23: timurrrr at chromium dot org <timurrrr at chromium dot org@0039d316-1c4b-4281-b951-d872f2087c98>, xiyuan <xiyuan at chromium dot org>)'
[20:31:56] <crosbot> tree became 'Tree is open (x86-alex-binary is a flake)'
[20:32:02] <vpalatin> m1k3l: all versions in the waterfall : http://build.chromium.org/p/chromiumos/waterfall  with the VMTests box green have successfully passed tests in KVM
[20:34:28] *** saintlou_ has quit IRC
[20:35:31] <crosbot> tree became 'Tree is open (x86-alex-binary is offline for upgrading to 12GB RAM)'
[20:38:07] <bryeung> jglasgow: I'm beginning to be more convinced of your theory that chrome is having troubles at login
[20:40:10] <ellyjones> trying to get a clean trybot run for a change is very frustrating lately
[20:40:38] <crosbot> tree became 'Tree is open (x86-alex-binary is offline for upgrading to 12GB RAM like the other bots)'
[20:40:40] <rcui> ellyjones: why is that?
[20:40:51] <ellyjones> because they sync and the tree is red a lot :)
[20:41:10] <ellyjones> by the way, rcui, have you seen that message I pasted above? "ERROR: No update cache found."
[20:41:11] <rcui> i need to update the doc, but i added lkgm support
[20:41:20] <rcui> so u can try with --lkgm flag
[20:41:42] <ellyjones> cool :)
[20:41:54] <ellyjones> we'll see, I think my second run just worked
[20:42:06] <ellyjones> having local trybots makes it much easier to commit with confidence, though
[20:42:12] <ellyjones> thanks for making them so easy to use :)
[20:42:44] <rcui> no problem, i'm glad you like it! :)
[20:43:04] <rcui> send any feedback u have my way!
[20:43:11] <rcui> brb lunch
[20:43:14] <bryeung> xiyuan: you here?
[20:43:20] <xiyuan> yes
[20:44:13] <bryeung> xiyuan: I'm seeing a login crash on on the stumpy-binary builder
[20:44:38] <bryeung> was wondering if it could be related to your webui CL
[20:45:40] <xiyuan> bryeung: where is that builder? I could check the log to see if anything looks familiar.
[20:45:45] <ellyjones> no, my second run for x86-alex still gives this:
[20:45:46] <ellyjones> ERROR: No update cache found. Please run cros_generate_update_payloads before running this harness.
[20:45:57] <ellyjones> does that look familiar to anyone?
[20:50:05] <ellyjones> rcui: can I just do cbuildbot --lkgm -g $cl x86-generic-pre-flight-queue?
[20:51:10] <bryeung> ellyjones: I think rcui may have stepped away for lunch
[20:51:25] <bryeung> ellyjones: (and sorry, I've never seen that error before)
[20:52:31] *** 5EXAAAQIF has joined #chromium-os
[20:52:37] <adlr> i broke the build and submitted a fix
[20:53:05] <adlr> so if you see gestures have unittest failures, i'm sorry :(, but just try again w/ ToT
[20:54:03] <nirnimesh> I think VMTest on x86-generic-PFQ is going to fail. Timeout waiting for the devserver to startup.
[20:54:53] <ihf_> adlr: do you have a link
[20:55:25] <adlr> ihf_: this broke the build: http://gerrit.chromium.org/gerrit/#change,7492 and this fixed it: http://gerrit.chromium.org/gerrit/#change,7494
[20:55:32] <crosbot> tree became 'Tree is closed (Automatic: "cbuildbot" on "stumpy-binary" from cec0eb909645c8020a6c27156ad3b276ecbded23: xiyuan <xiyuan at chromium dot org>)'
[20:56:12] <bryeung> I think this failure is a flake: we've been seeing occasional crashes during login so far today.
[20:56:38] <nirnimesh> It's a flake
[20:56:50] <crosbot> tree became 'Tree is open (stumpy-binary failure likely a flake)'
[20:57:00] <bryeung> nirnimesh: is there a bug tracking these failures?
[20:57:35] <nirnimesh> bryeung: http://code.google.com/p/chromium-os/issues/detail?id=20171
[20:57:56] <ellyjones> rcui: --lkgm fails in BuildTarget with the virtual/libc problem
[20:58:50] <bryeung> xiyuan has agreed to have a look to see if we can make progress on these failures: thank you xiyuan!
[20:59:56] <nirnimesh> it's a rather infrequent crasher. it's not specifically tied to that particular failing test. All the other tests in that suite do login in exactly the same way
[21:00:09] <crosbot> tree became 'Tree is open (stumpy-binary failure is a flake)'
[21:00:41] <bryeung> nirnimesh: oh.  we were seeing some other failures that looked like crash on login in other tests earlier this morning.
[21:01:57] <nirnimesh> well, all that should be history after webui change. 20171 is the only login crasher left afaik
[21:02:16] <bryeung> nirnimesh: good to hear. thanks.
[21:07:30] <xiyuan> bryeung: did not find relevant dmp file in the artifacts. All dmps are from loggingUserCrash test.
[21:08:41] <bryeung> xiyuan: okay.  let's not worry about it right now then, as hopefully the webui login change will prevent these crashes.
[21:10:20] <xiyuan> bryeung: okay. webui login changes the story and hopefully it makes things better. :)
[21:10:49] <rcui> ellyjones: are you running a full build?
[21:11:46] <ellyjones> rcui: how do I tell? I want to do exactly what the pfq does
[21:12:43] <bryeung> stepping away for a few minutes, back soon
[21:13:13] <rcui> ok you're not running a full build then
[21:13:28] <rcui> can you send me ur cbuildbot.log file in <repo_root>/chromite/buildbot?
[21:14:03] <ellyjones> erk... sure, but I kicked off another try
[21:14:08] <ellyjones> if it fails the same way I'll send you a log
[21:20:12] <rcui> ok
[21:23:47] *** phh has left #chromium-os
[21:24:35] *** patcito has joined #chromium-os
[21:24:37] <crosbot> tree became 'Tree is closed (pfq flake)'
[21:25:43] <kliegs> style check - in ebuilds for a conditional dependency is it sorted by the name of the conditional or the name of the dependent ebuild?
[21:26:05] <kliegs> so  is it bar/bas  abc? foo/bar    or   abc? foo/bar     bar/bas?
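[Editor's note: Gentoo's dependency syntax requires parentheses around a USE-conditional group, so the two orderings kliegs is asking about would actually be written as below. The package atoms and the "abc" flag are the hypothetical names from the question; this is a config fragment, not a complete ebuild, and it illustrates only the syntax, not which ordering the style guide prefers.]

```shell
# Hypothetical DEPEND fragment, ordering 1 (sorted by dependency atom):
DEPEND="bar/bas
	abc? ( foo/bar )"

# versus ordering 2 (USE-conditional group first):
# DEPEND="abc? ( foo/bar )
#	bar/bas"
```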
[21:27:26] <crosbot> tree became 'Tree is open (pfq flake during devserver startup)'
[21:30:28] *** BladeFreak has joined #chromium-os
[21:32:50] <crosbot> tree became 'Tree is open (devserver startup flake -> http://crosbug.com/20251)'
[21:50:14] <rcui> x86-zgb-binary and alex-canary are seeing the devserver timeouts
[21:50:29] <rcui> just saw it on internal TOT pfq as well
[21:54:22] <crosbot> tree became 'Tree is closed (devserver startup flake seen on many bots -> http://crosbug.com/20251, rcui investigating)'
[21:54:36] <rcui> even more bots are failing
[21:54:41] <rcui> *hanging at vmstage
[21:56:26] <rcui> need some help with the investigation...
[22:00:27] *** rcui has quit IRC
[22:04:29] *** rcui has joined #chromium-os
[22:04:29] *** ChanServ sets mode: +v rcui
[22:07:36] *** Inumedia has quit IRC
[22:08:06] *** Inumedia has joined #chromium-os
[22:08:28] <jglasgow> rcui: I'm back.  What can I do to help?
[22:09:32] <rcui> i'm trying to figure out what's causing the devserver hangs
[22:09:44] *** Ruetobas has quit IRC
[22:14:22] *** Ruetobas has joined #chromium-os
[22:18:49] <ellyjones> rcui: ~ellyjones/public/cbuildbot.log
[22:20:08] <ellyjones> same error as before
[22:39:55] *** Styx has quit IRC
[22:40:45] *** saintlou has joined #chromium-os
[22:40:45] *** ChanServ sets mode: +v saintlou
[22:44:56] *** m1k3l has quit IRC
[22:58:29] *** cowbud has quit IRC
[23:13:29] <rcui> looks like we found the issue
[23:13:36] <rcui> enter_chroot is taking a long time, causing the timeout
[23:13:47] <ellyjones> weird
[23:14:12] <rcui> and dev_server enter_chroot call is blocked on another background enter_chroot call which is taking a long time
[23:14:40] <rcui> because it's trying to emerge locale data
[23:15:14] <rcui> looking to potentially revert the change that introduced locale merging - the change is around a month old
[23:19:45] *** m1k3l has joined #chromium-os
[23:24:34] *** patcito has quit IRC
[23:25:12] *** patcito has joined #chromium-os
[23:27:33] *** rush2end has quit IRC
[23:27:35] <crosbot> tree became 'Tree is closed (devserver startup flake, rcui/davidjames investigating, closing in on root-cause)'
[23:29:26] *** rush2end_ has joined #chromium-os
[23:29:37] *** rush2end_ is now known as rush2end
[23:30:29] *** rcui has quit IRC
[23:35:59] <gauravsh> that new gerrit red tree banner really burns my eyes
[23:36:06] <gauravsh> whoever changed that owes me a new pair of shades
[23:36:25] <adlr> gauravsh: make a chrome extension to fix it
[23:41:33] <gauravsh> adblock is able to get rid of it :P
[23:41:49] <gauravsh> (I already have the waterfall open on a separate window, so kids, don't try this at home)
[23:48:06] *** Inumedia_ has joined #chromium-os
[23:50:33] *** Inumedia has quit IRC
[23:51:41] *** jennb has joined #chromium-os
[23:51:41] *** ChanServ sets mode: +v jennb
[23:53:38] <nirnimesh> gauravsh: it's a Pavlovian way of encouraging people to keep the tree green
[23:55:01] <crosbot> tree became 'Tree is closed (preflights clobbered to fix virtual/libc issue, other bots need migration script -> rcui)'
[23:56:48] <gauravsh> rginda had some javascript code to give a warning while trying to submit on a red tree. whatever happened to that?
[23:58:50] *** BladeFreak has quit IRC
[23:59:52] <crosbot> tree became 'Tree is closed (preflights clobbered to fix virtual/libc issue, other bots need migration script -> rcui, zgb binary taken down for investigation -> ferringb)'
