Monday, 2019-09-30

*** d0ugal has quit IRC  00:07
*** d0ugal has joined #oooq  00:23
*** ykarel|away has joined #oooq  01:55
*** udesale has joined #oooq  04:04
*** surpatil has joined #oooq  04:18
*** surpatil has quit IRC  04:20
*** surpatil has joined #oooq  04:29
*** ykarel|away has quit IRC  04:36
*** jtomasek has quit IRC  04:36
*** ykarel|away has joined #oooq  04:53
*** jaosorior has joined #oooq  05:03
*** ratailor has joined #oooq  05:25
*** soniya29 has joined #oooq  05:30
*** raukadah is now known as chandankumar  05:33
*** marios has joined #oooq  05:37
*** jfrancoa has joined #oooq  05:51
*** soniya29 has quit IRC  06:04
*** soniya29 has joined #oooq  06:05
*** soniya29 has quit IRC  06:12
*** soniya29 has joined #oooq  06:14
*** jtomasek has joined #oooq  06:16
*** soniya29 has quit IRC  06:28
*** soniya29 has joined #oooq  06:29
*** ykarel|away is now known as ykarel  06:31
*** kopecmartin has joined #oooq  07:05
*** jtomasek has quit IRC  07:06
*** jtomasek has joined #oooq  07:07
*** amoralej|off is now known as amoralej  07:10
*** tesseract has joined #oooq  07:17
<marios> arxcruz|zzz: zbr o/ is the containers-multinode-rocky one known i didn't see something in https://etherpad.openstack.org/p/ruckroversprint16 but it looks to be real lots of fails at http://zuul.openstack.org/builds?job_name=tripleo-ci-centos-7-containers-multinode-rocky  & blocking that https://review.opendev.org/#/c/684260/ (cc chandankumar )  07:25
*** arxcruz|zzz is now known as arxcruz|ruck  07:25
<marios> https://2738dc1e3903897cb454-9e9b3039052bbd5ed4d3c1ac40f1e9bf.ssl.cf2.rackcdn.com/684260/4/check/tripleo-ci-centos-7-containers-multinode-rocky/80e6764/logs/undercloud/home/zuul/undercloud_install.log.txt.gz  issue during image prepare?  07:25
<arxcruz|ruck> marios: hey, no, i wasn't aware  07:27
<arxcruz|ruck> let me check  07:27
*** jpena|off is now known as jpena  07:29
<marios> arxcruz|ruck: ack just came there from code review (its chandankumar patch) but the job looks like its been red for a while (and it votes)  07:30
*** bogdando has joined #oooq  07:30
<ykarel> marios, arxcruz|ruck need https://review.opendev.org/#/c/685466/ to fix rocky ^^  07:31
<marios> ykarel: thx :)  07:32
<ykarel> arxcruz|ruck, are you aware of overcloud image build issue in ovb jobs in all releases?  07:32
<ykarel> i see all failing while contacting cloud.centos.org  07:32
<arxcruz|ruck> hey guys, I just started the monday, give me a sec :)  07:33
<marios> arxcruz|ruck: happy monday !  07:33
<arxcruz|ruck> marios: thanks :)  07:33
<arxcruz|ruck> marios: can you +w https://review.opendev.org/#/c/685466/ ?  07:34
<arxcruz|ruck> or you don't have permission for this repo ?  07:34
<marios> arxcruz|ruck: checking  07:36
<ykarel> arxcruz|ruck, ack take ur time, sorry  07:37
<ykarel> marios, can u also review https://review.opendev.org/#/c/685343/ backport  07:39
<ykarel> also the depends on https://review.opendev.org/#/c/685368/  07:39
*** tesseract has quit IRC  07:43
*** tesseract has joined #oooq  07:43
<marios> ykarel: ack but https://review.opendev.org/#/c/685368/ already has votes  07:44
*** bogdando has quit IRC  07:44
<marios> ykarel: (but red job and you already recheck)  07:44
<ykarel> marios, hmm i rechecked, those failures are not related to the patch though  07:45
<marios> ykarel: ok but unles you make the job non voting not much i can do more then  07:46
<marios> ykarel: even if i +A it wont merge  07:46
<ykarel> marios, those job should pass this time,  07:47
<ykarel> ok i will ping again one jobs are green for _W  07:47
<ykarel> +W  07:47
<marios> ykarel: ack ping me  07:47
<arxcruz|ruck> ykarel: seems cloud.centos.org is down  07:51
<arxcruz|ruck> at least the service at port 80  07:51
<arxcruz|ruck> i can ping though  07:51
<arxcruz|ruck> who is responsible for centos?  07:51
<arxcruz|ruck> nah, nevermind  07:51
<ykarel> arxcruz|ruck, see #rdo, discussion going there  07:51
<ykarel> arxcruz|ruck, but good to create bug  07:52
<ykarel> and track there  07:52
*** soniya29 has quit IRC  07:53
*** ykarel is now known as ykarel|lunch  07:53
<arxcruz|ruck> ykarel|lunch: creating  07:53
<arxcruz|ruck> ykarel|lunch: https://bugs.launchpad.net/tripleo/+bug/1845918  07:54
<openstack> Launchpad bug 1845918 in tripleo "OVB jobs failing to connect to cloud.centos.org" [Critical,Triaged]  07:54
*** bogdando has joined #oooq  08:00
*** dtantsur|afk is now known as dtantsur  08:24
<panda> I'm home  08:32
*** chem has joined #oooq  08:32
*** derekh has joined #oooq  08:38
*** ykarel|lunch is now known as ykarel  08:43
<panda> marios: anyting you want to tell me ?  08:46
<marios> panda: yes. hello how are you  08:46
<marios> panda: lets sync in a bit? i saw your mail. would you like to schedule a call ... you just got it... like 1 hour?  08:46
<marios> s/got it/got in/  08:46
<panda> marios: ack  08:47
*** soniya29 has joined #oooq  08:53
*** tosky has joined #oooq  09:10
*** holser has joined #oooq  09:21
*** zbr is now known as zbr|ruck  09:24
<zbr|ruck> morning! looking now...  09:24
<zbr|ruck> marios: in fact i asked on friday about missing yum_update.sh but nobody answered  09:25
<marios> zbr|ruck: 10:31 < ykarel> marios, arxcruz|ruck need https://review.opendev.org/#/c/685466/ to fix rocky ^^  09:31
<zbr|ruck> i seen it, if we are lucky in 3h is fixed.  09:32
<arxcruz|ruck> yup  09:32
<panda> we are never lucky  09:56
*** tosky has quit IRC  10:17
*** tosky has joined #oooq  10:18
*** soniya29 has quit IRC  10:26
*** soniya29 has joined #oooq  10:26
<zbr|ruck> sadly panda was right on https://review.opendev.org/#/c/685466/  11:05
<zbr|ruck> i keep seeing There was an issue creating /root/DLRN as requested: [Errno 13] Permission denied: '/root/DLRN'  11:06
<zbr|ruck> seen it last week at least twice, not sure where as logstash does not help.  11:07
<zbr|ruck> arxcruz|ruck: ^ any idea about what could caused this?  11:11
<chandankumar> marios: panda arxcruz|ruck kopecmartin I am taking sick leave, will see ya tomorrow  11:11
*** chandankumar is now known as raukadah  11:11
<panda> I lost count, what was I right about ?  11:11
<panda> raukadah: get well, and if I see you chatting here today, I'll ban you :)  11:12
<marios> raukadah: ack  11:12
<marios> raukadah: hope you're better soon  11:13
*** udesale has quit IRC  11:13
<zbr|ruck> arxcruz|ruck: http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22There%20was%20an%20issue%20creating%20%2Froot%2FDLRN%20as%20requested%5C%22  11:19
<zbr|ruck> known or we should raise new one?  11:19
<zbr|ruck> or is just a side effect from another failure?  11:20
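[editor's note] The logstash link pasted above percent-encodes a Lucene-style message query. A minimal standard-library sketch of how the encoded fragment maps to the readable query, using the exact string from the link:

```python
from urllib.parse import quote, unquote

# Encoded query fragment taken verbatim from the logstash URL above.
encoded = ("message%3A%5C%22There%20was%20an%20issue%20creating"
           "%20%2Froot%2FDLRN%20as%20requested%5C%22")

# Decoding recovers the escaped Lucene query the Kibana dashboard runs.
query = unquote(encoded)
print(query)  # message:\"There was an issue creating /root/DLRN as requested\"

# Re-encoding with safe='' (so ':' and '/' are escaped too) round-trips.
assert quote(query, safe='') == encoded
```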
<zbr|ruck> panda: marios what user is supposed to be used by build-test-packages role? i am looking at tasks and lots of them are missing become  11:25
<zbr|ruck> seems almost random  11:25
<marios> zbr|ruck: not sure haven't looked there lately  11:27
<zbr|ruck> marios: tx, trying to make a patch now.  11:29
<zbr|ruck> what I do not understand is why they randomly fail  11:30
*** jpena is now known as jpena|lunch  11:31
<ykarel> zbr|ruck, how many times this is seen, atleast https://review.opendev.org/#/c/685466/ it was fallback of rerunning the deployment  11:34
<ykarel> seems some network error in between, then zuul reran the script  11:34
<ykarel> it reran after reaching tempest poing  11:35
<ykarel> s/poing/point  11:35
<ykarel> good to check if same symptom at other places where u noticed the same failure  11:35
<zbr|ruck> ykarel: for the moment I raised https://bugs.launchpad.net/tripleo/+bug/1845942  11:36
<openstack> Launchpad bug 1845942 in tripleo "tqe: There was an issue creating /root/DLRN as requested" [High,Triaged]  11:36
<ykarel> zbr|ruck, ack i see 5 failures in last 7 days, good to check if all have network issue + reran, or something else  11:37
<ykarel> iirc that role run with non root  11:38
<ykarel> and there was some mess related to gather_facts, which changes the user facts in between  11:39
<ksambor> hey question, what I should do to use different ovn version in our ci jobs? Because this patch https://review.rdoproject.org/r/#/c/22609/ was merged 4 days ago and when I run reproducer containers with ovn still contains previous ovn  11:39
<ykarel> ksambor, since this is a dependencies, it should be already available in repos  11:40
<ykarel> may be some mirror were not updated, any link where u seeing issue  11:41
<ykarel> the package is available in trunk deps repo:- https://trunk.rdoproject.org/centos7-master/deps/latest/x86_64/ovn-2.11.1-3.el7.x86_64.rpm  11:41
<ykarel> ohh i think u are looking at containers, if yes then it needs promoted containers  11:42
<ksambor> unfortunatly on ci this job is timeouting on images prepere so I have only local env  11:42
*** weshay has joined #oooq  11:43
<ykarel> ksambor, so on local env you are looking ovn version in containers, right?  11:43
<ykarel> and you are using containers from docker.io, right?  11:43
<ykarel> if both are yes, it needs promotion of containers with new packages  11:44
<weshay> ykarel, which branch?  11:44
<ykarel> weshay, master  11:44
<weshay> ykarel, lp#?  11:44
<ykarel> weshay, ksambor is looking why https://review.rdoproject.org/r/#/c/22609/ ovn update is not in containers yet  11:45
<ykarel> so /me just answering, not sure if there is bug, or is really needed for this case  11:46
<ykarel> and yes master promotion is blocked with other issue  11:46
<ksambor> ykarel yeah I look inside containers and yes they are docker containers  11:46
<weshay> we need a promotion on master because we bumped ovn?  11:46
<ksambor> ovn is bumped because if we are want to use ovn with ssl previous version have issues with that (so this is needed only for this patch https://review.opendev.org/#/c/680345/)  11:48
<ksambor> weshay: ^^  11:48
<ykarel> ksambor, ack, so containers being build for promotion already have ovn version u looking for:- https://logs.rdoproject.org/openstack-periodic-master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-centos-7-master-containers-build-push/5be927d/logs/build.log.txt.gz#_2019-09-30_00_30_05  11:48
<ykarel> so once master have a promotion, you will have new containers(with new ovn) at docker.io  11:49
<ksambor> cool thanks!  11:49
<ykarel> and if you are just looking for testing it, you can adjust update_repo in container-prepare-parameter to include master-testing repo  11:49
<ksambor> ykarel, weshay: thank you !  11:50
*** amoralej is now known as amoralej|lunch  11:50
<weshay> zbr|ruck, arxcruz|ruck morning guys  11:54
<panda> marios: fixed the staging script to have the correct hierarchy, now focusing on the split  11:55
<panda> marios: also addressing your comments  11:55
<tosky> why oh why is rocky consistently failing in https://review.opendev.org/#/c/683932/ ?  11:57
<arxcruz|ruck> tosky: we are working on it  11:59
<arxcruz|ruck> weshay: hi boss, need 10 min  11:59
<marios> panda: ack nothing major there was just reading through and didn't even get to the docker/socket stuff  12:01
<arxcruz|ruck> weshay: hi boss  12:05
<weshay> zbr|ruck, arxcruz|ruck https://meet.google.com/rzp-osav-fgz  12:06
<arxcruz|ruck> weshay: ok  12:06
<weshay> ykarel, turning off dns, no luck https://review.opendev.org/#/c/685639/   we see it in current-tripleo as well  12:07
<ykarel> weshay, didn't get, see it in current-tripleo as well  12:09
<ykarel> any log/reference?  12:09
<ykarel> since it's network related, it should be affected at all places, all releases check/promotion  12:10
<ykarel> so, wherever image get's build and trying access to cloud.centos.org  12:11
<weshay> ykarel, ya.. saw that all weekend.. re: cloud.centos.org  12:13
<ykarel> weshay, yes and also it's randomly hitting  12:13
<ykarel> there are some jobs which passed  12:13
<ykarel> so what i noticed whenever it hit server 38.110.33.4, it fails, while with other server it passes  12:14
<ykarel> but currently it's working locally atleast  12:14
<ykarel> so running more jobs with https://review.rdoproject.org/r/#/c/20171/ to see if it's still hitting, if issue is still there we need to check with rdo operators  12:15
<ykarel> if there is some network issue in RDO Cloud  12:16
<ykarel> also fyi i discussed the issue in #centos-devel, and as per them there is no issue on cloud.centos.org  12:16
* weshay gets on my test system.. chatting w/ arx atm  12:17
<weshay> ykarel, we're going to move the cloud image to the rdo image server  12:19
<ykarel> weshay, ack, may be it's already there good to check that too  12:19
<ykarel> but possible latest one is not there  12:20
<weshay> ykarel,  arxcruz|ruck images will be moved here http://images.rdoproject.org/base/centos7/  12:29
<ykarel> weshay, ack  12:29
*** jpena|lunch is now known as jpena  12:30
<weshay> zbr|ruck, need you responding to my pings this week  12:31
*** rfolco has joined #oooq  12:33
<zbr|ruck> weshay: i back  12:39
<zbr|ruck> (lunch)  12:39
<weshay> zbr|ruck, k.. arxcruz|ruck is getting totally slammed atm.. we need your man power!  12:39
<weshay> zbr|ruck, we're on an escalation call atm, out in 25min and lets sync up and see how we can help him  12:40
*** Goneri has joined #oooq  12:44
*** amoralej|lunch is now known as amoralej  12:44
<ykarel> arxcruz|ruck, weshay, fyi the three ovb jobs i ran with https://review.rdoproject.org/r/#/c/20171/ to check if issue improved currently, it passed the build-images role  12:50
<weshay> ykarel, k..  12:56
*** aakarsh has joined #oooq  12:56
*** dtantsur is now known as dtantsur|afk  12:58
<jfrancoa> zbr|ruck: hey, a heads up for a new issue with the undercloud upgrades job https://bugs.launchpad.net/tripleo/+bug/1845935  13:02
<openstack> Launchpad bug 1845935 in tripleo "Dying containers during undercloud-upgrade" [High,New]  13:02
<jfrancoa> zbr|ruck: I've been checking it the whole morning but I can't find out what's happening, neither narrow down when did it start happening..it was on Friday 27 for sure..but I can't find the patch that caused it  13:03
*** ratailor has quit IRC  13:10
<weshay> arxcruz|ruck, zbr|ruck https://meet.google.com/cwy-duhx-xum?authuser=1  13:10
<arxcruz|ruck> weshay: so, back to hangouts ?  13:10
<weshay> arxcruz|ruck, zbr|ruck https://review.rdoproject.org/r/#/c/22609/  13:16
<weshay> zbr|ruck, http://logs.rdoproject.org/openstack-periodic-master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-7-standalone-full-tempest-master/6742c15/logs/stestr_results.html  13:19
<zbr|ruck> weshay: managed to create bug https://bugs.launchpad.net/tripleo/+bug/1845956  13:28
<openstack> Launchpad bug 1845956 in tripleo "neutron_tempest_plugin.scenario.test_trunk.TrunkTest failing on master" [Undecided,Confirmed]  13:28
<weshay> zbr|ruck, k.. cool.. skip list it for master :)  13:28
<zbr|ruck> weshay: do I need to add it to trello or this is automatic?  13:29
*** soniya29 has quit IRC  13:31
<arxcruz|ruck> weshay: https://review.opendev.org/#/c/685709/  13:34
<weshay> arxcruz|ruck, keep going . .need it through queens  13:35
<zbr|ruck> weshay: what reason do i need to put?  13:35
<weshay> zbr|ruck, check the failure :)  13:35
<zbr|ruck> not sure, but I would paste Failed to establish authenticated ssh connection to cirros  13:36
<arxcruz|ruck> weshay: queens doesn't have the settings, shall I add it?  13:37
*** chem has quit IRC  13:37
*** chem has joined #oooq  13:37
<weshay> arxcruz|ruck, https://review.opendev.org/#/c/685346/  13:38
<arxcruz|ruck> weshay: it's not merged yet, i update my patch  13:39
<weshay> ya please.. thanks  13:41
<weshay> marios, let me change the invite to a google meet 2 seconds  13:42
<weshay> marios, https://meet.google.com/fag-ppbp-beo?authuser=1  13:43
<marios> weshay: k  13:43
<rfolco> panda, the reviews on your taiga tasks are all merged, what is the top patch that I need to rebase ? and the other 2 splits ?  13:46
<panda> rfolco: https://review.rdoproject.org/r/#/c/22656/  13:51
<panda> rfolco: the other two are https://review.rdoproject.org/r/#/c/22655 and https://review.rdoproject.org/r/#/c/22654  13:51
<rfolco> panda, thanks  13:52
*** redrobot has joined #oooq  14:00
<arxcruz|ruck> ykarel: didn't get your comments on https://review.opendev.org/#/c/685709  14:05
* ykarel looks  14:06
<rfolco> panda, # The call in production is currently done by a root crontab installed a  14:06
<rfolco> # intance creation time by cloud-init.  14:06
<arxcruz|ruck> ykarel: we just want to move the promotion so far, according weshay  14:07
<rfolco> panda, crontab? cloud-init? dlrn-promoter is a service script. What you talking about there??  14:07
<ykarel> arxcruz|ruck, ok with just promotion part, my main point was baseos_undercloud_image_url var is not used at the place we are seeing issue  14:08
<ykarel> that cloud.centos.org image url is set in diskimage-builder element  14:09
<ykarel> for which i shared link in the comment  14:09
<panda> rfolco: the provisioning part  14:10
<panda> rfolco: there's a whole part that we are not seeing here  14:10
<panda> rfolco: in infra-setup itself  14:10
<rfolco> panda, ok still needs to be root  14:11
<panda> rfolco: wanna sync on this stuff ?  14:15
<rfolco> panda, your job will be tougher at this moment to explain me changes... but if you don't mind, I accept  14:16
<rfolco> panda, I have one thing to say about base and openstack-base... these 2 are not pushed to registry, these are intermediate containers. So we might need to change these for ppc pushed containers like you did with nova-compute and neutron-server for x86  14:18
<panda> rfolco: doesn't matter, those two are for testing, but they are included by default in the current container-push playbook, so it looks for them, even if they are not pushed. I've changed already too much on container-push, I don't wwant to touch the core logic right now.  14:20
<panda> rfolco: in general it's one thing that we are avoiding as much as possible, changing the core logic  14:21
*** Vorrtex has joined #oooq  14:22
<rfolco> panda, ok  14:22
<arxcruz|ruck> weshay: check ykarel comments  14:22
* weshay looking at it  14:23
<weshay> arxcruz|ruck, we can't do anything w/ https://opendev.org/openstack/diskimage-builder/src/branch/master/diskimage_builder/elements/centos7/root.d/10-centos7-cloud-image#L14  14:24
<rfolco> panda, 54 fails delegated, 55 passes, expected to merge 54 w/ failing state ?  14:24
<weshay> we could override it in our jobs I guess  14:24
<arxcruz|ruck> weshay: yup, but in which level ?  14:25
<weshay> arxcruz|ruck, /me gets  14:25
<panda> rfolco: 54 fails rdo-tox-molecule, I'm gong to fix it, because it's a voting job, but i completely change the code that is failing in 55  14:27
<panda> so it's just to please the gate jobs ...  14:27
<panda> rfolco: if the explanation of the infra-setup credentials part is useful for your task then I'll explain now, if not we can chat later  14:28
<arxcruz|ruck> weshay: https://opendev.org/openstack/tripleo-quickstart-extras/src/branch/master/roles/build-images/templates/overcloud-image-build.sh.j2  14:28
<panda> maybe marios will have the same quekstion, and I'll explain once how all that part is working  14:28
<weshay> arxcruz|ruck, yup.. there you go  14:29
<rfolco> panda, I think it's easier to start a new patch and just use my old patch as reference... instead of fixing rebases... my patch isn't functional anyway  14:30
<rfolco> panda, give me a green light to review all your 3 when they are ready  14:30
<panda> rfolco: green light. fixing rdo-tox-molecule will not really change anything there  14:32
<rfolco> panda, I burned start  14:33
<arxcruz|ruck> ykarel: so, we can set base_image_url and base_image_path that will be set in https://opendev.org/openstack/tripleo-quickstart-extras/src/branch/master/roles/build-images/templates/overcloud-image-build.sh.j2#L28  14:34
<rfolco> panda, actually this expression makes no sense... should be jumped the gun I think  14:34
<ykarel> arxcruz|ruck, looking  14:38
<ykarel> arxcruz|ruck, yes looks like that's the right place  14:40
<ykarel> good to attempt  14:40
<zbr|ruck> weshay: question re strategy to ease merges, the yum_update.sh just went (again) into gate few minutes ago but we have other important changes that failed to merge due to it.should I update them to add depends-on? -- this would lose their wf, example https://review.opendev.org/#/c/685466/  14:47
*** surpatil has quit IRC  14:48
<arxcruz|ruck> ykarel: https://review.opendev.org/#/c/685709 take a look now  14:58
<arxcruz|ruck> have a feeling we will promote master :)  15:01
<arxcruz|ruck> the pipeline looks good so far  15:01
<arxcruz|ruck> brb  15:01
<weshay> zbr|ruck, arxcruz|ruck back.. sorry.. mtg..  15:04
<weshay> zbr|ruck, which patches are you referring to? /me not in context  15:05
<zbr|ruck> https://review.opendev.org/#/c/685466/ -- this is critical to merge  15:05
<weshay> zbr|ruck, do you have an example of a job where yum_update.sh is causing a failure?  15:07
* weshay looking  15:07
<weshay> agree it should merge, but also I never found where exactly that is killing us  15:07
<weshay> not that I looked much  15:07
<zbr|ruck> https://review.opendev.org/#/c/683350/ --  15:08
<weshay> thanks  15:08
<zbr|ruck> tripleo-ci-centos-7-containers-multinode-rocky  failed due to this  15:08
<ykarel> arxcruz|ruck, ack looking  15:08
<weshay> zbr|ruck, ya.. that is rocky only issue.. but yes needs to merge asap  15:12
<weshay> nice find by ykarel  15:12
<zbr|ruck> weshay: rocky fails but this bug prevents most of other fixes from merging. we already lost 6h+ hours due to another random failure. i am keeping a eye on it to recheck the others as soon it merges.  15:14
<ykarel> arxcruz|ruck, looks good, and commented on https://review.rdoproject.org/r/#/c/22663/ as it's not testing the upstream patch  15:15
<weshay> zbr|ruck, k.. the other option is to ask in openstack-infra to send it to the top of our gate queue  15:15
<weshay> not to force merge, just prioritize  15:16
<zbr|ruck> weshay: maybe we need to activate prio label on our repos, few projects already have them.  15:16
<ykarel> arxcruz|ruck, also not sure if qcow2 will work there, as earlier compressed file .xz was used  15:19
<ykarel> but yes CI will tell that if it will not work, or else need to check dib code for it  15:19
<ykarel> mainly https://opendev.org/openstack/diskimage-builder/src/branch/master/diskimage_builder/elements/redhat-common/bin/extract-image script is used there  15:19
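[editor's note] On the qcow2-vs-.xz concern above: the two file types are distinguishable by their leading magic bytes, which is how file(1)-style detection can tell whether a downloaded base image needs a decompress step first. A small illustration; the helper below is hypothetical, not diskimage-builder code:

```python
# Magic numbers: qcow/qcow2 images start with b"QFI\xfb", xz archives
# with b"\xfd7zXZ\x00". Sniffing the first bytes of a downloaded image
# distinguishes a ready-to-use qcow2 from an .xz that must be extracted.
QCOW2_MAGIC = b"QFI\xfb"
XZ_MAGIC = b"\xfd7zXZ\x00"

def image_kind(first_bytes: bytes) -> str:
    """Classify a downloaded image by its leading magic bytes."""
    if first_bytes.startswith(QCOW2_MAGIC):
        return "qcow2"
    if first_bytes.startswith(XZ_MAGIC):
        return "xz"
    return "unknown"

print(image_kind(b"QFI\xfb\x00\x00\x00\x03"))  # qcow2
print(image_kind(b"\xfd7zXZ\x00\x00\x04"))     # xz
```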
*** ykarel is now known as ykarel|afk  15:20
<weshay> thanks ykarel|afk  15:20
<arxcruz|ruck> ykarel|afk: done  15:21
<arxcruz|ruck> brb, going to german class, be back in a few hours  15:21
*** dsneddon has joined #oooq  15:21
<marios> panda: rfolco i am out tomorrow is a public holiday here  15:24
<marios> weshay: (as mentioned earlier ^^^)  15:24
<marios> panda: rfolco sorry should have mentioned in our call earlier but i think the work split means i am not blocking folks  15:24
<panda> marios: you cant be on PTO  15:25
<panda> marios: I protest.  15:25
<marios> panda: i will not be :D  15:25
<marios> panda: i am paying for it  15:26
<panda> marios: I'll make you pay when you come back  15:26
* panda shakes fists  15:26
<marios> panda: thanks i will spend my whole day tomorrow thinking about that and looking forward to wednesday morning  15:27
<panda> marios: not wednesday morning, I'll be on PTO  15:27
<marios> :D  15:27
<marios> its all good panda rfolco will finish it all today anyway  15:28
<panda> rfolco: thanks folco!  15:28
<marios> what a guy right panda ?  15:28
<marios> go folco  15:28
<marios> woo  15:28
<panda> folcgo  15:29
<weshay> marios, can you meet on tuesday at 3pm utc.. you have a meeting there  15:29
<weshay> re: scenarios for rhel  15:29
<marios> weshay: yes sure  15:29
<marios> weshay: like half hour ago but tomorrow :D  15:30
<marios> weshay: did you send an invite /me doesn't see it  15:30
<weshay> marios, tues oct 1 3pm utc  15:32
<marios> weshay: ack i see the invite just came  15:32
<weshay> thanks  15:35
*** ykarel|afk is now known as ykarel|away  15:44
*** chem has quit IRC  15:45
*** chem has joined #oooq  15:45
<weshay> marios, panda zbr|ruck http://38.145.34.55/centos7_master.log  have an error here  15:55
<weshay> ImportError: No module named markupsafe  15:55
*** tesseract has quit IRC  15:55
<zbr|ruck> weshay: i am on it...  15:55
<marios> zbr|ruck: weshay me leaving in few need something?  15:59
<weshay> marios, nope thanks  15:59
<zbr|ruck> added https://review.rdoproject.org/r/#/c/22665/ for promoter  16:07
*** bogdando has quit IRC  16:08
*** marios has quit IRC  16:14
<zbr|ruck> arxcruz|ruck: marios: should I attempt to manually promote from promoter or just wait for it to run again?  16:19
<weshay> zbr|ruck, just hold.. don't manually promote  16:27
<zbr|ruck> panda: is tripleo-ci-promotion-staging still supposed to fail?  16:29
<panda> zbr|ruck yes, it's not ready yet  16:30
<zbr|ruck> ok. just wanted to known if is safe to ignore.  16:30
<panda> zbr|ruck: we'll remove the non=voting when is.  16:30
<panda> I'll never ever split anything ever again in my life  16:43
<panda> half a day spent on code errors that i know I already solved in other patches  16:43
*** jpena is now known as jpena|off  16:45
*** jaosorior has quit IRC  16:46
*** chem has quit IRC  16:49
*** zbr|ruck is now known as zbr  16:52
*** derekh has quit IRC  17:00
*** ykarel|away has quit IRC  17:06
<weshay> rfolco, the promoter appears to be broken  17:10
<rfolco> :(  17:11
<weshay> http://38.145.34.55/centos7_master.log  17:11
*** amoralej is now known as amoralej|off  17:11
<weshay> this just merged.. but was fixing a diff issue https://review.rdoproject.org/r/#/c/22665/  17:11
<weshay> zbr, ^  17:11
<weshay> rfolco, I think you are the only on at this point that can fix it  17:12
<rfolco> weshay, do the patch did not fix it ?  17:12
<rfolco> so..  17:12
<weshay> rfolco, looks like it's missing docker now  17:13
<weshay> "ansible_loop_var": "item", "changed": false, "item": ["openstack-base", "current-tripleo"], "msg": "Failed to import docker or docker-py (Docker SDK for Python) - No module named requests.exceptions. Try `pip install docker` or `pip install docker-py` (Python 2.6)."}  17:14
<rfolco> weshay, will log into promoter to check venv  17:15
<weshay> rfolco, adding ansible to the python virtenv probably changes where the other requirements are pulled from  17:16
<weshay> https://github.com/rdo-infra/ci-config/commit/56d86e1ed1076e2565d0dad0f96f9d529cc7b05f#diff-db3078c05dda6f0e6fbc3a633f2f379c  17:17
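[editor's note] The "Failed to import docker or docker-py" message pasted above comes from a guarded import: Ansible's docker modules try to import the SDK at task time and report the failure instead of crashing, and the ImportError text names whichever module in the chain is actually missing (here `requests.exceptions`, a dependency of the SDK, rather than `docker` itself). A minimal sketch of that pattern, not the promoter's actual code:

```python
# Guarded import: record availability and keep the original error text,
# since the missing module may be a transitive dependency of the SDK.
try:
    import docker  # Docker SDK for Python ('docker', formerly 'docker-py')
    HAS_DOCKER = True
    DOCKER_IMPORT_ERROR = None
except ImportError as exc:
    HAS_DOCKER = False
    DOCKER_IMPORT_ERROR = str(exc)

if not HAS_DOCKER:
    # Mirrors the shape of the message seen in the promoter log above.
    print("Failed to import docker or docker-py (Docker SDK for Python) -",
          DOCKER_IMPORT_ERROR)
```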
<rfolco> isn't panda around ?  17:20
<weshay> panda, ?  17:21
<weshay> you still around?  17:22
<weshay> rfolco, docker is on the box  17:23
<rfolco> yes docker-ce  17:23
<rfolco> but then docker-py is required for delegated molecules tests  17:23
<rfolco> and probably that f***ed the production workflow  17:23
<weshay> only avail to root  17:24
*** ykarel|away has joined #oooq  17:24
<weshay> rfolco,  let's chat  17:25
<rfolco> k  17:26
<weshay> rfolco, 3min  17:26
<panda> ...  17:26
<rfolco> panda, please look at promoter logs  17:26
<rfolco> TASK [Pull the images from rdoproject registry] ********************************  17:27
<rfolco> failed: [localhost] (item=aodh-api) => {"ansible_loop_var": "item", "changed": false, "item": "aodh-api", "msg": "Failed to import docker or docker-py (Docker SDK for Python) - No module named requests.exceptions. Try `pip install docker` or `pip install docker-py` (Python 2.6)."}  17:27
<weshay> panda,  https://meet.google.com/nmy-dvix-edn?authuser=1 rfolco  17:28
<weshay> panda, join the call please  17:30
*** holser has quit IRC  18:17
*** holser has joined #oooq  18:23
<panda> weshay: fails with authetication error (?)  18:31
<weshay> panda, let's roll back  18:35
<panda> weshay: wait  18:35
<weshay> rfolco, panda  I have a holiday lunch I need to go to shortly  18:35
<weshay> panda, k.. waiting  18:35
<weshay> panda, can we open a proper tmux session on the promoter  18:36
<weshay> I'll have to close my laptop soonish  18:36
<panda> weshay: ok  18:39
<panda> weshay: tmux atta if you want  18:39
<panda> weshay: but I think we got a bogus token before  18:39
<weshay> k.. /me gets on  18:40
<panda> weshay: maybe it was not our fault  18:40
<weshay> panda++  18:40
<weshay> panda,  /me on centos user tmux  18:40
<weshay> panda, which user?  18:41
<panda> weshay: I was with root  18:42
<panda> weshay: another attempt running ...  18:43
<panda> weshay: not much to see now, but I was looking at the .docker/config.json to look at the token, and it was a small 4 character token  18:43
*** jtomasek has quit IRC  18:43
<weshay> k  18:43
<panda> weshay: now we have a proper token  18:44
<weshay> ah cool  18:44
<weshay> is it pull from rdo or pushing to docker.io ?  18:44
* weshay looks  18:44
<weshay> pulling  18:44
<panda> weshay: it was failing to push to rdo-registry  18:45
<panda> weshay: we push the common-tripleo tag there too (I forgot why)  18:45
<rfolco> current  18:45
<panda> rfolco: yeah  that one  18:47
<panda> now it's pulling  18:47
*** jtomasek has joined #oooq  18:48
<panda> weshay: you can go, I'll keep an eye on things, then reactivate the loop over releases  18:50
*** holser has quit IRC  18:50
<weshay> panda++  18:50
<weshay> thanks  18:50
*** jtomasek has quit IRC  18:50
*** jtomasek has joined #oooq  18:51
*** jtomasek has quit IRC  18:52
*** holser has joined #oooq  18:53
*** ykarel|away has quit IRC  18:54
*** weshay has quit IRC  18:59
*** holser has quit IRC  19:07
<panda> rfolco: I think rdoregistry is having issues  19:10
<panda> [WARNING]: The value of the "source" option was determined to be "pull". Please set the "source" option explicitly. Autodetection will be removed in Ansible 2.12.  19:10
<panda> failed: [localhost] (item=glance-api) => {"ansible_loop_var": "item", "changed": false, "item": "glance-api", "msg": "Error pulling image trunk.registry.rdoproject.org/tripleomaster/centos-binary-glance-api:91964b910a29b9793d9298a6d1b236341ba90e9e_638b9ca3 - 500 Server Error: Internal Server Error (\"{\"message\":\"error parsing HTTP 400 response body: unexpected end of JSON input  19:10
<panda> : \\\"\\\"\"}\")"}  19:10
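[editor's note] The registry error above says it hit a 400 whose response body was empty or truncated: "unexpected end of JSON input" is the wording Go's encoding/json uses for that condition (the registry is a Go service), and CPython's parser reports the same failure as "Expecting value" at position 0. A tiny illustration of the failure mode:

```python
import json

# An empty (or truncated) response body is not valid JSON; the parser
# fails immediately at offset 0 before consuming any value.
try:
    json.loads("")
except json.JSONDecodeError as exc:
    print(exc.msg, "at position", exc.pos)  # Expecting value at position 0
```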
*** holser has joined #oooq  19:11
*** holser has quit IRC  19:12
*** jbadiapa has joined #oooq  19:17
*** ccamacho has joined #oooq  19:22
*** kopecmartin is now known as kopecmartin|off  19:47
*** holser has joined #oooq  19:49
*** jbadiapa has quit IRC  20:27
*** aakarsh has quit IRC  20:35
*** jfrancoa has quit IRC  20:46
*** jbadiapa has joined #oooq  20:46
*** irclogbot_0 has quit IRC  20:55
*** irclogbot_2 has joined #oooq  20:55
*** ccamacho has quit IRC  20:57
*** irclogbot_2 has quit IRC  21:01
*** irclogbot_2 has joined #oooq  21:01
*** Goneri has quit IRC  21:01
*** irclogbot_2 has quit IRC  21:07
*** irclogbot_3 has joined #oooq  21:07
*** irclogbot_3 has quit IRC  21:13
*** irclogbot_1 has joined #oooq  21:13
*** irclogbot_1 has quit IRC  21:19
*** irclogbot_1 has joined #oooq  21:19
*** irclogbot_1 has quit IRC  21:25
*** irclogbot_0 has joined #oooq  21:25
*** irclogbot_0 has quit IRC  21:33
*** irclogbot_0 has joined #oooq  21:35
*** irclogbot_0 has quit IRC  21:41
*** irclogbot_3 has joined #oooq  21:41
*** irclogbot_3 has quit IRC  21:46
*** irclogbot_1 has joined #oooq  21:47
*** irclogbot_1 has quit IRC  21:53
*** irclogbot_0 has joined #oooq  21:53
*** holser has quit IRC  21:58
*** irclogbot_0 has quit IRC  21:59
*** irclogbot_1 has joined #oooq  21:59
*** irclogbot_1 has quit IRC  22:05
*** irclogbot_2 has joined #oooq  22:05
*** irclogbot_2 has quit IRC  22:15
*** irclogbot_0 has joined #oooq  22:15
*** tosky has quit IRC  22:24
*** jbadiapa has quit IRC  22:53

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!