Thursday, 2018-07-05

00:56 *** sshnaidm|rover is now known as sshnaidm|afk
00:58 <hubbot> FAILING CHECK JOBS on stable/ocata: legacy-tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024-ocata @ https://review.openstack.org/564291, master: tripleo-ci-centos-7-scenario001-multinode-oooq-container, tripleo-ci-centos-7-scenario002-multinode-oooq-container @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @  (1 more message)
02:35 *** ykarel has joined #oooq
02:55 *** skramaja has joined #oooq
02:58 <hubbot> FAILING CHECK JOBS on stable/ocata: legacy-tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024-ocata @ https://review.openstack.org/564291, master: tripleo-ci-centos-7-scenario001-multinode-oooq-container, tripleo-ci-centos-7-scenario002-multinode-oooq-container @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @  (1 more message)
03:02 *** agopi has quit IRC
03:29 *** ykarel has quit IRC
03:34 *** saneax has quit IRC
03:44 *** udesale has joined #oooq
03:47 *** ykarel has joined #oooq
04:04 *** ykarel_ has joined #oooq
04:04 *** ykarel has quit IRC
04:12 *** ykarel_ has quit IRC
04:38 *** ykarel_ has joined #oooq
04:45 *** holser_ has joined #oooq
04:58 <hubbot> FAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224, stable/ocata: legacy-tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024-ocata @ https://review.openstack.org/564291, master: tripleo-ci-centos-7-scenario001-multinode-oooq-container, tripleo-ci-centos-7-scenario002-multinode-oooq-container, tripleo-ci- (1 more message)
05:09 *** ratailor has joined #oooq
05:14 *** ccamacho has quit IRC
05:23 *** bogdando has joined #oooq
05:24 *** jbadiapa has quit IRC
05:30 *** yolanda has joined #oooq
05:31 *** quiquell|off is now known as quiquell
05:43 *** ykarel_ is now known as ykarel
05:50 *** holser_ has quit IRC
05:54 <chkumar|ruck> %gatestatus
05:54 <hubbot> FAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224, stable/ocata: legacy-tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024-ocata @ https://review.openstack.org/564291, master: tripleo-ci-centos-7-scenario001-multinode-oooq-container, tripleo-ci-centos-7-scenario002-multinode-oooq-container, tripleo-ci- (1 more message)
06:19 *** ccamacho has joined #oooq
06:26 *** saneax has joined #oooq
06:30 *** sanjayu_ has joined #oooq
06:33 *** saneax has quit IRC
06:35 *** quiquell is now known as quiquell|bbl
06:43 *** jbadiapa has joined #oooq
06:49 *** matbu has joined #oooq
06:51 *** jfrancoa has joined #oooq
06:58 <hubbot> FAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224, stable/ocata: legacy-tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024-ocata @ https://review.openstack.org/564291, master: tripleo-ci-centos-7-scenario001-multinode-oooq-container, tripleo-ci-centos-7-scenario002-multinode-oooq-container, legacy-tripleo-ci-centos-7 (1 more message)
07:11 *** gkadam has joined #oooq
07:12 *** holser_ has joined #oooq
07:14 *** sshnaidm|afk has quit IRC
07:16 *** quiquell|bbl is now known as quiquell
07:21 *** sshnaidm|afk has joined #oooq
07:25 <quiquell> sshnaidm|afk, chkumar|ruck: added zuulv3 RDO job builds to RR dashboard
07:25 <chkumar|ruck> quiquell: cool, thanks :-)
07:26 <quiquell> chkumar|ruck: Let me know if you are missing something
07:27 <chkumar|ruck> quiquell: http://38.145.34.131:3000/d/2kHMNHvik/exploration?orgId=1
07:28 <chkumar|ruck> quiquell: I wanted to know how many times a job failed in a day or two
07:28 <chkumar|ruck> quiquell: under the job frequency tab the count is too high
07:28 <quiquell> chkumar|ruck, let me remove the time range from the tables (it was not supposed to be there)
07:28 <quiquell> so you can play with the global one
07:29 <chkumar|ruck> quiquell: one more improvement: under the job list, a success result should be green, not RED
07:29 <quiquell> chkumar|ruck: noted
07:29 <quiquell> ok, reload again
07:29 <quiquell> click on influxdb filter +
07:29 <quiquell> select passed
07:29 <quiquell> and value False
07:29 *** amoralej|off is now known as amoralej
07:30 <quiquell> then in the right corner "Last 7 days" if you click it you can select 3 days
07:30 <quiquell> or whatever range you think of
07:30 <quiquell> chkumar|ruck: http://38.145.34.131:3000/d/2kHMNHvik/exploration?orgId=1&from=now-3d&to=now&var-influxdb_filter=passed%7C%3D%7CFalse
07:31 <quiquell> The url has the filter
07:31 <chkumar|ruck> quiquell: yup, now it looks good
07:31 <chkumar|ruck> quiquell: thanks :-)
07:31 <quiquell> chkumar|ruck: You can put the filter you want and the range you want
07:31 <quiquell> If you click on the job name you go to the logs
07:31 <quiquell> If you click on the Patch you go to the review
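The click-path quiquell walks through (filter `passed = False` over the last 3 days) is just an InfluxQL query under the hood, as the dashboard URL's `var-influxdb_filter=passed%7C%3D%7CFalse` encoding suggests. A minimal sketch of reproducing it against the InfluxDB HTTP API; the API port, database name (`telegraf`) and measurement name (`build`) are assumptions, not taken from the log:

```shell
# Build the same filter the dashboard URL encodes: passed|=|False over now-3d.
INFLUX_URL="http://38.145.34.131:8086/query"   # assumed API port; Grafana itself is on :3000
QUERY='SELECT "job_name","log_url" FROM "build" WHERE "passed" = '\''False'\'' AND time > now() - 3d'

# Shown rather than executed, since the host is not reachable from here:
echo curl -sG "$INFLUX_URL" --data-urlencode "db=telegraf" --data-urlencode "q=$QUERY"
```

Changing `now() - 3d` is the equivalent of picking a different range in the "Last 7 days" picker.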
07:32 *** tosky has joined #oooq
07:32 <chkumar|ruck> that looks nice now
07:33 *** ratailor_ has joined #oooq
07:35 *** ratailor has quit IRC
07:36 <chkumar|ruck> %gatestatus
07:36 <hubbot> FAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224, stable/ocata: legacy-tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024-ocata @ https://review.openstack.org/564291, master: tripleo-ci-centos-7-scenario001-multinode-oooq-container, tripleo-ci-centos-7-scenario002-multinode-oooq-container, legacy-tripleo-ci- (1 more message)
07:36 <chkumar|ruck> quiquell: we need one more fix http://38.145.34.131:3000/d/pgdr_WVmk/cockpit?orgId=1&from=now%2Fd&to=now%2Fd
07:37 <chkumar|ruck> under the last jobs tab, in the job section, if we click on a job name, it does not show the log url
07:37 <chkumar|ruck> the log url is getting appended to the grafana url in firefox
07:40 <quiquell> chkumar|ruck: ack
07:44 *** jaganathan has joined #oooq
07:47 *** florianf has joined #oooq
08:05 <quiquell> chkumar|ruck: ok, now I see the legacy- jobs on the dashboard
08:08 *** ykarel is now known as ykarel|lunch
08:16 *** sshnaidm|afk is now known as sshnaidm|rover
08:17 *** Goneri has joined #oooq
08:22 <quiquell> chkumar|ruck: Fixed
08:26 <quiquell> chkumar|ruck: There is something bad at the RDO zuuls.. with ovb jobs
08:29 <chkumar|ruck> quiquell: thanks :-)
08:29 <chkumar|ruck> sshnaidm|rover: I am filing a master bug to track the overcloud deploy timeout
08:30 <sshnaidm|rover> chkumar|ruck, is it the same kind of timeouts?
08:30 <chkumar|ruck> sshnaidm|rover: https://review.rdoproject.org/etherpad/p/chkumar-ruck-rover-sprint16-notes
08:30 <chkumar|ruck> check lines 18 to 25
08:31 <chkumar|ruck> sshnaidm|rover: till early morning scenario2 was failing
08:33 <sshnaidm|rover> chkumar|ruck, in line 18 there is a job http://logs.openstack.org/45/560445/78/check/tripleo-ci-centos-7-scenario003-multinode-oooq-container/6195e6e/job-output.txt.gz#_2018-07-05_02_50_08_414152
08:33 <quiquell> sshnaidm|rover: Can you merge the RR monitor stuff?
08:33 <sshnaidm|rover> chkumar|ruck, it doesn't have an overcloud timeout
08:34 <sshnaidm|rover> chkumar|ruck, the job is just cut because it doesn't have time anymore, bc of long tasks: http://logs.openstack.org/45/560445/78/check/tripleo-ci-centos-7-scenario003-multinode-oooq-container/6195e6e/job-output.txt.gz#_2018-07-05_02_50_50_168231
08:34 <sshnaidm|rover> chkumar|ruck, so it's related to the timeouts we have in general
08:34 <sshnaidm|rover> chkumar|ruck, but the overcloud itself passed well
08:34 <chkumar|ruck> sshnaidm|rover: do we need a bug for that?
08:35 <sshnaidm|rover> quiquell, yeah, in which order?
08:35 <quiquell> sshnaidm|rover: Parenting order
08:35 <sshnaidm|rover> chkumar|ruck, I think we have one
08:35 <quiquell> sshnaidm|rover: They are all done one after another as a big family
08:37 <sshnaidm|rover> chkumar|ruck, it's a bug for general timeouts in jobs
08:38 <sshnaidm|rover> chkumar|ruck, next line is the same: http://logs.openstack.org/45/560445/78/check/tripleo-ci-centos-7-scenario000-multinode-oooq-container-updates/04f7b4a/job-output.txt.gz#_2018-07-05_02_51_36_900718
08:39 <sshnaidm|rover> chkumar|ruck, and the next too: http://logs.openstack.org/45/560445/78/check/tripleo-ci-centos-7-scenario002-multinode-oooq-container/2c3d93a/job-output.txt.gz#_2018-07-05_02_54_35_163029
08:43 <chkumar|ruck> sshnaidm|rover: we do not have a master bug to track the timeout issue
08:44 <quiquell> chkumar|ruck: I think we have
08:44 <quiquell> https://bugs.launchpad.net/tripleo/+bug/1776796
08:44 <openstack> Launchpad bug 1776796 in tripleo "tripleo gate jobs timing out, duplicate containers pulls a possible cause" [Critical,Triaged] - Assigned to Quique Llorente (quiquell)
08:46 <chkumar|ruck> quiquell: I am adding the findings there.
08:46 <quiquell> chkumar|ruck: Yep, that's the place
08:48 <sshnaidm|rover> quiquell, but I think I saw the ara error in other places and it still worked, maybe just a red herring
08:48 <quiquell> sshnaidm|rover: Then the problem I have is the timeout
08:48 <sshnaidm|rover> quiquell, it's a timeout of log collection
08:49 <sshnaidm|rover> quiquell, I bet some infra host is stuck
08:49 <quiquell> Damn...
08:49 <sshnaidm|rover> as usual
08:49 <quiquell> But all the jobs have failed
08:49 <sshnaidm|rover> :D
08:49 <chkumar|ruck> sshnaidm|rover: quiquell after 11:00 jobs are coming greener
08:49 <chkumar|ruck> in grafite
08:49 <chkumar|ruck> sorry
08:49 <chkumar|ruck> grafana
08:50 <sshnaidm|rover> btw, there were github problems tonight, so please check if it's not that: https://status.github.com/messages
08:50 <sshnaidm|rover> quiquell, maybe we can add this to grafana too ^^ :)
08:51 <chkumar|ruck> sshnaidm|rover: yup
08:51 <chkumar|ruck> sshnaidm|rover: periodic pike jobs got impacted due to the github issues
08:51 <quiquell> sshnaidm|rover: Good one
08:52 <quiquell> sshnaidm|rover: Next to the openstack infra issues
08:56 <sshnaidm|rover> quiquell, and this too maybe: http://ds.iris.edu/seismon/eventlist/index.phtml  :D
08:56 <chkumar|ruck> sshnaidm|rover: http://logs.openstack.org/45/560445/78/check/tripleo-ci-centos-7-scenario001-multinode-oooq-container/a37efa0/logs/undercloud/var/lib/mistral/5654fc3f-9375-439a-b53e-25df469b290f/ansible.log.txt.gz
08:56 <quiquell> sshnaidm|rover: ^ DNS? have it in my mind
08:56 <quiquell> sshnaidm|rover: is the hubbot server a good place for the RR monitor?
08:57 <sshnaidm|rover> quiquell, yep
08:58 <hubbot> FAILING CHECK JOBS on stable/ocata: legacy-tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024-ocata @ https://review.openstack.org/564291, master: tripleo-ci-centos-7-scenario001-multinode-oooq-container, tripleo-ci-centos-7-scenario002-multinode-oooq-container, legacy-tripleo-ci-centos-7-container-to-container-upgrades-master, tripleo-ci-centos-7-scenario000-multinode-oooq-container-updates @  (1 more message)
08:59 <quiquell> sshnaidm|rover: Going to gate the RR monitor scripts with telegraf --test :-)
08:59 <sshnaidm|rover> quiquell, cool!
08:59 <quiquell> we need to convert that into infrastructure-as-code
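The gating quiquell describes works because `telegraf --test` runs the configured inputs once and prints the collected metrics to stdout instead of writing them to any output plugin, so a broken collection script fails fast without touching InfluxDB. A sketch; the config path is an assumption:

```shell
CONF=/etc/telegraf/telegraf.conf   # assumed location of the RR monitor config

# Runs every input plugin once and prints metrics in influx line protocol;
# a failing exec input makes telegraf exit non-zero, which is what gates the change.
if command -v telegraf >/dev/null 2>&1; then
  telegraf --config "$CONF" --test
else
  echo "telegraf not installed; gate command would be: telegraf --config $CONF --test"
fi
```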
08:59 <chkumar|ruck> sshnaidm|rover: need some pointers on the above job, keystone init is not happening
09:02 *** ratailor_ has quit IRC
09:08 *** ykarel|lunch is now known as ykarel
09:09 <chkumar|ruck> sshnaidm|rover: openstack check jobs are red due to the github timeout issue
09:10 <sshnaidm|rover> chkumar|ruck, when?
09:12 <ykarel> chkumar|ruck, openstack-check or periodic pike?
09:13 <ykarel> btw pike github failures were between 05:27 UTC and 05:32 UTC
09:15 <chkumar|ruck> ykarel: https://review.rdoproject.org/zuul3/status.html
09:15 <chkumar|ruck> ykarel: check the running check jobs
09:15 <chkumar|ruck> *openstack-check
09:15 <chkumar|ruck> sshnaidm|rover: ^^
09:15 * ykarel looks
09:19 <ykarel> chkumar|ruck, i can see different failures, but can't find github ones, can you share a link?
09:19 <chkumar|ruck> ykarel: http://logs.rdoproject.org/72/576772/3/openstack-check/legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-ocata-branch/17c995c/job-output.txt.gz
09:20 <chkumar|ruck> ykarel: others are due to overcloud deploy timing out
09:20 <ykarel> chkumar|ruck, no, different issues
09:21 <chkumar|ruck> ykarel: one with a vxlan issue
09:21 <ykarel> hmm
09:23 <ykarel> so the timeout was at 6:45 UTC, and as per the github status page it's operational since 7:10 UTC
09:24 <ykarel> so we should not see the timeout issue now
09:24 <chkumar|ruck> http://logs.rdoproject.org/63/578863/4/openstack-check/legacy-tripleo-ci-centos-7-container-to-container-upgrades-master/b8ca6cb/job-output.txt.gz
09:25 <chkumar|ruck> ykarel: yes
09:33 <chkumar|ruck> sshnaidm|rover: those were related to the patch itself
09:33 *** dtantsur|afk is now known as dtantsur
09:40 *** Goneri has quit IRC
09:42 *** Goneri has joined #oooq
09:47 <quiquell> sshnaidm|rover: seismon, good for summer and chand choosing destination
09:54 *** Goneri has quit IRC
09:56 <sshnaidm|rover> chkumar|ruck, the last one seems like a problem with the zuulv3 transition..
09:56 <sshnaidm|rover> chkumar|ruck, does it happen more?
09:58 *** Goneri has joined #oooq
09:58 <sshnaidm|rover> quiquell, need to fix the log links in the "last jobs" table
09:59 <quiquell> sshnaidm|rover: It's an RDO zuulv3 problem with OVB
09:59 <quiquell> sshnaidm|rover: check this https://softwarefactory-project.io/zuul/api/tenant/rdoproject.org/builds?change=568602
09:59 <quiquell> job_url is wrong
10:00 <sshnaidm|rover> quiquell, yeah, I see it happens on "node failure"
10:00 <sshnaidm|rover> quiquell, actually there are no logs..
10:01 <quiquell> sshnaidm|rover: Yep, is internal IRC the place for these things? or better to open a ticket?
10:02 <sshnaidm|rover> quiquell, idk, maybe #rdo, but pabelanger is not here atm
10:02 <sshnaidm|rover> quiquell, because he is doing the migration
10:02 <quiquell> sshnaidm|rover: Already asked there
10:02 <quiquell> sshnaidm|rover: They are super busy now
10:02 <chkumar|ruck> sshnaidm|rover: nope
10:03 <sshnaidm|rover> chkumar|ruck, then let's ignore it, we have enough problems..
10:03 <quiquell> sshnaidm|rover: You can filter them with the influxdb_filter
10:04 <sshnaidm|rover> quiquell, I wish! :D
10:04 <quiquell> Something like *ovb* and status failure
10:04 <sshnaidm|rover> quiquell, I need ad-hoc vars in my brain
10:04 <quiquell> sshnaidm|rover: Hehe
10:05 <quiquell> sshnaidm|rover: Working on a review for that
10:05 <chkumar|ruck> sshnaidm|rover: I am not able to set the status to triaged and the importance in the bug
10:05 <chkumar|ruck> sshnaidm|rover: one more bug https://bugs.launchpad.net/tripleo/+bug/1780224
10:05 <openstack> Launchpad bug 1780224 in tripleo "ERROR configuring keystone_init_tasks in tripleo-ci-centos-7-scenario001-multinode-oooq-container job" [Undecided,New]
10:06 <quiquell> sshnaidm|rover: Without ovb http://38.145.34.131:3000/d/pgdr_WVmk/cockpit?orgId=1&var-launchpad_tags=alert&var-promotion_names=current-tripleo&var-promotion_names=current-tripleo-rdo&var-promotion_names=current-tripleo-rdo-testing&var-releases=master&var-releases=queens&var-releases=pike&var-releases=ocata&var-influxdb_filter=job_name%7C!~%7C%2Fovb%2F
10:06 <chkumar|ruck> sshnaidm|rover: it is not showing
10:06 <sshnaidm|rover> chkumar|ruck, I see, seems like we need to add you to the group, lemme look
10:08 <sshnaidm|rover> chkumar|ruck, add yourself here: https://launchpad.net/~tripleo
10:12 <chkumar|ruck> sshnaidm|rover: joined the team
10:13 <chkumar|ruck> sshnaidm|rover: this one https://bugs.launchpad.net/tripleo/+bug/1780183 is happening in other jobs also
10:13 <openstack> Launchpad bug 1780183 in tripleo "Overcloud failed to Deploy in tripleo-ci-centos-7-nonha-multinode-oooq" [High,Triaged] - Assigned to Sagi (Sergey) Shnaidman (sshnaidm)
10:15 *** jbadiapa has quit IRC
10:16 *** bogdando has quit IRC
10:16 <sshnaidm|rover> chkumar|ruck, yep, looking into it
10:26 <quiquell> panda: You there?
10:27 <amoralej> one more vote +2+w for https://review.openstack.org/#/c/579888/ ?
10:27 <amoralej> i have a review that needs it
10:45 *** bogdando has joined #oooq
10:58 <hubbot> FAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224, stable/ocata: legacy-tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024-ocata @ https://review.openstack.org/564291, master: tripleo-ci-centos-7-scenario001-multinode-oooq-container, tripleo-ci-centos-7-scenario002-multinode-oooq-container, legacy-tripleo-ci-centos-7 (1 more message)
11:00 <chkumar|ruck> sshnaidm|rover: https://review.openstack.org/#/c/579888/
11:00 <chkumar|ruck> need +w on this one
11:02 <chkumar|ruck> sshnaidm|rover: what is blocking the phase 2 queens promotions?
11:02 <chkumar|ruck> where can I help?
11:04 *** amoralej is now known as amoralej|lunch
11:20 <sshnaidm|rover> quiquell, something is wrong with your patches' parenting: https://review.rdoproject.org/r/#/c/14595/4  maybe just collect all the json changes in one patch and the scripts in another?
11:21 <sshnaidm|rover> chkumar|ruck, I think you opened a bug for it yesterday, right? network problems in the job
11:21 <chkumar|ruck> sshnaidm|rover: https://bugs.launchpad.net/tripleo/+bug/1780091
11:21 <openstack> Launchpad bug 1780091 in tripleo "containerized undercloud deployment failed on periodic jobs" [Critical,Triaged]
11:22 <chkumar|ruck> sshnaidm|rover: weshay was saying to check the network configuration in this one
11:22 <chkumar|ruck> sshnaidm|rover: but from the last comment, someone else has also hit it
11:23 <quiquell> sshnaidm|rover: will try to rebase the pile of sh...
11:24 <quiquell> sshnaidm|rover: Fixed, try it now
11:24 <quiquell> sshnaidm|rover: The last parent was already merged, a rebase was needed
11:25 <chkumar|ruck> sshnaidm|rover: All bugs are listed here https://review.rdoproject.org/etherpad/p/chkumar-ruck-rover-sprint16-notes from lines 10-15
11:32 <sshnaidm|rover> chkumar|ruck, this etherpad is growing fast, better to have all the bugs together at the top
11:33 *** sshnaidm|rover is now known as sshnaidm|rov|lnc
11:34 <panda> quiquell: now I am
11:35 <quiquell> panda: I am ok now
11:35 <quiquell> panda: sprint16 stuff, but let's talk in the meeting
11:36 <quiquell> panda: Have added some stuff to the doc
11:58 *** skramaja_ has joined #oooq
11:59 *** skramaja has quit IRC
12:11 *** panda is now known as panda|lunch
12:15 *** sshnaidm|rov|lnc is now known as sshnaidm|rover
12:16 <quiquell> sshnaidm|rover: Do you agree on moving RR monitoring to the hubbot server?
12:18 <sshnaidm|rover> quiquell, moving what exactly?
12:18 <quiquell> sshnaidm|rover: The stuff we have now: telegraf + influxdb + grafana
12:19 <quiquell> sshnaidm|rover: Later on delegate it to infra
12:20 *** jbadiapa has joined #oooq
12:20 *** trown|outtypewww is now known as trown
12:21 <sshnaidm|rover> quiquell, again a problem: https://review.rdoproject.org/r/#/c/14610/2
12:21 <sshnaidm|rover> quiquell, I'd like to have a different host for that
12:23 *** rlandy has joined #oooq
12:25 <quiquell> sshnaidm|rover: rebased, the repo is changing a lot
12:27 <quiquell> sshnaidm|rover: try now
12:27 *** quiquell is now known as quiquell|lunch
12:34 <sshnaidm|rover> quiquell|lunch, ok, so 2 remain in conflict: https://review.rdoproject.org/r/#/q/owner:%22F%25C3%25A9lix+Enrique+Llorente+Pastora%22+status:open
12:38 <quiquell|lunch> sshnaidm|rover: rebased
12:40 <rlandy> panda|lunch: wanted to touch base about scenario007 and 008 tempest tests
12:50 <sshnaidm|rover> chkumar|ruck, the board filtered out from ovb: http://38.145.34.131:3000/d/pgdr_WVmk/cockpit?orgId=1&var-launchpad_tags=alert&var-promotion_names=current-tripleo&var-promotion_names=current-tripleo-rdo&var-promotion_names=current-tripleo-rdo-testing&var-releases=master&var-releases=queens&var-releases=pike&var-releases=ocata&var-influxdb_filter=job_name%7C!~%7C%2Fovb%2F
12:50 <sshnaidm|rover> chkumar|ruck, looks much better
12:51 <sshnaidm|rover> quiquell|lunch, it's weird that "cloud !~ /rdo/" doesn't do it, I'd expect it to filter out everything in the rdo cloud..
12:51 <quiquell|lunch> sshnaidm|rover: there are some builds with an empty cloud, have to take a look
12:52 *** quiquell|lunch is now known as quiquell
12:52 <quiquell> sshnaidm|rover: If you check out the exploration there are some with cloud and region empty
12:52 <sshnaidm|rover> quiquell, hmm, need to debug it
12:53 <quiquell> We can filter by empty to get them
12:58 <hubbot> FAILING CHECK JOBS on stable/ocata: legacy-tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024-ocata @ https://review.openstack.org/564291, master: legacy-tripleo-ci-centos-7-container-to-container-upgrades-master @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224
12:59 <panda|lunch> rlandy: after the planning, ok?
12:59 <rlandy> panda|lunch: ack
12:59 <panda|lunch> rlandy: I created a specific card, and removed the migration from the rest of the code
12:59 *** panda|lunch is now known as panda
12:59 <rlandy> ok
13:00 <sshnaidm|rover> panda, is the meeting now?
13:01 <panda> sshnaidm|rover: yes
13:02 <weshay> https://etherpad.openstack.org/p/tripleo-ci-squad-meeting
13:04 *** agopi has joined #oooq
13:12 *** florianf has quit IRC
13:14 <ykarel> arxcruz, hi
13:14 <arxcruz> ykarel: hey, wanted to talk with you actually :)
13:14 <arxcruz> ykarel: what's up?
13:15 <ykarel> arxcruz, i was looking at https://bugs.launchpad.net/tripleo/+bug/1779628
13:15 <openstack> Launchpad bug 1779628 in tripleo "Disable ssl validation in tempest on tripleo quickstart" [Medium,In progress] - Assigned to Arx Cruz (arxcruz)
13:15 <arxcruz> ykarel: yes?
13:15 <ykarel> but looks like the issue is happening with the tempest container,
13:15 <ykarel> shouldn't that be fixed
13:15 <ykarel> without the container it seems to pass
13:15 <arxcruz> hmmmm
13:16 <arxcruz> ykarel: i only saw this behavior on featureset035 and another one that I don't remember now
13:16 <ykarel> and currently the patch is in the gate
13:16 <ykarel> arxcruz, i checked because fs035 on the promotion job is not facing this issue
13:16 <arxcruz> ykarel: oh, right...
13:17 <ykarel> not sure why on the check job tempest is running in a container while in the promotion job it's not
13:17 <arxcruz> ykarel: that's because we are setting it to false on tempestconf right now
13:17 <arxcruz> ykarel: however, the default is true, so we are overwriting it
13:17 <arxcruz> once this patch gets merged, we will remove this from tempestconf
13:17 <arxcruz> that was my agreement with tosky :)
13:17 <tosky> yep
13:18 <tosky> thanks :)
13:18 <arxcruz> we shouldn't overwrite tempest options by default, unless it is explicitly required by the user
13:18 <sshnaidm|rover> quiquell, all merged \o/
13:18 * quiquell crying
13:18 <tosky> it would be interesting to know why it does not work on a tripleo-quickstart deployment too, if some other configuration keys are needed
13:18 <ykarel> arxcruz, ack. But why is the issue not seen in the fs035 promotion job
13:18 <arxcruz> tosky: i need to investigate, basically we just need to point to the ca file
13:19 <tosky> oh, makes sense
13:19 <arxcruz> ykarel: because tempestconf is overwriting it automagically
13:20 <arxcruz> shit, now i don't know what i was doing...
13:20 <ykarel> i still don't get it :(
13:21 <ykarel> how is tempestconf overwriting it in the promotion job but not in the check job,
13:21 <ykarel> it should be the same in both places
13:22 <arxcruz> ykarel: can you point me to the log from the promotion?
13:22 <arxcruz> i think i have an idea why
13:22 <ykarel> arxcruz, https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset035-master/a220a1c/undercloud/home/jenkins/tempest.log.txt.gz
13:23 <ykarel> the only difference i can see is tempest_format
13:23 <ykarel> but there can be others as well
13:27 <arxcruz> ykarel: hmmm so, found the problem
13:28 <ykarel> and what's that?
13:28 <arxcruz> ykarel: running in the container, we lose the export 'PYTHONWARNINGS=ignore:Certificate has no, ignore:A true SSLContext object is not available'
13:28 <arxcruz> while running locally, we don't lose it
13:28 *** amoralej|lunch is now known as amoralej
13:28 <arxcruz> ykarel: https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset035-master/a220a1c/undercloud/home/jenkins/tempest.log.txt.gz#_2018-07-05_10_00_39
13:29 <arxcruz> so containers will fail, because this is set outside the container
13:30 <ykarel> arxcruz, so this needs to be fixed na
13:32 <ykarel> as this would be faced when running the ipv6 job with the tempest container
13:35 <arxcruz> ykarel: disabling ssl will fix it
13:37 <arxcruz> ykarel: ohhh, i see your point...
13:38 <arxcruz> tosky: ykarel so, the problem is in tempestconf, we need to disable ssl directly on tempestconf because it's failing due to the fact that tempestconf is being executed in the container, and doesn't have the PYTHONWARNINGS variable set
13:38 <arxcruz> so when requests tries to access https, it fails
13:39 <arxcruz> outside the container it isn't affected because PYTHONWARNINGS is set properly
13:40 <ykarel> arxcruz, so can't PYTHONWARNINGS be set in the container to make both container and host behave the same
13:40 *** ykarel is now known as ykarel|away
13:40 <arxcruz> ykarel: probably yes, i don't know how, but it should be possible, let me google it
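One likely shape of the fix ykarel is asking about: the variable is exported on the undercloud host, so it has to be forwarded into the container environment explicitly when tempest/tempestconf is launched there. A hedged sketch only; the image name and command are placeholders, and only the PYTHONWARNINGS value comes from the log above:

```shell
# The warning filters exported on the host, quoted earlier in this log:
PYTHONWARNINGS='ignore:Certificate has no, ignore:A true SSLContext object is not available'

# docker's -e flag forwards a variable into the container environment.
# Image name and command are placeholders, not the real CI invocation:
echo docker run --rm -e PYTHONWARNINGS="$PYTHONWARNINGS" '<tempest-image>' tempest run
```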
13:43 *** skramaja_ has quit IRC
13:45 *** ykarel|away has quit IRC
13:46 *** florianf has joined #oooq
13:53 *** quiquell is now known as quiquell|mtg
13:58 *** agopi has quit IRC
14:00 <chkumar|ruck> sshnaidm|rover: looks cool
14:08 *** ykarel has joined #oooq
14:15 <ykarel> arxcruz, ok
14:25 *** agopi has joined #oooq
14:27 *** agopi_ has joined #oooq
14:30 *** agopi has quit IRC
14:44 *** ccamacho1 has joined #oooq
14:44 *** ccamacho1 has quit IRC
14:45 *** ccamacho1 has joined #oooq
14:45 *** ccamacho has quit IRC
14:53 *** tcw1 has joined #oooq
14:57 *** tcw has quit IRC
14:57 *** tcw1 is now known as tcw
14:58 <hubbot> FAILING CHECK JOBS on stable/ocata: legacy-tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024-ocata @ https://review.openstack.org/564291, master: legacy-tripleo-ci-centos-7-container-to-container-upgrades-master @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224
15:01 <rfolco> sorry marios, did not get what you said
15:03 *** ykarel is now known as ykarel|away
15:07 <arxcruz> ykarel|away: did you see the patch?
15:08 <ykarel|away> arxcruz, yes, looks ok, but want to see the job result
15:08 <arxcruz> ykarel|away: i think it will fail, because of the includes on ansible
15:08 <ykarel|away> also good to add the update to the bug as this is related
15:08 <arxcruz> but we'll see
15:09 <ykarel|away> okk
15:12 <weshay> chkumar|ruck, ping
15:20 <sshnaidm|rover> weshay, I think he finished for today
15:20 <sshnaidm|rover> weshay, he sent a mail with updates
15:25 <panda> rlandy: sorry I was muted, I'll ping you for the scenarios
15:25 <rlandy> panda: sure - whenever you are ready
15:25 <rlandy> we need to close that out
15:25 *** sanjayu_ has quit IRC
15:26 *** quiquell|mtg is now known as quiquell|off
15:27 <sshnaidm|rover> fyi, my patch is rebased on panda's patch: https://review.openstack.org/#/c/578456
15:33 <marios> rfolco: o/ sorry man, bluejeans was over the irssi window :)
15:43 <panda> rlandy: we can chat now if you want
15:44 <rlandy> panda: sure
15:45 <panda> rlandy: I'm in my channel
15:45 *** agopi_ is now known as agopi
15:58 <chkumar|ruck> weshay: pong
16:00 <weshay> chkumar|ruck, howdy... going to write up a card for you to handle while ruck/rover
16:00 <weshay> chkumar|ruck, we need a fs21 job that runs tempest w/o skip list.. arxcruz has that in progress
16:00 <chkumar|ruck> weshay: sure
16:00 <weshay> chkumar|ruck, but we will also need to have the skip list cleaned out for the upstream releases
16:00 *** bogdando has quit IRC
16:00 <weshay> chkumar|ruck, make sense?
16:00 <chkumar|ruck> weshay: yup, makes sense
16:01 <weshay> chkumar|ruck, thanks chkumar|ruck
16:02 <weshay> chkumar|ruck, don't worry about sending email status updates
16:02 <weshay> chkumar|ruck, I appreciate it.. but I think that is too much overhead
16:02 <chkumar|ruck> weshay: sagi and I did not sync today, so I sent the email
16:02 <weshay> k k
16:02 <weshay> :)
16:03 <weshay> just wanted you to know it's not required or my expectation for you to email updates on a daily basis
16:04 *** udesale has quit IRC
16:13 *** yolanda_ has joined #oooq
16:17 *** yolanda has quit IRC
16:18 <arxcruz> not my fault
16:18 <arxcruz> :)
16:20 *** jfrancoa has quit IRC
16:22 <weshay> arxcruz, :)
16:22 <arxcruz> weshay: fyi, i'm working on the stackviz bug
16:23 <arxcruz> ykarel|away: when you have some time, i would like to talk with you about packaging ;)
16:26 <weshay> rlandy, can you reserve about 15 minutes in your day to review https://review.openstack.org/#/c/565740/ w/ me
16:26 <rlandy> weshay: sure - after I finish with panda
16:26 <ykarel|away> arxcruz, sure, can do that tomorrow, just ping me :)
16:26 <arxcruz> ykarel|away: yup
16:30 *** agopi is now known as agopi|lunch
16:47 *** panda is now known as panda|off
16:53 *** yolanda__ has joined #oooq
16:54 *** ykarel|away has quit IRC
16:56 *** yolanda_ has quit IRC
16:58 <hubbot> FAILING CHECK JOBS on stable/ocata: legacy-tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024-ocata @ https://review.openstack.org/564291, master: legacy-tripleo-ci-centos-7-container-to-container-upgrades-master @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224
17:03 *** amoralej is now known as amoralej|off
17:03 *** ccamacho1 has quit IRC
17:05 *** bogdando has joined #oooq
17:07 *** trown is now known as trown|lunch
17:08 *** agopi|lunch is now known as agopi
17:12 *** yolanda_ has joined #oooq
17:15 *** yolanda__ has quit IRC
17:21 *** rlandy is now known as rlandy|brb
17:26 *** dtantsur is now known as dtantsur|afk
17:35 *** bogdando has quit IRC
17:40 *** rlandy|brb is now known as rlandy
17:43 *** sshnaidm|rover has quit IRC
17:43 *** holser_ has quit IRC
17:52 *** sshnaidm|rover has joined #oooq
17:55 <weshay> rlandy, you have a sec?
17:59 *** trown|lunch is now known as trown
18:08 <rlandy> weshay: sure, ping me when you want to do the review
18:08 <weshay> rlandy, ready
18:08 <rlandy> k - bj?
18:08 <weshay> ya
18:10 *** tcw has quit IRC
18:12 *** tcw has joined #oooq
18:13 *** yolanda__ has joined #oooq
18:14 *** florianf has quit IRC
18:15 *** yolanda_ has quit IRC
18:23 *** ccamacho has joined #oooq
18:35 *** yolanda_ has joined #oooq
18:39 *** yolanda__ has quit IRC
18:39 *** yolanda__ has joined #oooq
18:42 *** yolanda_ has quit IRC
18:46 *** sshnaidm|rover is now known as sshnaidm|off
18:54 *** ccamacho has quit IRC
18:58 <hubbot> FAILING CHECK JOBS on stable/ocata: legacy-tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024-ocata @ https://review.openstack.org/564291, master: legacy-tripleo-ci-centos-7-container-to-container-upgrades-master @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224
19:18 <rfolco> aren't we going to waste resources for undercloud jobs with this nodeset in our base job? https://github.com/openstack-infra/tripleo-ci/blob/master/zuul.d/base.yaml#L19
19:19 <rfolco> I guess we need one base job for multinode and one for singlenode, just like devstack has
19:32 *** gkadam has quit IRC
19:56 *** sanjay__u has joined #oooq
19:59 <weshay> rlandy, if this passes --extra-vars directly.. I think a file may work
19:59 <weshay> https://review.openstack.org/#/c/546474/
20:00 * rlandy looks
20:00 <weshay> rfolco, that was the plan all along, wasn't it?
20:00 <weshay> to have one base job for the singlenode undercloud jobs
20:00 <weshay> and one for the multinode jobs
20:01 <rfolco> weshay, I suspect nodepool is wasting one slave for undercloud singlenode jobs with the two-node nodeset
20:01 <rfolco> weshay, devstack has one base for multinode and one for singlenode, with nodesets according to each situation
20:01 <weshay> rfolco, 1. why suspect.. go check, 2. why are we using a two node config for the undercloud jobs
20:02 <weshay> rfolco, it's not how the old jobs work
20:02 <weshay> FAK
20:02 <weshay> rfolco, come one man
20:02 <weshay> on
20:02 <weshay> rfolco, the old parents for the tripleo jobs did that correctly
20:03 <rfolco> weshay, there is no way to check this without looking at nodepool
20:03 <weshay> hrm
20:03 *** Goneri has quit IRC
20:04 <weshay> rfolco, this is what it used to look like https://github.com/openstack-infra/tripleo-ci/blob/6e7955597158fa3738c1f5812888d50c2d8cf2d3/zuul.d/base.yaml
20:05 <weshay> rfolco, open a bug and fix the nodes please
20:05 <rfolco> weshay, exactly. The confusion was: we parent to the multinode job for both, but we need an abstract layer for each
20:05 <rfolco> weshay, working on it
20:06 <weshay> rfolco, I think this is really just tech debt
20:06 <weshay> https://review.openstack.org/#/c/578432/
20:07 <weshay> rfolco, in that if we got multinode to work, single comes for free
20:07 <weshay> however I am surprised the team did not get that done
20:08 <rfolco> nodepool looks at the nodeset and gives you the slaves you ask for, so for singlenode you have to ask for just one
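The split rfolco and weshay are converging on would look roughly like this in zuul configuration. A sketch only; the nodeset and job names here are placeholders, not the actual tripleo-ci definitions:

```yaml
# Hypothetical: a one-node nodeset plus a singlenode base job, so nodepool
# only allocates a second slave for jobs that parent on the multinode base.
- nodeset:
    name: single-centos-7-node
    nodes:
      - name: primary
        label: centos-7

- job:
    name: tripleo-ci-base-singlenode
    parent: tripleo-ci-base
    nodeset: single-centos-7-node
```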
20:11 <weshay> rfolco, https://review.openstack.org/#/c/578456/13/zuul.d/multinode-jobs.yaml
20:11 <weshay> rfolco, you don't have to worry about it yet
20:12 <rfolco> weshay, I don't get your point
20:12 <weshay> rfolco, that change is not used yet
20:13 <rfolco> weshay, needs to be fixed before we merge, yes
20:14 <weshay> rfolco, you'd better vote on https://review.openstack.org/#/c/578456/
20:16 <rfolco> weshay, done
20:19 *** ccamacho has joined #oooq
20:28 *** hubbot has quit IRC
20:28 *** dmellado has quit IRC
20:44 <rlandy> rdo down?
20:44 <rlandy> guess so
21:09 <arxcruz> weshay: https://review.openstack.org/#/c/580361/ fixes stackviz
21:15 <arxcruz> weshay: https://review.openstack.org/#/c/580423/ swift jobs re-enabled on scenario002
21:15 <arxcruz> s/swift jobs/swift tests/
21:36 <weshay> rfolco, rlandy ya.. it just went down
21:36 <weshay> thanks arxcruz
21:36 *** holser_ has joined #oooq
21:37 <rlandy> rfolco: thanks - saw the saga on rhos-ops
21:37 <rlandy> so much for debug :(
21:38 *** hubbot has joined #oooq
21:39 <weshay> arxcruz, I thought 21 would be easier to remember https://review.openstack.org/#/c/580480/
21:40 <arxcruz> weshay: haha, yes! :D
22:01 *** holser_ has quit IRC
22:06 *** holser_ has joined #oooq
22:40 *** holser_ has quit IRC
22:59 <hubbot> FAILING CHECK JOBS on stable/ocata: legacy-tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024-ocata @ https://review.openstack.org/564291, master: legacy-tripleo-ci-centos-7-container-to-container-upgrades-master @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224
23:01 *** tosky has quit IRC
23:07 *** yolanda_ has joined #oooq
23:08 *** yolanda__ has quit IRC
23:11 *** yolanda__ has joined #oooq
23:13 *** yolanda_ has quit IRC
23:25 *** rlandy has quit IRC
23:40 *** yolanda_ has joined #oooq
23:42 *** yolanda__ has quit IRC
23:51 *** agopi is now known as agopi|off
23:56 *** agopi|off has quit IRC

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!