Tuesday, 2018-03-06

imacdonnguess we'll have to "agree to disagree" on that part .. I'll see what the RDO guys have to say about it00:01
rm_workyep00:01
rm_workto me this is extremely unambiguous00:02
rm_worki have a feeling the RDO packagers are aware of this as well00:02
rm_workso again, i have a feeling something else is going on here00:02
imacdonnOK.. will see00:02
rm_workthis isn't their first rodeo00:02
imacdonnsince you're here ... I have another issue that you may comment on ... I think you've seen it before00:02
imacdonnfrom amphora-agent.... [2018-03-05 21:02:45 +0000] [1105] [DEBUG] Invalid request from ip=::ffff:10.250.32.84: [SSL: SSL_SESSION_ID_CALLBACK_FAILED] ssl session id callback failed (_ssl.c:1783)00:03
rm_workhmmm00:03
imacdonnseems to be intermittent .. some amphorae do it, others work OK ... checked clock already .... NTP is good00:03
rm_workwhich log is that00:03
rm_workyeah clock would have been my first recommendation00:03
imacdonn /var/log/amphora-agent.log ... something like that00:03
rm_workyeah k00:04
johnsomYou don’t want to run something different than CI, that means it is untested.  Just FYI00:04
johnsom:ffff: seems suspect00:05
rm_work^^ yes, i mentioned that00:05
imacdonnI think that's just because IPv6 is enabled00:06
rm_worktesting should === production, or you're asking for trouble00:06
rm_workthis is just a general rule00:06
johnsomLike a mixed ipv500:06
rm_workyeah that looks like ipv4 in 600:06
rm_workbut that should be fine00:06
rm_worki think? it's valid notation00:06
rm_workjohnsom: why are you here? :P00:07
imacdonnthis is troubling too ... I can't delete the listener that failed to create due to the jinja version issue...00:08
imacdonn[imacdonn@home ~]$ openstack loadbalancer listener delete listener100:08
imacdonnLoad Balancer d3fe6073-4cce-476c-b968-9ed162d91e15 is immutable and cannot be updated. (HTTP 409) (Request-ID: req-be2a84e8-6d81-4097-9021-6691300e0579)00:08
rm_workwhat status is the LB in?00:08
rm_workdid it get stuck in PENDING_UPDATE?00:08
rm_workor did it go to ERROR?00:08
imacdonnprovisioning_status=ERROR00:08
rm_workour flows are generally supposed to keep the LB itself in ACTIVE and just send the Listener to ERROR so it can be cleaned up00:09
rm_workbut something must have happened that was catastrophic00:09
rm_workonce the LB itself goes into ERROR, there's not much you can do00:09
imacdonnmysql -e 'drop database octavia;'00:09
imacdonn:P00:09
rm_workI've been investigating ways we could try to do some correction, but... :*00:09
rm_workyou can delete the whole LB00:09
cgoncalveso/00:09
rm_workhey cgoncalves! I did get home finally :P00:10
rm_workit just took 24h+00:10
cgoncalvesimacdonn: catching up with the backlog. want to summarize your pip jinja2 issue?00:10
cgoncalvesrm_work: hey! cool, glad to hear you made it safe :)00:10
johnsomYeah, check your states.  You will only get 409 if it is still trying.  You should get lb and listener ERROR when it gives up trying, at which point you can delete00:10
imacdonncgoncalves: I don't want to continue the debate ... but basically RDO Queens comes with python2-jinja2 2.8.1 ... which is not sufficient00:11
cgoncalvesrm_work: think positive. there are still people stuck in dublin ;)00:11
rm_workyeah, RDO Queens should be packaging 2.10 which is what u-c prescribes00:11
cgoncalvesimacdonn: ok. if octavia/requirements.txt prescribes 2.10, that's what Requires: should be set to00:12
imacdonncgoncalves: it doesn't ... requirements.txt says >= 2.800:12
cgoncalvesI'll have a look tomorrow (01:12 am here)00:12
imacdonnwe've agreed that that's wrong and needs to be fixed00:12
rm_workyes00:12
rm_workbut also if RDO doesn't package according to u-c then things will get really wonky when you use multiple projects00:12
cgoncalvesimacdonn: alright. I'll check and submit a patch to RDO00:12
rm_workthe whole point of g-r and u-c is to make it so multiple projects can be installed in the same system00:13
johnsomrm_work sipping sherry overlooking the lake from the castle.  No worries on the PTO.  Just trying to help imacdonn while I can / up.00:13
rm_workeven without virtualenvs00:13
imacdonncgoncalves: OK. I can file an RHBZ on it00:13
rm_workcgoncalves: we should update it upstream00:13
rm_workand backport00:13
cgoncalvesimacdonn: much appreciated! assigned it to me (cgoncalves@redhat.com)00:13
rm_workand then rdo should pull that in00:13
imacdonncgoncalves: will do - thanks!00:13
rm_worki would have done it already but i'm in the middle of a patch00:13
johnsomrm_work: sushi burrito this time?00:14
cgoncalvesrm_work: not following. why is it needed to fix upstream? requirements.txt is not following u-c?00:14
rm_workcgoncalves: requirements.txt is supposed to be representative of the minimum version our project requires to function00:15
rm_worku-c is supposed to be used in conjunction when installing things00:15
rm_workwhat OpenStack CI tests (and thus absolutely what should be packaged/installed) is a combination of the two, handled by pip00:15
rm_workso normally, pip would parse >=2.8 as "newest"00:16
rm_workand u-c would cap that at 2.1000:16
rm_workmeaning 2.10 is the version we test, even if 2.11 is out00:16
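The resolution rm_work describes can be sketched in plain Python (a minimal illustration only — version lists and names here are hypothetical, and real upper-constraints entries are exact pins per package, modeled here as a cap to mirror the description above): requirements.txt supplies the floor and exclusions, upper-constraints supplies the ceiling, and pip takes the newest candidate satisfying both.

```python
# Sketch of pip-style resolution combining requirements.txt with
# upper-constraints.txt for the Jinja2 case discussed above.

def parse(v):
    """Turn a dotted version string into a comparable tuple, e.g. '2.10' -> (2, 10)."""
    return tuple(int(x) for x in v.split("."))

# Hypothetical set of releases available on PyPI at the time.
available = ["2.8", "2.8.1", "2.9.0", "2.9.4", "2.10", "2.11"]

floor = "2.8"                                              # from: Jinja2>=2.8
excluded = {"2.9.0", "2.9.1", "2.9.2", "2.9.3", "2.9.4"}   # from: Jinja2!=2.9.x
upper_constraint = "2.10"                                  # from upper-constraints.txt

candidates = [
    v for v in available
    if parse(v) >= parse(floor)          # requirements floor
    and v not in excluded                # requirements exclusions
    and parse(v) <= parse(upper_constraint)  # u-c cap
]
chosen = max(candidates, key=parse)
print(chosen)  # -> 2.10 (even though 2.11 is out)
```

Packaging only from `Jinja2>=2.8` drops the cap, which is how an RPM spec ends up satisfied by 2.8.1 while CI tested 2.10.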
cgoncalvesah, i see the problem00:17
cgoncalveslimitations of .spec00:17
cgoncalvesJinja2!=2.9.0,!=2.9.1,!=2.9.2,!=2.9.3,!=2.9.4,>=2.8 # BSD License (3 clause)00:17
rm_workand regardless of what our requirements says, we expect to be installed with 2.1000:17
rm_workyes, so it has >=2.800:17
rm_workwhich as of queens release in pypi would have been 2.1000:17
cgoncalvesyeah we cannot add ANDs in Requires:00:17
rm_workso there's a thing that you can run i think that will generate an exact package list00:18
johnsomYes, requirement repo has a tool00:18
cgoncalveshttps://github.com/rdo-packages/octavia-distgit/blob/rpm-master/openstack-octavia.spec#L10900:18
cgoncalvesjohnsom: looking for trouble with your wife? if not, disconnect immediately :P00:20
rm_worki think this would do it:00:20
rm_workgenerate-constraints -p /usr/bin/python2.7 -p /usr/bin/python3 -b blacklist.txt -r requirements.txt > real-constraints.txt00:20
rm_workneed to test00:20
rm_worki just kinda mangled their example00:20
rm_workwhere requirements.txt is our reqs file and the rest is from the requirements repo00:20
rm_worklet me see what that does00:20
johnsomcgoncalves: yeah, technically it is her birthday here, but don’t expect me around tomorrow.  She is reading now, so safe for a quick irc checkin00:21
cgoncalveslol00:22
imacdonncgoncalves: what would be the right "Component" for this? There is no "openstack-octavia" .. is there something for general packaging issues ?00:23
imacdonn"distribution" ?00:23
rm_workyeah, as this is not an octavia specific issue00:24
rm_workif RDO didn't package jinja 2.10, then it is wrong for all of openstack00:24
rm_worknot just us00:24
cgoncalvesimacdonn: product: red hat openstack; component: openstack-octavia00:24
rm_workyeah, it looks like that command works00:25
imacdonncgoncalves: Oh, I was going for RDO .... not sure if I can file bugs against RHOS00:25
rm_workgenerate-constraints -p /usr/bin/python2.7 -b blacklist.txt -r ../octavia/requirements.txt > real-requirements.txt00:25
rm_work^^ this generated me a list of the actual things you should install for octavia to work00:25
cgoncalvesimacdonn: anyone can00:25
rm_workhttps://gist.github.com/rm-you/8178ae242ed7606d58ec23efd648b41500:26
rm_workcgoncalves: so RDO packaging should run that first00:26
rm_workthen use the resulting requirements file00:26
rm_work(if your packaging stuff can't handle u-c natively)00:27
johnsomBe sure to checkout queens requirements repo and not master00:27
cgoncalvesrm_work: ^ that's how octavia/requirements.txt should look?00:27
rm_workah right whoops00:27
rm_workcgoncalves: err00:27
rm_workyes and no00:27
rm_workno in repo00:27
rm_workyes for anything packaging00:27
cgoncalvesrm_work: k. I'll handle it tomorrow with fresh eyes00:28
*** fnaval has joined #openstack-lbaas00:30
rm_workkk00:30
johnsomNight from Galway00:32
imacdonn'nite johnsom00:32
imacdonnthanks for the debate ;)00:33
rm_worknight!00:33
imacdonncgoncalves: Could you grab https://bugzilla.redhat.com/show_bug.cgi?id=1551821 ? I don't think I can assign it00:34
cgoncalvesimacdonn: done00:37
imacdonncgoncalves: thanks!00:38
*** yamamoto has quit IRC00:40
rm_workcgoncalves: but we SHOULD update in repo to require >= 2.10 I think? if possible00:41
rm_workthe issue being I don't know if we can get out-of-sync with global-requirements00:41
rm_workwe might have to ask g-r to update to simply require >=2.1000:41
rm_workas i think it is required for us to match it exactly00:41
*** yamamoto has joined #openstack-lbaas00:42
cgoncalvesrm_work: yeah I think so too. as it is now our requirements.txt looks bogus00:44
cgoncalvesrm_work: I mean, it should have been automatically updated from g-r (i.e. patch by openstack proposal bot)00:46
rm_workerr00:46
rm_worki'm not sure if g-r has different than us00:46
rm_workyeah https://github.com/openstack/requirements/blob/master/global-requirements.txt#L8400:46
rm_workwhich is why ours is what it is00:47
rm_workbecause it is derived from that00:47
rm_workso we may need to update g-r00:47
cgoncalves"Packagers are advised to use the versions (upper-constraints)"00:48
cgoncalvesrm_work: we could propose bumping it, yeah00:51
*** fnaval has quit IRC00:51
rm_workyes00:51
rm_workso we do need to update g-r00:51
rm_workas our line has to match theirs00:51
*** fnaval has joined #openstack-lbaas00:51
rm_workbut they don't allow backports00:51
rm_workso we can fix it00:51
rm_workbut00:51
rm_workyou'll have to backport internally or else fix RDO's build process to use u-c00:52
rm_workthe latter is the more correct approach IMO00:52
imacdonn;)00:52
imacdonnrm_work: I reproduced the SSL_SESSION_ID_CALLBACK_FAILED issue, if you have any other thoughts / things to check ....00:52
rm_workwhat were the repro steps00:53
imacdonncreate a LB (Active-Passive, in case that matters)00:53
imacdonnobserve:00:53
imacdonn2018-03-06 00:48:29.106 24144 DEBUG octavia.amphorae.drivers.haproxy.rest_api_driver [-] request url https://10.250.34.202:9443/0.5/plug/vip/10.250.34.205 request /usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py:25600:53
imacdonn2018-03-06 00:48:29.112 24144 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.: SSLError: ("bad handshake: SysCallError(-1, 'Unexpected EOF')",)00:53
imacdonnand amphora-agent.log says: [2018-03-06 00:51:28 +0000] [1122] [DEBUG] Invalid request from ip=::ffff:10.159.210.42: [SSL: SSL_SESSION_ID_CALLBACK_FAILED] ssl session id callback failed (_ssl.c:1783)00:53
imacdonncan also make the error by connecting to port 9443 with openssl s_client00:54
*** irenab has quit IRC00:55
cgoncalvesrm_work: right. let's bump to 2.10 first upstream00:56
rm_workyeah00:56
rm_worki can make a g-r patch00:56
*** oanson has quit IRC00:56
imacdonnI should confess that the amphora OS is Oracle Linux .. basically the same as CentOS / RHEL ... it seemed to work OK with Pike once I fixed NTP, but that doesn't seem to be the issue here (I did rebuild it with the new amphora-agent code, but maybe there's something else I need to update...)00:56
cgoncalvesrm_work: thanks! add me so that I can follow please00:57
rm_workimacdonn: no worries, i had assumed00:57
rm_worki did review your patch BTW and agree with nir00:57
imacdonnthe problem with supporting OL in DIB is that I don't know where to get a suitable base image .... but I haven't really looked00:58
rm_workthough i just realized it might be difficult if your image is not publicly available00:58
rm_workyeah lol00:58
rm_workjust realized that00:58
rm_workwhich i'm guessing it isn't00:59
imacdonnthere are some supposedly OpenStack-ready images out there00:59
rm_workdoes oracle not produce a cloud-ready image?01:01
rm_worki guess perhaps not01:01
imacdonnthat's what I mean01:01
rm_workit doesn't necessarily have to be openstack-ready, it can be just "cloud-ready"01:01
rm_workour image builder can install stuff on it I believe to add cloud-init and such01:02
rm_workoh, though you'd need a base element01:02
rm_workwhich ... yeah01:02
rm_workTHAT can be a complex step01:02
rm_workis it even allowed to be public due to licensing?01:02
imacdonnI think so01:03
imacdonnyou just don't get any support01:03
rm_workI always assume I would have to sign away my firstborn and a pint of blood to be able to access anything oracle related01:03
*** oanson has joined #openstack-lbaas01:03
imacdonnhttps://community.oracle.com/thread/398408401:04
rm_workand also that oracle somehow becomes the owner of any hardware i run the software on, and becomes eligible to garnish my wages for license fees01:04
imacdonnGo to01:04
imacdonnhttps://edelivery.oracle.com01:04
imacdonnsearch by "Release"01:04
imacdonnstart typing "OpenStack"01:04
imacdonnyou will see the option to choose "Oracle Linux 7 Virtual Machine Image for Openstack 7.0.0.0.0"01:04
rm_workah yeah great01:04
rm_workso they ship a qcow201:04
*** irenab has joined #openstack-lbaas01:04
rm_workthat's A++01:04
imacdonnyeah, I'll take the wages .. you can keep the firstborn, though01:04
rm_workthough i can't get to that page without a userID01:05
rm_workwhich yeah, oracle, makes sense01:05
imacdonnI think you can get one without paying anything .... in any case, I think it's OK if you have to download the image and make it available to DIB as a local file ... I think I read that the RHEL element works that way01:06
rm_workyes01:06
rm_workyou can give DIB a local path to the image via an ENV variable01:06
rm_workwhich is a valid approach for licensed images01:06
imacdonnI'll try to find some of that "copious free time" that everyone talks about01:07
imacdonn... to try it01:07
rm_workyeah, i think you'd still need a base element though01:08
rm_workbut probably if it's close enough to rhel you could  just pretend you're doing RHEL, tell disk-image-builder that, and then pass it the file for the oracle image01:09
imacdonnit should be very similar to the RHEL one .. yeah01:09
rm_workthe image-build testing process is really more about "waiting" than having a ton of time to actively do stuff01:09
imacdonnwaiting for the next thing to break ;)01:10
imacdonnswitching gears again ... I thought I'd turn on debug for the amphora-agent that had the SSL issue... so edited octavia.conf and restarted it ... and now it works!01:11
imacdonnsome Googling on the error pointed to PRNG readiness01:12
rm_workhmm01:16
rm_workwe really need to just ... make an octavia amp distro01:17
rm_workbecause this whole "every distro feels a need to have an amp image" thing seems silly to me01:17
imacdonnAmphorOS ?01:17
rm_workit's a minimally functioning little black-box01:17
rm_workit shouldn't matter what distro is on it01:17
imacdonnyeah... that'd make it smaller (quicker to deploy) too01:17
rm_workbut i understand that when a distro wants to release a cloud, they have to put a distro they can support throughout01:18
rm_workbut honestly01:18
rm_workyou could use the ubuntu image with an oracle cloud and if it wasn't a licensing problem, no one would even notice01:18
imacdonnyeah.. I doubt anyone would ever notice, but ................01:18
rm_workit's just dumb, the distro on the amp has nothing to do with anything01:19
rm_workall it does is complicate stuff because we have to support a ton01:19
rm_workand do different templates and such01:19
rm_workwe literally just need a minimal linux kernel, haproxy and keepalived and python compiled for it01:19
rm_workand that's it01:19
imacdonnwonder what it'd take to make it work on CirrOS01:20
rm_workheh01:20
rm_worki have looked at Alpine01:20
rm_workshould look at cirros probably...01:20
imacdonnit has cloud-init already01:20
imacdonnnever used Alpine01:20
rm_workalpine is a minimal tiny security-focused distro01:21
rm_workbut yes, existing cloud-init support and DIB elements would make things trivial01:21
rm_worknot sure why i didn't poke at this before01:21
rm_workhmm we might want to look at including https://github.com/openstack/diskimage-builder/tree/master/diskimage_builder/elements/cloud-init-datasources in our image builds and explicitly setting only ConfigDrive01:27
rm_worksince that's what we rely on01:27
rm_workit might speed up our amp boot times01:27
imacdonnhmm01:27
imacdonnI hadn't noticed you're using ConfigDrive01:28
rm_workerr, I believe we rely on that01:29
imacdonnI'm not a huge fan, based on past experience ... it was incompatible with live-migration at one point01:29
imacdonnthat's probably mostly fixed now01:29
rm_workhmm, we take them in but it seems optionally... except i'm pretty sure we hardcode stuff in whatever calls that01:30
rm_workhttps://github.com/openstack/octavia/blob/master/octavia/compute/drivers/nova_driver.py#L9401:30
rm_worklooking01:30
rm_workhttps://github.com/openstack/octavia/blob/master/octavia/controller/worker/tasks/compute_tasks.py#L7801:31
rm_workyep01:31
imacdonngood to know01:34
*** b_bezak has joined #openstack-lbaas01:36
*** b_bezak has quit IRC01:40
rm_workadded you guys to the g-r patch i proposed01:42
imacdonnyep, got it .. tnx01:42
rm_workif we can land that, then we will be allowed to (nay, required to -- the patch will be automatic) update our project's requirements01:43
rm_workthough i guarantee you'll see comments like "not using upper-constraints is a really bad idea", lol01:47
imacdonnI wonder if the systemd octavia-amphora-agent.service needs to start "After" systemd-random-seed.service01:49
imacdonn(although it appears it did anyway, but still failed)...01:51
imacdonnMar 06 00:48:20 localhost.localdomain systemd[1]: Starting Load/Save Random Seed...01:51
imacdonnMar 06 00:48:20 localhost.localdomain systemd[1]: Started Load/Save Random Seed.01:51
imacdonnMar 06 00:48:23 host-10-250-34-202 systemd[1]: Starting OpenStack Octavia Amphora Agent service...01:51
imacdonnMar 06 00:48:23 host-10-250-34-202 systemd[1]: Started OpenStack Octavia Amphora Agent service.01:51
rm_workwe could adjust the "requires" in the template01:51
imacdonnseems it should really wait until cloud-init has finished too .... (?)01:52
imacdonnI notice that the hostname changed after it was started...01:53
imacdonnMar 06 00:48:24 amphora-48a5cc1b-8898-4d10-8ec6-5c548e235cb6.novalocal systemd-hostnamed[962]: Changed static host name to 'amphora-48a5cc1b-8898-4d10-8ec6-5c548e235cb6.novalocal'01:53
imacdonnMar 06 00:48:21 localhost.localdomain systemd[1]: Started Initial cloud-init job (pre-networking).01:54
imacdonnMar 06 00:48:23 host-10-250-34-202 systemd[1]: Starting OpenStack Octavia Amphora Agent service...01:54
imacdonnMar 06 00:48:23 host-10-250-34-202 systemd[1]: Starting Initial cloud-init job (metadata service crawler)...01:54
imacdonnMar 06 00:48:23 host-10-250-34-202 systemd[1]: Started OpenStack Octavia Amphora Agent service.01:54
imacdonnMar 06 00:48:24 host-10-250-34-202 cloud-init[856]: Cloud-init v. 0.7.9 running 'init' at Tue, 06 Mar 2018 00:48:24 +0000. Up 5.21 seconds.01:54
imacdonnwouldn't be surprised if there's a race condition in there somewhere01:54
rm_worki would have guessed that something else we required (like networking) would in turn require the random stuff02:00
*** imacdonn has quit IRC02:02
*** imacdonn has joined #openstack-lbaas02:02
imacdonnbased on this log, it doesn't appear to be an issue with the random seed ... at least it's not as simple as the service dependency ... I'm now wondering if cloud-init is changing something that's causing interference02:03
imacdonndoes octavia ship a systemd service description for the agent itself?02:05
*** slaweq_ has joined #openstack-lbaas02:05
rm_workyes02:05
imacdonnI found the ones for haproxy, etc, but not the agent one yet02:05
rm_workhttps://github.com/openstack/octavia/blob/0949ee1b3c3616dd342732b937370c5d72fe9b51/elements/amphora-agent/install.d/amphora-agent-source-install/amphora-agent.service02:06
rm_workerr02:06
rm_workhttps://github.com/openstack/octavia/blob/master/elements/amphora-agent/install.d/amphora-agent-source-install/amphora-agent.service02:06
rm_workit's in the element02:06
imacdonnah, right02:06
imacdonnexcept...02:07
imacdonn[root@amphora-48a5cc1b-8898-4d10-8ec6-5c548e235cb6 ~]# rpm -qf /usr/lib/systemd/system/octavia-amphora-agent.service02:07
imacdonnopenstack-octavia-amphora-agent-2.0.0-1.el7.noarch02:07
imacdonnso it's part of the RDO RPM02:07
imacdonnand the RDO one looks slightly different02:08
rm_workpastebin it?02:08
imacdonnhttps://github.com/rdo-packages/octavia-distgit/blob/rpm-master/octavia-amphora-agent.service02:09
*** slaweq_ has quit IRC02:10
rm_workah02:11
rm_worknot much different02:11
imacdonnyeah02:11
imacdonnnothing that looks relevant02:11
imacdonnFWIW, I'm fairly sure that the DIB stuff doesn't use the one that you posted the link to ... I dissected the CentOS elements (IIRC) and didn't pick up that file02:12
rm_workthe dib one exactly uses that one02:15
rm_workcent/rhel/ubuntu all use that one when installing from source02:15
imacdonnk, maybe I missed it02:16
rm_workit comes from https://github.com/openstack/octavia/tree/master/elements/amphora-agent02:16
rm_workwhich is our only amp element02:16
rm_workwhich we add explicitly regardless of distro: https://github.com/openstack/octavia/blob/master/diskimage-create/diskimage-create.sh#L34002:16
imacdonnahh, but I was installing from the RDO RPM, not source02:17
*** slaweq has joined #openstack-lbaas02:17
imacdonnacademic, I suppose02:17
imacdonnnow looking at this:02:17
imacdonn# cloud-init normally emits a "cloud-config" upstart event to inform third02:18
imacdonn# parties that cloud-config is available, which does us no good when we're02:18
imacdonn# using systemd.  cloud-config.target serves as this synchronization point02:18
imacdonn# instead.  Services that would "start on cloud-config" with upstart can02:18
imacdonn# instead use "After=cloud-config.target" and "Wants=cloud-config.target"02:18
imacdonn# as appropriate.02:18
*** slaweq has quit IRC02:22
rm_workhmmm02:27
rm_workso that's in cloud-init?02:27
imacdonnyeah02:27
rm_workso we could change our After in our service to start after cloud-config02:27
rm_worki'd probably be ok with that change02:27
imacdonnyes, that's what I'm trying now02:27
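A sketch of the ordering tweak being tried here, using the target names from the cloud-init comment quoted above (a hypothetical drop-in, not the shipped unit — it assumes cloud-init is baked into the image so that cloud-config.target exists):

```ini
# Hypothetical drop-in, e.g. /etc/systemd/system/amphora-agent.service.d/cloud-init.conf
# Delays the agent until cloud-init's cloud-config phase has completed.
[Unit]
After=cloud-config.target
Wants=cloud-config.target
```

A drop-in avoids patching the unit file installed by the element or the RDO RPM; `systemctl daemon-reload` picks it up.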
dayousudo ip netns exec amphora-haproxy ping 10.0.0.702:28
imacdonn[imacdonn@home ~]$ openstack loadbalancer show lb6 -f value -c provisioning_status02:28
imacdonnACTIVE02:28
imacdonnso-far so-good ;)02:28
dayouAnyone know why, inside the amphora-haproxy namespace, the ip address of eth1 can't be accessed?02:29
*** slaweq has joined #openstack-lbaas02:30
rm_workerr02:32
rm_workcan you do02:32
rm_worksudo ip netns exec amphora-haproxy ip a02:32
rm_workand pastebin that02:32
dayouhttp://paste.openstack.org/show/692748/02:34
dayouThis seems like a bug to me02:34
*** slaweq has quit IRC02:34
dayouI also tried in the lab, it's same as on devstack02:34
imacdonnip netns exec amphora-haproxy ip l set lo up02:39
imacdonndo that, then try your ping again02:39
imacdonndayou: ^02:39
dayouworked, :-D02:40
dayouwhy don't we bring lo up by default?02:41
imacdonnyeah... so, because eth1 is local, it wants to use lo .. dunno why it's not up by default02:41
*** slaweq has joined #openstack-lbaas02:42
imacdonnrm_work: That systemd thing made the agent start after cloud-init as intended .... unfortunately it doesn't solve the SSL problem :(02:43
rm_work:/02:44
rm_workerm so, dayou, i do not have this problem02:45
rm_worki mean, my `lo` shows DOWN also02:46
rm_workbut i have no traffic problems02:46
imacdonnI don't have traffic problems either ... being able to ping the eth1 address from inside the netns would not be required for normal operation02:46
dayourm_work, The problem I had was try to announce route via bgp inside namespace02:47
rm_workhmmm02:47
*** slaweq has quit IRC02:47
dayouAlso I am confused about whether I should announce the route inside the namespace or outside02:49
rm_workah02:49
rm_worki think it's because02:49
*** yamamoto has quit IRC02:49
*** yamamoto has joined #openstack-lbaas02:49
rm_workwe aren't actually supposed to accept traffic to the eth1 address directly02:50
rm_workthat's the vrrp_ip02:50
rm_worknot the actual VIP02:50
rm_workwe bind HAProxy to listen only on the VIP address02:50
rm_workwhich is directed to that port but not actually that address02:50
imacdonnyou can't ping the VIP either .. it's also a local route that wants to go through lo02:50
rm_worki can curl the VIP02:51
imacdonnnot sure why ping locally has to do with BGP tho02:51
rm_workbut not the vrrp_ip02:51
rm_workdidn't try ping02:51
rm_workbut we shouldn't respond to ping anyway02:51
imacdonnoh wait, I'm not looking at the VIP .. nevermind02:52
rm_workhmmm pinging the vrrp address doesn't respond, as expected, but for the VIP address it does actually respond for me, just oddly02:52
rm_workprobably because of the way our network is routed02:52
dayouit needs the loopback for other traffic such as bgp port 17902:53
imacdonnrm_work: are you pinging it on the active amphora ?02:53
imacdonnbecause that, for me, still wants to go through lo02:54
imacdonn[root@amphora-9896a337-6b25-4163-8326-d129c4b9e21f ~]# ip netns exec amphora-haproxy ip r g 10.250.34.21202:54
imacdonnlocal 10.250.34.212 dev lo src 10.250.34.21602:54
imacdonn    cache <local>02:54
*** slaweq has joined #openstack-lbaas02:55
rm_workyeah, from within the namespace02:55
rm_workmine shows through eth102:55
imacdonnoh, interesting02:55
rm_workbut our networking is a little different I think02:56
*** slaweq has quit IRC02:59
*** slaweq has joined #openstack-lbaas03:07
*** slaweq has quit IRC03:12
*** fnaval has quit IRC03:17
*** harlowja has quit IRC03:18
*** slaweq has joined #openstack-lbaas03:19
*** slaweq has quit IRC03:24
*** slaweq has joined #openstack-lbaas03:32
*** slaweq has quit IRC03:36
*** slaweq has joined #openstack-lbaas03:44
*** slaweq has quit IRC03:49
*** links has joined #openstack-lbaas03:51
*** slaweq has joined #openstack-lbaas03:57
*** slaweq has quit IRC04:01
*** slaweq has joined #openstack-lbaas04:09
*** slaweq has quit IRC04:14
openstackgerritGerman Eichberger proposed openstack/octavia master: [WIP] Periodic job to build + publish diskimage  https://review.openstack.org/54925904:20
*** slaweq has joined #openstack-lbaas04:22
*** slaweq has quit IRC04:26
*** slaweq has joined #openstack-lbaas04:34
*** sanfern has joined #openstack-lbaas04:36
*** slaweq has quit IRC04:39
*** slaweq has joined #openstack-lbaas04:46
*** slaweq has quit IRC04:51
*** slaweq has joined #openstack-lbaas04:59
*** chkumar246 is now known as chandankumar04:59
*** slaweq has quit IRC05:03
dayouFYI, needs to rebase https://review.openstack.org/#/c/528850/ and fix merge conflict05:09
*** slaweq has joined #openstack-lbaas05:11
*** harlowja has joined #openstack-lbaas05:12
*** slaweq has quit IRC05:16
*** slaweq has joined #openstack-lbaas05:24
*** gcheresh has joined #openstack-lbaas05:27
*** slaweq has quit IRC05:29
*** gcheresh has quit IRC05:34
*** slaweq has joined #openstack-lbaas05:36
*** slaweq has quit IRC05:40
*** slaweq has joined #openstack-lbaas05:48
*** slaweq has quit IRC05:53
openstackgerritHengqing Hu proposed openstack/octavia master: ACTIVE-ACTIVE: Initial distributor data model  https://review.openstack.org/52885005:57
openstackgerritHengqing Hu proposed openstack/octavia master: L3 ACTIVE-ACTIVE data model  https://review.openstack.org/52472205:57
openstackgerritHengqing Hu proposed openstack/octavia master: Make frontend interface attrs less vrrp specific  https://review.openstack.org/52113805:57
openstackgerritHengqing Hu proposed openstack/octavia master: Able to set frontend network for loadbalancer  https://review.openstack.org/52993605:57
*** slaweq has joined #openstack-lbaas06:01
*** ivve has quit IRC06:05
*** slaweq has quit IRC06:05
openstackgerritMin Sun proposed openstack/neutron-lbaas-dashboard master: Cannot update ssl certificate when update listener  https://review.openstack.org/54994706:08
*** slaweq has joined #openstack-lbaas06:13
*** threestrands has quit IRC06:14
*** slaweq has quit IRC06:18
*** slaweq has joined #openstack-lbaas06:26
*** harlowja has quit IRC06:29
*** slaweq has quit IRC06:30
*** chandankumar has quit IRC06:35
*** chandankumar has joined #openstack-lbaas06:36
*** slaweq has joined #openstack-lbaas06:38
*** slaweq has quit IRC06:43
*** kobis has joined #openstack-lbaas06:52
*** AlexeyAbashkin has joined #openstack-lbaas06:52
*** rcernin has quit IRC07:14
openstackgerritZhaoBo proposed openstack/octavia master: UDP support with netcat - DIB  https://review.openstack.org/53810507:22
*** AlexeyAbashkin has quit IRC07:50
*** slaweq_ has joined #openstack-lbaas08:31
*** tesseract has joined #openstack-lbaas08:32
*** slaweq_ has quit IRC08:36
*** b_bezak has joined #openstack-lbaas08:38
*** oanson has quit IRC08:41
*** irenab has quit IRC08:42
*** pcaruana has joined #openstack-lbaas08:44
*** irenab has joined #openstack-lbaas08:49
*** oanson has joined #openstack-lbaas08:50
*** pcaruana has quit IRC08:54
*** ivve has joined #openstack-lbaas09:08
*** ivve has quit IRC09:13
*** AlexeyAbashkin has joined #openstack-lbaas09:17
*** slaweq_ has joined #openstack-lbaas09:32
*** slaweq_ has quit IRC09:37
*** slaweq_ has joined #openstack-lbaas09:43
*** rcernin has joined #openstack-lbaas09:45
*** dims has quit IRC09:45
*** slaweq_ has quit IRC09:48
*** dims has joined #openstack-lbaas09:49
*** slaweq_ has joined #openstack-lbaas09:53
*** sticker has joined #openstack-lbaas09:55
*** slaweq_ has quit IRC09:57
*** slaweq_ has joined #openstack-lbaas10:03
*** slaweq_ has quit IRC10:08
*** salmankhan has joined #openstack-lbaas10:12
*** slaweq_ has joined #openstack-lbaas10:13
openstackgerritHengqing Hu proposed openstack/octavia-dashboard master: Being able to change insert headers of listener  https://review.openstack.org/54999910:15
*** slaweq_ has quit IRC10:18
*** slaweq_ has joined #openstack-lbaas10:23
rm_worknmagnezi_: so it's been a few days, what do you think of the haproxy 1.8 on cent7 thing?10:28
rm_workcgoncalves: you too10:28
*** slaweq_ has quit IRC10:28
openstackgerritHengqing Hu proposed openstack/octavia-dashboard master: Being able to change insert headers of listener  https://review.openstack.org/54999910:29
cgoncalvesrm_work: I think it is time for you to go to bed :)10:31
rm_workmeh10:31
rm_worki woke up at 3pm10:31
cgoncalvesrm_work: I've brought that up yesterday in a sync call10:31
rm_worki'm gonna take another crack at that api config dump thing10:32
cgoncalvesrm_work: we will try during coming weeks to have it distributed in the openstack channels/RDO10:32
rm_worksweet10:32
rm_worki can sit on the patch until you can get an official home for it10:32
rm_workso long as it's really happening10:33
*** salmankhan has quit IRC10:33
*** salmankhan has joined #openstack-lbaas10:33
*** slaweq_ has joined #openstack-lbaas10:33
cgoncalvesrm_work: it's not confirmed. it wouldn't be the first precedent so I'd say we have good chances10:34
cgoncalvesrm_work: it would be okay for me to merge your patch that installs haproxy from paas repo as I have reviewed the .spec and can tell it comes from rhel and other RH folks10:35
*** slaweq_ has quit IRC10:38
rm_workhmmm k10:39
rm_workwe can always update the location10:39
rm_worki would like to poke the gate for it a little more10:39
rm_workor let me run it internally for a week or something :P10:39
cgoncalvesinternally in a production env xD10:41
*** slaweq_ has joined #openstack-lbaas10:44
rm_work... yes10:44
rm_workhey, I'm fairly confident it'll be fine10:44
rm_workbut i'd just smatter in a few LBs10:44
*** slaweq_ has quit IRC10:49
*** b_bezak has quit IRC10:49
*** sanfern has quit IRC10:53
*** AlexeyAbashkin has quit IRC11:05
*** salmankhan has quit IRC11:10
*** salmankhan has joined #openstack-lbaas11:14
*** slaweq_ has joined #openstack-lbaas11:14
*** AlexeyAbashkin has joined #openstack-lbaas11:16
*** slaweq_ has quit IRC11:19
*** slaweq_ has joined #openstack-lbaas11:24
*** slaweq_ has quit IRC11:29
*** slaweq_ has joined #openstack-lbaas11:35
*** yamamoto has quit IRC11:39
*** slaweq_ has quit IRC11:39
*** slaweq_ has joined #openstack-lbaas12:15
*** slaweq_ has quit IRC12:20
*** slaweq_ has joined #openstack-lbaas12:26
*** slaweq_ has quit IRC12:30
*** slaweq_ has joined #openstack-lbaas12:36
*** yamamoto has joined #openstack-lbaas12:40
*** slaweq_ has quit IRC12:41
*** yamamoto has quit IRC12:45
*** slaweq_ has joined #openstack-lbaas12:46
*** slaweq_ has quit IRC12:50
*** sanfern has joined #openstack-lbaas12:53
*** slaweq_ has joined #openstack-lbaas12:56
openstackgerritOpenStack Proposal Bot proposed openstack/octavia master: Updated from global requirements  https://review.openstack.org/54955112:57
*** slaweq_ has quit IRC13:01
*** aojea_ has joined #openstack-lbaas13:01
*** atoth has joined #openstack-lbaas13:03
prometheanfirerm_work: we are working on publishing signed musl images13:05
*** slaweq_ has joined #openstack-lbaas13:06
*** slaweq_ has quit IRC13:11
*** slaweq_ has joined #openstack-lbaas13:17
*** slaweq_ has quit IRC13:21
*** slaweq_ has joined #openstack-lbaas13:27
*** slaweq_ has quit IRC13:31
*** slaweq_ has joined #openstack-lbaas13:37
openstackgerritNir Magnezi proposed openstack/neutron-lbaas master: [WIP]: Nuke lazy loaders and flush db sessions  https://review.openstack.org/54961313:39
openstackgerritNir Magnezi proposed openstack/neutron-lbaas master: [WIP]: Nuke lazy loaders and flush db sessions  https://review.openstack.org/54961313:40
*** slaweq_ has quit IRC13:41
*** yamamoto has joined #openstack-lbaas13:42
*** fnaval has joined #openstack-lbaas13:44
*** slaweq_ has joined #openstack-lbaas13:47
*** yamamoto has quit IRC13:47
*** fnaval has quit IRC13:49
*** slaweq_ has quit IRC13:52
openstackgerritNir Magnezi proposed openstack/neutron-lbaas master: [DNM]: Test CI  https://review.openstack.org/55008513:56
*** slaweq_ has joined #openstack-lbaas13:57
openstackgerritNir Magnezi proposed openstack/neutron-lbaas master: [DNM]: Test CI  https://review.openstack.org/55008514:00
*** sanfern has quit IRC14:00
*** slaweq_ has quit IRC14:02
*** slaweq_ has joined #openstack-lbaas14:07
*** slaweq_ has quit IRC14:12
openstackgerritHengqing Hu proposed openstack/octavia master: Provide devstack samples for l3 active active  https://review.openstack.org/52087814:16
*** slaweq_ has joined #openstack-lbaas14:18
*** salmankhan has quit IRC14:20
*** slaweq_ has quit IRC14:23
*** fnaval has joined #openstack-lbaas14:27
*** sapd__ has joined #openstack-lbaas14:27
*** salmankhan has joined #openstack-lbaas14:28
*** slaweq_ has joined #openstack-lbaas14:28
*** sapd_ has quit IRC14:31
*** slaweq_ has quit IRC14:32
*** slaweq_ has joined #openstack-lbaas14:38
*** aojea_ has quit IRC14:39
*** slaweq_ has quit IRC14:42
*** yamamoto has joined #openstack-lbaas14:43
*** slaweq_ has joined #openstack-lbaas14:48
*** yamamoto has quit IRC14:49
openstackgerritCarlos Goncalves proposed openstack/octavia master: [WIP] Add grenade support  https://review.openstack.org/54965414:50
*** slaweq_ has quit IRC14:53
*** slaweq_ has joined #openstack-lbaas14:58
*** sticker has quit IRC15:03
*** slaweq_ has quit IRC15:03
*** slaweq_ has joined #openstack-lbaas15:09
xgerman_cgoncalves:  any idea why the pip install doesn’t work for https://review.openstack.org/#/c/549259/15:13
xgerman_apt install15:13
xgerman_maybe should ask the people in infra15:13
*** slaweq_ has quit IRC15:13
openstackgerritGerman Eichberger proposed openstack/octavia master: [WIP] Periodic job to build + publish diskimage  https://review.openstack.org/54925915:17
xgerman_n.m. was silly15:17
*** slaweq_ has joined #openstack-lbaas15:19
nmagnezi_xgerman_, johnsom, rm_work, Zuul shows that our CI for neutron-lbaas seems to be broken (tested with https://review.openstack.org/#/c/550085/ )15:20
xgerman_mmh15:20
xgerman_johnsom is still in Ireland sipping whiskey (I hope) — and my Zuul isn’t strong15:21
xgerman_but will check15:21
dayouzuul seems to be broken at the moment15:22
xgerman_ok, then let’s wait - if there are logs I can sift through let me know15:23
*** slaweq_ has quit IRC15:23
nmagnezi_dayou, is that so? you get bad results for the dashboard patches?15:24
*** links has quit IRC15:24
*** slaweq has joined #openstack-lbaas15:29
cgoncalvesnmagnezi_, dayou, xgerman_, johnsom, rm_work: ostestr is no more. I'll put a patch up for review switching to stestr15:31
cgoncalvesref: http://lists.openstack.org/pipermail/openstack-dev/2017-September/122135.html15:31
xgerman_thx15:32
*** slaweq has quit IRC15:34
*** slaweq has joined #openstack-lbaas15:39
*** rcernin has quit IRC15:43
*** slaweq has quit IRC15:44
*** yamamoto has joined #openstack-lbaas15:45
openstackgerritCarlos Goncalves proposed openstack/octavia master: Migrate to stestr  https://review.openstack.org/55013415:46
cgoncalvesxgerman_, nmagnezi_: ^15:46
*** slaweq has joined #openstack-lbaas15:49
*** yamamoto has quit IRC15:51
*** slaweq has quit IRC15:53
*** slaweq has joined #openstack-lbaas15:54
*** aojea_ has joined #openstack-lbaas15:57
xgerman_let’s wait until it passes Zuul :-)16:02
xgerman_+ we will need a patch for our other 3 projects cgoncalves16:02
*** kobis has quit IRC16:03
cgoncalvesxgerman_: on it! :)16:03
xgerman_sweet16:04
cgoncalvesxgerman_: actually we don't. they don't use ostestr16:05
xgerman_nice. even lbaasv216:05
xgerman_n-lbaas16:05
cgoncalvesgrep returns no results for 'ostestr' :)16:06
cgoncalveswe probably want to switch all to stestr. currently set to run 'python setup.py test [...]' in tox.ini16:10
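(Editor's note: the tox.ini change being discussed generally boils down to a fragment like the one below; the exact flags vary per repo, so this is illustrative, not the merged patch.)

```ini
# tox.ini (illustrative fragment): replace 'python setup.py test' /
# ostestr invocations with stestr
[testenv]
commands =
    stestr run {posargs}
```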
xgerman_yeah16:15
*** sapd__ has quit IRC16:17
*** sapd__ has joined #openstack-lbaas16:17
*** sapd__ has quit IRC16:19
*** sapd__ has joined #openstack-lbaas16:20
*** aojea_ has quit IRC16:23
*** kevinbenton has quit IRC16:41
*** kevinbenton has joined #openstack-lbaas16:43
*** yamamoto has joined #openstack-lbaas16:47
*** yamamoto has quit IRC16:53
cgoncalvesuh! neutron-lbaasv2-dsvm-api failing with a couple of "One or more ports have an IP allocation from this subnet" errors16:54
*** AlexeyAbashkin has quit IRC16:57
openstackgerritGerman Eichberger proposed openstack/octavia master: [WIP] Periodic job to build + publish diskimage  https://review.openstack.org/54925917:03
xgerman_ouch17:03
johnsomcgoncalves: that is usually the side effect of another failure17:12
cgoncalvesjohnsom: is your wife distracted reading? xD17:13
cgoncalvesok, I'll take a deeper look. I see same failure on other patches17:13
johnsomYes actually17:13
johnsomYeah, those tests will give that on cleanup if something else bombed17:14
johnsomLook in o-cw and o-api17:15
cgoncalveswill do. thanks for the pointers!17:15
*** yamamoto has joined #openstack-lbaas17:49
*** aojea_ has joined #openstack-lbaas17:51
*** yamamoto has quit IRC17:54
rm_workprometheanfire: cool18:08
*** salmankhan has quit IRC18:12
*** salmankhan has joined #openstack-lbaas18:15
xgerman_+118:16
*** salmankhan has quit IRC18:20
*** AlexeyAbashkin has joined #openstack-lbaas18:26
*** tesseract has quit IRC18:27
*** AlexeyAbashkin has quit IRC18:30
*** harlowja has joined #openstack-lbaas18:39
openstackgerritGerman Eichberger proposed openstack/octavia master: [WIP] Periodic job to build + publish diskimage  https://review.openstack.org/54925918:43
imacdonnrm_work: I *think* I got my SSL problem solved. Seems it was the clock after all: I noticed that the amphora-agent was starting before ntpd. I made two changes - enabled the "ntpdate" service in addition to "ntpd", and made the agent start "After" time-sync.target ... so far, it's consistently worked since then18:45
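(Editor's note: the ordering fix imacdonn describes can be expressed as a systemd drop-in along these lines; the unit and file names here are assumptions about the amphora image, not its verified layout.)

```ini
# /etc/systemd/system/amphora-agent.service.d/time-sync.conf (hypothetical path)
[Unit]
# Hold the agent until the clock is synced, so TLS certificate
# validity checks don't fail against a skewed clock.
After=time-sync.target
Wants=time-sync.target
```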
*** yamamoto has joined #openstack-lbaas18:50
*** yamamoto has quit IRC18:54
*** aojea_ has quit IRC19:00
rm_workwe need to merge a similar patch in neutron-lbaas19:01
rm_worksince we co-gate it in octavia19:01
*** ivve has joined #openstack-lbaas19:01
rm_workor figure out why it's failing19:03
cgoncalvesrm_work: are you referring to neutron-lbaasv2-dsvm-api?19:04
rm_workyeah, looks like it's failing on something19:04
cgoncalvesrm_work: have you found the issue?19:04
rm_worknot yet19:04
rm_workjust started looking19:04
rm_workonce i realized it wasn't just ostestr19:04
cgoncalvesrm_work: issue is _vip port_id still exists19:05
cgoncalvesFound port (8e036024-20e5-4fd1-a3ee-66f0efa33c7d, 10.1.0.10) having IP allocation on subnet f7f4dde8-ff91-4c0c-89dd-81a4d114d48c, cannot delete19:06
cgoncalvesCalling delete_port for 8e036024-20e5-4fd1-a3ee-66f0efa33c7d owned by neutron:LOADBALANCERV2 {{(pid=13582) delete_port /opt/stack/new/neutron/neutron/plugins/ml2/plugin.py:1566}}19:06
rm_workis that on the neutron side?19:07
cgoncalvesyes19:07
cgoncalveshttp://logs.openstack.org/54/549654/4/check/neutron-lbaasv2-dsvm-api/0eaaf0c/logs/screen-q-svc.txt.gz19:07
cgoncalveshmmm I think we're not waiting for the port deletion to complete19:09
cgoncalvesneutron-server[13497]: DEBUG neutron.plugins.ml2.plugin [None req-3e6b133d-4502-46a7-a09d-84baa618a581 None None] Deleting port 8e036024-20e5-4fd1-a3ee-66f0efa33c7d {{(pid=13582) _pre_delete_port /opt/stack/new/neutron/neutron/plugins/ml2/plugin.py:1491}}19:10
cgoncalves^ here it starts19:10
cgoncalvesneutron-server[13497]: DEBUG neutron.api.rpc.handlers.resources_rpc [None req-3e6b133d-4502-46a7-a09d-84baa618a581 None None] Pushing event deleted for resources: {'Port': ['ID=8e036024-20e5-4fd1-a3ee-66f0efa33c7d,revision_number=None']} {{(pid=13582) push /opt/stack/new/neutron/neutron/api/rpc/handlers/resources_rpc.py:241}}19:10
cgoncalves^ here it notifies port delete completed19:10
cgoncalvesbut in the middle we try to delete subnet where port is attached19:11
rm_workah you're looking at a different change than i am19:12
rm_worki was looking at the failure on https://review.openstack.org/#/c/550134/19:12
cgoncalvesrm_work: different patch, same failure :)19:13
rm_workyeah prolly19:14
rm_workhmmm so what changed19:17
rm_worksince like, two days ago19:17
rm_workor whenever our last +1 from zuul was19:17
cgoncalvesrm_work: shouldn't we be deleting vip_port_id in https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/tests/tempest/v2/api/base.py#L86 ?19:21
rm_worki think it's created as part of the LB?19:22
rm_workso it should be deleted before the LB is finished and DELETED status19:22
rm_workah19:23
rm_workhttps://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/tests/tempest/v2/api/base.py#L12519:23
rm_workwe're missing a cls._wait_for_load_balancer_status(lb_id)19:23
rm_workafter the delete19:23
rm_workwe need status to go to DELETED19:24
rm_workwe can't just say "delete" and then move on19:24
rm_worki can fix this19:24
cgoncalvesrm_work: cool!19:24
cgoncalvesrm_work: I think there's some cleanup waiter we could use19:24
rm_workyes19:25
rm_workspecifically that one19:25
cgoncalves_wait_for_neutron_port_delete19:25
rm_workno19:25
rm_workcls._wait_for_load_balancer_status(lb_id)19:25
rm_workwe just tell it to wait for status DELETED19:25
rm_work(out of the main loop, one sec)19:25
rm_workalmost have it already19:25
cgoncalvesthat would also work :D19:26
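(Editor's note: a minimal sketch of the "wait until DELETED or 404" polling the fix adds on cleanup. The status field, NotFound exception, and client call are stand-ins, not the actual tempest helpers in base.py.)

```python
import time


class NotFound(Exception):
    """Stand-in for tempest's HTTP 404 exception."""


def wait_for_lb_deleted(get_lb, timeout=60, interval=1):
    """Poll until the LB reports DELETED, or the GET 404s (record gone)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            lb = get_lb()
        except NotFound:
            # A 404 means the delete completed and the record is gone.
            return True
        if lb.get('provisioning_status') == 'DELETED':
            return True
        time.sleep(interval)
    raise TimeoutError('load balancer was not deleted within %ss' % timeout)
```

Either exit condition works: some backends flip provisioning_status to DELETED before the row disappears, while others start returning 404 right away, which is why the patch treats both as success.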
openstackgerritAdam Harwell proposed openstack/neutron-lbaas master: Wait for LB deletes on cleanup  https://review.openstack.org/55022219:27
rm_work^^19:27
rm_workerr19:28
rm_workdamnit19:28
rm_workbad copy/paste19:28
openstackgerritAdam Harwell proposed openstack/neutron-lbaas master: Wait for LB deletes on cleanup  https://review.openstack.org/55022219:28
rm_workthere we go19:28
rm_workthough honestly that'd work anyway i think because the deleted=True kinda ends up being the thing that hits19:29
cgoncalvesyeah was about to comment on that ;)19:29
rm_workwhen it is finally deleted, it's not going to come back as status DELETED19:29
rm_workit's going to 40419:29
cgoncalvesrm_work: hmmm operating_status='OFFLINE' too no?19:30
openstackgerritGerman Eichberger proposed openstack/octavia master: [WIP] Switch amphora agent to use privsep  https://review.openstack.org/54929519:30
rm_workno19:30
rm_workoperating status doesn't matter19:30
rm_workit's a user status19:30
cgoncalveshttps://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/tests/tempest/v2/api/base.py#L19819:30
rm_workhmmmmmmmmm19:31
rm_workthat's a bad check I think19:31
rm_workI should patch that out a little differently19:31
cgoncalvesrm_work: one may want to wait for operating_status to toggle to whatever is being desired so in that sense makes sense having it there too19:33
cgoncalvese.g. wait for load balancer to be fully operational19:33
rm_workfixed19:34
openstackgerritAdam Harwell proposed openstack/neutron-lbaas master: Wait for LB deletes on cleanup  https://review.openstack.org/55022219:34
rm_workupdated the review but the bot didn't mention it19:34
rm_workthat is what it SHOULD do19:34
rm_workah there goes the bot19:35
cgoncalvesLGTM19:35
rm_worki should check whether the tests that do care about operating_status actually check it explicitly as they should though19:36
cgoncalvesI wonder what changed to cause the test failures now19:36
rm_workI could do it kinda "opposite" to maintain backwards compat19:36
rm_workit's dumb, but...19:36
rm_workmaybe would be a lighter touch approach19:36
rm_workyeah this should basically always have been an issue19:37
openstackgerritAdam Harwell proposed openstack/neutron-lbaas master: Wait for LB deletes on cleanup  https://review.openstack.org/55022219:38
rm_workthere, fixed it to be backwards compatible19:38
cgoncalvesperhaps jobs are running on faster machines now, so the port doesn't get deleted before the LB delete xD19:38
rm_workeven though it's a little dumb19:38
cgoncalvesneutron-lbaas is in life-support...19:38
rm_workok one more and i'm done19:39
openstackgerritAdam Harwell proposed openstack/neutron-lbaas master: Wait for LB deletes on cleanup  https://review.openstack.org/55022219:39
cgoncalveshmmm yeah dumb but backward compatible now19:39
xgerman_well, we need to make installing Octavia easier…19:39
rm_workbecause that if/else structure was dumb19:39
rm_workyou can depends-on that one19:41
rm_workin the octavia test19:41
rm_workerr, the octavia stestr patch19:41
rm_workshould work19:41
openstackgerritCarlos Goncalves proposed openstack/octavia master: Migrate to stestr  https://review.openstack.org/55013419:42
cgoncalves^^19:42
*** yamamoto has joined #openstack-lbaas19:50
xgerman_cgoncalves: its 9 pm in Germany. Wonder what your working hours are?19:52
xgerman_or is that all happening in some Brauhaus ;-)19:52
rm_workwell, we caught each other on each side of our working hours, i think :P19:53
xgerman_likely, I normally aim for a core of 8-3 sometimes start at 6, sometimes end at 519:54
*** yamamoto has quit IRC19:56
openstackgerritGerman Eichberger proposed openstack/octavia master: [WIP] Periodic job to build + publish diskimage  https://review.openstack.org/54925920:18
rm_worki aim for a core of 12pm-4pm PST and 12am-4am PST20:32
openstackgerritGerman Eichberger proposed openstack/octavia master: [WIP] Periodic job to build + publish diskimage  https://review.openstack.org/54925920:51
*** yamamoto has joined #openstack-lbaas20:52
openstackgerritGerman Eichberger proposed openstack/octavia master: [WIP] Switch amphora agent to use privsep  https://review.openstack.org/54929520:52
cgoncalvesxgerman_: heh I could tell you the same :)20:57
cgoncalvesrm_work: your patch fixes 3 out of 4 tempest failures20:57
*** yamamoto has quit IRC20:57
rm_workhmm20:58
xgerman_cgoncalves: I am normally on PST so my team wouldn’t appreciate me switching time zones on them20:58
*** gcheresh has joined #openstack-lbaas20:59
rm_workwhich one still happened20:59
cgoncalvesxgerman_: I see. my excuse is girlfriend being out this week :)20:59
cgoncalvesrm_work: lemme check. I was on mobile monitoring :)20:59
xgerman_those are the most productive weeks :-)21:00
rm_workyeah i mean, my wife and i are in different countries until next week21:00
*** aojea_ has joined #openstack-lbaas21:01
cgoncalvesrm_work: neutron_lbaas.tests.tempest.v2.api.test_health_monitor_admin.TestHealthMonitors21:01
rm_worksame issue tho?21:02
cgoncalveshttp://paste.openstack.org/show/693460/21:02
cgoncalvesyeah, same issue it seems21:02
rm_workhmm21:02
cgoncalvespy27 and py35 also kaput21:03
rm_worki wonder if there's multiple places that happens21:03
rm_worksame failures?21:03
cgoncalveshttp://logs.openstack.org/22/550222/5/check/openstack-tox-py35/d70dfa1/job-output.txt.gz21:03
cgoncalvesbroken pipe21:03
cgoncalvestools/pretty_tox.sh: 12: tools/pretty_tox.sh: subunit-trace: not found21:04
rm_workerrr21:04
rm_workah right21:04
rm_worki think that's because that file doesn't get created unless the tests pass21:04
rm_work... maybe21:04
cgoncalvesok that one is part of ostestr switch to stestr21:05
*** aojea_ has quit IRC21:05
*** prometheanfire has quit IRC21:10
rm_workhmmmm21:10
rm_workthis really should take care of it21:10
-openstackstatus- NOTICE: The infrastructure team is aware of replication issues between review.openstack.org and github.com repositories. We're planning a maintenance to try and address the issue. We recommend using our official supported mirrors instead located at https://git.openstack.org.21:18
*** gcheresh has quit IRC21:23
openstackgerritGerman Eichberger proposed openstack/octavia master: [WIP] Periodic job to build + publish diskimage  https://review.openstack.org/54925921:35
imacdonnyay... found another way to get a LB into limbo state (immutable)21:44
imacdonnJust create a LB, then try to create a listener for it, specifying a secret container that the octavia user doesn't have ACL access to21:44
*** irenab has left #openstack-lbaas21:47
rm_workthat should reject on the initial create call21:47
imacdonnit didn't21:48
rm_workI thought i had a test specifically for that...21:48
* rm_work looks21:48
imacdonnhttp://paste.openstack.org/show/693467/21:49
rm_workyeah we should expect that21:50
rm_workand reject21:50
rm_worknot sure how that got through to RPC21:50
rm_workthat should reject on the API side21:50
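(Editor's note: a hedged sketch of the API-side rejection rm_work describes: verify the TLS container is readable before the create is handed off over RPC, so a bad Barbican ACL yields an immediate error instead of an immutable load balancer. ForbiddenSecret and cert_manager.get_cert are illustrative names, not Octavia's actual cert-manager interface.)

```python
class ForbiddenSecret(Exception):
    """Stand-in for a 'secret not accessible' error from the cert store."""


def validate_listener_tls(cert_manager, container_ref):
    """Fail fast if the service user cannot read the secret container."""
    try:
        cert_manager.get_cert(container_ref)
    except ForbiddenSecret:
        # Reject at the API layer (HTTP 400) rather than casting to the
        # worker and leaving the LB stuck in a pending/immutable state.
        raise ValueError(
            'TLS container %s is not accessible; grant the service user '
            'read access via ACLs' % container_ref)
```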
imacdonnshall I create a story for this ?21:53
rm_workhmmm21:53
rm_workah maybe I am thinking of the n-lbaas side21:53
rm_worktracing through21:53
rm_workand writing a test for it now to see what happens21:54
imacdonnk21:54
*** yamamoto has joined #openstack-lbaas21:54
*** threestrands has joined #openstack-lbaas21:54
*** yamamoto has quit IRC21:59
openstackgerritGerman Eichberger proposed openstack/octavia master: [WIP] Periodic job to build + publish diskimage  https://review.openstack.org/54925922:02
*** aojea_ has joined #openstack-lbaas22:02
*** aojea_ has quit IRC22:07
*** aojea_ has joined #openstack-lbaas22:12
*** aojea_ has quit IRC22:16
*** rcernin has joined #openstack-lbaas22:20
*** jdavis has joined #openstack-lbaas22:27
openstackgerritGerman Eichberger proposed openstack/octavia master: [WIP] Periodic job to build + publish diskimage  https://review.openstack.org/54925922:31
*** AlexeyAbashkin has joined #openstack-lbaas22:48
openstackgerritCarlos Goncalves proposed openstack/neutron-lbaas master: Switch to stestr  https://review.openstack.org/55028222:49
cgoncalvesrm_work: ^ will hopefully fix the py{27,35} test failures22:49
*** AlexeyAbashkin has quit IRC22:52
*** yamamoto has joined #openstack-lbaas22:56
*** yamamoto has quit IRC23:02
*** atoth has quit IRC23:03
*** atoth has joined #openstack-lbaas23:04
*** yamamoto has joined #openstack-lbaas23:06
openstackgerritCarlos Goncalves proposed openstack/neutron-lbaas master: Switch to stestr  https://review.openstack.org/55028223:10
rm_workcgoncalves: i think they'll have to be in the same patch23:11
rm_worksince we'll need one patch that will pass for everything23:12
rm_workotherwise no merging23:12
*** slaweq has quit IRC23:13
cgoncalvesrm_work: infinite depends-on loop :D23:14
cgoncalvesrm_work: I guess you're right. I can't think of other way to have them separated23:15
openstackgerritGerman Eichberger proposed openstack/octavia master: Periodic job to build + publish diskimage  https://review.openstack.org/54925923:16
cgoncalvesd'oh! stupid .gitignore :P23:20
openstackgerritCarlos Goncalves proposed openstack/neutron-lbaas master: Switch to stestr  https://review.openstack.org/55028223:22
openstackgerritGerman Eichberger proposed openstack/octavia master: Periodic job to build + publish diskimage  https://review.openstack.org/54925923:24
rm_workcgoncalves: either copy in my change to yours and add a Co-Authored-By line for me23:47
rm_workor copy your change into mine and add a Co-Authored-By line for you23:47
rm_workand abandon one of the two23:47
*** jdavis has quit IRC23:48
rm_workwe're going to need them to be the same change23:48
cgoncalvesrm_work: agreed, I can do that23:48
cgoncalvespy27 and py35 are succeeding with my latest patch set btw23:48
openstackgerritCarlos Goncalves proposed openstack/neutron-lbaas master: Switch to stestr and wait for LB delete on cleanup  https://review.openstack.org/55028223:52
*** fnaval has quit IRC23:53
cgoncalvesrm_work: ^23:53
rm_workk23:55
rm_workstill need to figure out how that port is persisting tho23:55
cgoncalvesI need to go now. I'll catch up in 7h-ish23:56
imacdonnpfft .. part-timers23:57
cgoncalves:D23:57
imacdonn:) g'nite23:57
