openstackgerrit | Ian Wienand proposed zuul/zuul-jobs master: build-container-image: support sibling copy https://review.opendev.org/697936 | 00:00 |
openstackgerrit | Ian Wienand proposed zuul/zuul-jobs master: build-docker-image: fix up siblings copy https://review.opendev.org/697614 | 00:00 |
openstackgerrit | Ian Wienand proposed zuul/zuul-jobs master: build-container-image: support sibling copy https://review.opendev.org/697936 | 00:14 |
openstackgerrit | Ian Wienand proposed zuul/zuul-jobs master: build-docker-image: fix up siblings copy https://review.opendev.org/697614 | 00:14 |
openstackgerrit | Merged zuul/zuul-jobs master: Fixes tox log fetching when envlist is set to 'ALL' https://review.opendev.org/696531 | 00:18 |
*** irclogbot_0 has joined #openstack-infra | 00:22 | |
*** irclogbot_0 has quit IRC | 00:40 | |
*** jamesmcarthur has joined #openstack-infra | 00:40 | |
*** sgw has quit IRC | 01:11 | |
*** jamesmcarthur has quit IRC | 01:17 | |
*** stevebaker_ has joined #openstack-infra | 01:20 | |
*** irclogbot_0 has joined #openstack-infra | 01:26 | |
*** stevebaker_ has quit IRC | 01:33 | |
*** yamamoto has joined #openstack-infra | 01:39 | |
*** stevebaker_ has joined #openstack-infra | 01:41 | |
*** irclogbot_0 has quit IRC | 01:48 | |
*** ddurst has quit IRC | 01:50 | |
*** ddurst has joined #openstack-infra | 01:52 | |
*** yamamoto has quit IRC | 02:34 | |
*** irclogbot_1 has joined #openstack-infra | 02:46 | |
*** slaweq has joined #openstack-infra | 02:54 | |
*** slaweq has quit IRC | 03:02 | |
*** ykarel|away has joined #openstack-infra | 03:03 | |
*** slaweq has joined #openstack-infra | 03:11 | |
*** slaweq has quit IRC | 03:15 | |
*** ramishra has joined #openstack-infra | 03:23 | |
*** slaweq has joined #openstack-infra | 03:32 | |
*** slaweq has quit IRC | 03:37 | |
*** yamamoto has joined #openstack-infra | 03:41 | |
*** slaweq has joined #openstack-infra | 03:49 | |
*** slaweq has quit IRC | 03:54 | |
*** ricolin has joined #openstack-infra | 04:02 | |
*** slaweq has joined #openstack-infra | 04:09 | |
*** jamesmcarthur has joined #openstack-infra | 04:11 | |
*** ykarel|away has quit IRC | 04:12 | |
*** jamesmcarthur has quit IRC | 04:16 | |
*** rishabhhpe has joined #openstack-infra | 04:24 | |
*** udesale has joined #openstack-infra | 04:25 | |
*** ykarel|away has joined #openstack-infra | 04:27 | |
openstackgerrit | Ian Wienand proposed opendev/system-config master: Add roles for a basic static server https://review.opendev.org/697587 | 04:28 |
*** factor has quit IRC | 04:31 | |
*** factor has joined #openstack-infra | 04:32 | |
*** ykarel|away is now known as ykarel | 04:39 | |
ianw | AJaeger: when you get a second, can you double check my comment in https://review.opendev.org/#/c/697587/2/playbooks/roles/static/files/50-governance.openstack.org.conf that we're publishing all of this to the right location | 04:41 |
*** jamesmcarthur has joined #openstack-infra | 04:42 | |
*** slaweq has quit IRC | 04:45 | |
*** jamesmcarthur has quit IRC | 04:48 | |
openstackgerrit | Ian Wienand proposed opendev/system-config master: Add roles for a basic static server https://review.opendev.org/697587 | 04:52 |
openstackgerrit | Ian Wienand proposed opendev/system-config master: Add roles for a basic static server https://review.opendev.org/697587 | 05:07 |
*** jamesmcarthur has joined #openstack-infra | 05:15 | |
*** surpatil has joined #openstack-infra | 05:15 | |
*** jamesmcarthur has quit IRC | 05:20 | |
*** jamesmcarthur has joined #openstack-infra | 05:25 | |
*** jamesmcarthur has quit IRC | 05:30 | |
*** soniya29 has joined #openstack-infra | 05:33 | |
*** janki has joined #openstack-infra | 05:39 | |
*** rishabhhpe has quit IRC | 05:42 | |
*** rishabhhpe has joined #openstack-infra | 05:42 | |
*** raukadah is now known as chkumar|ruck | 06:04 | |
*** jamesmcarthur has joined #openstack-infra | 06:26 | |
*** jamesmcarthur has quit IRC | 06:31 | |
*** jchhatbar has joined #openstack-infra | 07:01 | |
*** janki has quit IRC | 07:01 | |
*** jchhatbar has quit IRC | 07:01 | |
*** apetrich has joined #openstack-infra | 07:03 | |
*** jamesmcarthur has joined #openstack-infra | 07:05 | |
*** apetrich has quit IRC | 07:08 | |
*** AJaeger has quit IRC | 07:09 | |
*** jamesmcarthur has quit IRC | 07:11 | |
*** rcernin has quit IRC | 07:12 | |
*** AJaeger has joined #openstack-infra | 07:13 | |
AJaeger | ianw: left a comment on the change, looks like you missed "/projects" | 07:22 |
AJaeger | https://docs.opendev.org/opendev/infra-specs/latest/specs/retire-static.html also has "/afs/openstack.org/project/governance.openstack.org" but I don't see that /project in the change | 07:23 |
*** slaweq has joined #openstack-infra | 07:25 | |
*** pgaxatte has joined #openstack-infra | 07:34 | |
*** surpatil is now known as surpatil|lunch | 07:35 | |
*** ykarel is now known as ykarel|lunch | 07:38 | |
*** adriant has quit IRC | 07:52 | |
*** adriant has joined #openstack-infra | 07:52 | |
*** igordc has joined #openstack-infra | 08:01 | |
*** rishabhhpe has quit IRC | 08:02 | |
*** jamesmcarthur has joined #openstack-infra | 08:07 | |
*** tkajinam has quit IRC | 08:07 | |
*** pkopec has joined #openstack-infra | 08:09 | |
*** sshnaidm|off is now known as sshnaidm | 08:11 | |
*** jamesmcarthur has quit IRC | 08:11 | |
*** ykarel|lunch is now known as ykarel | 08:11 | |
*** rishabhhpe has joined #openstack-infra | 08:12 | |
ianw | AJaeger: ok, thanks for that, i'll look at all the paths more closely. but do you agree we publish /elections /tc /uc /sigs (i.e. your change catches all of these publish paths?) | 08:15 |
*** jtomasek has joined #openstack-infra | 08:15 | |
*** witek has joined #openstack-infra | 08:15 | |
*** dchen has quit IRC | 08:18 | |
AJaeger | yes, that is fine as far as I can see... | 08:20 |
*** tosky has joined #openstack-infra | 08:25 | |
*** tesseract has joined #openstack-infra | 08:30 | |
*** iurygregory has joined #openstack-infra | 08:31 | |
*** surpatil|lunch is now known as surpatil | 08:31 | |
*** tesseract has quit IRC | 08:31 | |
*** tesseract has joined #openstack-infra | 08:31 | |
*** igordc has quit IRC | 08:36 | |
*** jamesmcarthur has joined #openstack-infra | 08:46 | |
*** jamesmcarthur has quit IRC | 08:51 | |
*** piotrowskim has joined #openstack-infra | 08:57 | |
*** ralonsoh has joined #openstack-infra | 08:57 | |
*** udesale has quit IRC | 09:01 | |
*** ykarel_ has joined #openstack-infra | 09:03 | |
*** ykarel has quit IRC | 09:06 | |
*** yolanda has joined #openstack-infra | 09:09 | |
*** rcernin has joined #openstack-infra | 09:11 | |
*** lucasagomes has joined #openstack-infra | 09:11 | |
*** rpittau|afk is now known as rpittau | 09:15 | |
*** gfidente has joined #openstack-infra | 09:15 | |
*** yamamoto has quit IRC | 09:17 | |
ianw | AJaeger: great, thanks, will fix up those paths tomorrow, hopefully should be working | 09:24 |
*** kopecmartin|off is now known as kopecmartin | 09:27 | |
*** kjackal has joined #openstack-infra | 09:27 | |
*** hashar has joined #openstack-infra | 09:29 | |
*** Xuchu has joined #openstack-infra | 09:34 | |
*** derekh has joined #openstack-infra | 09:38 | |
*** ijw has joined #openstack-infra | 09:39 | |
*** jonher has quit IRC | 09:40 | |
*** jonher has joined #openstack-infra | 09:41 | |
*** apetrich has joined #openstack-infra | 09:42 | |
*** ijw has quit IRC | 09:43 | |
*** jamesmcarthur has joined #openstack-infra | 09:47 | |
*** jamesmcarthur has quit IRC | 09:52 | |
*** ykarel__ has joined #openstack-infra | 09:54 | |
*** ykarel__ is now known as ykarel | 09:54 | |
*** ykarel_ has quit IRC | 09:56 | |
*** dtantsur|afk is now known as dtantsur | 10:02 | |
*** yamamoto has joined #openstack-infra | 10:03 | |
*** yamamoto has quit IRC | 10:04 | |
*** udesale has joined #openstack-infra | 10:08 | |
*** witek has quit IRC | 10:10 | |
*** rcernin has quit IRC | 10:14 | |
*** ssbarnea has quit IRC | 10:16 | |
*** jamesmcarthur has joined #openstack-infra | 10:28 | |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: enqueue: make trigger deprecated https://review.opendev.org/695446 | 10:30 |
*** udesale has quit IRC | 10:31 | |
*** jamesmcarthur has quit IRC | 10:33 | |
*** Xuchu_ has joined #openstack-infra | 10:49 | |
*** Xuchu has quit IRC | 10:51 | |
*** Xuchu_ is now known as Xuchu | 10:51 | |
*** jtomasek has quit IRC | 10:54 | |
*** ssbarnea has joined #openstack-infra | 10:55 | |
*** jtomasek has joined #openstack-infra | 10:57 | |
*** ykarel is now known as ykarel|afk | 10:58 | |
*** yamamoto has joined #openstack-infra | 10:58 | |
*** yamamoto has quit IRC | 11:03 | |
*** rishabhhpe has quit IRC | 11:05 | |
*** witek has joined #openstack-infra | 11:11 | |
*** yamamoto has joined #openstack-infra | 11:13 | |
*** Lucas_Gray has joined #openstack-infra | 11:20 | |
*** jamesmcarthur has joined #openstack-infra | 11:29 | |
*** jamesmcarthur has quit IRC | 11:33 | |
*** Xuchu has quit IRC | 11:39 | |
*** pcaruana has joined #openstack-infra | 11:46 | |
*** yamamoto has quit IRC | 11:49 | |
*** yamamoto has joined #openstack-infra | 11:50 | |
*** hashar has quit IRC | 11:56 | |
openstackgerrit | Albin Vass proposed zuul/nodepool master: Aws cloud-image is referred to from pool labels section https://review.opendev.org/697998 | 11:56 |
*** witek has quit IRC | 12:03 | |
*** jamesmcarthur has joined #openstack-infra | 12:06 | |
*** yamamoto has quit IRC | 12:08 | |
*** jamesmcarthur has quit IRC | 12:11 | |
*** ykarel|afk is now known as ykarel | 12:11 | |
*** dave-mccowan has joined #openstack-infra | 12:19 | |
*** dave-mccowan has quit IRC | 12:25 | |
*** udesale has joined #openstack-infra | 12:33 | |
*** hashar has joined #openstack-infra | 12:37 | |
*** yamamoto has joined #openstack-infra | 12:45 | |
*** witek has joined #openstack-infra | 12:54 | |
*** rlandy has joined #openstack-infra | 12:59 | |
*** jamesmcarthur has joined #openstack-infra | 13:00 | |
*** rosmaita has joined #openstack-infra | 13:02 | |
lennyb | clarkb: pls review small ovs-br create patch https://review.opendev.org/#/c/693850/ | 13:09 |
donnyd | Just so there is some sort of status update on the FN log storage. I am still cleaning up the swift servers and getting them back to a state where they are actually ready for production. Thanks to all in the swift channel who have been super helpful thus far | 13:11 |
*** jamesmcarthur has quit IRC | 13:15 | |
*** jamesmcarthur has joined #openstack-infra | 13:15 | |
*** rh-jelabarre has joined #openstack-infra | 13:16 | |
*** Xuchu has joined #openstack-infra | 13:17 | |
openstackgerrit | Fabien Boucher proposed zuul/zuul master: Pagure: remove connectors burden and simplify code https://review.opendev.org/696134 | 13:19 |
*** goldyfruit_ has quit IRC | 13:19 | |
openstackgerrit | Fabien Boucher proposed zuul/zuul master: Pagure: remove connectors burden and simplify code https://review.opendev.org/696134 | 13:20 |
*** apetrich has quit IRC | 13:21 | |
*** Lucas_Gray has quit IRC | 13:26 | |
*** zbr has quit IRC | 13:27 | |
*** kjackal has quit IRC | 13:29 | |
*** kjackal has joined #openstack-infra | 13:30 | |
*** zbr has joined #openstack-infra | 13:32 | |
openstackgerrit | Matthieu Huin proposed zuul/zuul master: enqueue: make trigger deprecated https://review.opendev.org/695446 | 13:33 |
*** jamesmcarthur has quit IRC | 13:34 | |
*** jamesden_ has joined #openstack-infra | 13:35 | |
*** Goneri has joined #openstack-infra | 13:35 | |
*** jamesdenton has quit IRC | 13:36 | |
*** lmiccini has joined #openstack-infra | 13:43 | |
*** jamesmcarthur has joined #openstack-infra | 13:45 | |
*** ociuhandu has joined #openstack-infra | 13:51 | |
*** Xuchu has quit IRC | 13:57 | |
*** mriedem has joined #openstack-infra | 13:58 | |
*** jamesmcarthur has quit IRC | 14:15 | |
*** jamesmcarthur has joined #openstack-infra | 14:20 | |
*** soniya29 has quit IRC | 14:22 | |
openstackgerrit | Tobias Henkel proposed zuul/nodepool master: Add back-off mechanism for image deletions https://review.opendev.org/698023 | 14:27 |
*** jamesmcarthur has quit IRC | 14:33 | |
*** ykarel is now known as ykarel|afk | 14:36 | |
*** eharney has joined #openstack-infra | 14:41 | |
*** lpetrut has joined #openstack-infra | 14:41 | |
openstackgerrit | Tobias Henkel proposed zuul/nodepool master: Add back-off mechanism for image deletions https://review.opendev.org/698023 | 14:44 |
*** goldyfruit has joined #openstack-infra | 14:44 | |
*** soniya29 has joined #openstack-infra | 14:44 | |
*** jamesmcarthur has joined #openstack-infra | 14:46 | |
*** jamesmcarthur has quit IRC | 14:46 | |
*** jamesmcarthur has joined #openstack-infra | 14:46 | |
*** soniya29 has quit IRC | 14:52 | |
*** surpatil has quit IRC | 14:54 | |
*** beekneemech is now known as bnemec | 15:00 | |
sshnaidm | The message you sent to legal-discuss@lists.openstack.org hasn't been delivered yet due to: Recipient email address is possibly incorrect. Domain has no MX records or is invalid | 15:09 |
sshnaidm | is that the right address for the mailing list? | 15:09 |
*** sgw has joined #openstack-infra | 15:09 | |
*** jamesmcarthur_ has joined #openstack-infra | 15:09 | |
sshnaidm | Who maintains the openstack mailing lists? | 15:10 |
*** ociuhandu has quit IRC | 15:11 | |
*** jamesmcarthur has quit IRC | 15:13 | |
*** jamesden_ is now known as jamesdenton | 15:15 | |
*** chkumar|ruck is now known as raukadah | 15:19 | |
openstackgerrit | Albin Vass proposed zuul/nodepool master: Keys must be defined for host-key-checking: false https://review.opendev.org/698029 | 15:19 |
*** erobtom has joined #openstack-infra | 15:21 | |
openstackgerrit | Albin Vass proposed zuul/nodepool master: Keys must be defined for host-key-checking: false https://review.opendev.org/698029 | 15:25 |
*** ykarel|afk is now known as ykarel|away | 15:25 | |
*** sgw has quit IRC | 15:28 | |
mordred | sshnaidm: https://docs.openstack.org/infra/system-config/lists.html | 15:29 |
mordred | however - wow: host -t MX lists.openstack.org | 15:29 |
mordred | lists.openstack.org has no MX record | 15:29 |
mordred | that doesn't seem good | 15:29 |
*** ykarel|away has quit IRC | 15:31 | |
*** ykarel|away has joined #openstack-infra | 15:31 | |
erobtom | Hi, I need help with something. | 15:32 |
*** ykarel|away has quit IRC | 15:32 | |
erobtom | It looks like Gerrit at review.opendev.org stopped sending Gerrit events around 22 Nov. | 15:32 |
erobtom | Do I need to contact somebody to enable Gerrit events for our account? | 15:32 |
sshnaidm | mordred, and I don't see this list here: https://opendev.org/opendev/system-config/src/branch/master/modules/openstack_project/manifests/lists.pp | 15:33 |
sshnaidm | mordred, so no legal-discuss list anymore? :( | 15:34 |
mordred | sshnaidm: those docs are out of date I think, sorry. playbooks/host_vars/lists.openstack.org.yaml is where the lists are defined | 15:34 |
*** pgaxatte has quit IRC | 15:35 | |
sshnaidm | legal-discuss-owner: spam | 15:35 |
sshnaidm | great | 15:35 |
mordred | but - the bigger issue here is that there is no MX record in dns for lists.o.o currently | 15:35 |
mordred | I just checked the rax cloud dns for openstack.org and it does not list an MX entry ... I'm not sure what has happened here | 15:36 |
mordred | fungi: you up and lurking by any chance? | 15:36 |
mordred | infra-root: ^^ | 15:36 |
*** udesale has quit IRC | 15:36 | |
mordred | I can obviously add an MX record back - but I'm not 100% up to speed on the moves we've been making WRT dns upgrades and don't want to break anything further | 15:37 |
Shrews | mordred: the last archive date for that list is from july 2019. was it discontinued? | 15:38 |
Shrews | http://lists.openstack.org/pipermail/legal-discuss/ | 15:38 |
fungi | mordred: yeah, looking | 15:38 |
mordred | it's just typically pretty low traffic | 15:38 |
mordred | fungi: I confirmed there's no record in the rax dashboard that I get to via logging in as openstackinfra | 15:38 |
mordred | (which I think is the correct production dns for openstack.org, yes?) | 15:38 |
fungi | sshnaidm: mordred: lists.openstack.org has never had an mx record that i know of | 15:39 |
fungi | smtp says deliver to the mx record *if* there is one, otherwise use the address record | 15:39 |
sshnaidm | fungi, so maybe address of mail list is wrong? | 15:39 |
fungi | it's been working this way forever | 15:39 |
mordred | oh. duh | 15:39 |
mordred | sorry - I'm still just on first coffee of the day | 15:39 |
fungi | mx is meant to be an override in case you want mail for a system delivered somewhere other than the address of its canonical name | 15:40 |
fungi | it says "i don't handle e-mail directly, my mail exchanger is over here instead" | 15:40 |
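fungi's point above is the RFC 5321 resolution order: deliver to the MX if one exists, otherwise fall back to the name's own address record. A minimal sketch of that order, assuming the third-party dnspython package (an assumption; nothing in this log says they use it):

```python
import dns.resolver  # third-party "dnspython" package, assumed installed

def mail_exchangers(domain):
    """Return (preference, host) pairs SMTP should try, per RFC 5321."""
    try:
        answers = dns.resolver.resolve(domain, "MX")
        return sorted((r.preference, str(r.exchange)) for r in answers)
    except dns.resolver.NoAnswer:
        # No MX record: "i don't handle e-mail via a separate exchanger",
        # so fall back to the domain's own A/AAAA record, preference 0.
        return [(0, domain)]

print(mail_exchangers("lists.openstack.org"))  # expect the fallback path
```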
sshnaidm | fungi, can you post something to legal-discuss@lists.openstack.org ? | 15:40 |
*** yamamoto has quit IRC | 15:40 | |
fungi | mailservers refusing to deliver to a domain name which doesn't correspond to an mx record are broken, and rfc-noncompliant, though this is the first time i've ever heard of one doing something so absurd | 15:41 |
fungi | sshnaidm: i expect i can. i have many times in the past. i'm hesitant to send a test message to the list and bother all its subscribers though | 15:42 |
openstackgerrit | Albin Vass proposed zuul/nodepool master: Keys must be defined for host-key-checking: false https://review.opendev.org/698029 | 15:42 |
sshnaidm | fungi, well, I tried twice and I can't | 15:42 |
sshnaidm | will try from different provider maybe.. | 15:43 |
*** lpetrut has quit IRC | 15:46 | |
*** ijw has joined #openstack-infra | 15:49 | |
fungi | sshnaidm: some web searches turn up only people sending through microsoft exchange servers reporting this specific sort of ndr, so it's possible this is a common "feature" some exchange administrators turn on without realizing they're breaking their ability to deliver e-mail from their users | 15:50 |
*** ociuhandu has joined #openstack-infra | 15:51 | |
sshnaidm | fungi, I'm using RH mail, it's going via Gmail | 15:51 |
fungi | very strange, does it go directly to gmail or does it get relayed through another mta? what is the reporting mta in the ndr? it's the same dns records which handle all the list addresses on lists.openstack.org so if you're having trouble getting a message to one, you'd in theory have trouble getting a message to any of them | 15:52 |
mordred | sshnaidm: if this is the topic I think it's about - wanna paste the email you've written and i'll send it for you? that way it'll be both a test message and an appropriate list message if it goes through | 15:52 |
mordred | sshnaidm: (and it would be just as reasonable for it to come from me) | 15:53 |
*** ijw has quit IRC | 15:53 | |
fungi | can you possibly provide the entirety of the ndr? there might be more detail, perhaps it's not the actual error and is merely guidance to the user masking the response from some remote mta | 15:53 |
*** ijw has joined #openstack-infra | 15:53 | |
mordred | I mean - I have an opinion on the answer - but I can still ask the question :) | 15:53 |
sshnaidm | mordred, trying now from my private gmail, if it won't go - let's do it | 15:53 |
mordred | sshnaidm: cool. | 15:53 |
*** witek has quit IRC | 15:54 | |
corvus | sshnaidm: did you send the legal-discuss message the same way you did the openstack-discuss message? | 15:54 |
sshnaidm | corvus, yep | 15:55 |
sshnaidm | it worked from my private gmail | 15:55 |
corvus | sshnaidm: can you paste the entire bounce message you got originally? | 15:55 |
sshnaidm | but didn't from my RH mail | 15:55 |
*** ociuhandu has quit IRC | 15:55 | |
sshnaidm | corvus, http://paste.openstack.org/show/787323/ | 15:56 |
fungi | ahh, yeah, i realize now not everyone knows what "ndr" means, sorry :/ | 15:57 |
*** erobtom has quit IRC | 15:58 | |
fungi | sshnaidm: aha! i saw a lot of hits from confused mimecast when searching the web for that error too | 15:58 |
fungi | "Powered by Mimecast" right there in the non-delivery report | 15:58 |
corvus | it seems very weird to have different behavior for the different recipient addresses | 15:58 |
fungi | er, hits from confused mimecast users | 15:58 |
fungi | "Mimecast is an international company specializing in cloud-based email management for Microsoft Exchange and Microsoft Office 365, including security, archiving, and continuity services to protect business mail." | 15:59 |
fungi | so for whatever reason, that message got shunted through a mimecast service, maybe not all your messages do? | 16:00 |
mordred | wow | 16:00 |
sshnaidm | fungi, it's weird, maybe it's on openstack mail list somewhere? | 16:00 |
mordred | sshnaidm: are you sending the email via smtp from a mail client vs. through the web interface? | 16:00 |
sshnaidm | mordred, web | 16:00 |
mordred | weird. that whole received chain is fascinating | 16:01 |
fungi | sshnaidm: Received: from mimecast-mx02.redhat.com | 16:01 |
fungi | pretty sure that's *not* us | 16:01 |
sshnaidm | fungi, I see.. | 16:01 |
*** goldyfruit_ has joined #openstack-infra | 16:01 | |
*** ijw_ has joined #openstack-infra | 16:01 | |
sshnaidm | fungi, what can we do to satisfy mimecast? :) | 16:01 |
mordred | Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com [10.11.54.3]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id C435E18E9779 for <sshnaidm@gapps.redhat.com>; Mon, | 16:01 |
fungi | sshnaidm: i recommend talking to redhat's mail sysadmins | 16:02 |
fungi | postmaster@redhat.com might reach them, i have no idea. it's not as safe a bet these days as in the golden age of the 'net | 16:02 |
sshnaidm | it's weird, I post to openstack-discuss w/o any problems.. | 16:02 |
mordred | sshnaidm: I agree - this is quite odd | 16:03 |
sshnaidm | fungi, when was the golden age? :) | 16:03 |
fungi | i dunno, sometime before now ;) | 16:03 |
mordred | sshnaidm: back when we used to ftp into public ftp servers to download software? | 16:03 |
mordred | good old prep.ai.mit.edu ... | 16:03 |
fungi | in the 90s i had pretty good luck reaching sysadmins of mailservers by sending to the ietf-mandated role alias addresses at their domains | 16:04 |
*** goldyfruit has quit IRC | 16:04 | |
corvus | mordred: that time is now | 16:04 |
mordred | corvus: that's a great point | 16:04 |
clarkb | could it be content of the email? | 16:04 |
*** ijw has quit IRC | 16:04 | |
clarkb | you can probably check headers for the -discuss bound emails to see if they made it through the same mimecast service | 16:04 |
* corvus worked hard to ensure that still worked, and it looks like the current sysadmins do too :) | 16:05 | |
fungi | clarkb: yeah, likely that or frequency with which people on redhat's internal mail relays send to which addresses | 16:05 |
mordred | corvus, fungi: you know - I don't think we've ever discussed running a public ftp server for openstack software ... | 16:05 |
fungi | maybe messages for more common recipient addresses don't get the third degree | 16:05 |
corvus | sshnaidm: if you email the rh mail admins, feel free to cc me (using my rh address) if you like | 16:05 |
sshnaidm | corvus, ack, what is it? | 16:05 |
corvus | sshnaidm: jeblair@redhat.com | 16:06 |
sshnaidm | corvus, great | 16:06 |
mordred | sshnaidm: add mordred@redhat.com too just for kicks | 16:06 |
sshnaidm | ok | 16:06 |
clarkb | infra-root I think we should plan to restart our zuul (and maybe nodepool? shrews have an opinion?) services todayish. There were a few changes that went in friday that would be good to confirm are as happy in production as they were in testing | 16:06 |
corvus | if they have any questions, we can check server logs, etc | 16:06 |
clarkb | I can help with that once I've bootstrapped my day a bit more | 16:06 |
mordred | clarkb: I submitted the request for an sdk release to get the ovh thing in - should we wait on that? | 16:07 |
clarkb | mordred: I think we can update those venvs independent of updating and restarting zuul. But a downtime might be a good safe time to do that yes | 16:07 |
Shrews | clarkb: hrm, i just recall restarting nodepool during the ptg in anticipation of a release for tobiash, but i don't think we ever released. will need to look what went in since | 16:07 |
clarkb | mordred: I'll wait for sdk release to happen | 16:09 |
clarkb | as getting that tested and updated in base-jobs would be excellent too | 16:09 |
Shrews | i apparently did not log the commit sha for the restart :/ | 16:11 |
mordred | clarkb: we're in luck - the release is out | 16:11 |
Shrews | clarkb: i think restarting nodepool is probably a good idea anyway | 16:13 |
clarkb | Shrews: k | 16:13 |
Shrews | that would grab the new upload hook feature | 16:13 |
clarkb | mordred: re sdk I'm trying to figure out if we install it into the ansible venvs and don't see where we do that. We do install it globally on the executors though | 16:13 |
clarkb | oh wait maybe it is in zuul itself /me looks | 16:13 |
clarkb | yup | 16:14 |
clarkb | zuul/lib/ansible-config.conf | 16:14 |
clarkb | I think that means we want to stop services, run the zuul ansible upgrade command then start services. | 16:14 |
clarkb | ok time to finish breakfast and make an upgrade plan | 16:15 |
pabelanger | speaking of zuul/lib/ansible-config.conf can we land ansible 2.9 support in zuul this week? | 16:15 |
pabelanger | https://review.opendev.org/#/q/status:open+project:zuul/zuul+branch:master+topic:multi-ansible-wip | 16:15 |
*** yamamoto has joined #openstack-infra | 16:15 | |
pabelanger | I'd like to start using it for zuul.a.c, if possible | 16:16 |
*** soniya29 has joined #openstack-infra | 16:20 | |
Shrews | clarkb: i'm going to go ahead and restart the nodepool builders for us to grab the most recent updates. nb01 has *several* dib processes that have been running for more than a month | 16:20 |
Shrews | infra-root: ^^ | 16:20 |
Shrews | that might require a system reboot to clear things out | 16:20 |
clarkb | ok | 16:21 |
fungi | thanks Shrews! | 16:21 |
pabelanger | sorry, that was posted in wrong channel | 16:22 |
pabelanger | moving to #zuul | 16:22 |
fungi | we're going to have to manually upgrade openstacksdk on the zuul executors, right? | 16:22 |
mordred | sshnaidm: I have replied to your email - so you'll at least get one response :) | 16:22 |
fungi | to clear the swift config change hurdle for ovh i mean | 16:22 |
clarkb | fungi: yes, I think we need to run the manage ansible command with the upgrade flag | 16:23 |
fungi | ahh | 16:24 |
Shrews | seems the nb01 processes cleaned up with a nodepool-builder shutdown. skipping system reboot | 16:25 |
openstackgerrit | Fabien Boucher proposed zuul/zuul master: Pagure: remove connectors burden and simplify code https://review.opendev.org/696134 | 16:25 |
fungi | clarkb: right, because it's in the ansible envs | 16:25 |
sshnaidm | mordred, thanks | 16:27 |
sshnaidm | mordred, I think we agreed at least to rename them, so they will be "new" modules anyway; the old ones will be frozen and deprecated for 2 years | 16:27 |
mordred | sshnaidm: well - yes - from a consumer point of view it will be "new" in the sense that they'll need to update names - but the code itself will be largely the same and will have a direct lineage to the current code | 16:28 |
sshnaidm | mordred, that's possible, although we can go from scratch too | 16:29 |
mordred | we can - but I think that's going to have a pretty high cost - the existing modules have tons of real world use - I think making people update playbooks to use new names is one thing, but having modules maybe start behaving slightly differently for things that are being used for ops tasks would have a high user impact for basically no value | 16:31 |
Shrews | infra-root: wow, /opt is at 100% on nb01 and nb02. /mnt on nb03 (configured differently?) is also at 100% | 16:31 |
mordred | Shrews: that's non-awesome | 16:31 |
clarkb | Shrews: that is the image build leaks | 16:31 |
clarkb | /opt/dib_tmp fills | 16:31 |
sshnaidm | mordred, yeah, that makes sense | 16:31 |
clarkb | Shrews: I haven't been able to trace it back to a specific behavior, but I'm guessing it happens when dib dies in a way where its exit handlers don't run | 16:32 |
Shrews | clarkb: that's unfortunate. is it safe to just remove everything under /opt/dib_tmp ? | 16:32 |
clarkb | what I've done before is stop the builder and disable the service. Reboot, which unmounts any stale mounts, then rm the contents of dib_tmp, then enable the service and reboot | 16:33 |
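A hypothetical helper for the rm step of the procedure clarkb describes: run it only after nodepool-builder has been stopped/disabled and the host rebooted (so no stale dib loopback mounts remain), then re-enable the service and reboot again. The /opt/dib_tmp path comes from the discussion above; everything else is an assumption.

```python
# Clear out leaked dib build workspaces. Run ONLY with the builder
# stopped and after a reboot has cleared any stale mounts.
import shutil
from pathlib import Path

dib_tmp = Path("/opt/dib_tmp")
for entry in dib_tmp.iterdir():
    print("removing", entry)
    if entry.is_dir() and not entry.is_symlink():
        shutil.rmtree(entry, ignore_errors=True)
    else:
        entry.unlink()
```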
Shrews | clarkb: gotcha | 16:33 |
clarkb | Shrews: it is except not with dib running | 16:33 |
mordred | sshnaidm: so basically - in the absence of someone saying it *must* be done (which I find highly unlikely) - I think our best path to success is to keep the collections split out changes minimal and focused on the collections structure related things (and some cleanups) ... but let's see what people on legal-discuss@ say for sure! | 16:33 |
sshnaidm | mordred, yeah, let's wait till mtg in Thu | 16:35 |
mordred | ++ | 16:35 |
*** rpioso is now known as rpioso|afk | 16:37 | |
*** jamesmcarthur has joined #openstack-infra | 16:40 | |
*** ociuhandu has joined #openstack-infra | 16:41 | |
*** jamesmcarthur_ has quit IRC | 16:41 | |
*** jamesmcarthur_ has joined #openstack-infra | 16:41 | |
*** hashar has quit IRC | 16:44 | |
*** jamesmcarthur has quit IRC | 16:45 | |
*** witek has joined #openstack-infra | 16:46 | |
*** rpittau is now known as rpittau|afk | 16:46 | |
*** lmiccini has quit IRC | 16:47 | |
*** soniya29 has quit IRC | 16:47 | |
*** iurygregory has quit IRC | 16:53 | |
*** ricolin has quit IRC | 16:56 | |
*** ociuhandu has quit IRC | 16:58 | |
*** lucasagomes has quit IRC | 16:59 | |
*** priteau has joined #openstack-infra | 17:05 | |
Shrews | #status log /opt DIB workspace on nb0[123] was full. Removed all data in dib_tmp directory on each and restarted all builders on commit e391572495a18b8053a18f9b85beb97799f1126d | 17:09 |
openstackstatus | Shrews: finished logging | 17:09 |
Shrews | clarkb: builders are done. some are complaining about vexxhost volume conflicts again. I'll look into cleaning those up after lunch. | 17:13 |
clarkb | infra-root: our zuul restart playbook will do the manage ansible -u command while executors are stopped. However, I think I would like to test that command in the foreground manually on an executor before running it globally | 17:15 |
clarkb | this is because one of the changes that went in was to the manage ansible code path | 17:15 |
clarkb | any objection to me stopping ze01 now, running the update then starting it again? | 17:16 |
*** gyee has joined #openstack-infra | 17:16 | |
clarkb | then it will get done again when we run the playbook | 17:16 |
corvus | clarkb: that sounds fine | 17:18 |
fungi | no objection, sounds like a wise precaution | 17:18 |
*** dtroyer has joined #openstack-infra | 17:18 | |
*** ykarel has joined #openstack-infra | 17:19 | |
*** ykarel is now known as ykarel|away | 17:19 | |
openstackgerrit | Monty Taylor proposed openstack/project-config master: Announce ansible-collections-openstack in openstack-ansible-sig https://review.opendev.org/698046 | 17:21 |
clarkb | ok stopping ze01 executor now | 17:22 |
openstackgerrit | Albin Vass proposed zuul/nodepool master: Keys must be defined for host-key-checking: false https://review.opendev.org/698029 | 17:23 |
*** jamesmcarthur_ has quit IRC | 17:30 | |
*** pcaruana has quit IRC | 17:31 | |
*** jamesmcarthur has joined #openstack-infra | 17:31 | |
clarkb | not a quick stop fwiw | 17:33 |
*** dtantsur is now known as dtantsur|afk | 17:33 | |
*** jamesmcarthur has quit IRC | 17:36 | |
*** hashar has joined #openstack-infra | 17:36 | |
*** kjackal has quit IRC | 17:38 | |
*** ijw has joined #openstack-infra | 17:42 | |
*** ijw_ has quit IRC | 17:45 | |
clarkb | infra-root: `/usr/lib/zuul/ansible/2.8/bin/pip freeze | grep sdk` says openstacksdk==0.39.0 after running `ANSIBLE_EXTRA_PACKAGES=gear zuul-manage-ansible -u` as root on ze01 | 17:45 |
clarkb | that looks correct to me | 17:46 |
clarkb | I'm going to start ze01 now | 17:46 |
clarkb | I think that means we can run system-config/playbooks/zuul_restart.yaml whenever we are ready | 17:46 |
clarkb | is there anything else that should be checked prior to ^ | 17:46 |
fungi | that looks right to me | 17:47 |
clarkb | I'll let the release team know | 17:47 |
corvus | clarkb: i'm guessing you'll watch ze01 for a sanity check to make sure it's able to execute ansible correctly? | 17:47 |
clarkb | corvus: ya | 17:47 |
corvus | k. then yeah, when that checks out, i think we can do the whole shebang | 17:48 |
clarkb | currently its cleaning up stale build dirs | 17:48 |
clarkb | waiting for it to start grabbing jobs | 17:48 |
fungi | and once the full restart is done, we should be ready to recheck the ovh swift config change test | 17:48 |
corvus | i think we (tobiash) finally fixed the stale dir cleanup, so it may actually be doing something now | 17:49 |
fungi | might be a good idea to take a look at the disk utilization graphs in cacti to see if there was a significant change at restart | 17:49 |
*** igordc has joined #openstack-infra | 17:49 | |
clarkb | it appears to not be the quickest thing so ya likely is doing stuff now | 17:50 |
*** diablo_rojo has joined #openstack-infra | 17:51 | |
clarkb | it is starting to process jobs now | 17:53 |
*** priteau has quit IRC | 17:54 | |
*** priteau has joined #openstack-infra | 17:55 | |
clarkb | so far ansible has only exited 0 according to the log file | 17:57 |
clarkb | I'll leave a tail -f | grep on that for a bit to sanity check any non zero results | 17:58 |
*** priteau has quit IRC | 17:58 | |
*** jklare has quit IRC | 17:58 | |
*** jklare has joined #openstack-infra | 18:01 | |
*** derekh has quit IRC | 18:01 | |
*** hashar has quit IRC | 18:02 | |
*** rpioso|afk is now known as rpioso | 18:10 | |
*** gfidente is now known as gfidente|afk | 18:11 | |
*** hashar has joined #openstack-infra | 18:11 | |
*** hashar has quit IRC | 18:11 | |
clarkb | just had our first failure. It occurred on opensuse-15, I'm guessing that platform is less stable with its jobs as it gets less attention? | 18:13 |
clarkb | otherwise things have been looking stable | 18:13 |
fungi | honestly, surprised it took this long to hit a failure | 18:14 |
fungi | i figured our cosmic background radiation there was stronger than that | 18:14 |
*** rishabhhpe has joined #openstack-infra | 18:16 | |
rishabhhpe | Hello All, need your inputs on the below: | 18:16 |
rishabhhpe | From Saturday onwards our openstack CI has been failing for the iSCSI driver at a very early stage of devstack setup, and for the FC driver it is getting queued. Please let us know if any changes have been made on the community end. | 18:16 |
clarkb | oh thats neat. That job then went on to be unreachable when the cleanup playbook tried to run against it (that checks df and networking) | 18:17 |
clarkb | I think that host got properly unhappy | 18:17 |
clarkb | rishabhhpe: the openstack QA and cinder teams may be better points of contact | 18:17 |
clarkb | QA is responsible for devstack and cinder for iscsi related things typically | 18:17 |
*** witek has quit IRC | 18:18 | |
clarkb | rishabhhpe: also it generally helps if you can provide more debugging information like log snippets when asking for help to debug failures | 18:20 |
*** ijw has quit IRC | 18:21 | |
clarkb | rishabhhpe: one guess off the top of my head is maybe you are trying to do python2.7 when now only python3 is supported | 18:21 |
rishabhhpe | We are using only Python 3.7 for our setup; from our analysis, I posted the success and failure log snippets here: | 18:23 |
rishabhhpe | http://paste.openstack.org/show/787341/ | 18:23 |
*** pcaruana has joined #openstack-infra | 18:23 | |
fungi | i heard a number of devstack plugins were encountering issues with calls to cli tools installed with pip for both python2 and python3 where the entrypoint for whichever was installed second overwrote the first and ended with them calling into the wrong version of python | 18:32 |
fungi | no idea if this is an example | 18:32 |
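A quick diagnostic sketch for the failure mode fungi describes: print which interpreter each console script's shebang points at, to spot an entry point clobbered by the other Python's install. The tool names below are illustrative assumptions, not taken from the log.

```python
# Report the shebang line of each suspect pip-installed console script,
# so a py2 entry point overwritten by a py3 install (or vice versa)
# stands out immediately.
import shutil
from pathlib import Path

for tool in ("pip", "openstack", "stestr"):  # hypothetical examples
    path = shutil.which(tool)
    if path:
        first_line = Path(path).read_text(errors="replace").splitlines()[0]
        print(f"{tool}: {path} -> {first_line}")
```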
*** ralonsoh has quit IRC | 18:32 | |
clarkb | fungi: I think this is simply a case of devstack deciding xenial is too old for itself | 18:33 |
clarkb | (conversation happening in the qa channel) | 18:33 |
fungi | ahh, good | 18:33 |
fungi | and yeah, i saw the message in the paste but wasn't clear whether that was a warning or a hard error | 18:34 |
clarkb | infra-root I've got a root screen started on bridge. I'll be running `ansible-playbook -f 20 /opt/system-config/playbooks/zuul_restart.yaml` from there shortly | 18:34 |
fungi | attaching | 18:34 |
clarkb | the zuul restart doesn't capture queues. Does someone else want to do that bit of the restart? | 18:34 |
fungi | i should be able to | 18:35 |
clarkb | k give me a few minutes to grab something to drink and then I'll be ready to start | 18:35 |
fungi | so far we've only been preserving contents for the check and gate pipelines in the openstack tenant | 18:36 |
fungi | do we need to expand that? | 18:36 |
*** igordc has quit IRC | 18:37 | |
*** kjackal has joined #openstack-infra | 18:37 | |
fungi | i have a root screen session going on zuul.o.o for the queue capture and replay | 18:37 |
*** ijw has joined #openstack-infra | 18:42 | |
clarkb | fungi: I think the automated every 30 second capture may do all tenants now | 18:42 |
clarkb | but ya we should capture all tenants | 18:42 |
*** kjackal has quit IRC | 18:43 | |
clarkb | fungi: I'm ready to start running the playbook now and have given the release team warning and checked they don't have anything in flight | 18:43 |
clarkb | fungi: let me know when you've got queues captured and I will start the playbook | 18:44 |
fungi | hrm, the command we've been using is going to the whitebox zuul.openstack.org api | 18:44 |
fungi | but it is also passing the tenant name, so maybe the hostname there is irrelevant | 18:44 |
fungi | i'll try to nab the other tenants with it too | 18:45 |
clarkb | it is because the openstack.org api is whiteboxed with redirects that assume a tenant | 18:45 |
clarkb | I don't think you can pass in any other tenant there | 18:45 |
*** rishabhhpe has quit IRC | 18:45 | |
clarkb | fungi: `python /opt/zuul/tools/zuul-changes.py http://zuul.openstack.org openstack check` is the command right? I think you can change that to https://zuul.opendev.org openstack check, zuul check, opendev check, etc ? | 18:46 |
clarkb | and append all of those files together? | 18:47 |
clarkb | (and maybe that should become an opendev script so we don't have to remember for next time) | 18:47 |
fungi | running | 18:47 |
fungi | i did a loop, but yeah | 18:48 |
clarkb | I won't start the playbook until you confirm you are happy with the results of ^ | 18:48 |
fungi | also worth noting, tools/zuul-changes.py doesn't work with python3, encoding issues | 18:48 |
fungi | clarkb: another fun detail, looks like i got openstack tenant queue items in the zuul tenant check pipeline dump | 18:49 |
clarkb | hrm is that a zuul api bug? | 18:50 |
clarkb | (which always assumes openstack tenant) | 18:50 |
clarkb | (whcih always assumes openstack tenant) | 18:50 |
fungi | python /opt/zuul/tools/zuul-changes.py http://zuul.opendev.org zuul check | 18:50 |
*** kjackal has joined #openstack-infra | 18:50 | |
fungi | that contains a bunch of tripleo items, as well as other stuff | 18:50 |
fungi | but maybe it's not doing sni and so hitting a default vhost? | 18:51 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: DNM: Add quick and dirty api crawler for ansible versions https://review.opendev.org/698062 | 18:51 |
clarkb | fungi: ya could be we need to use requests in there instead of urllib? | 18:52 |
clarkb | fungi: maybe try http then? | 18:52 |
fungi | it's using urllib2.urlopen() from python 2.7.12 | 18:52 |
fungi | it was http, but i tried switching to https and that didn't change anything | 18:53 |
*** ykarel|away has quit IRC | 18:54 | |
fungi | yeah, i suspect we need to rewrite this tool, should i take a quick stab at it now or just be satisfied with current behavior at the moment since openstack is the only tenant with queue items anyway according to the top level view at https://zuul.opendev.org/tenants | 18:54 |
clarkb | for now if openstack is the only one with queue items we are probably fine, but we should definitely fix this | 18:54 |
clarkb | oh I should double check the zuul install version on all hosts before I run the playbook | 18:55 |
fungi | okay, check and gate for openstack dumped, roll the update | 18:55 |
fungi | ahh, yeah, i can take another once you confirm | 18:55 |
fungi | up side to using urllib2 is that you can run this script without a virtualenv and without requests installed | 18:56 |
fungi | possibly urllib in python3 has had sni support for a while now, worth checking | 18:56 |
openstackgerrit | Kendall Nelson proposed openstack/cookiecutter master: Update CONTRIBUTING.rst template https://review.opendev.org/696001 | 18:56 |
fungi | might be if we just fix this up to be python3 then it'll simply work | 18:56 |
clarkb | double checked they all have zuul==3.11.2.dev72 # git sha 57aa3a0 | 18:57 |
clarkb | fungi: ok ready now when you are | 18:57 |
*** harlowja has quit IRC | 18:58 | |
fungi | new snapshots obtained, go for it | 18:58 |
clarkb | it is running | 18:58 |
mordred | sshnaidm, corvus: I replied to the RH reply | 18:58 |
*** pkopec has quit IRC | 18:59 | |
clarkb | ok web didn't clean up its pid | 18:59 |
clarkb | the process isn't running through so I will manually rm that file | 18:59 |
clarkb | corvus: ^ any idea why that happens? | 18:59 |
sshnaidm | mordred, great, I was composing a long polite request.. | 19:00 |
fungi | clarkb: any traceback in its log? | 19:00 |
fungi | maybe it died ungracefully when something it was connecting to went away | 19:00 |
mordred | sshnaidm: mine was a little short - tl;dr - "we are upstream admins and don't see any problems - what problems do you think there are?" | 19:01 |
*** harlowja has joined #openstack-infra | 19:01 | |
clarkb | fungi: goes from 2019-12-09 18:58:55,408 DEBUG zuul.web: Websocket close: 4011 Error with Gearman to 2019-12-09 19:00:21,548 DEBUG zuul.Web: Configured logging: 3.11.2.dev72 | 19:01 |
clarkb | fungi: no errors | 19:02 |
fungi | hrm | 19:02 |
fungi | so might have died in its sleep | 19:02 |
corvus | clarkb: yeah, there's a perms problem | 19:02 |
corvus | the web pid always needs to be manually deleted after it stops | 19:02 |
fungi | does zuul-web drop privileges after starting or something? | 19:04 |
clarkb | also it seems that the scheduler may remove its pid before it fully exits | 19:04 |
corvus | yep | 19:04 |
fungi | that would explain it | 19:04 |
clarkb | it was still around on my first ps listing to check on web | 19:04 |
clarkb | but gone on a subsequent one | 19:04 |
clarkb | currently we are in the wait for executors phase of the restart | 19:05 |
corvus | clarkb: well, it removes it right before it exists | 19:05 |
fungi | likely all the cleanup they're now doing | 19:05 |
clarkb | corvus: ah | 19:05 |
corvus | clarkb: maybe some swapping or something made that long enough to be noticable | 19:05 |
corvus | rather "right before it exits" | 19:05 |
fungi | the scheduler uses a ton of memory, so yeah freeing all that could take some seconds | 19:06 |
fungi | both of those could be worked around by having a parent supervisor process responsible for the pidfile reaping, but that's added complexity and one more thing to go wrong on you | 19:07 |
corvus | i don't think it's a problem in the scheduler case (if you wanted to start the scheduler after the pidfile is removed, that should work :). we do need to do something about web though. | 19:08 |
clarkb | fungi: I think you can reenqueue changes now | 19:08 |
fungi | yeah, if the scheduler has closed all its listeners/descriptors and isn't going to interact with anything else after the pidfile is removed, the most trouble it might cause is briefly insufficient memory | 19:09 |
clarkb | based on zuul web showing at least one change has snuck in | 19:09 |
fungi | clarkb: thanks, reenqueuing now | 19:09 |
clarkb | (there isn't anything to run the jobs yet but we can put them in the queue) | 19:09 |
fungi | nope, not yet | 19:09 |
fungi | rpc errors | 19:09 |
clarkb | how are those changes in there then | 19:10 |
fungi | er, no, not rpc errors | 19:10 |
clarkb | executors are stopping now | 19:10 |
*** ociuhandu has joined #openstack-infra | 19:10 | |
clarkb | half have stopped | 19:11 |
fungi | the `zuul enqueue` subcommand's syntax has changed compared to what tools/zuul-changes.py produces now | 19:11 |
clarkb | fungi: I thought that change was backward compatible? | 19:11 |
clarkb | is it complaining about the trigger option? | 19:11 |
fungi | aha, no, it recorded a tenant of None | 19:12 |
fungi | will sed these real quick | 19:12 |
clarkb | k | 19:12 |
fungi | that's better | 19:12 |
fungi | will work out whatever's causing that in the python3ification | 19:13 |
clarkb | (there was a change to make --trigger optional but it should have been backward compat so good to hear that it was a different problem) | 19:13 |
*** igordc has joined #openstack-infra | 19:14 | |
clarkb | executors are all stopped and now are getting their ansible venvs updated | 19:15 |
*** ociuhandu has quit IRC | 19:15 | |
clarkb | oh that's done now and they have started. playbook is completed | 19:15 |
*** ociuhandu has joined #openstack-infra | 19:15 | |
clarkb | no errors other than manually rm'ing the web pid file | 19:15 |
clarkb | spot check on ze10's 2.8 ansible install shows an openstacksdk of 0.39.0 | 19:16 |
Shrews | mnaser: fyi, i'm seeing something odd with some vexxhost instances in sjc1. There are instances that are either in the 'active' or 'error' state, power state is 'running', but the task state is 'deleting'. The ones I'm looking at seem to have all been created on nov 13 | 19:16 |
clarkb | fungi: I'll recheck my base-test tester change now | 19:16 |
*** ociuhandu has quit IRC | 19:16 | |
fungi | reenqueuing is still underway | 19:16 |
clarkb | https://review.opendev.org/#/c/680178/ is the base-test tester change | 19:17 |
fungi | cool | 19:17 |
fungi | reenqueue completed too, time to #status log? | 19:17 |
clarkb | expect that some executors will be busy rm'ing their stale buiild dirs for a few minutes on startup | 19:17 |
*** gmann is now known as gmann_afk | 19:17 | |
clarkb | though ze10 is into running jobs at this point so we may be generally past that step | 19:18 |
clarkb | fungi: ++ did you want to do it or should I? | 19:18 |
fungi | oh, i guess still waiting for them to start fully, yeah | 19:18 |
fungi | status log completed full zuul and nodepool restart at 19:00 utc for updated openstacksdk 0.39.0 release | 19:19 |
fungi | that look reasonable? | 19:19 |
fungi | s/completed/performed/ maybe | 19:19 |
corvus | can we throw in some shas? | 19:19 |
clarkb | fungi: should include the zuul version too, zuul==3.11.2.dev72 # git sha 57aa3a0, and make note of the depends-on and github driver changes? | 19:19 |
fungi | oh, yeah good thinkin | 19:19 |
fungi | right-o | 19:19 |
fungi | nodepool rev too? | 19:19 |
Shrews | mnaser: in case i lose my data set, sample instances include: 5045d5dc-cbf3-4f62-962d-4d47dc625567, 717a819c-78a9-4b18-974d-2f5a2616ed54, 447a7e88-f338-4c1b-8777-44360d7a6972, 39128606-8abb-411c-9b54-60264139105c | 19:20 |
clarkb | fungi: shrews did nodepool independently | 19:20 |
clarkb | (it isn't part of this playbook) | 19:20 |
Shrews | i did not do launchers today | 19:20 |
clarkb | Shrews: k | 19:20 |
fungi | ahh, okay, just builders | 19:20 |
fungi | status log completed full zuul restart at 19:00 utc with zuul==3.11.2.dev72 (57aa3a0) for updated openstacksdk 0.39.0 release and depends-on and github driver changes | 19:21 |
clarkb | fungi: ++ | 19:21 |
fungi | still forgot to s/completed/performed/ | 19:21 |
fungi | any elaboration on depends-on and github driver changes needed? | 19:21 |
clarkb | fungi: for both I would say something like "to avoid looking up extra unneeded changes via code review APIs" | 19:22 |
clarkb | they were related changes | 19:23 |
fungi | status log Performed full Zuul restart at 19:00 UTC with zuul==3.11.2.dev72 (57aa3a0) for updated OpenStackSDK 0.39.0 release and Depends-On and GitHub driver improvements to avoid looking up extra unneeded changes via code review APIs | 19:25 |
fungi | better? | 19:25 |
clarkb | yup | 19:25 |
fungi | #status log Performed full Zuul restart at 19:00 UTC with zuul==3.11.2.dev72 (57aa3a0) for updated OpenStackSDK 0.39.0 release and Depends-On and GitHub driver improvements to avoid looking up extra unneeded changes via code review APIs | 19:25 |
openstackstatus | fungi: finished logging | 19:25 |
*** yamamoto has quit IRC | 19:26 | |
clarkb | fungi: http://zuul.opendev.org/t/zuul/stream/56f9a111ff6e4cdf8323ca9752458552?logfile=console.log to watch base-test testing | 19:29 |
Shrews | infra-root: fyi, with the scheduler restart, we *should* see autohold held nodes sorted by build ID (for new holds going forward) | 19:32 |
Shrews | s/sorted/grouped/ | 19:32 |
corvus | way cool for multinode jobs | 19:33 |
stevebaker_ | hey, I'm researching setting up a CI failure bot for #openstack-ironic, but I can't find any other project doing this (other than the now inactive hubbot) | 19:33 |
corvus | much more useful than the previous "here is a pile of nodes! good luck!" approach :) | 19:33 |
fungi | clarkb: so initial stab, zuul-changes.py works under python3 as long as we .decode('utf-8') the bytestreams coming out of urlopen().read(), but that doesn't seem to solve the failure to differentiate tenants, so deeper investigation is warranted | 19:34 |
corvus | stevebaker_: gerritbot has a broken feature to announce verified -2 reports (ie, gate failures). fixing that is probably not more than a line or two of code. | 19:34 |
openstackgerrit | Tobias Henkel proposed zuul/nodepool master: Add missing release note about post-upload-hook https://review.opendev.org/698072 | 19:35 |
clarkb | fungi: we should rule out (or in) that zuul api gives us the correct json blob as expected | 19:35 |
corvus | stevebaker_: https://opendev.org/opendev/gerritbot/src/branch/master/gerritbot/bot.py#L208 | 19:35 |
stevebaker_ | corvus: interesting, thanks | 19:35 |
fungi | clarkb: well, i mean, i can use the zuul dashboard to browse the zuul tenant status so that's hitting the api and getting correct responses | 19:36 |
corvus | stevebaker_: i think the issue is mostly just that the approval type changed from "VRIF" to "Verified". or something like that. | 19:36 |
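A sketch of the kind of one-or-two-line fix corvus suggests, matching the approval category under both its old and new names. The event field names follow Gerrit's stream-events JSON; treat the exact matching logic as an assumption rather than gerritbot's actual code.

```python
# Does this comment-added event carry a gate failure (Verified -2)?
# Older Gerrit reported the category as "VRIF"; newer releases send
# "Verified", which is what broke the announce feature.
def is_gate_failure(event):
    for approval in event.get("approvals", []):
        if (approval.get("type") in ("VRIF", "Verified")
                and int(approval.get("value", 0)) == -2):
            return True
    return False
```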
clarkb | fungi: https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_fd5/680178/3/check/tox-py27/fd5c36e/ I think it worked. Note the url is bhs not bhs1 :) | 19:36 |
clarkb | similar story with https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_56f/680178/3/check/tox-py35/56f9a11/ | 19:37 |
*** strigazi has quit IRC | 19:37 | |
clarkb | I'll go ahead and get base-jobs change pushed up to have this be new production state | 19:37 |
*** tesseract has quit IRC | 19:37 | |
fungi | clarkb: yep! lgtm now | 19:40 |
*** strigazi has joined #openstack-infra | 19:40 | |
mnaser | Shrews: ok ill check those out, thanks for pinging me | 19:43 |
mnaser | we added monitoring recently to catch ERROR state instances :) | 19:43 |
mnaser | so anything that's ERROR for over 24 hours alerts us | 19:44 |
*** hashar has joined #openstack-infra | 19:44 | |
openstackgerrit | Clark Boylan proposed opendev/base-jobs master: Push OVH region update to production https://review.opendev.org/698074 | 19:44 |
Shrews | mnaser: \o/ | 19:44 |
mnaser | Shrews: actually its available for anyone :) | 19:45 |
clarkb | infra-root ^ ovh change should be ready now I think | 19:45 |
mnaser | been working on this https://github.com/openstack-exporter :) -- the helmcharts have all the alerts | 19:45 |
*** strigazi has quit IRC | 19:46 | |
fungi | clarkb: okay, somewhat good news on the zuul-changes.py script. it's actually sorta working on python 3.6 and later, apparently urlopen().read() produces a data type json.loads() can ingest just fine. it breaks on python 3.5, which is the default python3 on zuul.opendev.org (ubuntu xenial) | 19:52 |
fungi | however, the current behavior of the script when it hits a multi-tenant zuul api is to output changes matching the specified pipeline for all tenants, not filtered to the tenant provided in its command-line arguments | 19:52 |
fungi | i have a feeling it's been this way since multi-tenancy support was introduced in the script | 19:53 |
clarkb | oh neat I think that behavior works better for our use case? | 19:53 |
fungi | yeah, albeit confusing | 19:54 |
clarkb | infra-root I am going to upgrade openstacksdk to 0.39.0 on nl04 then restart nodepool launcher there | 19:54 |
clarkb | I'm choosing nl04 as the canary ecause it talks to ovh and sdk update affects ovh | 19:54 |
mordred | clarkb: ++ | 19:54 |
fungi | so if you `python3 tools/zuul-changes.py https://zuul.opendev.org zuul check` you get the contents of any pipeline named "check" across all your zuul tenants (with the corresponding tenant names in the zuul enqueue(-ref) command-line, so that's safe at least | 19:55 |
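A minimal python3-safe sketch of what tools/zuul-changes.py does, adding the explicit .decode('utf-8') fungi found necessary on 3.5 plus an outer loop over tenants. The /api/tenant/&lt;name&gt;/status path and the JSON shape here are assumptions to verify against the real script before relying on this.

```python
import json
from urllib.request import urlopen

def dump_pipeline(base, tenant, pipeline):
    raw = urlopen("%s/api/tenant/%s/status" % (base, tenant)).read()
    status = json.loads(raw.decode("utf-8"))  # bytes -> str; needed on 3.5
    for pipe in status["pipelines"]:
        if pipe["name"] != pipeline:
            continue
        for queue in pipe.get("change_queues", []):
            for head in queue.get("heads", []):
                for item in head:
                    print("zuul enqueue --tenant %s --pipeline %s"
                          " --project %s --change %s" % (
                              tenant, pipeline,
                              item["project"], item["id"]))

for tenant in ("openstack", "zuul", "opendev"):
    dump_pipeline("https://zuul.opendev.org", tenant, "check")
```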
clarkb | alright nl04 has had that done to it | 19:56 |
clarkb | Shrews: ^ fyi | 19:56 |
clarkb | 0013298874 built in bhs1 and went in-use not long ago | 20:00 |
clarkb | first indications seem to be good given that | 20:00 |
clarkb | mordred: ^ can you confirm that would cover the bug fix that you wrote for nodepool on the microversion side of things too? | 20:00 |
mordred | clarkb: 0.39.0 definitely would | 20:01 |
mordred | oh - wait - the nodepool fix | 20:01 |
clarkb | ya I'm testing two things. First is that ovh continues to work after the sdk json updates (seems to) and the other is whether or not your microversion fix is working (so we can make a release) | 20:02 |
clarkb | I believe that ovh booting instances would cover your microversion fix if their microversions support the thing you were fixing (I don't know how to determine that bit of info) | 20:02 |
mordred | clarkb: let me check them real quick | 20:03 |
*** eharney has quit IRC | 20:04 | |
mordred | clarkb: nope. | 20:05 |
mordred | {'versions': [{'status': 'SUPPORTED', 'min_version': '', 'updated': '2011-01-21T11:33:21Z', 'links': [{'rel': 'self', 'href': 'https://compute.bhs1.cloud.ovh.net/v2/'}], 'id': 'v2.0', 'version': ''}, {'status': 'CURRENT', 'min_version': '2.1', 'updated': '2013-07-23T11:33:21Z', 'links': [{'rel': 'self', 'href': 'https://compute.bhs1.cloud.ovh.net/v2.1/'}], 'id': 'v2.1', 'version': '2.38'}]} | 20:05 |
mordred | 2.38 is as new as they have there | 20:05 |
clarkb | mordred: can you check if vexxhost or fn supports it? | 20:05 |
clarkb | I can restart that launcher next | 20:05 |
mordred | I am 100% certain vexxhost does | 20:05 |
clarkb | alright I'll do vexxhost next then | 20:05 |
mordred | they're running train :) | 20:05 |
openstackgerrit | Tobias Henkel proposed zuul/nodepool master: Add missing release notes https://review.opendev.org/698072 | 20:06 |
clarkb | nl03 has been restarted now as well with sdk 0.39.0 | 20:07 |
clarkb | this should cover mordreds microversion fix for nova in nodepool | 20:07 |
mordred | clarkb: actually - what region are we using with vexxhost? ca-ymq-1 ? | 20:07 |
clarkb | mordred: sjc1 | 20:07 |
fungi | clarkb: yet more investigation indicates we get the multi-tenant behavior on zuul.o.o under python 2.7 as well if we use the main url and not the whitebox one | 20:07 |
clarkb | e391572495a18b8053a18f9b85beb97799f1126d is the nodepool commit I've restarted 03 and 04 on | 20:07 |
mordred | {'version': {'status': 'CURRENT', 'min_version': '2.1', 'updated': '2013-07-23T11:33:21Z', 'media-types': [{'type': 'application/vnd.openstack.compute+json;version=2.1', 'base': 'application/json'}], 'links': [{'rel': 'self', 'href': 'https://compute-sjc1.vexxhost.us/v2.1/'}, {'type': 'text/html', 'rel': 'describedby', 'href': 'http://docs.openstack.org/'}], 'id': 'v2.1', 'version': '2.72'}} | 20:08 |
mordred | clarkb: so yes - they are running 2.72 | 20:08 |
clarkb | ok we have successfully built and put into use vexxhost nodes since the restart with newer sdk. That implies to me that mordred's microversion fix is working | 20:10 |
clarkb | I'll restart 02 and 01 now with new sdk as well | 20:11 |
clarkb | all 4 launchers are now restarted at nodepool==3.9.1.dev11 # git sha e391572 | 20:13 |
clarkb | things look stable. I'm going to figure out lunch now | 20:16 |
openstackgerrit | David Shrewsbury proposed zuul/zuul master: Sort autoholds by request ID https://review.opendev.org/698077 | 20:16 |
*** Lucas_Gray has joined #openstack-infra | 20:21 | |
Shrews | clarkb: awesome, thx | 20:22 |
*** ociuhandu has joined #openstack-infra | 20:43 | |
*** ociuhandu has quit IRC | 20:48 | |
openstackgerrit | Merged opendev/base-jobs master: Push OVH region update to production https://review.opendev.org/698074 | 20:50 |
openstackgerrit | Monty Taylor proposed opendev/storyboard master: Build container images https://review.opendev.org/611191 | 20:58 |
mordred | fungi, diablo_rojo: ^^ I had to respin that due to an error | 20:59 |
corvus | mordred, fungi: i'm surprised to see the opendevzuul key in that change... what's the goal? | 21:01 |
*** eharney has joined #openstack-infra | 21:01 | |
*** hashar has quit IRC | 21:02 | |
fungi | uploading docker images of storyboard builds to the opendev namespace on dockerhub... which key should we use for that? | 21:03 |
fungi | as a precursor to switching our storyboard deployments from puppety to ansible+containers | 21:03 |
*** Lucas_Gray has quit IRC | 21:04 | |
corvus | fungi: that's the right key and the way we do that. i was unsure whether the opendev/ namespace was the target; if it weren't, but the opendevzuul key was still intended, then i would not expect a copy of the key to be needed there. but it is, so it is. | 21:08 |
*** Lucas_Gray has joined #openstack-infra | 21:08 | |
corvus | long story short, that change occupied about 3 states of quantum superposition in my mind, but has collapsed to one :) | 21:08 |
fungi | no worries, could/should we do that in project-config instead of spreading copies of the key around? | 21:09 |
fungi | either way, it's probably something we should call particular attention to in reviews, i agree | 21:10 |
corvus | i think there is a way to reduce copies of it (which we didn't see fit to do earlier since we thought images in opendev/ would come only from system-config). but we can futz with that later. | 21:12 |
fungi | we could also do the image building via system-config with a little bit of rejiggering, i expect | 21:13 |
fungi | this is probably verging into ianw's earlier topic wrt opendev images for dib/nodepool, except in this case it's an opendev image of an opendev project... but maybe we want to treat that case similarly? | 21:14 |
fungi | consistency could be a good thing there, i suppose | 21:15 |
corvus | yeah, in nodepool's case it's a different org and a different responsible party. in this case (and probably for dib) it seems we're saying both are the same, so the answer is different. | 21:17 |
openstackgerrit | Steve Baker proposed opendev/gerritbot master: Fix event comment-added https://review.opendev.org/698089 | 21:18 |
corvus | mordred: re https://github.com/bgeesaman/registry-search-order from ianw -- when we use podman-compose we should make sure that we do not have a search line in registries.conf. | 21:18 |
fungi | gonna try to grab an early dinner out, should be back soonish | 21:19 |
*** ociuhandu has joined #openstack-infra | 21:21 | |
corvus | i mean, the fully-qualified names should render us safe, but double checking that we don't have that configured is a good belts+suspenders thing | 21:22 |
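For reference, the risky pattern is an unqualified-search list in /etc/containers/registries.conf; a sketch of what to check for (v1 registries.conf syntax, which can differ between releases, and an illustrative image name):

    # /etc/containers/registries.conf -- illustrative only.
    # A search list like this lets a short image name such as
    # "zuul/nodepool" resolve against each registry in order, which is
    # the attack surface the registry-search-order writeup describes:
    #
    #   [registries.search]
    #   registries = ['docker.io', 'quay.io']
    #
    # With fully-qualified references, e.g.
    #   image: docker.io/zuul/nodepool-launcher:latest
    # the search order never comes into play.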
*** jamesmcarthur has joined #openstack-infra | 21:22 | |
clarkb | ianw: did you want to keep the dib container topic on the infra meeting agenda? | 21:24 |
clarkb | (I'm not sure if that has been sufficiently resolved with the siblings work) | 21:24 |
*** yamamoto has joined #openstack-infra | 21:25 | |
clarkb | #status log Restarted Nodepool Launchers at nodepool==3.9.1.dev11 (e391572) and openstacksdk==0.39.0 to pick up OVH profile updates as well as fixes in nodepool to support newer openstacksdk | 21:26 |
openstackstatus | clarkb: finished logging | 21:26 |
*** ociuhandu has quit IRC | 21:26 | |
openstackgerrit | Steve Baker proposed openstack/project-config master: IRC #openstack-ironic gerritbot CI failed messages https://review.opendev.org/698091 | 21:27 |
stevebaker_ | corvus: thanks, I've proposed this https://review.opendev.org/698089 | 21:28 |
openstackgerrit | Merged zuul/zuul master: Sort autoholds by request ID https://review.opendev.org/698077 | 21:30 |
*** pcaruana has quit IRC | 21:31 | |
corvus | stevebaker_: that looks good to me -- looks like we'll have to dust off a few cobwebs with the dependencies, but that should be a tractable problem | 21:32 |
corvus | stevebaker_: https://review.opendev.org/545502 looks like it may be worth a look | 21:32 |
stevebaker_ | corvus: readable event names would be nice. I'd be happy with that change, then reworking mine to just add x-all-comments | 21:35 |
stevebaker_ | corvus: or landing my change then I can rebase https://review.opendev.org/545502 | 21:38 |
*** nicolasbock has joined #openstack-infra | 21:40 | |
openstackgerrit | Bernard Cafarelli proposed openstack/openstack-zuul-jobs master: Update openstack-python-jobs-neutron templates https://review.opendev.org/698095 | 21:44 |
*** jtomasek has quit IRC | 21:45 | |
*** jamesmcarthur has quit IRC | 21:48 | |
*** gmann_afk is now known as gmann | 21:48 | |
openstackgerrit | Steve Baker proposed opendev/gerritbot master: Fix event comment-added https://review.opendev.org/698089 | 21:49 |
clarkb | zuul and nodepool continue to look stable. I'm popping out for a bike ride. Back in a bit | 21:52 |
*** ijw_ has joined #openstack-infra | 21:55 | |
*** Goneri has quit IRC | 21:56 | |
*** ijw has quit IRC | 21:58 | |
ianw | clarkb: yeah, i think we can drop the topic, we have wip | 22:00 |
*** ociuhandu has joined #openstack-infra | 22:04 | |
*** slaweq has quit IRC | 22:05 | |
*** ociuhandu has quit IRC | 22:10 | |
donnyd | did someone restart nl02? | 22:12 |
donnyd | duh.. it was right in front of me | 22:12 |
donnyd | clarkb: over time nl02 thinks there are a bunch of nodes in "ready", and for whatever reason it counts them against quota | 22:13 |
*** cgoncalves has quit IRC | 22:16 | |
*** rcernin has joined #openstack-infra | 22:16 | |
*** Lucas_Gray has quit IRC | 22:17 | |
openstackgerrit | Ian Wienand proposed opendev/system-config master: Add roles for a basic static server https://review.opendev.org/697587 | 22:19 |
*** cgoncalves has joined #openstack-infra | 22:23 | |
*** cgoncalves has quit IRC | 22:26 | |
*** cgoncalves has joined #openstack-infra | 22:27 | |
*** cgoncalves has quit IRC | 22:27 | |
*** cgoncalves has joined #openstack-infra | 22:28 | |
openstackgerrit | Merged zuul/nodepool master: Add missing release notes https://review.opendev.org/698072 | 22:34 |
*** rcernin has quit IRC | 22:36 | |
openstackgerrit | Ian Wienand proposed opendev/system-config master: mirror: remove debug output of apache config https://review.opendev.org/698104 | 22:38 |
openstackgerrit | Ian Wienand proposed opendev/system-config master: Add roles for a basic static server https://review.opendev.org/697587 | 22:44 |
*** rkukura has quit IRC | 22:45 | |
*** rkukura has joined #openstack-infra | 22:46 | |
fungi | donnyd: those are probably part of the min-ready count for various labels | 22:54 |
openstackgerrit | Merged opendev/system-config master: mirror jobs: copy acme.sh output https://review.opendev.org/696208 | 22:54 |
fungi | we might want to consider whether we should drop min-ready to 0 for infrequently-used node labels | 22:55 |
donnyd | I should just take mine down to 0 and see if it stops doing that | 22:55 |
fungi | well, they're a distributed global count | 22:55 |
fungi | for frequently-used node labels it's not a problem because they'll be used to satisfy a node request not too long after they're booted | 22:56 |
donnyd | I should take them down to 0 in the FN config because it only takes 30-40 seconds for an instance to launch | 22:56 |
fungi | yeah, trying to say those aren't per-region/per-provider values | 22:57 |
donnyd | I see Ubuntu and centos instances brought up and not used for weeks sometimes | 22:57 |
fungi | which releases of ubuntu and centos? because that would be weird | 22:57 |
donnyd | oh, I see | 22:58 |
donnyd | bionic and 7 | 22:58 |
donnyd | i think it happens when a new image is loaded | 22:58 |
fungi | the idea is that nodepool is configured to boot some number of each label in advance to try to keep a pool of instantly-available nodes for some requests; the usual problem comes when we add a new node type or one falls out of general use and we pre-boot some which sit around for weeks waiting on a build to request them | 22:59 |
fungi | but centos-7 and ubuntu-bionic nodes should be getting used straight away | 22:59 |
fungi | so maybe this is indicative of a leak of some kind | 22:59 |
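What fungi describes corresponds to the per-label min-ready setting in the launcher configuration; a hedged sketch with illustrative label names and counts:

    # nodepool.yaml (sketch) -- min-ready is a global per-label target,
    # not a per-provider or per-region one.
    labels:
      - name: ubuntu-bionic
        min-ready: 10   # heavily used: keep instantly-available nodes around
      - name: centos-7
        min-ready: 2
      - name: rarely-used-label
        min-ready: 0    # boot on demand only; nothing idles for weeks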
donnyd | that is not what always happens | 22:59 |
donnyd | Next time I see it I will hit you up so maybe we can diagnose why, but they surely do that | 23:00 |
donnyd | I usually kill them, but then the "ready" node never gets released back to NL | 23:00 |
*** kjackal has quit IRC | 23:00 | |
donnyd | I know a couple months back FN's "ready" count was at like 15 nodes | 23:00 |
fungi | i'm plumbing the depths of the node list now | 23:01 |
*** kjackal has joined #openstack-infra | 23:01 | |
donnyd | which didn't actually exist, but nl was tracking that they did | 23:01 |
fungi | right now there are no "ready" nodes in fn for any label | 23:02 |
donnyd | http://grafana.openstack.org/d/3Bwpi5SZk/nodepool-fortnebula?orgId=1&from=now-1w%2Fw&to=now-1w%2Fw | 23:02 |
donnyd | that is because nl was just restarted, but give it a few weeks and it will slowly creep up | 23:02 |
fungi | (according to the `nodepool list` command anyway) | 23:02 |
fungi | okay, certainly sounds like a leak of some kind, would definitely be good to get to the bottom of, because what you're describing doesn't sound like behavior we expect | 23:03 |
donnyd | We talked about this last time it was super high | 23:03 |
fungi | but yeah, we'll probably need to wait for it to crop back up | 23:03 |
fungi | i doubt i was also super high at the time, but for the life of me i can't remember what we investigated/discovered | 23:04 |
donnyd | we restarted nl and it went away last time too | 23:04 |
donnyd | nl02 that is | 23:05 |
fungi | yeah, that's the launcher handling fn (and a few other providers) | 23:05 |
fungi | hopefully next time before an nl02 restart we can dig into logs around some example nodes stuck in "ready" state there | 23:05 |
donnyd | it's strange for sure. I will keep a closer eye on it and report back when it's doing that again | 23:06 |
fungi | thanks | 23:06 |
donnyd | probably take another month though | 23:06 |
donnyd | Hope all is well with you fungi :) | 23:06 |
fungi | oh, yep, tourists are gone, no more 'canes headed our way this year probably, time to kick back and relax | 23:08 |
fungi | i hope things are well with you too! | 23:08 |
ianw | does bridge.o.o really need to do " - name: Clone puppet modules to /etc/puppet/modules"? they're not used by bridge.o.o i don't think? | 23:10 |
*** diablo_rojo has quit IRC | 23:12 | |
*** tkajinam has joined #openstack-infra | 23:12 | |
*** ramishra has quit IRC | 23:12 | |
fungi | does it copy those to remote nodes or only tell remote nodes to retrieve them? | 23:13 |
fungi | at one time we pushed copies of puppet modules to remote nodes | 23:13 |
clarkb | fungi: donnyd I think min ready is only served by rax right now | 23:14 |
clarkb | because nodepool doesn't know how to distribute that | 23:14 |
clarkb | is it possible we are leaking instances in fn and they count against our quota but nova api doesn't show them to nodepool anymore? | 23:14 |
clarkb | ianw: I believe we copy the master copies of the modules from bridge to the individual nodes | 23:14 |
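If that is right, the mechanism would look something like the following task; this is a sketch only, with assumed paths and task name rather than the real role in system-config:

    # Sketch: push the master module checkouts from bridge to a remote
    # puppet-managed host; src/dest are assumptions, not the real role.
    - name: Copy puppet modules to remote nodes
      synchronize:
        src: /etc/puppet/modules/
        dest: /etc/puppet/modules/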
*** ociuhandu has joined #openstack-infra | 23:15 | |
*** diablo_rojo has joined #openstack-infra | 23:15 | |
*** rcernin has joined #openstack-infra | 23:17 | |
ianw | ok, that could be right. not going to get sidetracked :) | 23:19 |
*** ociuhandu has quit IRC | 23:20 | |
*** ociuhandu has joined #openstack-infra | 23:24 | |
*** harlowja has quit IRC | 23:33 | |
*** ociuhandu has quit IRC | 23:33 | |
*** ociuhandu has joined #openstack-infra | 23:34 | |
*** harlowja has joined #openstack-infra | 23:35 | |
*** mriedem has quit IRC | 23:37 | |
*** dchen has joined #openstack-infra | 23:37 | |
*** ociuhandu has quit IRC | 23:39 | |
*** ijw_ has quit IRC | 23:41 | |
openstackgerrit | Merged zuul/zuul-jobs master: build-container-image: support sibling copy https://review.opendev.org/697936 | 23:42 |
*** rlandy has quit IRC | 23:43 | |
clarkb | when I was doing nodepool launcher restarts I noticed that we may have leaked some volumes in vexxhost sjc1. I think some of those leaks are related to the instances that shrews found won't delete, but there are others that don't appear to be tied to specific instances anymore | 23:44 |
clarkb | I'm going to try and clean those unassociated volumes now | 23:44 |
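A cautious sketch of that cleanup with openstacksdk (the library already in play here); the cloud name is an assumption and the actual delete is left commented out:

    import openstack

    # 'vexxhost' assumes a matching clouds.yaml entry on the host.
    conn = openstack.connect(cloud='vexxhost')

    for volume in conn.block_storage.volumes():
        # 'available' volumes with no attachments are the unassociated
        # ones; anything attached or mid-delete is left alone.
        if volume.status == 'available' and not volume.attachments:
            print('candidate for deletion:', volume.id, volume.name)
            # conn.block_storage.delete_volume(volume)  # uncomment to act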
*** ociuhandu has joined #openstack-infra | 23:44 | |
openstackgerrit | Merged opendev/system-config master: mirror: remove debug output of apache config https://review.opendev.org/698104 | 23:45 |
openstackgerrit | Merged zuul/zuul-jobs master: build-docker-image: fix up siblings copy https://review.opendev.org/697614 | 23:49 |
*** ociuhandu has quit IRC | 23:49 | |
openstackgerrit | Matt McEuen proposed openstack/project-config master: New project: ansible-role-airship https://review.opendev.org/698114 | 23:50 |
*** goldyfruit_ has quit IRC | 23:50 | |
*** ociuhandu has joined #openstack-infra | 23:54 | |
*** tosky has quit IRC | 23:56 | |
openstackgerrit | Matt McEuen proposed openstack/project-config master: New project: go-redfish https://review.opendev.org/698115 | 23:59 |