*** jamesmcarthur has joined #openstack-meeting | 00:03 | |
*** macz has quit IRC | 00:05 | |
*** jamesmcarthur has quit IRC | 00:08 | |
*** mattw4 has quit IRC | 00:12 | |
*** mmethot_ has quit IRC | 00:21 | |
*** mmethot has joined #openstack-meeting | 00:22 | |
*** jamesmcarthur has joined #openstack-meeting | 00:37 | |
*** jamesmcarthur has quit IRC | 00:42 | |
*** Liang__ has joined #openstack-meeting | 01:02 | |
*** Liang__ is now known as LiangFang | 01:04 | |
*** nanzha has joined #openstack-meeting | 01:10 | |
*** ociuhandu has joined #openstack-meeting | 01:12 | |
*** igordc has quit IRC | 01:13 | |
*** nanzha has quit IRC | 01:19 | |
*** nanzha has joined #openstack-meeting | 01:19 | |
*** ociuhandu has quit IRC | 01:28 | |
*** jamesmcarthur has joined #openstack-meeting | 01:39 | |
*** ociuhandu has joined #openstack-meeting | 01:40 | |
*** yaawang has joined #openstack-meeting | 01:41 | |
*** jamesmcarthur has quit IRC | 01:43 | |
*** ociuhandu has quit IRC | 01:44 | |
*** ricolin has joined #openstack-meeting | 01:46 | |
*** eharney has quit IRC | 01:49 | |
*** armax has quit IRC | 01:57 | |
*** nanzha has quit IRC | 01:58 | |
*** nanzha has joined #openstack-meeting | 02:10 | |
*** jamesmcarthur has joined #openstack-meeting | 02:10 | |
*** brinzhang has joined #openstack-meeting | 02:15 | |
*** jamesmcarthur has quit IRC | 02:15 | |
*** mhen has quit IRC | 02:24 | |
*** bbowen has joined #openstack-meeting | 02:25 | |
*** armax has joined #openstack-meeting | 02:31 | |
*** yaawang has quit IRC | 02:36 | |
*** ociuhandu has joined #openstack-meeting | 02:38 | |
*** yaawang has joined #openstack-meeting | 02:38 | |
*** EmilienM|PTO is now known as EmilienM | 02:42 | |
*** ociuhandu has quit IRC | 02:48 | |
*** jamesmcarthur has joined #openstack-meeting | 02:52 | |
*** nanzha has quit IRC | 02:52 | |
*** nanzha has joined #openstack-meeting | 02:53 | |
*** jamesmcarthur has quit IRC | 02:57 | |
*** tonyb has joined #openstack-meeting | 03:00 | |
*** diablo_rojo has quit IRC | 03:02 | |
*** ricolin has quit IRC | 03:03 | |
*** apetrich has quit IRC | 03:09 | |
*** artom has quit IRC | 03:20 | |
*** artom has joined #openstack-meeting | 03:21 | |
*** jamesmcarthur has joined #openstack-meeting | 03:29 | |
*** jamesmcarthur has quit IRC | 03:34 | |
*** ociuhandu has joined #openstack-meeting | 03:43 | |
*** ociuhandu has quit IRC | 03:47 | |
*** nanzha has quit IRC | 03:50 | |
*** ociuhandu has joined #openstack-meeting | 03:53 | |
*** nanzha has joined #openstack-meeting | 03:55 | |
*** tpatil has joined #openstack-meeting | 03:56 | |
*** hongbin has joined #openstack-meeting | 04:00 | |
*** kiyofujin has joined #openstack-meeting | 04:01 | |
*** ociuhandu has quit IRC | 04:01 | |
*** tashiromt has joined #openstack-meeting | 04:02 | |
*** ociuhandu has joined #openstack-meeting | 04:02 | |
tpatil | #startmeeting masakari | 04:05 |
---|---|---|
openstack | Meeting started Tue Nov 26 04:05:41 2019 UTC and is due to finish in 60 minutes. The chair is tpatil. Information about MeetBot at http://wiki.debian.org/MeetBot. | 04:05 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 04:05 |
*** openstack changes topic to " (Meeting topic: masakari)" | 04:05 | |
openstack | The meeting name has been set to 'masakari' | 04:05 |
tpatil | Roll call? | 04:05 |
*** ociuhandu has quit IRC | 04:07 | |
*** ricolin has joined #openstack-meeting | 04:08 | |
tashiromt | Hi | 04:09 |
kiyofujin | hi | 04:09 |
tpatil | kiyofujin, tashiromt: Hi | 04:10 |
tpatil | Do you have anything for discussion? | 04:12 |
tpatil | As samP is not available, let's end this meeting. If there is something urgent, please use openstack-discuss ML or #openstack-masakari IRC. | 04:14 |
tashiromt | I don't have any topic today. | 04:14 |
tpatil | Ok | 04:14 |
tpatil | #endmeeting | 04:15 |
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/" | 04:15 | |
openstack | Meeting ended Tue Nov 26 04:15:01 2019 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 04:15 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/masakari/2019/masakari.2019-11-26-04.05.html | 04:15 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/masakari/2019/masakari.2019-11-26-04.05.txt | 04:15 |
openstack | Log: http://eavesdrop.openstack.org/meetings/masakari/2019/masakari.2019-11-26-04.05.log.html | 04:15 |
*** kiyofujin has quit IRC | 04:16 | |
*** ricolin has quit IRC | 04:29 | |
*** dtrainor_ is now known as dtrainor | 04:29 | |
*** jamesmcarthur has joined #openstack-meeting | 04:31 | |
*** Lucas_Gray has joined #openstack-meeting | 04:33 | |
*** jamesmcarthur has quit IRC | 04:35 | |
*** Lucas_Gray has quit IRC | 04:39 | |
*** Wryhder has joined #openstack-meeting | 04:39 | |
*** Wryhder is now known as Lucas_Gray | 04:40 | |
*** tpatil has quit IRC | 04:42 | |
*** tashiromt has quit IRC | 04:43 | |
*** nanzha has quit IRC | 05:20 | |
*** Lucas_Gray has quit IRC | 05:21 | |
*** nanzha has joined #openstack-meeting | 05:24 | |
*** ociuhandu has joined #openstack-meeting | 05:30 | |
*** brinzhang_ has joined #openstack-meeting | 05:31 | |
*** ociuhandu has quit IRC | 05:35 | |
*** brinzhang has quit IRC | 05:35 | |
*** brinzhang has joined #openstack-meeting | 05:36 | |
*** brinzhang_ has quit IRC | 05:36 | |
*** brinzhang has quit IRC | 05:37 | |
*** brinzhang has joined #openstack-meeting | 05:37 | |
*** brinzhang has quit IRC | 05:46 | |
*** markmcclain has quit IRC | 05:56 | |
*** markmcclain has joined #openstack-meeting | 05:57 | |
*** links has joined #openstack-meeting | 05:59 | |
*** nanzha has quit IRC | 06:05 | |
*** nanzha has joined #openstack-meeting | 06:06 | |
*** jamesmcarthur has joined #openstack-meeting | 06:07 | |
*** jamesmcarthur has quit IRC | 06:12 | |
*** hongbin has quit IRC | 06:18 | |
*** pcaruana has joined #openstack-meeting | 06:32 | |
*** nanzha has quit IRC | 06:45 | |
*** whoami-rajat__ has joined #openstack-meeting | 06:48 | |
*** nanzha has joined #openstack-meeting | 06:54 | |
*** nanzha has quit IRC | 06:59 | |
*** nanzha has joined #openstack-meeting | 07:08 | |
*** keiko-k has joined #openstack-meeting | 07:17 | |
*** nanzha has quit IRC | 07:20 | |
*** nanzha has joined #openstack-meeting | 07:21 | |
*** slaweq has joined #openstack-meeting | 07:23 | |
*** keiko-k has quit IRC | 07:23 | |
*** keiko-k has joined #openstack-meeting | 07:27 | |
*** slaweq has quit IRC | 07:28 | |
*** apetrich has joined #openstack-meeting | 07:29 | |
*** apetrich has quit IRC | 07:34 | |
*** nanzha has quit IRC | 07:44 | |
*** tpatil has joined #openstack-meeting | 07:56 | |
*** slaweq has joined #openstack-meeting | 07:57 | |
*** nanzha has joined #openstack-meeting | 07:58 | |
*** takahashi-tsc has joined #openstack-meeting | 07:59 | |
*** ykatabam has quit IRC | 07:59 | |
*** brinzhang has joined #openstack-meeting | 08:03 | |
*** tesseract has joined #openstack-meeting | 08:16 | |
*** tpatil has quit IRC | 08:23 | |
*** rubasov has quit IRC | 08:26 | |
*** rubasov has joined #openstack-meeting | 08:27 | |
*** takahashi-tsc has quit IRC | 08:27 | |
*** nanzha has quit IRC | 08:33 | |
*** brinzhang has quit IRC | 08:38 | |
*** keiko-k has quit IRC | 08:41 | |
*** nanzha has joined #openstack-meeting | 08:43 | |
*** ralonsoh has joined #openstack-meeting | 08:53 | |
*** hyunsikyang has joined #openstack-meeting | 08:56 | |
*** priteau has joined #openstack-meeting | 08:58 | |
*** macz has joined #openstack-meeting | 08:59 | |
*** macz has quit IRC | 09:03 | |
*** tssurya has joined #openstack-meeting | 09:04 | |
*** ociuhandu has joined #openstack-meeting | 09:04 | |
*** apetrich has joined #openstack-meeting | 09:04 | |
*** ociuhandu has quit IRC | 09:08 | |
*** nanzha has quit IRC | 09:09 | |
*** nanzha has joined #openstack-meeting | 09:10 | |
*** nanzha has quit IRC | 09:29 | |
*** nanzha has joined #openstack-meeting | 09:33 | |
*** ociuhandu has joined #openstack-meeting | 09:37 | |
*** SotK has quit IRC | 09:37 | |
*** ociuhandu has quit IRC | 09:43 | |
*** ociuhandu has joined #openstack-meeting | 09:44 | |
*** ykatabam has joined #openstack-meeting | 09:44 | |
*** SotK has joined #openstack-meeting | 09:46 | |
*** ociuhandu has quit IRC | 09:48 | |
*** nanzha has quit IRC | 09:59 | |
*** ociuhandu has joined #openstack-meeting | 10:00 | |
*** LiangFang has quit IRC | 10:09 | |
*** nanzha has joined #openstack-meeting | 10:09 | |
*** electrofelix has joined #openstack-meeting | 10:09 | |
*** e0ne has joined #openstack-meeting | 10:22 | |
*** apetrich has quit IRC | 10:49 | |
*** lpetrut has joined #openstack-meeting | 10:52 | |
*** takamatsu has quit IRC | 10:55 | |
*** ociuhandu has quit IRC | 11:03 | |
*** nanzha has quit IRC | 11:07 | |
*** nanzha has joined #openstack-meeting | 11:10 | |
*** nanzha has quit IRC | 11:17 | |
*** tetsuro has quit IRC | 11:18 | |
*** priteau has quit IRC | 11:19 | |
*** nanzha has joined #openstack-meeting | 11:24 | |
*** apetrich has joined #openstack-meeting | 11:27 | |
*** apetrich has quit IRC | 11:27 | |
*** apetrich has joined #openstack-meeting | 11:29 | |
*** nanzha has quit IRC | 11:29 | |
*** nanzha has joined #openstack-meeting | 11:29 | |
*** whoami-rajat__ has quit IRC | 11:35 | |
*** rcernin has quit IRC | 11:40 | |
*** ociuhandu has joined #openstack-meeting | 11:41 | |
*** macz has joined #openstack-meeting | 11:41 | |
*** electrofelix has quit IRC | 11:43 | |
*** electrofelix has joined #openstack-meeting | 11:43 | |
*** macz has quit IRC | 11:45 | |
*** ociuhandu has quit IRC | 11:46 | |
*** e0ne has quit IRC | 11:48 | |
*** whoami-rajat__ has joined #openstack-meeting | 11:48 | |
*** e0ne has joined #openstack-meeting | 11:48 | |
*** ykatabam has quit IRC | 11:51 | |
*** raildo has joined #openstack-meeting | 11:53 | |
*** whoami-rajat__ has quit IRC | 11:54 | |
*** rfolco has joined #openstack-meeting | 12:05 | |
*** kopecmartin has joined #openstack-meeting | 12:06 | |
*** apetrich has quit IRC | 12:23 | |
*** enriquetaso has joined #openstack-meeting | 12:25 | |
*** Lucas_Gray has joined #openstack-meeting | 12:26 | |
*** nanzha has quit IRC | 12:45 | |
*** enriquetaso has quit IRC | 12:45 | |
*** jawad_axd has joined #openstack-meeting | 12:48 | |
*** enriquetaso has joined #openstack-meeting | 12:49 | |
*** enriquetaso has quit IRC | 13:08 | |
*** apetrich has joined #openstack-meeting | 13:14 | |
*** ociuhandu has joined #openstack-meeting | 13:20 | |
*** liuyulong has joined #openstack-meeting | 13:23 | |
*** ociuhandu has quit IRC | 13:26 | |
*** raildo has quit IRC | 13:28 | |
*** raildo has joined #openstack-meeting | 13:29 | |
*** mriedem has joined #openstack-meeting | 13:35 | |
*** jamesdenton has joined #openstack-meeting | 13:46 | |
*** rfolco has quit IRC | 13:47 | |
*** rfolco has joined #openstack-meeting | 13:49 | |
*** rfolco has quit IRC | 13:50 | |
*** bbowen has quit IRC | 14:17 | |
*** bbowen has joined #openstack-meeting | 14:20 | |
*** ociuhandu has joined #openstack-meeting | 14:22 | |
*** ociuhandu has quit IRC | 14:27 | |
*** rfolco has joined #openstack-meeting | 14:28 | |
*** links has quit IRC | 14:32 | |
*** enriquetaso has joined #openstack-meeting | 14:40 | |
*** Lucas_Gray has quit IRC | 14:45 | |
*** jawad_axd has quit IRC | 14:48 | |
*** ociuhandu has joined #openstack-meeting | 14:49 | |
*** ociuhandu has quit IRC | 14:54 | |
*** rbudden has joined #openstack-meeting | 15:15 | |
*** ociuhandu has joined #openstack-meeting | 15:19 | |
*** ociuhandu has quit IRC | 15:28 | |
*** ociuhandu has joined #openstack-meeting | 15:42 | |
*** diablo_rojo has joined #openstack-meeting | 15:53 | |
*** mriedem has quit IRC | 15:55 | |
*** rsimai is now known as rsimai_away | 15:58 | |
*** mriedem has joined #openstack-meeting | 15:59 | |
slaweq | #startmeeting neutron_ci | 16:00 |
openstack | Meeting started Tue Nov 26 16:00:04 2019 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot. | 16:00 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 16:00 |
*** openstack changes topic to " (Meeting topic: neutron_ci)" | 16:00 | |
openstack | The meeting name has been set to 'neutron_ci' | 16:00 |
njohnston | o/ | 16:00 |
slaweq | hi | 16:00 |
bcafarel | o/ | 16:01 |
slaweq | liuyulong: ralonsoh: CI meeting, are You around? | 16:02 |
ralonsoh | hi sorry | 16:03 |
slaweq | ok, lets start | 16:03 |
slaweq | Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate | 16:03 |
slaweq | please open the link first | 16:03 |
slaweq | #topic Actions from previous meetings | 16:04 |
*** openstack changes topic to "Actions from previous meetings (Meeting topic: neutron_ci)" | 16:04 | |
slaweq | first one is | 16:04 |
slaweq | njohnston prepare etherpad to track stadium progress for zuul v3 job definition and py2 support drop | 16:04 |
njohnston | Done! Do you want to go over it now or in the stadium section | 16:04 |
njohnston | ? | 16:05 |
njohnston | #link https://etherpad.openstack.org/p/neutron-train-zuulv3-py27drop | 16:05 |
slaweq | thx njohnston | 16:06 |
slaweq | we can go over this list in stadium section | 16:06 |
njohnston | sounds good | 16:06 |
slaweq | we have also second action item | 16:07 |
slaweq | njohnston to check failing NetworkMigrationFromHA in multinode dvr job | 16:07 |
njohnston | working on that this morning, haven't gotten to the bottom of it yet | 16:07 |
slaweq | ok, sure | 16:08 |
njohnston | I didn't get to it until about 30 minutes ago :( | 16:08 |
slaweq | ok | 16:08 |
slaweq | can I assign it to You for next week also as a reminder? | 16:09 |
njohnston | sure | 16:09 |
slaweq | #action njohnston to check failing NetworkMigrationFromHA in multinode dvr job | 16:09 |
slaweq | thx | 16:09 |
slaweq | and those are all the actions from last week for today | 16:09 |
slaweq | #topic Stadium projects | 16:10 |
*** openstack changes topic to "Stadium projects (Meeting topic: neutron_ci)" | 16:10 | |
slaweq | tempest-plugins migration | 16:11 |
slaweq | njohnston: recently pushed phase 2 patch for neutron-dynamic-routing: https://review.opendev.org/#/c/695014/ | 16:11 |
slaweq | I already +2 it | 16:11 |
slaweq | please also review this one | 16:11 |
slaweq | after this is merged, vpnaas will be the only one left to do | 16:11 |
slaweq | and I know mlavalle is making some progress with it | 16:12 |
bcafarel | the removal step is usually easier yes | 16:12 |
slaweq | bcafarel: exactly | 16:13 |
njohnston | So just to show the link again: https://etherpad.openstack.org/p/neutron-train-zuulv3-py27drop | 16:13 |
slaweq | so this should be basically all about this migration of plugins | 16:13 |
slaweq | yes, lets go with Your link njohnston | 16:14 |
njohnston | I went through all the projects looking for legacy zuul jobs to convert to zuulv3 | 16:14 |
*** ijw has joined #openstack-meeting | 16:14 | |
njohnston | For py27, I wanted to note that the goal is just to drop testing and other flags (like in setup.cfg), not necessarily to merge code hostile to py27 at this time | 16:14 |
njohnston | Also there is an agreed-upon phased approach, so I designated each project a phase for when to remove support | 16:15 |
njohnston | We are in phase 1, phase 2 starts at week R-22 | 16:15 |
njohnston | Obviously there is a lot of content in the etherpad, are there specific points or projects people would like to discuss? | 16:16 |
njohnston | I specifically exempted networking-ovn since master development is going to cease there | 16:16 |
njohnston | and networking-tempest-plugin because the branchless tempest makes py27 something dependent on tempest and I need to check exactly where we are on that | 16:17 |
slaweq | one thing about the legacy to zuulv3 conversion | 16:17 |
slaweq | all grenade jobs are still legacy | 16:17 |
slaweq | so we will need to convert them but it has to be done first in the grenade repo | 16:17 |
njohnston | What work do we need to do in order to convert them, if the job definition comes from the grenade repo? | 16:17 |
slaweq | for jobs which are defined in the grenade repo I don't think we need to do anything | 16:18 |
slaweq | but we have e.g. the neutron-grenade-multinode job which is defined in the neutron repo | 16:18 |
njohnston | For this survey I only examined jobs that are natively defined in the repo, so I did not consider grenade at all except where jobs were defined natively for grenade like networking-midonet-grenade-ml2 or networking-odl-grenade | 16:18 |
slaweq | and we will need to convert it at some point | 16:19 |
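(For context, a hedged sketch of what the converted definition might eventually look like; the parent job name is an assumption, since the native Zuul v3 parent still has to be written in the grenade repo first:)

```yaml
- job:
    name: neutron-grenade-multinode
    parent: grenade-multinode   # assumed name of a future native zuulv3 job in the grenade repo
    required-projects:
      - openstack/grenade
      - openstack/neutron
```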
bcafarel | any specific work items we should start looking into first? or just pick one/add our name/mark as done steps | 16:19 |
*** jamesmcarthur has joined #openstack-meeting | 16:19 | |
njohnston | pick anything in a phase 1 project | 16:19 |
slaweq | ok, I will try to pick some of those soon too | 16:20 |
njohnston | thanks! | 16:20 |
bcafarel | ack I'll clean sfc py2 stuff as a start | 16:21 |
slaweq | thanks for preparing this etherpad - it's a great list of things to do | 16:21 |
slaweq | :) | 16:21 |
njohnston | this is a good example of a py27 change: https://review.opendev.org/#/c/692031/ | 16:21 |
njohnston | I'll link that at the top | 16:21 |
bcafarel | +1 I was looking at neutron removal, but better if we have a sample for stadium project | 16:22 |
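(As a generic illustration, not the linked change itself: such a patch usually just trims the py2 markers from the packaging and test configs, roughly like the hunks below, with exact env names varying per repo:)

```diff
--- a/setup.cfg
+++ b/setup.cfg
-    Programming Language :: Python :: 2
-    Programming Language :: Python :: 2.7
--- a/tox.ini
+++ b/tox.ini
-envlist = py27,py37,pep8
+envlist = py37,pep8
```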
slaweq | thx | 16:22 |
*** tesseract has quit IRC | 16:23 | |
slaweq | ok, do You have anything else regarding stadium projects? | 16:24 |
slaweq | or can we move on? | 16:24 |
njohnston | no, I think that is all. | 16:24 |
slaweq | ok, so lets move on | 16:25 |
slaweq | #topic Grafana | 16:25 |
*** openstack changes topic to "Grafana (Meeting topic: neutron_ci)" | 16:25 | |
slaweq | #link http://grafana.openstack.org/dashboard/db/neutron-failure-rate | 16:25 |
slaweq | the neutron-grenade job has had a high failure rate recently | 16:26 |
slaweq | but it is now removed from the queues as part of dropping py27 support | 16:26 |
njohnston | we'll have to see if the spike in e.g. check queue unit test jobs continues today | 16:29 |
slaweq | yes, and also functional/fullstack tests | 16:29 |
slaweq | but for functional and fullstack I saw at least a couple of changes today where the job failed due to the patch on which it was run | 16:30 |
slaweq | so those were not issues with the job itself | 16:30 |
njohnston | right, so hopefully we can wait and see it revert to mean | 16:30 |
slaweq | yes | 16:30 |
slaweq | from what I was checking today | 16:31 |
slaweq | I noticed only one issue which hits us quite often | 16:31 |
slaweq | https://bugs.launchpad.net/neutron/+bug/1850557 | 16:31 |
openstack | Launchpad bug 1850557 in neutron "DHCP connectivity after migration/resize not working" [Medium,Confirmed] - Assigned to Slawek Kaplonski (slaweq) | 16:31 |
slaweq | I just raised Importance of this bug to High | 16:31 |
slaweq | I was trying to reproduce it locally but I wasn't able to do that | 16:31 |
slaweq | but I will continue investigating this issue | 16:32 |
slaweq | and that's all what I have about grafana for today | 16:33 |
slaweq | anything else You want to add/ask? | 16:33 |
*** ociuhandu has quit IRC | 16:33 | |
ralonsoh | in https://bugs.launchpad.net/neutron/+bug/1850557 | 16:33 |
openstack | Launchpad bug 1850557 in neutron "DHCP connectivity after migration/resize not working" [High,Confirmed] - Assigned to Slawek Kaplonski (slaweq) | 16:33 |
ralonsoh | we have only one compute node | 16:33 |
ralonsoh | actually this is functional | 16:34 |
slaweq | there is one compute and one "all-in-one" node | 16:34 |
*** ociuhandu has joined #openstack-meeting | 16:34 | |
ralonsoh | ok so the OF rules should be the same | 16:34 |
slaweq | yes, and as I wrote in comment #2 DHCP request comes from VM to the DHCP server after migration | 16:35 |
slaweq | so connectivity at least in this direction works fine | 16:35 |
slaweq | I can't say where this response is lost exactly | 16:35 |
slaweq | #action slaweq to continue investigating issue https://bugs.launchpad.net/neutron/+bug/1850557 | 16:38 |
openstack | Launchpad bug 1850557 in neutron "DHCP connectivity after migration/resize not working" [High,Confirmed] - Assigned to Slawek Kaplonski (slaweq) | 16:38 |
slaweq | ok, lets move on | 16:38 |
slaweq | #topic Tempest/Scenario | 16:38 |
*** openstack changes topic to "Tempest/Scenario (Meeting topic: neutron_ci)" | 16:38 | |
slaweq | as I said already, I'm aware of only this one issue which impacts us now | 16:39 |
slaweq | but ralonsoh did You maybe have a chance to check my proposal of merging/dropping some tempest/neutron-tempest-plugin jobs? | 16:39 |
njohnston | cool | 16:39 |
ralonsoh | slaweq, yes but not enough time | 16:40 |
ralonsoh | slaweq, I created a list of tests, from both jobs | 16:40 |
ralonsoh | to see how many of them can be merged | 16:40 |
ralonsoh | but I didn't finish sorry | 16:40 |
*** rbudden has quit IRC | 16:40 | |
slaweq | ralonsoh: no problem, if You will have some time, please reply to my email about it and we will continue this discussion there | 16:41 |
ralonsoh | sure | 16:41 |
slaweq | thx a lot | 16:41 |
slaweq | and one last thing from me | 16:41 |
slaweq | like a heads-up | 16:41 |
slaweq | we are starting the process of merging the networking-ovn driver into neutron | 16:42 |
njohnston | +100 | 16:42 |
slaweq | so I pushed patch https://review.opendev.org/#/c/696094/ to move ovn jobs to neutron repo | 16:42 |
slaweq | it's not ready yet but please be aware of it and take a look at this patch when it is no longer WIP | 16:42 |
njohnston | ok | 16:42 |
slaweq | also I'm not sure if we shouldn't move from the .zuul.yaml file to a zuul.d/ directory | 16:42 |
slaweq | and add definitions of the different jobs there | 16:43 |
slaweq | as this .zuul.yaml file is getting to be a nightmare now | 16:43 |
slaweq | what do You think about such potential change? | 16:43 |
ralonsoh | slaweq, we can move it once we finish the migration | 16:43 |
njohnston | I think it's a good idea to do | 16:43 |
ralonsoh | but yes, it's better to have several files in a directory | 16:43 |
slaweq | ok, thx for supporting this idea | 16:44 |
slaweq | ralonsoh: I will try to do it "in parallel" to ovn-merge work | 16:44 |
ralonsoh | sure | 16:44 |
slaweq | if this will be merged first I will rebase my networking-ovn jobs patch | 16:44 |
slaweq | or I will rebase this "zuul.d dir" patch if needed :) | 16:44 |
slaweq | #action slaweq to move job definitions to zuul.d directory | 16:45 |
bcafarel | +1 to moving to split files in zuul.d later, it is easier to read | 16:45 |
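(For reference, Zuul treats a zuul.d/ directory as equivalent to a single .zuul.yaml and loads its *.yaml files in sorted order, so the split could look roughly like this; the file names below are only a suggestion:)

```
neutron/
  zuul.d/
    base.yaml      # shared job definitions
    grenade.yaml   # grenade-based jobs
    tempest.yaml   # tempest / neutron-tempest-plugin jobs
    project.yaml   # the project: stanza with check/gate/periodic pipelines
```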
*** macz has joined #openstack-meeting | 16:45 | |
slaweq | ok, that's all what I have for today | 16:45 |
slaweq | periodic jobs have been working very well for at least a few days now | 16:46 |
slaweq | so we are good there | 16:46 |
slaweq | do You have anything else to talk about now? | 16:46 |
slaweq | or if not, we can finish a bit earlier today | 16:46 |
ralonsoh | just 10 secs, to "sell" my patches, related to bugs in Neutron | 16:46 |
ralonsoh | https://review.opendev.org/#/c/695060/ | 16:46 |
ralonsoh | njohnston, ^^ as an expert in DBs | 16:46 |
ralonsoh | that's all | 16:46 |
njohnston | ralonsoh: I'll take a look right away | 16:46 |
slaweq | I even set it as "important change" to review :) | 16:47 |
ralonsoh | thanks!! | 16:47 |
njohnston | thanks for a great meeting all | 16:47 |
slaweq | ok, so I think we can finish earlier today | 16:48 |
slaweq | thx for attending | 16:48 |
slaweq | and see You online | 16:48 |
slaweq | o/ | 16:48 |
njohnston | o/ | 16:48 |
slaweq | #endmeeting | 16:48 |
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/" | 16:48 | |
openstack | Meeting ended Tue Nov 26 16:48:18 2019 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 16:48 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-11-26-16.00.html | 16:48 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-11-26-16.00.txt | 16:48 |
openstack | Log: http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-11-26-16.00.log.html | 16:48 |
ralonsoh | bye | 16:48 |
*** rbudden has joined #openstack-meeting | 16:59 | |
*** ociuhandu has quit IRC | 17:15 | |
*** njohnston is now known as njohnston|lunch | 17:19 | |
*** igordc has joined #openstack-meeting | 17:20 | |
*** jamesmcarthur has quit IRC | 17:22 | |
*** tobiash_ is now known as tobiash | 17:47 | |
*** tssurya has quit IRC | 18:12 | |
*** armax has quit IRC | 18:14 | |
*** ijw has quit IRC | 18:26 | |
*** njohnston|lunch is now known as njohnston | 18:33 | |
*** ijw has joined #openstack-meeting | 18:37 | |
*** ijw has quit IRC | 18:40 | |
*** ijw has joined #openstack-meeting | 18:40 | |
*** electrofelix has quit IRC | 18:50 | |
*** ralonsoh has quit IRC | 18:51 | |
*** e0ne has quit IRC | 18:51 | |
clarkb | anyone else here for the infra meeting? we'll get started shortly | 19:00 |
fungi | i'm here for something | 19:00 |
fungi | i suspect it's probably the infra meeting | 19:01 |
*** mkolesni has joined #openstack-meeting | 19:01 | |
ianw | o/ | 19:01 |
clarkb | #startmeeting infra | 19:01 |
mkolesni | hi | 19:01 |
openstack | Meeting started Tue Nov 26 19:01:17 2019 UTC and is due to finish in 60 minutes. The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot. | 19:01 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 19:01 |
*** openstack changes topic to " (Meeting topic: infra)" | 19:01 | |
openstack | The meeting name has been set to 'infra' | 19:01 |
clarkb | #link http://lists.openstack.org/pipermail/openstack-infra/2019-November/006528.html Our Agenda | 19:01 |
*** artom has quit IRC | 19:01 | |
clarkb | #topic Announcements | 19:01 |
*** openstack changes topic to "Announcements (Meeting topic: infra)" | 19:01 | |
clarkb | Just a note that this is a big holiday week(end) for those of us in the USA | 19:01 |
clarkb | I know many are already afk and I'll be afk no later than thursday :) | 19:02 |
fungi | i'll likely be increasingly busy for the next two days as well | 19:02 |
fungi | (much to my chagrin) | 19:03 |
*** ijw_ has joined #openstack-meeting | 19:03 | |
clarkb | I guess I should also mention that OSF individual board member elections are coming up and now is the nomination period | 19:03 |
*** artom has joined #openstack-meeting | 19:03 | |
clarkb | #topic Actions from last meeting | 19:04 |
*** openstack changes topic to "Actions from last meeting (Meeting topic: infra)" | 19:04 | |
clarkb | thank you fungi for running last week's meeting, I failed at accounting for DST changes when scheduling dentist visit | 19:04 |
clarkb | #link http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-11-19-19.05.txt minutes from last meeting | 19:04 |
clarkb | No actions recorded. Let's move on | 19:04 |
fungi | no sweat | 19:04 |
clarkb | #topic Priority Efforts | 19:04 |
*** openstack changes topic to "Priority Efforts (Meeting topic: infra)" | 19:04 | |
clarkb | #topic OpenDev | 19:05 |
fungi | i half-assed it in hopes nobody would ask me to run another meeting ;) | 19:05 |
*** openstack changes topic to "OpenDev (Meeting topic: infra)" | 19:05 | |
clarkb | #link https://etherpad.openstack.org/p/rCF58JvzbF Governance email draft | 19:05 |
clarkb | I've edited that draft after some input at the ptg | 19:05 |
clarkb | I think it is ready to go out and I'm mostly waiting for Monday (to avoid getting lost in the holiday) and for thoughts on who to send it to? | 19:05 |
*** dgroisma has joined #openstack-meeting | 19:06 | |
clarkb | I had thought about sending it to all the top level projects that are involved (at their -discuss mailing lists) | 19:06 |
clarkb | but worry that might create too many separate discussions? we unfortunately don't have a good centralized mechanism yet (though this proposal aims to create one) | 19:06 |
*** ijw has quit IRC | 19:06 | |
clarkb | I don't need an answer now, but if you have thoughts on the destination of that email please leave a note in the etherpad | 19:06 |
fungi | you could post it to the infra ml, and then post notices to other related project mailing lists suggesting discussion on the infra ml | 19:07 |
fungi | and linking to the copy in the infra ml archive | 19:07 |
fungi | that should hopefully funnel discussion into one place | 19:07 |
clarkb | I'm willing to try that. If there are no objections I'll give that a go Monday-ish | 19:07 |
clarkb | ianw: any movement on the gitea git clone thing tonyb has run into? | 19:08 |
fungi | we upgraded gitea and can still reproduce it, right? | 19:08 |
ianw | not really, except it still happens with 1.9.6 | 19:08 |
clarkb | fungi: yes | 19:08 |
*** lpetrut has quit IRC | 19:08 | |
ianw | tonyb uploaded a repo that replicates it for all who tried | 19:09 |
clarkb | across different git versions and against specific and different backends | 19:09 |
ianw | it seems we know what happens; git dies on the gitea end (without anything helpful) and the remote end doesn't notice and sits waiting ~ forever | 19:09 |
clarkb | which I think points to a bug in the gitea change to use go-git | 19:09 |
clarkb | ianw: and tcpdump doesn't show any Fins from gitea | 19:10 |
clarkb | we end up playing ack pong with each side acking bits that were previously transferred (to keep the tcp connection open) | 19:10 |
ianw | i'm not sure on go-git; it seems it's a result of "git upload-pack" dying, which is (afaics) basically just system() called out to | 19:10 |
clarkb | ah | 19:10 |
fungi | how long is that upload-pack call running, do we know? | 19:11 |
fungi | could it be that go-git decides it's taking too long and kills it? | 19:11 |
clarkb | when I reproduce it takes about 20 seconds to hit the failure case | 19:11 |
*** dgroisma has quit IRC | 19:11 | |
clarkb | I don't think that is long enough for gitea to be killing it. However we can double check those timeout values | 19:12 |
ianw | from watching what happens, it seems to chunk the calls and so about 9 go through, then the 10th (or so) fails quickly | 19:12 |
ianw | we see the message constantly in the logs; but there don't seem to be that many reports of issues, though, only tonyb | 19:12 |
fungi | this is observed straight to the gitea socket, no apache or anything proxying it right? | 19:12 |
clarkb | fungi: correct | 19:12 |
fungi | ianw: there can be only one tonyb | 19:13 |
clarkb | (there is no apache in our gitea setup. just haproxy to gitea fwiw) | 19:13 |
fungi | that's what i thought, thanks | 19:13 |
ianw | i think we probably need custom builds with better debugging around the problem area to make progress | 19:15 |
clarkb | I guess the next step is to try and see why upload-pack fails (strace it maybe?) and then trace back up through gitea to see if it is the cause or simply not handling the failure properly? | 19:15 |
clarkb | I would expect that gitea should close the tcp connection if the git process under it failed | 19:15 |
ianw | yeah, i have an strace in the github bug, that was sort of how we got started | 19:15 |
clarkb | ah | 19:15 |
ianw | it turns out the error message is ascii bytes in decimal, which when you decode is actually a base-64 string, which when decoded, shows the same message captured by the strace :) | 19:16 |
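(To illustrate the double decoding described here; the message text is a stand-in, not the real log content:)

```python
import base64

# Stand-in for the real error: the gitea log printed it as space-separated
# decimal byte values of a base64-encoded string.
hidden = base64.b64encode(b"upload-pack: error reading pack data").decode()
logged = " ".join(str(ord(c)) for c in hidden)

# Reverse both layers: decimal bytes -> ascii text -> base64 -> original message
ascii_text = "".join(chr(int(b)) for b in logged.split())
print(base64.b64decode(ascii_text).decode())
```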
*** dgroisma has joined #openstack-meeting | 19:16 | |
clarkb | wow | 19:16 |
ianw | i know mordred already has 1.10 patches up | 19:17 |
ianw | i'm not sure if we want to spend effort on old releases? | 19:17 |
clarkb | yeah there were a few issues he had to work through, but maybe we address those and get to 1.10 then try to push upstream to help us debug further? | 19:17 |
fungi | that sounds good | 19:17 |
clarkb | seems like a good next step. lets move on | 19:18 |
clarkb | #topic Update Config Management | 19:18 |
*** openstack changes topic to "Update Config Management (Meeting topic: infra)" | 19:18 | |
clarkb | zbr_ had asked about helping out on the mailing list and I tried to point to this topic | 19:18 |
clarkb | Long story short if you'd like to help us uplift our puppet into ansible and containers we appreciate the help greatly. Also most of the work can be done without root as we have a fairly robust testing system set up which will allow you to test it all before merging anything | 19:19 |
fungi | it was a great read | 19:19 |
clarkb | Then once merged an infra-root can help deploy to production | 19:19 |
ianw | ++ i think most tasks there stand-alone, have templates (i should reply with some prior examples) and are gate-testable with our cool testinfra setup | 19:19 |
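(A minimal example of the kind of gate test meant here, assuming the usual testinfra conventions; the service name is illustrative:)

```python
# testinfra check run against the node the gate job deploys to
def test_service_running(host):
    svc = host.service("zuul-scheduler")   # illustrative service name
    assert svc.is_running
    assert svc.is_enabled
```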
clarkb | That was all I had on this topic. Anyone have other related items? | 19:20 |
*** eharney has joined #openstack-meeting | 19:20 | |
clarkb | #topic Storyboard | 19:21 |
*** openstack changes topic to "Storyboard (Meeting topic: infra)" | 19:21 | |
clarkb | fungi: diablo_rojo anything to mention about storyboard? | 19:21 |
fungi | the api support for attachments merged | 19:22 |
fungi | next step there is to negotiate and create a swift container for storyboard-dev to use | 19:22 |
clarkb | exciting | 19:22 |
fungi | then the storyboard-webclient draft builds of the client side implementation for story attachments should be directly demonstrable | 19:23 |
fungi | (now that we've got the drafts working correctly again after the logs.o.o move) | 19:23 |
fungi | i guess we can also mention that the feature to allow regular expressions for cors and webclient access in the api merged | 19:24 |
fungi | since that's what we needed to solve that challenge | 19:24 |
fungi | so storyboard-dev.openstack.org now allows webclient builds to connect and be usable from anywhere, including your local system i suspect | 19:25 |
*** enriquetaso has quit IRC | 19:25 | |
fungi | (though i haven't tested that bit, not sure if it needs to be a publicly reachable webclient to make openid work correctly) | 19:25 |
clarkb | sounds like good progress on a couple fronts there | 19:25 |
fungi | any suggestions on where we should put the attachments for storyboard-dev? | 19:26 |
fungi | i know we have a few places we're using for zuul build logs now | 19:26 |
clarkb | maybe vexxhost would be willing to host storyboard attachments as I expect there will be much less of them than job log files? | 19:26 |
fungi | for production we need to make sure it's a container which has public indexing disabled | 19:27 |
fungi | less critical for storyboard-dev but important for production | 19:27 |
clarkb | fungi: I think we control that at a container level | 19:27 |
clarkb | (via x-meta settings) | 19:27 |
fungi | (to ensure just anyone can't browse the container and find attachments for private stories) | 19:28 |
fungi | cool | 19:28 |
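(A hedged sketch of the container-level controls being referred to; the container name is made up and the exact approach for production was still being decided:)

```shell
# public read of objects but no public listing (no .rlistings in the read ACL)
swift post storyboard-attachments-dev --read-acl '.r:*'
# and keep staticweb-style listings off via container metadata
swift post storyboard-attachments-dev --meta 'Web-Listings: false'
```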
fungi | and yeah, again for storyboard-dev i don't think we care if we lose attachment objects | 19:28 |
fungi | for production there wouldn't be an expiration on them though, unlike build logs | 19:28 |
fungi | maybe we should work out a cross-cloud backup solution for that | 19:29 |
fungi | to guard against unexpected data loss | 19:29 |
clarkb | I think swift supports that somehow too, but maybe we also have storyboard write twice? | 19:29 |
fungi | yeah, we could probably fairly easily make it write a backup to a second swift endpoint/container | 19:30 |
*** dtroyer has joined #openstack-meeting | 19:30 | |
fungi | that at least gets us disaster recovery (though not rollback) | 19:31 |
fungi | certainly enough to guard against a provider suddenly going away or suffering a catastrophic issue though | 19:31 |
fungi | anyway, that's probably it for storyboard updates | 19:32 |
clarkb | #topic General Topics | 19:32 |
*** openstack changes topic to "General Topics (Meeting topic: infra)" | 19:32 | |
clarkb | fungi: anything new re wiki? | 19:32 |
fungi | nope, keep failing to find time to move it forward | 19:33 |
clarkb | ianw: for static replacement are we ready to start creating new volumes? | 19:34 |
clarkb | I think afs server is fully recovered from the outage? and we are releasing volumes successfully | 19:34 |
ianw | yes, i keep meaning to do it, maybe give me an action item so i don't forget again | 19:34 |
fungi | yeah, some releases still take a *really* long time, but they're not getting stuck any longer | 19:34 |
clarkb | #action ianw create AFS volumes for static.o.o replacement | 19:35 |
fungi | though on a related note, we need to get reprepro puppetry translated to ansible so we can move our remaining mirroring to the mirror-update server. none of the reprepro mirrors currently take advantage of the remote release mechanism | 19:35 |
ianw | fungi: yeah, the "wait 20 minutes from last write" we're trying with fedora isn't working | 19:35 |
ianw | yeah, i started a little on reprepro but not pushed it yet, i don't think it's too hard | 19:36 |
fungi | i think it shouldn't be too hard, it's basically a package, a handful of templated configs, maybe some precreated directories, and then cronjobs | 19:36 |
*** jamesmcarthur has joined #openstack-meeting | 19:36 | |
clarkb | its mostly about getting files in the correct places | 19:36 |
clarkb | there are a lot of files but otherwise not too bad | 19:36 |
fungi | and a few wrapper scripts | 19:36 |
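(A rough sketch of that translation as an Ansible task list; apart from the package name, the paths, template names and schedule are assumptions rather than the eventual system-config layout:)

```yaml
- name: Install reprepro
  package:
    name: reprepro
    state: present

- name: Deploy reprepro distribution config
  template:
    src: ubuntu-distributions.j2
    dest: /etc/reprepro/ubuntu/distributions

- name: Schedule the mirror update
  cron:
    name: reprepro-ubuntu
    minute: "0"
    hour: "*/2"
    job: /usr/local/bin/reprepro-mirror-update ubuntu
```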
clarkb | Next up is the tox python version default changing due to python used to install tox | 19:37 |
clarkb | #link http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010957.html | 19:37 |
clarkb | ianw: fwiw I agree that the underlying issue is tox targets that require a specific python version and don't specify what that is | 19:37 |
clarkb | these tox configs are broken anywhere someone has installed tox with python3 instead of 2 | 19:38 |
ianw | yeah, i just wanted to call out that there wasn't too much of a response, so i think we can leave it as is | 19:38 |
clarkb | wfm | 19:38 |
fungi | yep, with my openstack hat on (not speaking for the stable reviewers though) i feel like updating stable branch tox.ini files to be more explicit shouldn't be a concern | 19:38 |
fungi | there's already an openstack stable branch policy carve-out for updating testing-related configuration in stable branches | 19:39 |
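(The explicitness being discussed amounts to a one-line basepython setting per affected env; the env and command shown are only examples:)

```ini
[testenv:pep8]
# pin the interpreter so the env no longer depends on which python installed tox
basepython = python3
commands = flake8 {posargs}
```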
clarkb | I think we're just going to have to accept there will be bumps on the way to migrating away from python2 | 19:39 |
clarkb | and we've run into other bumps too so this isn't unique | 19:39 |
clarkb | And that takes us to mkolesni's topic | 19:40 |
clarkb | Hosting submariner on opendev.org. | 19:40 |
mkolesni | thanks | 19:40 |
clarkb | I think we'd be happy to have you but there were questions about CI? | 19:40 |
mkolesni | dgroisma, you there? | 19:40 |
fungi | mkolesni: thanks for sticking around through 40 minutes of other discussion ;) | 19:40 |
mkolesni | fungi, no prob :) | 19:40 |
mkolesni | let me wake dgroisma ;) | 19:40 |
dgroisma | we wanted to ask what it takes to move some of our repos to opendev.org | 19:41 |
mkolesni | currently we have all our ci in travis on the github | 19:41 |
clarkb | The git import is usually pretty painless. We point our gerrit management scripts at an existing repo source that is publicly accessible and suck the existing repo content into gerrit. This does not touch the existing PRs or issues though | 19:42 |
dgroisma | there are many question around ci, main one if we could keep using travis | 19:42 |
clarkb | For CI we run Zuul and as far as I know travis doesn't integrate with gerrit | 19:42 |
mkolesni | clarkb, yeah for sure the prs will have to be manually migrated | 19:42 |
clarkb | It may be possible to write zuul jobs that trigger travis jobs | 19:43 |
clarkb | That said my personal opinion is that much of the value in hosting with opendev is zuul | 19:43 |
clarkb | I think it would be a mistake to put effort into continuing to use travis (though maybe it would help us to understand your motivations for the move if Zuul is not part of that) | 19:43 |
fungi | the short version on moving repos is that you define a short stanza for the repository including information on where to import existing branches/tags from, also define a gerrit review acl (or point to an existing acl) and then automation creates the projects in gerrit and gitea. after that you push up a change into your repo to add ci configuration for zuul so that changes can be merged (this can | 19:43 |
fungi | be a no-op job to just merge whatever you say should be merged) | 19:43 |
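(A hedged example of the two pieces just described, using a made-up repository name; the authoritative format is in opendev's project-config repo and the Project Creator's Guide:)

```yaml
# project-config: gerrit/projects.yaml entry
- project: submariner-io/submariner
  description: Multi-cluster network connectivity for Kubernetes
  upstream: https://github.com/submariner-io/submariner
  acl-config: /home/gerrit2/acls/submariner-io/submariner.config

# first in-repo change: a .zuul.yaml so changes can merge
- project:
    check:
      jobs:
        - noop
    gate:
      jobs:
        - noop
```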
mkolesni | btw are there any k8s projects hosted on opendev? | 19:44 |
fungi | how do you define a kubernetes project? | 19:44 |
fungi | airship does a bunch with kubernetes | 19:44 |
fungi | so do some openstack projects like magnum and zun | 19:44 |
clarkb | fungi: as does magnum and mnaser's k8s deployment tooling | 19:44 |
mkolesni | one thats in golang for example :) | 19:44 |
clarkb | there are golang projects | 19:44 |
fungi | yeah, programming language shouldn't matter | 19:45 |
clarkb | bits of airship are golang as an example | 19:45 |
fungi | we have plenty of projects which aren't even in any programming language at all for that matter | 19:45 |
fungi | for example, projects which contain only documentation | 19:45 |
mkolesni | you guys suggested a zuul first approach | 19:46 |
mkolesni | to transition to zuul and then do a migration | 19:46 |
mkolesni | but there was hesitation for that as well since zuul will have to test github based code for a while | 19:46 |
*** eharney has quit IRC | 19:46 | |
clarkb | mkolesni: dgroisma how many jobs are we talking about and are they complex or do they do simple things like "execute unittests", "build docs", etc? | 19:47 |
fungi | well, it wouldn't have to be the zuul we're running. zuul is free software anyone can run wherever they like | 19:47 |
clarkb | My hunch is they can't be too complex due to travis' limitations | 19:47 |
mkolesni | clarkb, dgroisma knows best and can answer that | 19:48 |
clarkb | and if that is the case quickly adding jobs in zuul after migrating shouldn't be too difficult and is something we can help with | 19:48 |
dgroisma | the jobs are a bit complex, we are dealing with multicluster and require multiple k8s clusters to run for e2e stuff | 19:48 |
clarkb | dgroisma: and does travis provide that or do your jobs interact with external clusters? | 19:48 |
dgroisma | the clusters are kind based (kubernetes in docker), so its just running bunch of containers | 19:48 |
fungi | is that a travis feature, or something you've developed and happens as part of your job payload? | 19:48 |
mkolesni | fungi, well currently we rely on github and travis and don't have our own infra so we'd prefer to avoid standing up the infra just for migration's sake | 19:49 |
fungi | mkolesni: totally makes sense, just pointing that out for clarity | 19:49 |
mkolesni | ok sure | 19:49 |
dgroisma | it's our bash/go tooling | 19:49 |
dgroisma | our tooling not travis feature | 19:50 |
mkolesni | we use dapper images for the environment | 19:50 |
fungi | okay, so from travis's perspective it's just some shell commands being executed in a generic *nix build environment? | 19:50 |
dgroisma | yes | 19:50 |
fungi | in that case, making ansible run the same commands ought to be easy enough | 19:50 |
dgroisma | the migration should be ok, we just run some make commands | 19:50 |
clarkb | fungi: dgroisma mkolesni and we can actually prove that out pre migration | 19:51 |
clarkb | we have a sandbox repo which you can push job configs to which will run your jobs premerge | 19:51 |
clarkb | That is probably the easiest way to make sure zuul will work for you, then if you decide to migrate to opendev simply copy that job config into the repos once they migrate | 19:51 |
clarkb | that should give you some good exposure to gerrit and zuul too which will likely be useful in your decision making | 19:52 |
ianw | yeah, you will probably also find that while you start with basically ansible running shell: mycommand.sh ... you'll find many advantages in getting ansible to do more and more of what mycommand.sh does over time | 19:52 |
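(A minimal sketch of that starting point; the make target is an assumption about the project's existing tooling:)

```yaml
# zuul job playbook: just run the existing entry point on the test node
- hosts: all
  tasks:
    - name: Run the existing e2e tooling
      shell: make e2e
      args:
        chdir: "{{ zuul.project.src_dir }}"
```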
mkolesni | clarkb, so you mean do initial migration, test the jobs, and if all is good sync up whatever is left and carry on? | 19:52 |
mkolesni | or is the sandbox where we stick the project itself? | 19:52 |
clarkb | mkolesni: no I mean, push jobs into opendev/sandbox which already exists in opendev to run your existing test jobs against your software | 19:52 |
fungi | you could push up a change to the opendev/sandbox repo which replaces all the files with branch content from yours and a zuul config | 19:52 |
fungi | it doesn't need to get approved/merge | 19:52 |
mkolesni | ah ok | 19:53 |
clarkb | Then if you are happy with those results you can migrate the repos and copy the config you've built in the sandbox repo over to your migrated repos | 19:53 |
clarkb | this way you don't have to commit to much while you test it out and don't have to run your own zuul | 19:53 |
fungi | zuul will test the change as written, including job configuration | 19:53 |
mkolesni | dgroisma, does that approach sound good to you? for a poc of the CI? | 19:53 |
dgroisma | yes sounds good | 19:53 |
mkolesni | ok cool | 19:53 |
mkolesni | do you guys have any questions for us? | 19:53 |
mkolesni | i think the creators guide covers everything else we need | 19:54 |
fungi | not really. it's all free/libre open source software right? | 19:54 |
clarkb | I'm mostly curious to hear what your motivation is if not CI (most people we talk to are driven by the CI we offer) | 19:54 |
clarkb | also we'd be happy to hear feedback on your experience fiddling with the sandbox repo and don't hesitate to ask questions | 19:54 |
dgroisma | gerrit reviews | 19:54 |
fungi | sounds like the ci is a motivation and they just want a smooth transition from their existing ci? | 19:54 |
*** e0ne has joined #openstack-meeting | 19:54 | |
mkolesni | github sucks for collaborative development :) | 19:54 |
clarkb | oh neat we agree on that too :) | 19:54 |
dgroisma | :) | 19:54 |
mkolesni | and as former openstack devs we're quite familiar with gerrit and its many benefits | 19:55 |
fungi | at least i didn't hear any indication they wanted a way to keep using travis any longer than needed | 19:55 |
mkolesni | no i don | 19:55 |
mkolesni | i don't think we're married to travis :) | 19:55 |
clarkb | ok sounds like we have a plan for moving forward. Once again feel free to ask questions as you interact with Zuul | 19:56 |
fungi | welcome (back) to opendev! ;) | 19:56 |
clarkb | I'm going to quickly try to get to the last couple topics before our hour is up | 19:56 |
mkolesni | ok thanks we'll check out the sandbox repo | 19:56 |
dgroisma | thank you very much | 19:56 |
mkolesni | thanks for your time | 19:56 |
clarkb | ianw: want to tldr the dib container image fun? | 19:56 |
clarkb | mkolesni: dgroisma you're welcome | 19:56 |
ianw | i would say my idea is that we have Dockerfile.opendev Dockerfile.zuul Dockerfile.<insertwhateverhere> | 19:57 |
clarkb | ianw: if I read your email correctly it is that layering doesn't work for our needs here and maybe we should just embrace that and have different dockerfiles? | 19:57 |
ianw | and just build layers together that make sense | 19:57 |
ianw | i don't know if everyone else was thinking the same way as me, but I had in my mind that there was one zuul/nodepool-builder image and that was the canonical source of nodepool-builder images | 19:58 |
clarkb | It did make me wonder if a sidecar appraoch would be more appropriate here | 19:58 |
clarkb | but I'm not sure what kind of rpc that would require (and we don't have in nodepool) | 19:58 |
ianw | but i don't think that works, and isn't really the idea of containers anyway | 19:58 |
fungi | and then we would publish container images for the things we're using into the opendev dockerhub namespace, even if there are images for that software in other namespaces too, as long as those images don't do what we specifically need? (opendev/gitea being an existing example) | 19:58 |
clarkb | fungi: ya that was how I read it | 19:58 |
ianw | fungi: yep, that's right ... opendev namespace is just a collection of things that work together | 19:58 |
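(A very rough illustration of the idea only; the base image, packages and entry point below are placeholders, not the real opendev or zuul builds:)

```dockerfile
# Dockerfile.opendev: assemble just the layers this consumer needs
FROM docker.io/library/python:3.7-slim
RUN pip install nodepool diskimage-builder
CMD ["nodepool-builder"]
```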
fungi | i don't have any objection to this line of experimentation | 19:59 |
clarkb | the sidecar idea I had was: don't try to layer everything but instead incorporate the various bits as separate containers | 19:59 |
ianw | it may be useful for others, if they buy into all the same base bits opendev is built on | 19:59 |
clarkb | nodepool builder would run in its own container context then execute dib in another container context and somehow get the results (shared bind mount?) | 19:59 |
fungi | yeah, putting those things in different containers makes sense when they're services | 19:59 |
fungi | but putting openstacksdk in a different container from dib and nodepool in yet another container wouldn't work i don't think? | 20:00 |
clarkb | We are at time now | 20:00 |
clarkb | The last thing I wanted to mention is I've started to take some simple notes on maybe retiring some services? | 20:00 |
ianw | no, adding openstacksdk does basically bring you to multiple inheritance, which complicates matters | 20:00 |
clarkb | #link https://etherpad.openstack.org/infra-service-list | 20:00 |
fungi | thanks clarkb | 20:00 |
clarkb | ianw: fungi ya I don't think the sidecar is a perfect fit | 20:00 |
clarkb | re opendev services, if you have a moment over tea/coffee/food it would be great for a quick look and thoughts | 20:01 |
clarkb | I think if we can identify a small number of services then we can start to retire them in a controlled fashion | 20:01 |
clarkb | (mostly the ask stuff is what brought this up in my head because it comes up periodically that ask stops working and we really don't have the time to keep it working) | 20:01 |
clarkb | thanks everyone! | 20:02 |
clarkb | #endmeeting | 20:02 |
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/" | 20:02 | |
openstack | Meeting ended Tue Nov 26 20:02:03 2019 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 20:02 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-11-26-19.01.html | 20:02 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-11-26-19.01.txt | 20:02 |
openstack | Log: http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-11-26-19.01.log.html | 20:02 |
fungi | thanks! | 20:02 |
clarkb | Feel free to continue discussion in #openstack-infra or on the openstack-infra mailing list | 20:02 |
*** dgroisma has quit IRC | 20:02 | |
*** zaneb has joined #openstack-meeting | 20:09 | |
*** jakecoll has joined #openstack-meeting | 20:16 | |
*** mkolesni has quit IRC | 20:18 | |
*** dtroyer has left #openstack-meeting | 20:18 | |
*** enriquetaso has joined #openstack-meeting | 20:27 | |
*** simon-AS559 has joined #openstack-meeting | 20:27 | |
*** senrique_ has joined #openstack-meeting | 20:29 | |
*** enriquetaso has quit IRC | 20:32 | |
*** trandles has joined #openstack-meeting | 20:41 | |
*** senrique__ has joined #openstack-meeting | 20:42 | |
*** senrique_ has quit IRC | 20:43 | |
*** oneswig has joined #openstack-meeting | 20:57 | |
*** janders has joined #openstack-meeting | 21:00 | |
oneswig | #startmeeting scientific-sig | 21:00 |
openstack | Meeting started Tue Nov 26 21:00:08 2019 UTC and is due to finish in 60 minutes. The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot. | 21:00 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 21:00 |
*** openstack changes topic to " (Meeting topic: scientific-sig)" | 21:00 | |
openstack | The meeting name has been set to 'scientific_sig' | 21:00 |
oneswig | I remembered the runes this week... | 21:00 |
oneswig | #link agenda for today https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_November_29th_2019 | 21:00 |
janders | hey Stig! | 21:00 |
trandles | Hello all | 21:00 |
oneswig | Hi janders trandles | 21:00 |
slaweq | hi | 21:00 |
oneswig | I understand we'll be missing a few due to Thanksgiving | 21:01 |
oneswig | Hi slaweq, welcome! | 21:01 |
oneswig | Thanks for coming | 21:01 |
slaweq | thx for inviting me :) | 21:01 |
oneswig | np, I appreciate it. | 21:02 |
oneswig | Let's go straight to that - hopefully jomlowe will catch us up | 21:02 |
oneswig | #topic Helping supporting Linuxbridge | 21:03 |
*** openstack changes topic to "Helping supporting Linuxbridge (Meeting topic: scientific-sig)" | 21:03 | |
slaweq | ok | 21:03 |
*** armax has joined #openstack-meeting | 21:03 | |
oneswig | Hi slaweq, so what's the context? | 21:03 |
slaweq | so, first of all sorry if my message after PTG was scary for anyone | 21:03 |
slaweq | basically in the neutron developers team we had a feeling that the linuxbridge agent is almost not used, as there was almost no development of this driver | 21:04 |
slaweq | and as we now want to include the ovn driver as one of the in-tree drivers | 21:04 |
slaweq | our idea was to maybe start thinking slowly about deprecating the linuxbridge agent | 21:04 |
slaweq | but apparently it is used by many deployments | 21:05 |
slaweq | so we will for sure not deprecate it | 21:05 |
slaweq | You don't need to worry about it | 21:05 |
oneswig | I think it is popular in this group because it's simpler and faster | 21:05 |
oneswig | slaweq: thanks! | 21:05 |
rbudden | hello | 21:05 |
oneswig | Is it causing overhead to keep linuxbridge in tree? | 21:05 |
slaweq | as we have a clear signal that this driver is used by people who need a simple solution and don't want other, more advanced features | 21:05 |
oneswig | hi rbudden | 21:05 |
slaweq | but we have almost nobody in our devs team who would take care of the LB agent and mech driver | 21:06 |
oneswig | does it need much work? | 21:06 |
janders | apologies for a dumb question - we're only talking about the mech driver here, not the LB between the instance and the hypervisor where secgroups are applied right? | 21:06 |
rbudden | i was going to ask, what’s the overhead to continue support? what’s needed? | 21:06 |
slaweq | so if You are using it, it would be great if someone of You could be a point of contact in case there are e.g. LB related gate issues or things like that | 21:07 |
slaweq | janders: that's correct | 21:07 |
slaweq | oneswig: no, I don't think this would require a lot of work | 21:07 |
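(For readers following along: the driver and agent in question are the ones selected in the ML2 and agent configs, roughly as below; a typical sketch, not a recommendation:)

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
mechanism_drivers = linuxbridge

# /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[securitygroup]
firewall_driver = iptables
```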
*** noggin143 has joined #openstack-meeting | 21:07 | |
noggin143 | Tim joining | 21:07 |
oneswig | slaweq: does that mean joining openstack-neutron etc? | 21:07 |
oneswig | Hi noggin143, evening | 21:08 |
*** b1airo has joined #openstack-meeting | 21:08 | |
slaweq | oneswig: yes, basically that's what is needed | 21:08 |
slaweq | and I would like to put someone on the list in https://docs.openstack.org/neutron/latest/contributor/policies/bugs.html | 21:08 |
b1airo | o/ | 21:08 |
slaweq | as point of contact for linuxbridge stuff | 21:08 |
oneswig | Hi b1airo | 21:08 |
oneswig | #chair b1airo | 21:09 |
openstack | Current chairs: b1airo oneswig | 21:09 |
b1airo | morning | 21:09 |
slaweq | so sometimes we can ping that person to triage some LB related bug or gate issue etc. | 21:09 |
*** jmlowe has joined #openstack-meeting | 21:09 | |
oneswig | Is there a backlog of LB bugs somewhere that would be good examples? | 21:09 |
slaweq | then I would be sure that it's really "maintained" | 21:09 |
b1airo | oneswig: sorry, could you repeat your question to slaweq - i assume it was something about what is needed from neutron team's perspective to avoid deprecation of linuxbridge ? | 21:09 |
oneswig | b1airo: that was it, only it has been decided not to deprecate. It would still be good to help out with its maintenance though. | 21:10 |
slaweq | oneswig: list of bugs with linuxbridge tag is here: | 21:10 |
slaweq | https://bugs.launchpad.net/neutron/?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=FIXCOMMITTED&field.status%3Alist=INCOMPLETE_WITH_RESPONSE&field.status%3Alist=INCOMPLETE_WITHOUT_RESPONSE&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=linuxbridge&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on | 21:10 |
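The same search can be scripted rather than clicked through the Launchpad web UI. A minimal sketch using launchpadlib, assuming it is installed and that anonymous read-only access is enough for triage; the consumer name "lb-bug-triage" is just a placeholder:

    from launchpadlib.launchpad import Launchpad

    # Anonymous read-only session against production Launchpad.
    lp = Launchpad.login_anonymously("lb-bug-triage", "production", version="devel")
    neutron = lp.projects["neutron"]

    # Open bug tasks carrying the "linuxbridge" tag, roughly the same filter as the URL above.
    tasks = neutron.searchTasks(
        tags=["linuxbridge"],
        status=["New", "Confirmed", "Triaged", "In Progress", "Fix Committed", "Incomplete"],
    )
    for task in tasks:
        print(task.bug.id, task.status, task.bug.title)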
janders | what's the performance advantage of LB over ovs, roughly? are we talking 20% or 500%? | 21:11 |
slaweq | b1airo: yes, we now have a clear signal that this driver is widely used so we do not plan to deprecate it | 21:11 |
rbudden | good to hear! | 21:11 |
noggin143 | it is good to have a simple driver for those who liked nova-network, even if it is not so functional | 21:11 |
slaweq | so basically that's all from my side | 21:12 |
oneswig | performance data I have | 21:12 |
oneswig | #link presentation from 2017 https://docs.google.com/presentation/d/1MSTquzyPaDUyqW2pAmYouP6qdYAi6G1n7IJqBzqPCr8/edit?usp=sharing | 21:12 |
slaweq | if any of You would like to be the linuxbridge contact person, please reach out to me on irc or email, or speak now if You want :) | 21:12 |
oneswig | Slide 30 | 21:12 |
jmlowe | woohoo! | 21:12 |
oneswig | Thanks slaweq, appreciated. | 21:13 |
noggin143 | +1 | 21:13 |
b1airo | i suspect someone from Nectar might be happy to be a contact - LB is still widely used there as the default plugin with more advanced networking provided by Midonet and more performant local networking done with passthrough | 21:14 |
*** rfolco has quit IRC | 21:14 | |
oneswig | Midonet still going? Forgive my ignorance. | 21:14 |
b1airo | no sorrison online to ping at the moment though... | 21:14 |
oneswig | b1airo: seconded :-) | 21:15 |
b1airo | I think Midonet looked to be in trouble but then got bought by Ericsson, who are presumably using Midonet in their deployments | 21:15 |
oneswig | b1airo: good to hear it found a safe harbour of sorts. | 21:16 |
b1airo | so possibly an 11th hour reprieve | 21:16 |
b1airo | it's a really nice platform, despite java | 21:16 |
jmlowe | oof, documentation stops at Newton | 21:18 |
oneswig | Hi jmlowe | 21:18 |
jmlowe | Hi | 21:18 |
oneswig | Perhaps it was just, you know, completed with nothing more to add? | 21:19 |
jmlowe | Could be, always a concern though | 21:19 |
oneswig | Just kidding :-) | 21:20 |
oneswig | OK, shall we move on? | 21:20 |
oneswig | #topic Supercomputing round-up | 21:21 |
*** openstack changes topic to "Supercomputing round-up (Meeting topic: scientific-sig)" | 21:21 | |
oneswig | 13300 people apparently - it was big and busy! | 21:21 |
oneswig | We met a few new people who may find their way to this SIG | 21:22 |
janders | great! where from? | 21:23 |
b1airo | fingers crossed - did anyone get the name of that guy asking about performance issues? | 21:23 |
oneswig | No, he didn't hang around afterwards unfortunately, afaict | 21:23 |
jmlowe | wow, I guess my dreams of going back to Austin are forever dashed with those kinds of attendance numbers | 21:23 |
noggin143 | sounds worse than kubecon | 21:24 |
jmlowe | Don't worry b1airo he'll be back next year, he was there last year | 21:24 |
trandles | It was interesting to see where cloud and cloud-like tech is starting to show up in product lines. In many cases, cloud is creeping in quietly. It's only when you ask about specifics that vendors tell you things like k8s and some SDN is being used as part of their plumbing. | 21:24 |
b1airo | yeah, i remember :-) | 21:25 |
oneswig | noggin143: by my calculation, it was as big as kubecon + openstack shanghai combined! | 21:26 |
janders | what would be the top3 most interesting k8s use cases? | 21:26 |
trandles | janders: most "interesting" or "scariest?" | 21:26 |
janders | let's look at one of each? :) | 21:27 |
oneswig | janders: the Cray Shasta control plane is hosted on K8S, for example... | 21:27 |
oneswig | file in whichever category you like! | 21:27 |
janders | does Bright Computing have anything to do with that? | 21:27 |
trandles | That covers the scary use case, oneswig ;) | 21:27 |
oneswig | janders: perhaps they use that to provision the controller nodes but it wasn't disclosed. | 21:28 |
*** raildo has quit IRC | 21:28 | |
trandles | AFAIK Bright doesn't have anything to do with it...but I'm not 100% sure about that | 21:28 |
jmlowe | Shasta is the scariest | 21:28 |
janders | off the record, don't be surprised if you see a similar arch from Bright | 21:28 |
janders | this came up in a discussion I had with them a while back after we had a really bad scaling problem with our BCM | 21:29 |
janders | how about the interesting bits? | 21:29 |
janders | (now that we have scary ticked off) | 21:30 |
*** jonmills_nasa has joined #openstack-meeting | 21:30 | |
*** raub has joined #openstack-meeting | 21:30 | |
oneswig | So i was interested in these very loosely coherent filesystems developed for Summit | 21:30 |
trandles | Interesting use case: someone had a poster talking about k8s + OpenMPI | 21:30 |
oneswig | trandles: not using ORTE again? | 21:30 |
janders | trandles: do you recall how they approached the comms challenge? | 21:30 |
noggin143 | oneswig: any references ? | 21:30 |
oneswig | looking... | 21:31 |
trandles | oneswig: that I'm not sure about...the poster was vague and the author standing there was getting picked apart by someone who seemed to know all of the gory comms internals | 21:31 |
noggin143 | oneswig: for loosely coherent filesystems, that is. | 21:31 |
trandles | I didn't stay to hear details, I just kept wandering off thinking "wow, look for more on that at sc20" | 21:31 |
janders | for sure | 21:32 |
janders | shameless plug: https://www.youtube.com/watch?v=sPfZGHWnKNM | 21:32 |
trandles | janders: thx for the link ;) | 21:33 |
janders | I can confirm there's a lot of interesting work on K8s-RDMA and there's more coming | 21:33 |
*** igordc has quit IRC | 21:33 | |
oneswig | noggin143: it was this session - two approaches presented https://sc19.supercomputing.org/presentation/?id=pap191&sess=sess167 | 21:33 |
oneswig | There's a link to a paper on the research. | 21:33 |
*** senrique__ has quit IRC | 21:34 | |
janders | whoa... these numbers make io500 look like a sandpit :) | 21:34 |
*** senrique__ has joined #openstack-meeting | 21:34 | |
oneswig | It was interesting because a lot of talk to this point has been on burst buffers, but this work was all about writing data to local NVME and then draining it to GPFS afterwards. The coherency is loose to the point of non-existence but it may work if the application knows it's not there. | 21:35 |
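The pattern oneswig describes (write to node-local NVMe during the run, drain to the parallel filesystem afterwards) can be illustrated with a generic stage-out loop. This is only a sketch of the idea, not the approach from the SC19 papers, and the paths are made up:

    # Generic stage-out: land output on node-local NVMe while the job runs,
    # then copy it to the shared filesystem once the job is done.
    import shutil
    from pathlib import Path

    local_scratch = Path("/nvme/scratch/job-1234")   # node-local, fast, incoherent
    shared_target = Path("/gpfs/project/job-1234")   # shared, coherent, slower

    for f in local_scratch.rglob("*"):
        if f.is_file():
            dest = shared_target / f.relative_to(local_scratch)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, dest)  # no cross-node coherency until this drain completes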
b1airo | janders: re Bright, I spoke to them (including CTO - Martjian?), and got the impression they had not done any significant work on containerising control plane with k8s or anything else - they had looked at it a couple of years ago but decided it was too fragile at the time... which is kinda laughable when in the next breath they tell you their upgrade strategy for Bright OpenStack is a complete reinstall and migration | 21:35 |
noggin143 | janders: quite an expensive alternative to the sandpit :-) | 21:35 |
janders | indeed :) | 21:36 |
janders | b1airo: when did that conversation take place? | 21:36 |
*** ijw_ has quit IRC | 21:36 | |
janders | I had two myself, one similar to what you described and another one some time later where I had an impression of a 180deg turn | 21:36 |
b1airo | last week :-) | 21:37 |
janders | oh dear... another 180? | 21:37 |
janders | someone might be going in circles | 21:37 |
oneswig | On a related matter, I went to the OpenHPC Bof. There was a guy from ARM/Linaro who was interested in cloud-native deployments of OpenHPC but I think the steering committee are lukewarm at best on this. | 21:38 |
b1airo | they don't seem to have moved past saying "OpenStack is too hard to upgrade" as an excuse | 21:38 |
oneswig | b1airo: you're in that boat with Bright aren't you? | 21:38 |
*** d34dh0r53 has quit IRC | 21:39 | |
noggin143 | oneswig: +1 on the lukewarm for ARM in HPC/HTC here too | 21:39 |
*** igordc has joined #openstack-meeting | 21:39 | |
janders | repo repoint; yum -y update; db-sync; service restart - I wonder what's so hard about that... | 21:39 |
rbudden | last i recall they used warewulf for provisioning bare metal | 21:39 |
janders | and with containers it should be even easier or so I hear | 21:39 |
rbudden | i forget if they still maintain an openstack/cloud release tool | 21:39 |
oneswig | noggin143: au contraire, on the ARM front there appears to be much interest around the new Fujitsu A64FX | 21:39 |
rbudden | they did at one point | 21:39 |
*** d34dh0r53 has joined #openstack-meeting | 21:39 | |
noggin143 | oneswig: is the Fujitsu thing real ? I'd heard 202[2-9] | 21:40 |
b1airo | yep, it's really ugly. i haven't dived depth-first into the details yet, but they are basically saying that to get from 8.1 to 9.0 we should completely reinstall | 21:40 |
janders | +1 | 21:40 |
oneswig | rbudden: All kinds of deployment covered - both warewulf and xcat :-) | 21:40 |
trandles | Looks like the 2020 KubeCon North America event once again overlaps SC perfectly. I heard twice from vendors at SC19 who said "I don't know, my kubernetes guy is at KubeCon this week." ...sigh... | 21:40 |
janders | trandles: thank you for the early heads-up... Boston this time I see | 21:41 |
janders | nice to see Open Infra back in Vancouver too | 21:41 |
oneswig | noggin143: I spoke to some people at Riken, they are expecting samples imminently, deployment through next year. It might have been lost in translation but that was the gist afaict | 21:41 |
janders | looks like I've got my base conference schedule set :) | 21:41 |
b1airo | trandles: better than "Hi, did you know Kubernetes is replacing traditional HPC..." | 21:42 |
noggin143 | Kubecon had an interesting launch of the "Research SIG" which was kind of a k8s version of the Scientific WG | 21:42 |
oneswig | noggin143: were you there? | 21:42 |
trandles | b1airo: HA! I didn't see anyone from Trilio at SC... | 21:42 |
janders | noggin143: unfortunately I wasn't able to attend this one - would you be happy to tell us about it? | 21:42 |
janders | quite interested | 21:42 |
noggin143 | oneswig: no, but Ricardo was (https://www.youtube.com/watch?v=g9FQxzK9E_M&list=PLj6h78yzYM2NDs-iu8WU5fMxINxHXlien&index=108&t=0s) | 21:43 |
b1airo | trandles: they were there actually, but just a couple of guys lurking - all the sales folks were at kubecon | 21:43 |
*** senrique__ has quit IRC | 21:43 | |
oneswig | I wonder how Kubecon compared to OpenStack Paris, ah those heady days | 21:44 |
janders | I wasn't in Paris but San Diego was massive | 21:44 |
janders | 10k ppl | 21:44 |
noggin143 | kubecon had lots of Helm v3, Machine Learning, GPUs, binder etc. Parties were up to the Austin level | 21:44 |
jonmills_nasa | We are still trying to use Trilio... | 21:44 |
b1airo | interesting choice of words jonmills_nasa ... | 21:44 |
noggin143 | We'll be going to the Kubecon Amsterdam one in 2020 | 21:45 |
oneswig | We should probably pay a bit more lip-service to the agenda... any more on SC / Kubecon? | 21:46 |
janders | Kubecon had a significant ML footprint | 21:46 |
janders | quite interesting | 21:46 |
noggin143 | the notebook area seems very active too | 21:46 |
oneswig | janders: does seem a good environment for rapid development in ML. | 21:46 |
janders | both from the web companies' side (the Ubers of the world) | 21:46 |
janders | and more traditional HPC (Erez gave a good talk on GPUs) | 21:47 |
oneswig | OK let's move on | 21:47 |
janders | I was happy to add RDMA support into the picture with my talk | 21:47 |
oneswig | #topic issues mixing baremetal and virt | 21:47 |
*** openstack changes topic to "issues mixing baremetal and virt (Meeting topic: scientific-sig)" | 21:47 | |
oneswig | janders: good to hear that. | 21:47 |
janders | it's good/interesting to hear the phrase "K8s as de-facto standard for ML workloads" | 21:47 |
*** jakecoll has quit IRC | 21:48 | |
janders | I feel this is driving K8s away from being just-a-webhosting-platform | 21:48 |
janders | and there's now buy-in for that - good for our group I suppose | 21:48 |
oneswig | quick one this - back at base the team hit and fixed some issues with nova-compute-ironic last week while we were off gallivanting with the conference crowd | 21:49 |
noggin143 | janders: yup, tensorflow, keras, pytorch - lots of opportunities for data scientists | 21:49 |
oneswig | As I understand it, if you're running multiple instances of nova-compute-ironic using a hash ring to share the work, you can sometimes lose resources. | 21:49 |
jmlowe | I'm somewhat interested in mixing baremetal and virt, anybody doing this right now? | 21:49 |
noggin143 | oneswig: do you have a pointer? We're looking to start sharding for scale with Ironic | 21:50 |
rbudden | +1 is this production ready? | 21:50 |
oneswig | jmlowe: we have some clients doing this... | 21:50 |
raub | jmlowe: you are not alone | 21:50 |
janders | +1, I am very interested, too | 21:50 |
oneswig | #link brace of patches available here https://review.opendev.org/#/c/694802/ | 21:50 |
* trandles too | 21:50 | |
oneswig | I think there were 4-5 distinct patches in the end. | 21:50 |
janders | oneswig: can you elaborate more on "losing resources"? | 21:51 |
janders | how does this manifest itself? | 21:51 |
janders | nodes never getting scheduled to? | 21:51 |
janders | what scale are we thinking? | 21:51 |
noggin143 | oneswig: BTW, we're working with upstream in several ironic areas such as incremental inspection (good for firmware level checks) and benchmarking/burn in as part of the standard cleaning process | 21:51 |
oneswig | janders: As I understand it, nodes would become lost from the resource tracker. But I only have the sketchiest details. | 21:52 |
jmlowe | I have let's say 400 ironic instances in mind, 20-30 sharing with virt | 21:52 |
janders | jmlowe: sounds like you're progressing with the nova-compute-as-a-workload architecture? :) | 21:53 |
oneswig | It only occurs when using multiple instances of nova-compute-ironic, afaik | 21:53 |
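For context on the sharding being discussed: each nova-compute-ironic service takes ownership of a subset of ironic nodes via a consistent hash ring, so if the services temporarily disagree about ring membership, nodes can drop out of the resource tracker, as oneswig describes. The following is a toy sketch of the hash-ring idea only, not nova's actual implementation; host names and the node UUID are invented:

    import bisect
    import hashlib

    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    class HashRing:
        """Toy consistent-hash ring mapping ironic node UUIDs to compute hosts."""

        def __init__(self, hosts, replicas=64):
            # Give each host several points on the ring to smooth the distribution.
            self._ring = sorted(
                (_hash(f"{host}-{i}"), host) for host in hosts for i in range(replicas)
            )
            self._keys = [k for k, _ in self._ring]

        def host_for(self, node_uuid: str) -> str:
            # Walk clockwise from the node's hash to the next host point on the ring.
            idx = bisect.bisect(self._keys, _hash(node_uuid)) % len(self._ring)
            return self._ring[idx][1]

    ring = HashRing(["ironic-compute-1", "ironic-compute-2", "ironic-compute-3"])
    print(ring.host_for("0f6f7ac6-0000-0000-0000-example-uuid"))

If two services build the ring from different host lists, they partition the nodes differently, which is roughly how nodes go missing until the views converge.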
noggin143 | jmlowe: we're running about 4000 at the moment - Arne's presentation from Shanghai at https://twiki.cern.ch/twiki/pub/CMgroup/CmPresentationList/From_hire_to_retire!_Server_Life-cycle_management_with_Ironic_at_CERN_-_OpenInfraSummit_SHANGHAI_06NOV2019.pptx | 21:53 |
jmlowe | janders: that's what I have in mind | 21:53 |
janders | jmlowe: great to hear! :) | 21:53 |
oneswig | janders: what do you do for tagged interfaces - only 1 network to the hypervisor, or something crafty with the IB? | 21:53 |
janders | 1 network per port | 21:54 |
janders | I had dual and quad port nodes so that was enough | 21:54 |
janders | but trunking vlans/pkeys is definitely something I want going forward | 21:54 |
oneswig | I thought it was something like that. A good use case for trunking Ironic ports (if that's ever a thing). | 21:55 |
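For reference, trunk ports already exist in neutron for virtualised instances; the wish above is for the same thing on ironic ports. A rough sketch of the existing API through openstacksdk, assuming the SDK's trunk support and a clouds.yaml entry named "mycloud"; the port UUIDs are placeholders:

    import openstack

    conn = openstack.connect(cloud="mycloud")

    # The parent port is bound to the instance; sub-ports ride on it as tagged VLANs.
    trunk = conn.network.create_trunk(
        name="trunk-example",
        port_id="PARENT-PORT-UUID",
        sub_ports=[
            {
                "port_id": "CHILD-PORT-UUID",
                "segmentation_type": "vlan",
                "segmentation_id": 100,
            }
        ],
    )
    print(trunk.id, trunk.status)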
janders | (if you're wondering why/how I had quad port IB in servers - the answer is Dell blades) | 21:55 |
oneswig | We've let the slug of time crawl away from us again. | 21:55 |
oneswig | 5 mins on Gnocchi? | 21:55 |
noggin143 | oneswig: hah! | 21:56 |
janders | ok! | 21:56 |
oneswig | #topic Gnocchi | 21:56 |
*** openstack changes topic to "Gnocchi (Meeting topic: scientific-sig)" | 21:56 | |
b1airo | mmm soft fluffy potato | 21:56 |
janders | and I propose we re-add the virt+bm topic to next week's agenda | 21:56 |
oneswig | OK, wasn't expecting that | 21:56 |
*** diablo_rojo has quit IRC | 21:56 | |
oneswig | janders: happy to do so | 21:56 |
janders | great, thanks oneswig | 21:56 |
b1airo | i suggest shunting Gnocchi to next week | 21:57 |
janders | good idea, but maybe let's quickly draw context? | 21:57 |
jmlowe | when paired with the carbonara backend it was delicious | 21:57 |
jonmills_nasa | fair enough | 21:57 |
oneswig | b1airo: makes sense to me. | 21:57 |
rbudden | yep thanks oneswig | 21:57 |
b1airo | i could ask Sam or someone else from Nectar to come talk about that and LB + Mido... | 21:57 |
oneswig | b1airo: would be great, thanks. | 21:57 |
jmlowe | gnocchi isn't getting any more dead, so I second pushing it | 21:57 |
b1airo | :'-D | 21:58 |
oneswig | Next week I'm also hoping johnthetubaguy will join us to talk about unified limits (quotas for mixed baremetal and virt resources, hierarchical projects, among other things) | 21:58 |
janders | gnocchi is good at killing ceph afaik | 21:58 |
noggin143 | oneswig: :-) | 21:58 |
janders | even if almost dead itself | 21:58 |
jmlowe | oooh, I'd love to quota vgpu's, I think that's part of it | 21:58 |
noggin143 | now, back to InfluxDB tuning | 21:58 |
oneswig | noggin143: ha, what a rich seam | 21:59 |
noggin143 | jmlowe: we're looking to do some GPU quota too, another topic ? | 21:59 |
jmlowe | yeah, really need a nvme ceph pool to keep gnocchi from killing things | 21:59 |
janders | jmlowe: indeed. | 21:59 |
oneswig | These are multi-region discussions, clearly! | 21:59 |
janders | okay - almost over time, so thank you all, great chat - I just wish we had another hour! :) | 21:59 |
jmlowe | I found out the hard way | 21:59 |
janders | we shall continue next week | 21:59 |
b1airo | i thought that whole gnocchi on ceph thing looked weird when it first came up | 21:59 |
janders | yep, DDoS-a-a-S! | 22:00 |
oneswig | jmlowe: wasn't it obvious with those billions of tiny writes ... | 22:00 |
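To put rough numbers on "billions of tiny writes", a back-of-envelope sketch with assumed figures, not measurements from any deployment mentioned here:

    # Every figure below is an assumption for illustration only.
    resources = 10_000                 # metered instances
    metrics_per_resource = 10          # cpu, memory, disk, network, ...
    granularities = 3                  # e.g. 1m / 1h / 1d archive policy
    flushes_per_hour = 12              # metricd processing roughly every 5 minutes

    small_writes_per_hour = (
        resources * metrics_per_resource * granularities * flushes_per_hour
    )
    print(f"{small_writes_per_hour:,} small object writes per hour")  # 3,600,000
    print(f"{small_writes_per_hour * 24 * 365:,} per year")           # 31,536,000,000

At that rate the pool is IOPS-bound rather than bandwidth-bound, which is why a dedicated NVMe-backed pool (or flushing less often) makes the difference.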
b1airo | no-one mentioned iops in the Ceph for HPC BoF at SC19... | 22:00 |
trandles | Bye folks, I'll read next week's log. ;) | 22:00 |
*** jonmills_nasa has quit IRC | 22:00 | |
oneswig | Thanks all, alas we must close | 22:00 |
oneswig | #endmeeting | 22:00 |
b1airo | i attempted a roundabout question on lazy writes | 22:00 |
rbudden | bye all! | 22:00 |
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/" | 22:00 | |
jmlowe | oneswig: Like most things it starts off well, then all goes to hell about a week after you stop paying attention | 22:00 |
openstack | Meeting ended Tue Nov 26 22:00:42 2019 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 22:00 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/scientific_sig/2019/scientific_sig.2019-11-26-21.00.html | 22:00 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/scientific_sig/2019/scientific_sig.2019-11-26-21.00.txt | 22:00 |
openstack | Log: http://eavesdrop.openstack.org/meetings/scientific_sig/2019/scientific_sig.2019-11-26-21.00.log.html | 22:00 |
b1airo | bfn | 22:00 |
noggin143 | jmlowe: quote of the week | 22:01 |
*** noggin143 has quit IRC | 22:01 | |
oneswig | jmlowe: ah, so often the case | 22:02 |
oneswig | Thanks all, over and out | 22:02 |
*** oneswig has quit IRC | 22:02 | |
*** diablo_rojo has joined #openstack-meeting | 22:02 | |
*** ociuhandu has joined #openstack-meeting | 22:02 | |
*** rcernin has joined #openstack-meeting | 22:04 | |
*** jmlowe has left #openstack-meeting | 22:05 | |
*** ykatabam has joined #openstack-meeting | 22:06 | |
*** janders has quit IRC | 22:06 | |
*** ociuhandu has quit IRC | 22:08 | |
*** ykatabam has quit IRC | 22:10 | |
*** trandles has quit IRC | 22:11 | |
*** pcaruana has quit IRC | 22:16 | |
*** ykatabam has joined #openstack-meeting | 22:35 | |
*** slaweq has quit IRC | 22:37 | |
*** armax has quit IRC | 22:42 | |
*** simon-AS559 has quit IRC | 22:45 | |
*** slaweq has joined #openstack-meeting | 23:11 | |
*** strobert1 has joined #openstack-meeting | 23:12 | |
*** slaweq has quit IRC | 23:15 | |
*** ociuhandu has joined #openstack-meeting | 23:30 | |
*** ociuhandu has quit IRC | 23:35 | |
*** armax has joined #openstack-meeting | 23:38 | |
*** rfolco has joined #openstack-meeting | 23:47 | |
*** rfolco has quit IRC | 23:51 | |
*** rfolco has joined #openstack-meeting | 23:51 | |
*** rfolco has quit IRC | 23:55 | |
*** rfolco has joined #openstack-meeting | 23:55 | |
*** diablo_rojo has quit IRC | 23:58 |