Thursday, 2019-12-05

openstackgerritKendall Nelson proposed openstack/cookiecutter master: Update CONTRIBUTING.rst template  https://review.opendev.org/69600100:00
*** jamesmcarthur has joined #openstack-infra00:04
*** diablo_rojo has quit IRC00:06
*** rfolco has joined #openstack-infra00:07
*** jamesmcarthur has quit IRC00:08
*** tosky has quit IRC00:10
openstackgerritIan Wienand proposed zuul/nodepool master: [wip] move openstack testing to use containerised daemon  https://review.opendev.org/69346400:13
openstackgerritIan Wienand proposed zuul/nodepool master: [wip] move openstack testing to use containerised daemon  https://review.opendev.org/69346400:16
rpiosogit clone of openstack/requirements.git is taking an unusually long time. It seems to be stuck. The same happened with ironic.git a few minutes ago, both from the same host. ping of opendev.org succeeds. Ideas?00:29
clarkbrpioso: this is a fresh clone of both repos?00:30
rpiosoclarkb: Yes00:31
clarkbok both work for me currently against whichever backend I am hitting. Now to try the individual backends00:31
rpiosoclarkb: It worked normally at least once against ironic.00:32
clarkbrpioso: if you go to https://opendev.org in your browser and inspect the certificate, which giteaXX.opendev.org does it say you are talking to?00:32
clarkbrpioso: this should be run from the same ip that was doing the git clone00:32
rpiosoclarkb: I'm performing a devstack deployment on Ubuntu Bionic. It doesn't appear to have chrome, firefox, or google-chrome. Is there a default browser installed?00:34
clarkbrpioso: that will depend on how you installed your system00:35
clarkbI wouldn't know00:36
clarkbyou can use openssl s_client instead00:36
clarkbI am able to clone ironic and requirements from all 8 backends00:38
*** weshay is now known as weshay_pto00:38
*** ociuhandu has joined #openstack-infra00:39
clarkbif you can identify the backend you are talking to via the cert details, that would be helpful; then we can dig into that host to see if it has a problem00:39
rpiosoclarkb: Not familiar with that approach. I get several errors when I invoke "sudo openssl s_client opendev.org", with the first one being "139963203715520:error:0200206F:system library:connect:Connection refused:../crypto/bio/b_sock2.c:110:"00:40
openstackgerritIan Wienand proposed zuul/nodepool master: Also build sibling container images  https://review.opendev.org/69739300:40
clarkbrpioso: `openssl s_client -host opendev.org -port 443`00:40
clarkbthen look for CN = gitea0X.opendev.org00:41
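Note: a non-interactive one-liner for the same check, using standard openssl options (-connect plus an x509 subject dump); it prints something like "subject=CN = gitea02.opendev.org" and exits immediately:
    echo | openssl s_client -connect opendev.org:443 2>/dev/null | openssl x509 -noout -subject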
rpiososubject=CN = gitea02.opendev.org00:41
rpiosoLet me kick off another devstack deployment and see if it gets stuck again.00:42
clarkbk let me run some clones in a loop to try and reproduce. Note you can hit specific backends via https://gitea0X.opendev.org:300000:42
clarkbcacti shows a definite busy period from about 1600UTC to 2000UTC (roughly)00:45
clarkbbut not a super busy period. I would expect the server to be happy during that time00:45
rpiosoclarkb: The new deployment successfully cloned ironic. It's stuck on requirements. Before kicking it off, I confirmed CN still equaled gitea02.opendev.org.00:48
clarkbrpioso: ok my local attempts haven't failed yet00:48
*** ociuhandu has quit IRC00:48
clarkbI'm running `while true; do git clone ...; rm -rf git_dir; done`00:48
clarkbagainst requirements00:48
clarkbon gitea0200:48
clarkbI'll need to get my keys out to see if the server logs say anything00:49
rpiosoIt got by requirements \o/00:49
*** rfolco has quit IRC00:50
clarkbok is it just slow then?00:50
rpiosoNow it's failing at a more common location in stack.sh -- "Instlling package prerequisites"00:50
rpiosos/Instlling/Installing/00:50
clarkbI'm going to stop my loop now since it hasn't failed yet00:50
rpiosoclarkb: Cool! Repeatedly invoking stack.sh got me past that failure. I hope it's idempotent. It's cranking along now. We'll see.00:51
rpiosoThank you!00:51
clarkbmordred: looking at cacti graphs it seems that gitea 1.10 is far more efficient at handling tcp opens00:52
clarkbhttp://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=66631&rra_id=all looks like 1.9 regressed there?00:52
clarkband now 1.10 is happier00:52
openstackgerritIan Wienand proposed zuul/nodepool master: Also build sibling container images  https://review.opendev.org/69739301:15
donnydcorvus: uggg.. somebody should really start doing something about this fn cloud and its many failures01:15
*** goldyfruit_ has joined #openstack-infra01:20
*** larainema has joined #openstack-infra01:27
*** rfolco has joined #openstack-infra01:29
*** ociuhandu has joined #openstack-infra01:39
*** ociuhandu has quit IRC01:43
*** nicolasbock has quit IRC01:55
*** Xuchu has joined #openstack-infra02:05
*** ricolin has joined #openstack-infra02:11
*** rfolco has quit IRC02:12
*** ykarel|away has joined #openstack-infra02:27
*** slaweq has joined #openstack-infra02:30
*** ykarel|away has quit IRC02:32
*** slaweq has quit IRC02:35
*** ociuhandu has joined #openstack-infra02:38
*** jamesmcarthur has joined #openstack-infra02:45
*** ociuhandu has quit IRC02:47
*** slaweq has joined #openstack-infra02:51
*** igordc has quit IRC02:54
*** slaweq has quit IRC02:56
*** gyee has quit IRC03:03
*** jamesmcarthur has quit IRC03:04
*** diablo_rojo has joined #openstack-infra03:06
*** apetrich has quit IRC03:09
*** tinwood has quit IRC03:10
*** tinwood has joined #openstack-infra03:12
*** jamesmcarthur has joined #openstack-infra03:19
*** armax has quit IRC03:19
*** rlandy|bbl is now known as rlandy03:19
*** rlandy has quit IRC03:23
*** jamesmcarthur has quit IRC03:35
*** tkajinam is now known as tkajinam|lunch03:40
*** gfidente|afk is now known as gfidente03:43
*** diablo_rojo has quit IRC03:46
*** udesale has joined #openstack-infra03:48
*** jamesmcarthur has joined #openstack-infra03:51
*** ykarel|away has joined #openstack-infra03:51
*** ykarel|away is now known as ykarel03:51
*** gfidente has quit IRC03:53
*** jamesmcarthur has quit IRC03:54
*** jamesmcarthur has joined #openstack-infra03:55
*** rosmaita has left #openstack-infra03:55
*** jamesmcarthur has quit IRC04:00
*** armax has joined #openstack-infra04:03
*** armax has quit IRC04:03
*** rh-jelabarre has quit IRC04:18
*** jamesmcarthur has joined #openstack-infra04:25
*** jamesmcarthur has quit IRC04:32
*** slaweq has joined #openstack-infra04:33
*** slaweq has quit IRC04:38
*** rishabhhpe has joined #openstack-infra05:07
*** slaweq has joined #openstack-infra05:25
*** slaweq has quit IRC05:30
*** ociuhandu has joined #openstack-infra05:30
*** ociuhandu has quit IRC05:35
*** pcaruana has joined #openstack-infra05:38
*** pcaruana has quit IRC05:42
*** raukadah is now known as chkumar|ruck05:45
*** slaweq has joined #openstack-infra05:48
*** eernst has quit IRC05:53
*** slaweq has quit IRC05:54
rishabhhpeHi team, I am facing an issue while uploading a dib-image to the provider. I have pasted the error and my nodepool.yaml at the link below: http://paste.openstack.org/show/787137/ - please let me know if I am missing anything05:59
*** armax has joined #openstack-infra06:01
ianwrishabhhpe: it's saying "Exception: Unable to find flavor: general1-8"06:02
ianwso it seems you need to choose another flavor name06:02
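Note: in nodepool.yaml the flavor is selected per label inside the provider pool; a minimal sketch, with hypothetical provider and label names:
    providers:
      - name: my-provider
        pools:
          - name: main
            labels:
              - name: ubuntu-bionic
                diskimage: ubuntu-bionic
                flavor-name: m1.large  # must match a flavor the cloud actually offers
Alternatively, min-ram can be given instead of flavor-name to let nodepool pick the smallest matching flavor.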
*** rcernin has quit IRC06:09
*** slaweq has joined #openstack-infra06:13
*** udesale has quit IRC06:13
*** jaicaa has quit IRC06:14
*** udesale has joined #openstack-infra06:14
*** armax has quit IRC06:15
*** ociuhandu has joined #openstack-infra06:16
*** jaicaa has joined #openstack-infra06:16
*** slaweq has quit IRC06:18
*** jtomasek has joined #openstack-infra06:19
*** surpatil has joined #openstack-infra06:23
*** Xuchu has quit IRC06:34
*** ociuhandu has quit IRC06:36
*** Xuchu has joined #openstack-infra06:38
*** ykarel_ has joined #openstack-infra06:42
*** ykarel has quit IRC06:45
*** dciabrin_ has joined #openstack-infra06:48
*** dciabrin has quit IRC06:49
*** ccamacho has quit IRC06:54
*** ralonsoh has joined #openstack-infra06:58
*** ykarel__ has joined #openstack-infra07:01
*** udesale has quit IRC07:02
*** ociuhandu has joined #openstack-infra07:03
*** rishabhhpe has quit IRC07:03
*** udesale has joined #openstack-infra07:03
*** ykarel_ has quit IRC07:03
*** udesale has quit IRC07:06
*** udesale has joined #openstack-infra07:06
*** ykarel__ is now known as ykarel07:08
*** pgaxatte has joined #openstack-infra07:17
*** pcaruana has joined #openstack-infra07:17
*** ramishra has quit IRC07:21
*** ociuhandu has quit IRC07:27
*** ociuhandu has joined #openstack-infra07:27
*** ramishra has joined #openstack-infra07:28
*** jamesmcarthur has joined #openstack-infra07:34
*** ykarel is now known as ykarel|lunch07:37
*** jamesmcarthur has quit IRC07:38
*** pkopec has joined #openstack-infra07:59
*** rcernin has joined #openstack-infra08:01
*** iurygregory has joined #openstack-infra08:01
openstackgerritSimon Westphahl proposed zuul/zuul master: Don't process builds not longer in job graph  https://review.opendev.org/69742008:06
openstackgerritSimon Westphahl proposed zuul/zuul master: Don't process builds not longer in job graph  https://review.opendev.org/69742008:09
*** witek has joined #openstack-infra08:18
*** tesseract has joined #openstack-infra08:21
*** dchen has quit IRC08:23
*** ykarel|lunch is now known as ykarel08:27
*** udesale has quit IRC08:31
*** udesale has joined #openstack-infra08:32
*** rpittau|afk is now known as rpittau08:32
*** pabelanger has quit IRC08:34
*** mudpuppy has quit IRC08:34
*** hemna_ has quit IRC08:34
*** mattoliverau has quit IRC08:34
*** tosky has joined #openstack-infra08:36
*** gfidente has joined #openstack-infra08:37
*** pabelanger has joined #openstack-infra08:38
*** mudpuppy has joined #openstack-infra08:38
*** hemna_ has joined #openstack-infra08:38
*** mattoliverau has joined #openstack-infra08:38
*** gfidente has quit IRC08:41
*** SurajPatil has joined #openstack-infra08:41
*** surpatil has quit IRC08:44
*** iurygregory has quit IRC08:45
*** surpatil has joined #openstack-infra08:46
*** iurygregory has joined #openstack-infra08:47
*** SurajPatil has quit IRC08:49
*** slaweq_ has joined #openstack-infra08:50
*** jpena|off is now known as jpena08:50
*** rishabhhpe has joined #openstack-infra08:50
*** apetrich has joined #openstack-infra08:51
rishabhhpeHi, while launching an instance from nodepool I am getting the error pasted at the link below: http://paste.openstack.org/show/787142/ - can anyone please check whether I am missing something? I have attached my nodepool.yaml along with the error.08:52
*** udesale has quit IRC08:58
*** slaweq_ has quit IRC08:59
fricklerrishabhhpe: networks must be a list, not a hash, i.e. s/name: 'public'/public/ , see also the docs at https://zuul-ci.org/docs/nodepool/configuration.html09:00
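Note: frickler's fix as a minimal nodepool.yaml sketch (provider and pool names hypothetical):
    providers:
      - name: my-provider
        pools:
          - name: main
            networks:
              - public  # a plain list of network names, not a mapping with a name key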
*** Xuchu has quit IRC09:04
*** lpetrut has joined #openstack-infra09:04
*** lucasagomes has joined #openstack-infra09:05
sshnaidmcan someone clear SSH sessions to gerrit for user os-tripleo-ci? I have an error about max 64 connections09:07
*** ociuhandu has quit IRC09:09
*** tkajinam|lunch has quit IRC09:09
*** udesale has joined #openstack-infra09:12
*** SurajPatil has joined #openstack-infra09:15
*** surpatil has quit IRC09:17
*** surpatil has joined #openstack-infra09:18
*** iurygregory is now known as iurygregory_cour09:20
*** iurygregory_cour is now known as iury_course09:20
*** SurajPatil has quit IRC09:20
openstackgerritMerged opendev/system-config master: bridge.o.o: update to latest Ansible  https://review.opendev.org/69509909:30
*** ccamacho has joined #openstack-infra09:42
*** derekh has joined #openstack-infra09:43
*** Netsplit: ~260 users quit IRC09:54
*** Netsplit over: ~260 users rejoin #openstack-infra10:04
*** orwell.freenode.net sets mode: +o ChanServ10:04
*** gfidente has joined #openstack-infra10:05
*** logan- has quit IRC10:05
*** dtantsur|afk is now known as dtantsur10:06
*** logan- has joined #openstack-infra10:06
*** hashar has joined #openstack-infra10:14
*** osmanlicilegi has quit IRC10:18
*** osmanlicilegi has joined #openstack-infra10:19
*** osmanlicilegi has quit IRC10:22
*** osmanlicilegi has joined #openstack-infra10:27
*** ykarel is now known as ykarel|afk10:27
*** dchen has joined #openstack-infra10:32
*** ociuhandu has joined #openstack-infra10:39
*** pcaruana has quit IRC10:42
*** hashar has quit IRC10:54
*** ykarel|afk is now known as ykarel11:07
*** rfolco has joined #openstack-infra11:17
*** rfolco has quit IRC11:17
*** rfolco has joined #openstack-infra11:18
*** rishabhhpe has quit IRC11:18
*** witek has quit IRC11:29
*** ociuhandu has quit IRC11:31
*** derekh has quit IRC11:33
*** SurajPatil has joined #openstack-infra11:33
*** surpatil has quit IRC11:36
*** SurajPatil has quit IRC11:38
openstackgerritMattias Jernberg proposed opendev/git-review master: Allow the default of notopic to be configurable  https://review.opendev.org/69744811:40
*** ociuhandu has joined #openstack-infra11:44
*** witek has joined #openstack-infra11:46
*** udesale has joined #openstack-infra11:51
*** iury_course has quit IRC11:55
*** iurygregory has joined #openstack-infra11:56
*** iurygregory is now known as iury_course11:56
*** Lucas_Gray has joined #openstack-infra12:07
*** zbr_ is now known as zbr|out12:13
*** Lucas_Gray has quit IRC12:20
*** jpena is now known as jpena|lunch12:22
*** Lucas_Gray has joined #openstack-infra12:22
*** larainema has quit IRC12:26
noonedeadpunkhi folks. We have a pretty useful job which sometimes lasts more than 3 hours, which results in hitting the timeout limit. For some roles it fails almost always.12:32
noonedeadpunkSo it's an upgrade job which verifies that the OSA upgrade is good. It's pretty important to be sure that patches are not breaking the upgrade path. But it deploys OSA twice, which is not so fast...12:44
noonedeadpunkI was wondering if we can do something about that - like changing the flavor to one with increased RAM, or allowing this exact job to run for more than 3h...12:44
noonedeadpunkOr maybe you have some suggestions regarding this case...12:45
*** rcernin has quit IRC12:52
*** jamesmcarthur has joined #openstack-infra12:53
*** pcaruana has joined #openstack-infra12:54
*** jklare has quit IRC12:57
*** jklare has joined #openstack-infra13:00
*** rlandy has joined #openstack-infra13:01
Shrewsnoonedeadpunk: https://zuul-ci.org/docs/zuul/user/config.html#attr-job.timeout13:02
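Note: per the linked docs, a job's timeout is given in seconds and is capped by the tenant's max-job-timeout; a minimal sketch with a hypothetical job name:
    - job:
        name: osa-upgrade
        timeout: 10800  # 3 hours; anything above the tenant-wide cap needs a tenant config change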
*** ociuhandu has quit IRC13:06
*** eharney has quit IRC13:06
*** jamesmcarthur has quit IRC13:07
*** rh-jelabarre has joined #openstack-infra13:07
noonedeadpunkShrews: there's a max limit defined now of 3 hours https://review.opendev.org/#/c/697435/13:07
noonedeadpunkoh, so you mean we have this limit defined...13:09
sshnaidmsorry for bothering again, but can someone clear SSH sessions to Gerrit for user os-tripleo-ci? I have an error about max 64 connections13:10
donnydI have brought this up before, but I'm not entirely sure how to fix it. I think last time we had to restart my nodepool worker. The instances in "Ready" are eating a significant amount of the quota nodepool can build against on FN.13:10
donnydIt's at 9 instances, but the quota is much higher. So right now it will only build 25-27 instances13:11
donnydbut it should be more like 3513:12
*** jamesmcarthur has joined #openstack-infra13:12
donnydIn the big picture, 10 test nodes isn't a lot.. but for little FN it's like 30% of what it can do :(13:13
*** jpena|lunch is now known as jpena13:14
Shrewsnoonedeadpunk: ah, ok, that's the default max timeout. it could be changed, but would have to be done for the entire openstack tenant.13:20
*** goldyfruit_ has quit IRC13:20
noonedeadpunkShrews: Yeah, that's not so good and reasonable... What about changing the flavor to something with more than 8GB of RAM?13:21
noonedeadpunkas an alternative we're considering just dropping such a job for centos7 until we add support for centos8, which hopefully will be faster...13:22
Shrewsif the job is resource constrained, then a different flavor might be faster. but i have no idea if that is the case13:23
*** Goneri has quit IRC13:24
*** jamesmcarthur has quit IRC13:25
*** jamesmcarthur has joined #openstack-infra13:26
*** ociuhandu has joined #openstack-infra13:27
*** udesale has quit IRC13:30
noonedeadpunkShrews: actually I even see OOMs from time to time during tempest runs, and when reproducing the test on a vm with the same flavor it really lacks resources...13:30
*** jamesmcarthur has quit IRC13:31
*** witek has quit IRC13:34
*** udesale has joined #openstack-infra13:39
donnydnoonedeadpunk: FN already has expanded resources. We have instances for centos and ubuntu that have 16GB of memory13:40
Shrewsdonnyd: not entirely sure i understand your question. there are different quotas at play in nodepool: the max-servers setting and the quotas reported back from the provider (cores, ram, etc)13:41
donnydjust use the following labels - centos-7-expanded or ubuntu-bionic-expanded13:41
noonedeadpunkdonnyd: oh, that's great, thanks!13:42
noonedeadpunkwill try this out13:42
donnydI think FN is the only one with those labels, so it will execute there.. but at least it can help you sort out if more memory fixes the issue13:42
AJaegernoonedeadpunk: keep in mind that this will tie you to a single cloud - if that one is down, you have bad luck13:43
donnydI wouldn't use it as a long term solution, maybe just something to deterministically figure out if more memory will in fact fix the issue you are having13:44
donnydBut the purpose of the label is for exactly issues like this13:44
donnydnova used them last cycle for a couple things if I am not mistaken13:44
noonedeadpunkcan we add the flavor on another cloud (like vexxhost? :)) if this works out?13:45
AJaegernoonedeadpunk: that's better a question to discuss later with the rest of the team. I expect the answer to be: No - with reference to https://docs.openstack.org/infra/manual/testing.html13:46
donnydwe can ask mnaser, but I think for the most part infra didn't want these labels to be used permanently. I believe the feeling is the job should fit in the standard 8GB flavor. don't want to speak on behalf of infra though13:46
donnydIts only there to aid in getting a critical job through or to be used for testing13:47
*** Lucas_Gray has quit IRC13:47
donnydnoonedeadpunk: I am thinking AJaeger is right, and it surely has been asked before. With that said, use the label and see if it fixes the issue.13:48
mordredinfra-root: I've contracted plague so I may not be my normal energetic self today13:48
noonedeadpunkAnyway I hope that centos8 is going to be faster - the centos7 jobs are 1.5 times slower than the ubuntu ones while doing exactly the same thing...13:48
Shrewsmordred: ugh. have you tried not being sick?13:49
noonedeadpunkthanks for the answers anyway13:49
*** ykarel is now known as ykarel|afk13:50
noonedeadpunkwill check if the extended resources help before thinking further about this13:50
mordredShrews: yes, in fact, I did try that. I clearly failed13:51
donnydmordred: unacceptable13:51
mordredShrews: I guess that makes me both sick and a failure13:51
mordreddonnyd: I know it13:51
donnydmordred: I am thinking you are correct13:51
donnydI get at least one solid laugh out of this community every day, mordred you have already made mine13:52
mordreddonnyd: \o/ *cough*13:52
*** mriedem has joined #openstack-infra13:54
mordredclarkb: yay on tcp open! we should send that to lunny just to say "nice work"13:56
*** derekh has joined #openstack-infra13:56
*** jamesmcarthur has joined #openstack-infra13:57
mordred(I pinged him with the link in their discord)13:59
*** diablo_rojo has joined #openstack-infra14:00
*** armax has joined #openstack-infra14:00
*** witek has joined #openstack-infra14:04
fricklerinfra-root: we are seeing an RST rendering issue on https://opendev.org/openstack/cookbook-openstack-common , see the table at the end (we moved from .md to .rst after noticing other issues with the rendering there)14:04
*** jamesmcarthur has quit IRC14:04
fricklerthe table shows up fine on github and also when I look at it locally with https://pypi.org/project/restview/14:05
fricklerdo we know what gitea uses for rendering?14:05
mordredfrickler: /usr/bin/pandoc -f rst14:06
mordredfrickler: check playbooks/roles/gitea/templates/app.ini.j2 in system-config for the markup.pandoc section14:07
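Note: gitea external renderers follow the pattern documented at https://docs.gitea.io/en-us/external-renderers/ ; a sketch of what such a markup.pandoc section plausibly looks like (the exact values live in app.ini.j2):
    [markup.pandoc]
    ENABLED = true
    FILE_EXTENSIONS = .rst
    RENDER_COMMAND = "pandoc -f rst -t html"
    IS_INPUT_FILE = false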
*** jtomasek has quit IRC14:08
*** eharney has joined #openstack-infra14:09
noonedeadpunkdonnyd: unfortunately I got "The nodeset "centos-7-expanded" was not found."...14:11
*** jtomasek has joined #openstack-infra14:11
*** gfidente has quit IRC14:12
*** gfidente has joined #openstack-infra14:12
*** cgoncalves has quit IRC14:13
noonedeadpunkoh, I did it wrong14:14
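Note: the error suggests the label name was used where a nodeset name goes; an expanded label is referenced from a job roughly like this (job name hypothetical):
    - job:
        name: osa-upgrade
        nodeset:
          nodes:
            - name: primary
              label: centos-7-expanded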
fricklermordred: great, thx, with that I can reproduce the issue. is there a particular reason to use pandoc instead of other renderers? the gitea docs suggest rst2html.py from docutils, which also produces a correct rendering for me https://docs.gitea.io/en-us/external-renderers/14:15
*** ociuhandu has quit IRC14:16
AJaegerfrickler: add an empty line - want me to send a patch?14:18
*** Goneri has joined #openstack-infra14:18
fricklerAJaeger: oh, that works, thanks. if you want to send a patch, you are welcome, we haven't had a contribution from suse in a while ;)14:21
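Note: the fix in question; reStructuredText needs a blank line between a paragraph and a following table, roughly:
    Some introductory paragraph.

    ========  ========
    Option    Default
    ========  ========
    foo       bar
    ========  ========
Without the blank line the table was not rendered here (the actual patch is https://review.opendev.org/697479, referenced below).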
AJaegerfrickler: HUGE and significant patch sent ;)14:23
AJaegerhttps://review.opendev.org/69747914:24
fricklerAJaeger: well, this was the last blocker for me to start branching stable/rocky, so in fact not as insignificant as it may look, thank you14:25
AJaeger;)14:27
AJaegerglad to be of help14:27
*** rlandy is now known as rlandy|mtg14:28
ricolinHi guys, besides the newly coming Linaro servers, do we also have other ARM servers in our nodepool already?14:29
*** rosmaita has joined #openstack-infra14:33
*** zbr|out has quit IRC14:33
*** zbr has joined #openstack-infra14:34
dtantsurfolks, could we do something about "This change depends on a change that failed to merge." when the parent patch fails in the gate?14:35
dtantsurlike, instead of -2 and recheck, hold it with no verified and re-enter the gate when the parent re-enters it?14:35
*** jamesmcarthur has joined #openstack-infra14:36
*** pcaruana has quit IRC14:38
*** ociuhandu has joined #openstack-infra14:40
*** jamesmcarthur has quit IRC14:40
*** zbr_ has joined #openstack-infra14:41
openstackgerritMatt Riedemann proposed opendev/elastic-recheck master: Add query for NetworkAmbiguous fail bug 1844568  https://review.opendev.org/69748614:41
openstackbug 1844568 in tempest "[compute] "create_test_server" if networks is undefined and more than one network is present" [Medium,In progress] https://launchpad.net/bugs/1844568 - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)14:41
*** zbr has quit IRC14:44
openstackgerritSimon Westphahl proposed zuul/zuul master: Store event id for buildsets in database  https://review.opendev.org/69748814:47
*** goldyfruit_ has joined #openstack-infra14:48
*** zbr_ has quit IRC14:49
openstackgerritAmy Marrich (spotz) proposed opendev/irc-meetings master: New UC meeting time  https://review.opendev.org/69748914:53
*** zbr has joined #openstack-infra14:55
openstackgerritSimon Westphahl proposed zuul/zuul master: Store event id for buildsets in database  https://review.opendev.org/69748814:55
*** ociuhandu has quit IRC14:57
*** cgoncalves has joined #openstack-infra14:57
*** ociuhandu has joined #openstack-infra14:57
*** goldyfruit___ has joined #openstack-infra15:00
*** goldyfruit_ has quit IRC15:02
*** rpittau is now known as rpittau|afk15:03
*** rlandy|mtg is now known as rlandy15:09
*** dpawlik has quit IRC15:16
TheJuliaAny chance someone can take a look at https://review.opendev.org/#/c/697159/?15:16
TheJuliaI have some folks that want to go ahead and start packaging that downstream15:17
*** ociuhandu has quit IRC15:21
*** ociuhandu has joined #openstack-infra15:22
mordredTheJulia: comment from frickler on it - that group name looks weird15:24
clarkbsshnaidm: can you figure out why you are leaking connections? also, they should time out after 1 hour15:27
fungiricolin: yes, we have quota for 8 servers in linaro-london currently, booting arm64 images for two versions of ubuntu and two versions of debian: https://opendev.org/openstack/project-config/src/branch/master/nodepool/nl03.openstack.org.yaml#L394-L43215:27
clarkbsshnaidm: is this account connecting with zuul? if so is that zuul install up to date or is it older? there was a paramiko issue zuul had to address at one point15:27
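Note: a quick client-side check for leaked sessions, assuming Gerrit's standard SSH port 29418; each long-lived entry corresponds to one of the (max 64) server-side connections:
    ss -tnp | grep 29418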
ricolinfungi, have we assigned any tasks to those servers at this time?15:28
TheJuliamordred: ack, thanks15:28
clarkbricolin: this is why I suggested chatting with linaro. They already supply arm64 nodes to us15:28
ricolinclarkb, okay, I thought you were talking about the same Linaro team as I was15:30
ricolindo we have any contact for Linaro London?15:31
clarkbricolin: hrw iirc15:32
clarkbdtantsur: I don't think zuul's current config options can express that15:34
ricolinclarkb, Okay, I will go chat with Marcin15:34
dtantsurclarkb: that's quite sad. is it something that can be considered for a roadmap?15:34
clarkbdtantsur: I expect that the zuul team may suggest openstack stop requiring "clean check" which I think will avoid the problems you have with the current setup15:35
clarkbthe issue being you don't want to wait for both changes to get verified +1 again15:36
dtantsurI'm fine with the one that failed to go through check again15:36
dtantsurI don't quite enjoy making the 3 that depend on it (in this case) to do the same15:36
clarkbthat said from an openstack perspective this is why I constantly push on "please fix job instability"15:36
dtantsurwell....15:37
*** udesale has quit IRC15:37
clarkband if I had to vocally say it I would've already lost my voice at least once doing it15:37
*** gfidente has quit IRC15:37
dtantsurI'd love somebody to fix e.g. PXE instability in nested virt15:37
*** jamesmcarthur has joined #openstack-infra15:37
dtantsuror similar vague issues we see from time to time15:37
dtantsuranyway, what I suggest is a bit less nuclear than bringing 'reverify' back15:39
openstackgerritBrian Haley proposed openstack/project-config master: Add ovn-octavia-provider driver project  https://review.opendev.org/69707615:39
clarkbthe issue is that, from zuul's perspective (under currently expressable config), the child has failed when the parent does15:39
clarkbit is a different type of failure and we could possibly configure zuul to treat it differently, but that requires updating zuul15:40
dtantsurright15:41
*** jamesmcarthur has quit IRC15:42
openstackgerritIlya Etingof proposed openstack/project-config master: Add `sushy-oem-idrac` project  https://review.opendev.org/69715915:42
*** witek has quit IRC15:54
*** iury_course has quit IRC15:55
openstackgerritMerged opendev/elastic-recheck master: Add query for NetworkAmbiguous fail bug 1844568  https://review.opendev.org/69748615:56
openstackbug 1844568 in tempest "[compute] "create_test_server" if networks is undefined and more than one network is present" [Medium,In progress] https://launchpad.net/bugs/1844568 - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)15:56
*** chkumar|ruck is now known as raukadah15:57
*** ykarel|afk is now known as ykarel|away16:01
*** ykarel|away has quit IRC16:05
AJaegerfrickler: time to branch - https://opendev.org/openstack/cookbook-openstack-common looks fine to me ;)16:08
*** dave-mccowan has joined #openstack-infra16:08
openstackgerritMerged openstack/project-config master: Add `sushy-oem-idrac` project  https://review.opendev.org/69715916:10
mordredis sushy-oem-idrac highlighted weirdly in anyone else's client in those lines above?16:13
clarkbnot mine16:13
clarkbbut possible due to the `wrapper` ?16:13
mordredoh - wrapper was just highlighted the same way16:14
mordredwhat is that?16:14
mordredahhhh16:14
mordredit's the backticks16:14
mordred*fascinating*16:14
mordredwhat a strange life choice glowingbear has made16:15
clarkbmordred: re being sick: I was getting over the last thing, then my kids and my cousin's kids ended up in the same place for thanksgiving and now we are all sick again16:15
clarkbalso now we know how to get mordred's attention16:15
AJaegerJust got a RETRY_LIMIT, see https://zuul.opendev.org/t/openstack/build/05427596f97046299133a986dfa90d3816:16
AJaeger"E: Failed to fetch http://mirror.dfw.rax.opendev.org/ubuntu/dists/bionic-updates/main/binary-amd64/Packages.gz  File has unexpected size (1139750 != 1139760). Mirror sync in progress? [IP: 2001:4800:7819:105:be76:4eff:fe04:9b8a 80]"16:16
mordredclarkb: that sounds so wonderful16:17
clarkbAJaeger: I think that is the afs cache sync bug. If it persists we can delete things out of the cache via some afs command I don't remember16:17
clarkbI think it may be the case that we hit that when we read the index, a vos release happens, then we read the package file16:18
clarkbso it doesn't actually persist past the next index read16:18
clarkband it is specific to Packages.gz because that filename is fixed, unlike the actual package names (which are version-specific filenames)16:18
AJaegerclarkb: ok, let's wait and see whether we get more reports16:20
*** ykarel|away has joined #openstack-infra16:20
*** mattw4 has joined #openstack-infra16:21
*** ociuhandu has quit IRC16:23
fricklerAJaeger: thanks for double checking. we just need to update about a dozen other repos in the same way now ... but I hope we can branch tomorrow16:23
AJaegerfrickler: hope it will all work out !16:27
*** igordc has joined #openstack-infra16:27
*** witek has joined #openstack-infra16:28
openstackgerritMatthieu Huin proposed zuul/zuul master: [DNM][WIP] admin REST API: zuul-web integration  https://review.opendev.org/64353616:32
*** jamesmcarthur has joined #openstack-infra16:38
*** tesseract has quit IRC16:39
*** gyee has joined #openstack-infra16:41
*** pgaxatte has quit IRC16:41
*** slaweq_ has joined #openstack-infra16:41
*** jamesmcarthur has quit IRC16:42
*** openstackgerrit has quit IRC16:42
*** openstackgerrit has joined #openstack-infra16:43
openstackgerritMatthieu Huin proposed zuul/zuul master: enqueue: make trigger deprecated  https://review.opendev.org/69544616:43
*** goldyfruit_ has joined #openstack-infra16:43
*** nicolasbock has joined #openstack-infra16:43
*** jpena is now known as jpena|brb16:45
*** goldyfruit___ has quit IRC16:46
*** mattw4 has quit IRC16:49
*** electrofelix has quit IRC16:49
*** ricolin has quit IRC16:54
*** dave-mccowan has quit IRC16:58
*** mattw4 has joined #openstack-infra16:58
openstackgerritDrew Walters proposed opendev/system-config master: lists: Add Airship VMP mailing lists  https://review.opendev.org/68927116:59
*** dave-mccowan has joined #openstack-infra17:00
*** jamesmcarthur has joined #openstack-infra17:00
*** ijw_ has joined #openstack-infra17:02
*** slaweq_ has quit IRC17:02
*** ociuhandu has joined #openstack-infra17:02
*** dave-mccowan has quit IRC17:10
*** gfidente has joined #openstack-infra17:10
*** goldyfruit___ has joined #openstack-infra17:12
openstackgerritJames E. Blair proposed zuul/zuul-jobs master: WIP: use-buildset-registry: Add podman support  https://review.opendev.org/69505117:14
*** goldyfruit_ has quit IRC17:15
*** dtantsur is now known as dtantsur|afk17:19
*** lennyb has quit IRC17:20
*** witek has quit IRC17:20
*** etingof has joined #openstack-infra17:22
*** lastmikoi has quit IRC17:24
*** jpena|brb is now known as jpena17:25
*** lennyb has joined #openstack-infra17:26
*** dpawlik has joined #openstack-infra17:28
*** lastmikoi has joined #openstack-infra17:28
*** dpawlik has quit IRC17:30
*** lennyb has quit IRC17:32
openstackgerritMerged opendev/irc-meetings master: New UC meeting time  https://review.opendev.org/69748917:35
*** jamesmcarthur has quit IRC17:36
*** lpetrut has quit IRC17:40
*** lucasagomes has quit IRC17:43
*** jamesmcarthur has joined #openstack-infra17:46
*** ccamacho has quit IRC17:46
*** rfolco is now known as rfolco|brb17:48
*** jamesmcarthur has quit IRC17:50
*** jamesmcarthur has joined #openstack-infra17:58
*** jamesmcarthur has quit IRC17:59
*** derekh has quit IRC18:00
*** mattw4 has quit IRC18:03
*** lpetrut has joined #openstack-infra18:07
*** ykarel|away has quit IRC18:07
*** jamesmcarthur has joined #openstack-infra18:10
*** jamesmcarthur has quit IRC18:11
*** mattw4 has joined #openstack-infra18:15
openstackgerritMichael Johnson proposed openstack/diskimage-builder master: Set correct python version for non-chroot scripts  https://review.opendev.org/69721118:15
*** ociuhandu has quit IRC18:17
*** jpena is now known as jpena|off18:18
*** michael-beaver has joined #openstack-infra18:19
*** armax has quit IRC18:25
*** jamesmcarthur has joined #openstack-infra18:26
cmurphycould i request a new glean release? git log 1.16.0..master --oneline => 403646c Add support for SLES18:28
clarkbcmurphy: yes I can take a look at doing that after late breakfast18:28
mordredcmurphy: I think you just did request it18:28
cmurphymordred: well would you look at that18:29
mordred;)18:29
cmurphythanks clarkb18:29
*** jamesmcarthur has quit IRC18:32
*** Goneri has quit IRC18:35
*** rfolco|brb is now known as rfolco18:35
*** ralonsoh has quit IRC18:37
*** gfidente is now known as gfidente|afk18:38
*** dave-mccowan has joined #openstack-infra18:38
openstackgerritClark Boylan proposed zuul/zuul master: Record job build attempts in inventory  https://review.opendev.org/69754818:43
*** Goneri has joined #openstack-infra18:44
openstackgerritMonty Taylor proposed opendev/storyboard master: Build container images  https://review.opendev.org/61119118:58
openstackgerritMonty Taylor proposed opendev/storyboard master: Change py3 testing to py36  https://review.opendev.org/69754918:58
mordredfungi, SotK: ^^ updated the storyboard container patch - and another thing19:05
fungithanks!19:05
SotKthanks, just reviewed the other thing :)19:06
openstackgerritJames E. Blair proposed zuul/zuul-jobs master: WIP: use-buildset-registry: Add podman support  https://review.opendev.org/69505119:07
*** ociuhandu has joined #openstack-infra19:07
openstackgerritClark Boylan proposed zuul/zuul master: Record job build attempts in inventory  https://review.opendev.org/69754819:11
*** tobias-urdin has quit IRC19:16
*** ijw_ has quit IRC19:17
clarkbcmurphy: tag pushed19:18
clarkbcmurphy: release things should happen shortly19:18
cmurphytyty19:18
openstackgerritMerged opendev/system-config master: Update gitea docs  https://review.opendev.org/69442719:23
clarkbhttps://pypi.org/project/glean/1.17.0/ there it is19:29
*** ociuhandu has quit IRC19:31
*** dave-mccowan has quit IRC19:36
openstackgerritJames E. Blair proposed zuul/zuul-jobs master: use-buildset-registry: Add podman support  https://review.opendev.org/69505119:44
*** pkopec has quit IRC19:52
*** hashar has joined #openstack-infra19:53
*** jklare has quit IRC19:59
*** jklare has joined #openstack-infra20:01
*** rcernin has joined #openstack-infra20:02
*** dosaboy has quit IRC20:04
*** igordc has quit IRC20:05
*** rcernin has quit IRC20:18
openstackgerritMerged opendev/infra-specs master: Declare victory on StoryBoard  https://review.opendev.org/69720120:21
rlandyAJaeger: clarkb: we are also seeing the rax mirror issue ...20:22
rlandy2019-12-05 16:19:21.042766 | ubuntu-bionic | E: Failed to fetch http://mirror.iad.rax.openstack.org/ubuntu/dists/bionic-updates/main/binary-amd64/Packages.gz  File has unexpected size (1139750 != 1139760). Mirror sync in progress? [IP: 2001:4802:7807:103:be76:4eff:fe20:53f8 80]20:22
rlandyreporting again as mentioned above20:22
clarkbrlandy: looks to be in the same time range as AJaeger's?20:23
clarkbfungi ^ is it possible there are races around the index file contents and the packages.gz files as I hypothesised before?20:23
rlandyclarkb: so just a hitch?20:24
rlandyEmilienM reported seeing this failure20:24
clarkbrlandy: my current hunch is it is a race between afs file updates and apt's fetching of said files20:24
clarkbthere is a different race that our use of afs addresses, but I think this one may slip through20:25
rlandyare we expecting a fix? or just retest and hope for better times ahead?20:25
clarkbI think it should work now. It only happens in a small period of time around the fs update20:27
clarkband doesn't happen every time20:27
rlandychecking logstash20:27
clarkbas for fixing, if I am correct in my hunch then this is part of apt's inherent racing20:27
clarkbit may also be an afs bug20:28
rlandyk - logstash shows a good few failures around the same time20:32
*** jklare has quit IRC20:58
fungiclarkb: there shouldn't be races around generation of the indices and performing vos release. remember this is a constructed package repository with mirrors of packages, not a mirror of a package repository20:58
fungiit _is_ possible though that there are cache corruption problems on some of the servers... was it in the same provider each time?20:59
fungii remember we saw this a bunch when testing kafs, though that got rolled back21:00
fungiwe only have one process which writes to that volume, and it does so with a flock to prevent other concurrent runs overlapping it, and performs a vos release only once it's finished21:01
fungiand we're serving the read-only copy, not the read-write volume21:02
*** jklare has joined #openstack-infra21:02
fungilooking back, AJaeger reported one in rax-dfw around 16:16z21:03
fungiso same provider but different region (and so different server/cache)21:04
clarkbfungi: I mean a race between files being updated and apt pulling them21:05
clarkbfungi: apt-get update somehow gets old version of signing file with file sizes then gets new version of packages.gz21:05
clarkbor vice versa21:05
fungithat could be... looking to see how that length is recorded21:05
clarkbthe packages themselves should be fine because we don't delete old packages so if you've got an old index you still get the old packages just fine21:06
fungii was interpreting it as a decompression error, but i may have misread21:06
clarkbbut could be a race in the metadata itself21:06
clarkbfungi: iirc it's checking file size against what is listed in some index21:06
*** eharney has quit IRC21:06
fungipossibly http://mirror.iad.rax.openstack.org/ubuntu/dists/bionic-updates/main/Contents-amd64.gz but i'm checking now21:07
fungitaking a loooong time to retrieve for some reason21:07
fungigetting around 700kb/sec average21:08
clarkbI think that is "normal" for uncached files21:08
clarkbthough you'd expect that to be a hot file and be cached21:08
fungiin that case i guess apt isn't hitting that file, since it's timestamped from 7 hours ago21:08
clarkbor we aren't caching it21:09
*** dosaboy has joined #openstack-infra21:09
fungibut also, no, that file doesn't record a length for binary-amd64/Packages.gz so i'll keep looking21:09
fungiyeah, nevermind, it's in http://mirror.iad.rax.openstack.org/ubuntu/dists/bionic-updates/Release21:10
fungi1e560614eb300b783d16637560a69d76 1139750 main/binary-amd64/Packages.gz21:10
clarkbhttp://mirror.iad.rax.openstack.org/ubuntu/dists/bionic-updates/Release that file21:10
clarkbya21:10
fungiand yep, `wget -qO- http://mirror.iad.rax.openstack.org/ubuntu/dists/bionic-updates/main/binary-amd64/Packages.gz|wc -c` gives me 113975021:11
clarkbif I do an apt-get update on my local xenial server it grabs the InRelease files then the Packages files in quick succession21:11
clarkbis it possible that apt-get grabs the release file first with old data, then gets the packages file with new data and new size and we see failures21:12
clarkbthat gap isn't big in local testing but does exist21:12
fungithe file with the length in it and the file to which it refers are timestamped only a couple minutes apart21:12
clarkbfungi: right this depends on the timing of vos release completion I think21:12
*** jtomasek has quit IRC21:12
clarkbbasically you'd need a vos release to complete in the middle of an apt-get update21:13
clarkbwhich for the time span we see the errors for probably doesn't line up well21:13
fungiseems possible. checking to see when vos release started/ended21:13
*** dave-mccowan has joined #openstack-infra21:13
fungithe vos release for that volume ran from 14:47:05 to 14:50:2421:14
clarkbwhich is about 1.25 hours prior to when we see the issue21:15
fungiit ran again between 16:38:45 and 16:39:1921:15
fungi(cronjob for it runs every two hours)21:16
fungibut yeah, the timestamps for those indices were prior to the 14:47:05 vos release, so didn't update in the 16:38:45 or later releases21:17
clarkbwe should probably also double check that the mirrors are talking to the RO volume21:17
fungiDocumentRoot /var/www/mirror21:19
fungilrwxrwxrwx 1 root root 32 Oct 13  2017 /var/www/mirror/ubuntu -> /afs/openstack.org/mirror/ubuntu21:19
fungiso that's the read-only path21:19
clarkbok, chances are we are running into an afs read issue?21:21
clarkband getting an incomplete file size? or the web server is doing an incomplete transfer21:21
fungiextra bytes if i'm reading the error correctly21:22
*** jaosorior has joined #openstack-infra21:22
*** sgw has joined #openstack-infra21:22
fungiFile has unexpected size (1139750 != 1139760).21:22
fungi1e560614eb300b783d16637560a69d76 1139750 main/binary-amd64/Packages.gz21:22
clarkbhrm that makes it more interesting as I would expect fewer bytes, not more21:22
fungiyeah, truncated read is easier to explain21:22
fungicurrently the Release file and a wget|wc -c agree on 113975021:24
fungiso we need an explanation for why it got an 1139760 byte Packages.gz there21:24
clarkbfungi: what if Packages.gz at 1139760 bytes is an old version and we got a new (current) Release file?21:25
*** rfolco has quit IRC21:25
clarkbso no read problems other than we got a file from older vos release21:25
fungiexcept the timestamp for when both were written was prior to the vos release starting21:25
fungiPackages.gz written at 2019-12-05 14:32 and Release written a couple minutes later at 14:34 then vos release initiated from 14:47:05 to 14:50:24 and we didn't get the bad read until 16:19:21 and the next vos release didn't start until 16:38:4521:28
clarkbyeah super odd21:28
fungiso i guess possible but the timeline isn't anywhere close to backing up that theory, we'd have to operate on an assumption that we've got bad timestamps somewhere too21:29
clarkbor that the cache somehow reverts to an older state long after a cache update event should happen21:29
fungiright, plenty of pathological explanations but nothing passes an occam's razor sniff test for me yet21:30
clarkbya21:30
fungiwhat logstash query were you using? curious to look at the cluster21:30
clarkband the file size is such that I doubt it is a 500 error response or similar from apache21:30
clarkbI haven't done a search myself yet. was just looking at the two example instances21:30
* clarkb makes a query21:31
fungiright, being within 10 bytes on a 1.1mb file is more than mere coincidence21:31
clarkbfungi: message:"File has unexpected size" seems to work21:31
fungii half wonder if there's a transparent "web accelerator" in between the nodes and the mirror, buried somewhere in rackspace's routing infrastructure, which served a cached copy or something (this is http after all)21:32
clarkblooks like it happened over an almost 2 hour time period that I now have to do timezone math on to figure out21:32
fungii do not have to do timezone math, taking a look21:32
clarkbok the first occurrence does seem to line up with the 14:47-14:50 vos release21:33
clarkbthen it stops at ~16:3521:33
clarkbI think that implies it was broken after that first vos release until the subsequent release21:33
fungiyep, i concur21:34
clarkbI guess we now need to determine if the vos release and associated cache updates were successful and the actual disk content was bad or if the vos release itself had problems. The file timestamps imply to me that the vos release itself had a problem21:34
funginote that these are not all the same files, different expected and found lengths21:34
clarkbthe files don't seem to be region specific either21:36
clarkbthe data in the release file seems to match what we currently have so maybe we assume that data was correct21:37
clarkbthat would then imply the Packages.gz contents were the ones out of sync (too large)21:37
fungirax-dfw and rax-iad (we already had examples from each) but also ovh-bhs121:37
fungithough i'm interestingly not seeing failures from any other providers like i would expect21:37
*** Goneri has quit IRC21:38
clarkbfungi: I don't think it is a bionic issue as the ovh mirror is xenial iirc21:38
funginope, adjusting the query window to go back further i also see rax-ord in the mix21:39
*** pkopec has joined #openstack-infra21:39
clarkbfungi: in the same time window or another occurrence?21:39
fungisame window, just closer to the start21:39
fungiwe had more than 500 hits so logstash was limiting query results21:39
fungii narrowed the window to closer to the start of the cluster21:40
clarkb" Very occasionally problems might arise due to cache inconsistencies. The following commands (subcommands of fs) can be used to remove stale cache data." dartmouth seems to maybe know about this21:40
clarkband it lists the fs flush and fs flushvolume which we've used in the past to help address this21:41
*** ociuhandu has joined #openstack-infra21:41
fungii need to go meet some folks for dinner, but might be interesting to ponder why we only saw it impacting rax and ovh builds, but saw more than enough hits that we should have statistically gotten a few other providers in there21:41
fungii'll be back in an hour or two21:41
clarkbfungi: my guess is that since caches are per server, inconsistencies may arise on them independently21:42
clarkbperhaps due to missed udp packets?21:42
fungiperhaps21:42
clarkbI don't know if there is any handshaking done around those packets21:42
* fungi <- out21:42
*** pkopec has quit IRC21:43
*** persia has quit IRC21:43
*** persia has joined #openstack-infra21:45
openstackgerritJames E. Blair proposed zuul/zuul master: Log item warning messages at info level  https://review.opendev.org/69757121:45
clarkbianw: corvus ^ any of this sound familiar to you?21:45
*** ijw has joined #openstack-infra21:46
ianwsorry, not something i've heard of :/21:47
corvusclarkb: do i need to read back to 20:22?21:47
clarkbcorvus: ya 20:22 gives the background21:47
clarkb20:58 may be sufficient though21:48
openstackgerritIan Wienand proposed zuul/nodepool master: Dockerfile : install sudo for nodepool-builder  https://review.opendev.org/69470921:48
openstackgerritIan Wienand proposed zuul/nodepool master: Dockerfile: add DEBUG environment flag  https://review.opendev.org/69484521:48
openstackgerritIan Wienand proposed zuul/nodepool master: Also build sibling container images  https://review.opendev.org/69739321:48
openstackgerritIan Wienand proposed zuul/nodepool master: [wip] move openstack testing to use containerised daemon  https://review.opendev.org/69346421:48
clarkbhttps://lists.openafs.org/pipermail/openafs-info/2012-November/039023.html <- taking the udp dropped packets idea brings that up21:49
clarkbmaybe we should increase udp buffer size?21:49
*** snierodz has quit IRC21:50
*** stephenfin has quit IRC21:50
clarkbif we think that is a reasonable next step I can start looking at that after my bike ride21:51
*** jaosorior has quit IRC21:53
corvusremember that rx is basically tcp-before-tcp-existed-implemented-on-udp -- so with dropped packets, we're not supposed to end up with incorrect data or anything -- just poor performance.  aiui.21:54
corvusso yeah, there should be handshaking21:54
*** stephenfin has joined #openstack-infra21:57
clarkbok, so maybe worth doing for performance reasons but won't help this issue necessarily21:58
openstackgerritIan Wienand proposed zuul/nodepool master: Dockerfile: install sudo for nodepool-builder  https://review.opendev.org/69470922:03
openstackgerritIan Wienand proposed zuul/nodepool master: Dockerfile: add DEBUG environment flag  https://review.opendev.org/69484522:03
openstackgerritIan Wienand proposed zuul/nodepool master: Also build sibling container images  https://review.opendev.org/69739322:03
openstackgerritIan Wienand proposed zuul/nodepool master: [wip] move openstack testing to use containerised daemon  https://review.opendev.org/69346422:03
*** goldyfruit___ has quit IRC22:05
openstackgerritSaul Wold proposed openstack/project-config master: starlingx: add oidc-auth-armada-app  https://review.opendev.org/69757622:16
openstackgerritJames E. Blair proposed zuul/zuul master: Fix potential wedge with provides/requires/dependencies  https://review.opendev.org/69757922:19
*** ociuhandu has quit IRC22:20
sgwMorning folks, I just sent a review to project-config for adding a new starlingx/oidc-auth-armada-app, when it merges, we will need a Gerrit group added "starlingx-oidc-auth-armada-app-core", please add robert.church@windriver.com as initial reviewer22:20
openstackgerritKendall Nelson proposed openstack/cookiecutter master: Update CONTRIBUTING.rst template  https://review.opendev.org/69600122:27
*** mattw4 has quit IRC22:27
*** rcernin has joined #openstack-infra22:28
*** jamesmcarthur has joined #openstack-infra22:31
ianwok, crazy thought maybe ... we do have a global cluster that serves our AFS contents over https -- possibly we should put a load-balancer in front of our mirrors and serve things like governance.openstack.org via that?22:39
corvusianw: yes, or we can just scale out on separate servers so we're not hitting our web presence with our ci system22:42
ianwyeah, that was why i had the generic load-balancer idea in the original spec22:44
ianwbtw https://review.opendev.org/#/c/677903/ is reviewable, which makes haproxy more generic22:44
*** jamesmcarthur has quit IRC22:45
*** jamesmcarthur has joined #openstack-infra22:45
ianwi was thinking the mirrors are probably not under enough load that a bit of static interactive web traffic would notice.  just a thought :)22:45
*** hashar has quit IRC23:00
*** ijw has quit IRC23:04
*** tkajinam has joined #openstack-infra23:07
*** mattw4 has joined #openstack-infra23:15
openstackgerritIan Wienand proposed opendev/system-config master: Add roles for a basic static server  https://review.opendev.org/69758723:20
*** mattw4 has quit IRC23:20
ianwhttps://zuul.opendev.org/t/zuul/stream/28896b23208645fca2ddd176bb1b4c79?logfile=console.log ... looks like we lost at least one streamer23:24
*** sshnaidm is now known as sshnaidm|afk23:24
ianwze04 by the looks of things23:25
*** jamesmcarthur has quit IRC23:28
*** nicolasbock has quit IRC23:29
*** ociuhandu has joined #openstack-infra23:30
fungiianw: i'm not clear on how that's any different from serving it from files.o.o (after maybe making an additional files.o.o to do failover between)23:33
fungior is that pretty much what you said?23:34
fungiwe could i guess serve it from files.o.o with one of the mirror servers as a backup in the lb pool23:34
ianwfungi: yeah, it was more that we *probably* don't need dedicated servers for performance reasons23:35
fungioh, sure23:35
ianwbut "keep things separate for admin purposes" reasons is different :)23:35
*** ociuhandu has quit IRC23:35
*** ijw has joined #openstack-infra23:36
rpiosoclarkb: I'm still experiencing lengthy, often timed out, git clone issues. Is there a way to specify a specific server? If so, which ones may I try?23:36
clarkbrpioso: you can use https://gitea0X.opendev.org:3000 with X values 1-8 inclusive23:37
clarkbrpioso: fwiw I was unable to reproduce your experience against gitea02 yesterday and did hundreds of clones23:37
*** tosky has quit IRC23:38
clarkbwould be curious to see if other hosts are also slow or not as they are all hosted adjacent to each other. If they are all slow that could point to a networking issue between gitea and you23:38
fungirpioso: do you have ipv6 connectivity to the internet, or only ipv4?23:38
*** jamesmcarthur has joined #openstack-infra23:38
fungiworth noting that we only publish ipv4 records in dns for the individual gitea servers, but publish both ipv4 and ipv6 addresses for the load balancer23:40
fungiso if you're seeing slow performance to the lb but not to any individual gitea servers and you have ipv6 connectivity...23:40
*** ijw has quit IRC23:40
rpiosofungi: Checking ...23:41
funginote that git accepts -4 and -6 command-line options to choose the ip address family23:42
rpiosofungi: Looks like it has both ipv4 and ipv6 connectivity to the internet.23:42
openstackgerritClark Boylan proposed opendev/puppet-openafs master: Increase udp buffer sizes when installing afs client  https://review.opendev.org/69758823:42
openstackgerritClark Boylan proposed opendev/system-config master: Increase udp buffer sizes when installing afs client  https://review.opendev.org/69758923:43
clarkbtwo changes for udp buffer size bumps. One gets the puppeted servers, the other the ansibled ones23:43
fungiso if you're seeing slow performance to https://opendev.org/... you can try something like `git clone -4 https://opendev.org/...` to see if ipv4 is also slow23:43
fungithat might at least tell you if your ipv6 connectivity is terrible compared to ipv423:44
clarkbreboots will be required for those changes to take effect or we can manually run sysctl -w23:44
* rpioso kicks off the infinite clone loop23:44
*** rlandy is now known as rlandy|afk23:45
rpiosofungi: Looks like clone -4 is slow. Gonna try -6.23:46
fungiodds are -6 is the default for you anyway if you have ipv6 connectivity23:46
rpiosofungi "fatal: unable to access 'https://opendev.org/openstack/ironic/': Couldn't connect to server"23:47
clarkbyou probably don't have ipv6 at all then23:47
fungiyeah23:48
rpiosoThe interface has both ipv4 and ipv6 addresses.23:48
clarkbrpioso: is your ipv6 address global?23:48
fungieasier way to tell is to check your v6 routing table for a default route23:49
fungi`ip -6 route show | grep default`23:49
rpiosofungi: Notta23:50
fungithen you probably just have linklocal v6 addressing23:50
fungithat tends to be on by default23:50
rpiosofungi: Gotcha23:50
fungiso ignore all the ipv4 vs ipv6 discussion, you're seeing poor performance and you connect over ipv4. that's something to go on anyway23:50
fungiat least your connections to the individual backends will be via the same routes and address family23:51
fungiso it does make ruling out some classes of problems slightly easier23:51
*** jamesmcarthur has quit IRC23:52
rpiosoThe interesting thing is that the slowness is intermittent. Several clones in a row can complete in 10-14 seconds each. And then the next one takes forever, say two minutes or more. Sometimes it'll take so long that it'll time out. There can be a few of those in a row, too. Rinse and repeat.23:52
clarkbfungi: mnaser ttx sent an updated governance proposal email trying to incorporate feedback you've sent. Thank you!23:52
fungirpioso: that's when the government has decided to intercept your connection (just kidding, i hope!)23:53
clarkbrpioso: fungi probably the next step is to check if it is only gitea02 or if all 8 exhibit this problem23:53
clarkbif all 8 then it points more to networking problem imo23:53
clarkbsince they share a common network23:53
fungiyeah, if it's all 8 of them, then we can start looking for signs of packet loss along the outbound and return routes between your clients and vexxhost where these servers live23:55
rpiosoFailure: git clone https://gitea02.opendev.org:3000/openstack/ironic.git /opt/stack/ironic23:55
fungiand if it's none of them, and only shows up when going through the load balancer, that's yet another thing we probably need to dig into23:55
rpiosofatal: unable to access 'https://gitea02.opendev.org:3000/openstack/ironic.git/': Failed to connect to gitea02.opendev.org port 3000: Connection refused23:55
fungirpioso: that sounds like something close to your client is doing packet filtering23:56
fungiis there a corporate firewall or something which might block connectivity to 3000/tcp on remote addresses?23:56
rpiosofungi: Likely23:56
fungii can run the same git clone command from home and have no special access23:57
fungiso it's nothing we're blocking server-side23:57
clarkband ssl even validates now :)23:57
rpiosoI can try that when I get home.23:57
rpiosoThe other thing is that I have another system which can successfully and speedily clone like that battery manufacturer bunny :-)23:58
rpiosoTheir network connections are different.23:59
fungirpioso: yeah, hate to say it but i would suspect something on or topologically near the client which is experiencing this problem23:59
clarkbrpioso: checking from home doesn't necessarily help23:59
clarkbif it is a network path issue23:59
