Friday, 2018-11-23

*** jamesmcarthur has quit IRC00:02
*** yamamoto has quit IRC00:28
*** longkb has joined #openstack-infra00:39
*** eumel8 has quit IRC00:52
*** longkb has quit IRC01:08
openstackgerritIan Wienand proposed openstack-infra/nodepool master: [wip] Add Fedora 29 testing  https://review.openstack.org/61867101:14
*** jklare has quit IRC01:23
*** calbers has quit IRC01:23
*** calbers has joined #openstack-infra01:24
*** jklare has joined #openstack-infra01:24
*** auristor has quit IRC02:18
openstackgerritIan Wienand proposed openstack-infra/glean master: Add NetworkManager distro plugin support  https://review.openstack.org/61896402:18
*** auristor has joined #openstack-infra02:19
*** owalsh has quit IRC02:25
*** mrsoul has joined #openstack-infra02:27
*** d0ugal has quit IRC02:33
*** owalsh has joined #openstack-infra02:39
*** tonyb has quit IRC02:49
*** tonyb has joined #openstack-infra02:52
*** ianychoi has quit IRC02:57
*** ianychoi has joined #openstack-infra02:57
*** psachin has joined #openstack-infra03:12
*** udesale has joined #openstack-infra03:13
*** bhavikdbavishi has joined #openstack-infra03:33
*** bhavikdbavishi1 has joined #openstack-infra03:33
*** bhavikdbavishi has quit IRC03:37
*** bhavikdbavishi1 is now known as bhavikdbavishi03:37
*** ykarel|away has joined #openstack-infra03:40
*** ykarel|away is now known as ykarel03:44
*** jamesmcarthur has joined #openstack-infra03:59
*** jamesmcarthur has quit IRC04:03
ianwyay! seems the f29 networkmanager issue was a systemd boot race ... looks like i'm getting good tests in the gate now.  could not replicate this outside the gate, which probably makes sense now04:13
*** ramishra has joined #openstack-infra04:15
openstackgerritIan Wienand proposed openstack-infra/glean master: Add NetworkManager distro plugin support  https://review.openstack.org/61896404:28
*** janki has joined #openstack-infra04:29
openstackgerritIan Wienand proposed openstack-infra/nodepool master: [wip] Add Fedora 29 testing  https://review.openstack.org/61867104:43
openstackgerritIan Wienand proposed openstack-infra/nodepool master: Add Fedora 29 testing  https://review.openstack.org/61867104:44
*** fuentess has quit IRC04:58
*** ykarel has quit IRC05:07
*** psachin has quit IRC05:13
*** psachin has joined #openstack-infra05:14
*** ykarel has joined #openstack-infra05:24
*** witek has quit IRC05:30
*** witek has joined #openstack-infra05:33
*** tonyb[m] has joined #openstack-infra05:36
*** pcaruana has quit IRC05:44
*** witek has quit IRC05:50
*** witek has joined #openstack-infra05:51
*** janki has quit IRC05:55
*** janki has joined #openstack-infra05:56
*** witek has quit IRC05:57
*** witek has joined #openstack-infra05:58
openstackgerritIan Wienand proposed openstack-infra/nodepool master: Add Fedora 29 testing  https://review.openstack.org/61867106:11
*** chandankumar has joined #openstack-infra06:23
*** ralonsoh has joined #openstack-infra06:23
*** chandankumar is now known as chkumar|rover06:25
*** chkumar|rover is now known as chkumar|ruck06:25
*** quiquell|off is now known as quiquell06:41
openstackgerritIan Wienand proposed openstack-infra/nodepool master: Add Fedora 29 testing  https://review.openstack.org/61867106:49
*** jamesmcarthur has joined #openstack-infra06:58
*** kjackal has joined #openstack-infra07:00
*** jamesmcarthur has quit IRC07:02
*** david-lyle has joined #openstack-infra07:03
*** dklyle has quit IRC07:04
*** pcaruana has joined #openstack-infra07:08
*** dtantsur|afk is now known as dtantsur07:38
*** rcernin has quit IRC07:48
*** slaweq has joined #openstack-infra07:49
*** e0ne has joined #openstack-infra07:54
*** quiquell is now known as quiquell|brb07:56
*** rpittau has joined #openstack-infra08:02
*** jtomasek has joined #openstack-infra08:04
*** bobh has joined #openstack-infra08:08
*** ginopc has joined #openstack-infra08:11
*** quiquell|brb is now known as quiquell08:17
*** jpena|off is now known as jpena08:17
*** ykarel is now known as ykarel|lunch08:25
*** simon-AS5591 has joined #openstack-infra08:28
tobias-urdinfungi: thank you :)08:32
*** ramishra has quit IRC08:33
*** ramishra has joined #openstack-infra08:38
*** bobh has quit IRC08:39
*** gfidente has joined #openstack-infra08:46
*** bobh has joined #openstack-infra08:46
*** bobh has quit IRC08:50
*** d0ugal has joined #openstack-infra08:54
openstackgerritBrendan proposed openstack-infra/zuul master: Add support for Gerrit v2.16's change URL schema  https://review.openstack.org/61953308:58
*** jpich has joined #openstack-infra08:59
*** mdbooth has joined #openstack-infra09:01
*** rossella_s has joined #openstack-infra09:02
*** mdbooth has quit IRC09:03
*** florianf is now known as florianf|pto09:04
*** e0ne_ has joined #openstack-infra09:07
*** e0ne has quit IRC09:09
openstackgerritPierre Riteau proposed openstack/diskimage-builder master: Fix a typo in the help message of disk-image-create  https://review.openstack.org/61967909:09
*** ykarel|lunch is now known as ykarel09:10
*** tosky has joined #openstack-infra09:15
*** bhavikdbavishi has quit IRC09:17
*** alexchadin has joined #openstack-infra09:19
*** emine__ has joined #openstack-infra09:26
*** alexchadin has quit IRC09:29
*** derekh has joined #openstack-infra09:44
*** sshnaidm is now known as sshnaidm|off09:52
*** sambetts_ has joined #openstack-infra09:54
openstackgerritBrendan proposed openstack-infra/zuul master: Add support for Gerrit v2.16's change URL schema  https://review.openstack.org/61953309:55
openstackgerritchenge proposed openstack/diskimage-builder master: fix links  https://review.openstack.org/61969910:03
*** longkb has joined #openstack-infra10:10
*** ramishra has quit IRC10:12
*** ramishra has joined #openstack-infra10:12
*** longkb has quit IRC10:20
*** chkumar|ruck has quit IRC10:22
*** simon-AS559 has joined #openstack-infra10:25
*** simon-AS5591 has quit IRC10:25
*** psachin has quit IRC10:26
*** psachin has joined #openstack-infra10:27
*** dpawlik_ has quit IRC10:32
*** chandan_kumar has joined #openstack-infra10:35
*** chandan_kumar is now known as chkumar|ruck10:35
*** dpawlik has joined #openstack-infra10:37
*** electrofelix has joined #openstack-infra10:38
*** lpetrut has joined #openstack-infra10:46
*** sambetts_ has quit IRC10:49
*** udesale has quit IRC11:07
*** jchhatbar has joined #openstack-infra11:13
*** janki has quit IRC11:15
*** ahosam has joined #openstack-infra11:20
*** psachin has quit IRC11:26
*** jchhatba_ has joined #openstack-infra11:29
*** rpittau_ has joined #openstack-infra11:31
*** aojea has joined #openstack-infra11:32
*** jchhatbar has quit IRC11:32
*** rpittau has quit IRC11:35
*** simon-AS5591 has joined #openstack-infra11:37
*** simon-AS559 has quit IRC11:38
*** rpittau_ is now known as rpittau11:43
*** alexchadin has joined #openstack-infra11:55
*** ahosam has quit IRC12:03
*** ramishra has quit IRC12:04
*** EmilienM is now known as EvilienM12:14
*** carl_cai has joined #openstack-infra12:20
*** jpena is now known as jpena|lunch12:20
*** shardy has joined #openstack-infra12:20
*** pcaruana has quit IRC12:26
*** bhavikdbavishi has joined #openstack-infra12:41
*** fuentess has joined #openstack-infra13:09
*** jpena|lunch is now known as jpena13:15
*** bhavikdbavishi has quit IRC13:34
*** quiquell is now known as quiquell|lunch13:39
*** ykarel is now known as ykarel|away14:01
*** quiquell|lunch is now known as quiquell14:01
*** simon-AS5591 has quit IRC14:06
*** simon-AS559 has joined #openstack-infra14:08
*** simon-AS559 has quit IRC14:08
*** simon-AS5591 has joined #openstack-infra14:08
*** pbourke has quit IRC14:11
*** pbourke has joined #openstack-infra14:12
*** chkumar|ruck has quit IRC14:15
*** simon-AS5591 has quit IRC14:24
*** ykarel|away has quit IRC14:25
*** simon-AS559 has joined #openstack-infra14:25
*** carl_cai has quit IRC14:29
*** cdent has joined #openstack-infra14:33
cdentgerrit cranky?14:33
witekI cannot push my change either14:35
fungihas it bogged down again?14:39
cdentfungi: it seems to have recovered, but for about 5 minutes about 5 minutes ago it was timing out14:39
fungioh yeah, looks like memory usage in the jvm climbed steadily after i garbage-collected it yesterday and got back up around 45gb before it recovered again at 12:30z14:40
funginow it's starting to climb again14:40
cdentthat seems rather architecturally totally wrong14:42
cdentbut I'm probably drawn wrong14:42
fungiyeah, i'm looking over a chart of memory utilization within the jvm over the past month and it did reach similar levels the week before the summit too14:43
*** gtema has joined #openstack-infra14:43
AJaegerinfra-root, gtema mentioned some strange job failures on OVH-BHS1 - anybody around to check the cloud, please?14:45
fungilooks like nl04 is the one handling ovh14:47
fungii see timeouts on node delete requests in gra114:49
*** alexchadin has quit IRC14:49
fungiwhich explains why we have a bunch of nodes in a delete state in that region14:50
openstackgerritsebastian marcet proposed openstack-infra/openstackid-resources master: Migration to PHP 7.x  https://review.openstack.org/61622614:51
fungiahh, here's a launch failure for bhs1...14:51
fungiyeah, the majority of the launch failures are bhs1 while the delete timeouts are gra114:52
fungiopenstack.exceptions.SDKException: Error in creating the server. Compute service reports fault: No valid host was found. There are not enough hosts available.14:53
fungiso likely a capacity problem with the host aggregate they have us on (or at least i think that's what they've said was the issue when we've seen that error in the past?)14:54
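For illustration, a minimal openstacksdk sketch (not nodepool's actual launch path) of how that scheduler fault surfaces when booting a node; the cloud, image and flavor names below are placeholders:

    # Minimal sketch: a "No valid host was found" capacity failure as it
    # appears through openstacksdk when creating a server.
    import openstack
    from openstack import exceptions

    conn = openstack.connect(cloud='ovh-bhs1')   # assumes a matching clouds.yaml entry

    try:
        conn.create_server(
            name='test-node',
            image='ubuntu-bionic',     # placeholder image name
            flavor='nodepool-flavor',  # placeholder flavor name
            wait=True,
            auto_ip=False,
        )
    except exceptions.SDKException as err:
        # Nova's scheduler fault is carried in the exception message, e.g.
        # "No valid host was found. There are not enough hosts available."
        print('launch failed: {}'.format(err))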
*** ykarel|away has joined #openstack-infra14:54
gtemaand the ones that are there seem to be overloaded as well, since lots of jobs are getting sporadic timeouts14:54
fungiamorin: ^ perhaps you know (if you're around)?14:54
*** trown|outtypewww has quit IRC14:54
*** rossella_s has quit IRC14:55
fungipossible we're oversubscribed there and dropping our max-servers in bhs1 for now would improve performance, if we're being our own noisy neighbor14:55
*** quiquell is now known as quiquell|off14:55
fungiwe've failed over 350 boot calls in bhs1 over the past ~2.5 hours due to "not enough hosts available"14:57
amorinhey14:57
fungilooking to see how far back this stretches14:57
amorinno valid host on BHS!/14:57
amorinBHS1?14:57
fungiyep14:58
amorinchecking14:58
*** trown|brb has joined #openstack-infra14:58
fungilooks like we've been seeing it fairly steadily when we're under load stretching back ~5 days14:58
fungithough actually it was only summit week where we weren't, so might just be due to making far fewer boot calls that week than usual14:59
fungihttp://grafana.openstack.org/d/BhcSH5Iiz/nodepool-ovh?orgId=1&panelId=12&fullscreen&from=1531925989384&to=1542985189896&var-region=ovh-bhs115:00
fungifor a longer term view of the launch node error counts15:00
fungiso if the job timeouts in bhs1 have only cropped up relatively recently, i don't think the launch failures are likely related to the job timeouts there15:02
fungiprobably two separate problems15:02
*** dpawlik has quit IRC15:06
amorineverything is good but the aggregate is full15:07
amorinI will add a host15:07
fungiokay, thanks!15:07
fungimakes sense that we'd be getting that failure then if there weren't enough hosts in the aggregate to satisfy our quota15:08
*** simon-AS5591 has joined #openstack-infra15:08
fungiwhat sort of oversubscription ratio do we have there?15:09
fungiwondering if we should try setting our max-servers a bit under quota to see if that alleviates job timeouts, but hard to guess how much to try dropping it without knowing how oversubscribed the hosts are15:10
*** simon-AS559 has quit IRC15:10
amorinthere is not oversubscription15:15
amorinno*15:15
amorinlet me confirm15:16
amorinso, on CPU, we do x215:17
amorinno oversubscription on ram15:18
fungigiven the reports seem to be about processes being generally slow in jobs i wouldn't be surprised if we're running into cpu contention. it might be a good experiment for us to try setting our bhs1 max-servers in nodepool to half and seeing if the job timeouts cease15:19
*** lpetrut has quit IRC15:22
fungilooks like we have bhs1 set to max-servers 159 and are running right up against that limit at the moment15:22
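A rough back-of-envelope for the contention being discussed; the 8-vCPU test-node flavor size is an assumption, not a value confirmed in channel, and only the 2x CPU overcommit amorin mentioned is taken from the conversation:

    # Back-of-envelope only: flavor size and hypervisor core counts are assumed.
    max_servers = 159        # current nodepool limit for ovh-bhs1
    vcpus_per_node = 8       # assumed test-node flavor size
    cpu_overcommit = 2.0     # "on CPU, we do x2"

    vcpus_requested = max_servers * vcpus_per_node              # 1272 vCPUs
    physical_cores_backing = vcpus_requested / cpu_overcommit   # 636 physical cores

    # Halving max-servers (change 619749) roughly halves that demand:
    halved_cores = (max_servers // 2) * vcpus_per_node / cpu_overcommit  # ~316 cores
    print(physical_cores_backing, halved_cores)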
*** cdent has quit IRC15:23
evrardjpis it me or gerrit is in trouble?15:25
*** bhavikdbavishi has joined #openstack-infra15:26
gtemaevrardjp: you are not alone15:26
fungievrardjp: yeah, i think we're seeing a pileup of established connections on it eating memory, trying to see if there's a pattern but also troubleshooting ci issues concurrently15:26
evrardjpfungi: and you should be on holiday I guess :p15:27
fungimaybe some other infra-root folks will be around soon, but not sure since it's a holiday for many of them15:27
evrardjpit's starting to be end of day for europe (it's a friday too, right?), so that timing is either really bad or perfect15:28
fungiheh, i'll take it! ;)15:28
evrardjp(depending on which side of the pond you're looking)15:28
evrardjpfungi: not sure what, but if it's about holidays, the timing is about right :p15:29
evrardjphahaha15:29
odyssey4meyeah, gerrit seems to be hurting - sometimes a page comes back, sometimes it doesn't15:29
evrardjpI probably can't help you folks, but I am really behind you all15:29
evrardjpodyssey4me: yeah15:29
evrardjpyou're not alone15:29
evrardjpI now regret not using gertty more :p15:29
openstackgerritJeremy Stanley proposed openstack-infra/project-config master: Halve ovh-bhs1 max-servers temporarily  https://review.openstack.org/61974915:31
openstackgerritJeremy Stanley proposed openstack-infra/project-config master: Revert "Halve ovh-bhs1 max-servers temporarily"  https://review.openstack.org/61975015:31
fungiestablished connections to the gerrit ssh api seem to account for a majority of the count reported by cacti15:35
*** shardy has quit IRC15:36
fungitop offenders are ip addresses at emc, netapp, red hat, vmware, cloudbase, fortinet, nec, lenovo, ibm... probably all third-party ci systems15:40
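A sketch of that tally, assuming Gerrit's SSH API is on the usual port 29418 and that ss is available on the server; the exact column layout varies between ss versions, so treat the parsing as approximate:

    # Count ESTABLISHED connections to the Gerrit SSH API per remote address.
    import subprocess
    from collections import Counter

    out = subprocess.run(
        ['ss', '-tn', 'state', 'established', '( sport = :29418 )'],
        capture_output=True, text=True, check=True,
    ).stdout

    counts = Counter()
    for line in out.splitlines()[1:]:            # skip the header line
        fields = line.split()
        if len(fields) >= 4:
            peer = fields[-1].rsplit(':', 1)[0]  # strip the remote port
            counts[peer] += 1

    for addr, n in counts.most_common(10):
        print(n, addr)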
*** rossella_s has joined #openstack-infra15:40
fungimmm, no i was looking at the wrong timeframe on the graph so i don't think the established connection counts are necessarily related15:42
*** ykarel|away has quit IRC15:43
*** bhavikdbavishi has quit IRC15:43
*** aojea has quit IRC15:45
*** ykarel|away has joined #openstack-infra15:45
fungimost of the recent errors reported by javamelody are related to queries the stackalytics-bot-2 account has been making, not sure if there's any correlation15:45
aspiersI guess folks already saw that Gerrit 2.16 is out? https://www.gerritcodereview.com/2.16.html15:49
fungiaspiers: awesome! i was not, no15:50
aspiersah cool15:50
* aspiers feels slightly less useless than normal ;-)15:50
aspiersfungi: then you might also be interested in https://gitenterprise.me/2018/11/18/gerrit-user-summit-2018-2/15:52
aspierskeep an eye on https://www.youtube.com/gerritforgetv for videos emerging15:53
fungithanks!!! that's going to be a good read once things hopefully calm down today15:53
*** e0ne_ has quit IRC15:55
fungi#status log temporarily blocked stackalytics-bot-2 access to review.o.o to confirm whether the errors reported for it are related to current performance problems15:56
openstackstatusfungi: finished logging15:56
fungii used `sudo ip6tables -I openstack-INPUT 3 -s 2600:3c01:0:0:f03c:91ff:fe69:4d6f -j DROP` to insert it just before the accept related,established rule but after the accept icmp rule15:57
*** jamesmcarthur has joined #openstack-infra15:57
*** pcaruana has joined #openstack-infra16:00
*** cdent has joined #openstack-infra16:02
*** dpawlik has joined #openstack-infra16:04
*** armstrong has joined #openstack-infra16:05
fungimemory usage plummeted shortly after i applied that block rule16:06
fungiinside the jvm anyway16:06
fungigoing to keep an eye on it and see if that was only a temporary effect or if it restabilizes16:06
*** dpawlik has quit IRC16:08
fungicdent: witek: evrardjp: (and anyone else) please let us know if gerrit response times are still a problem. hard to be sure it was related to the stackalytics queries16:10
*** bhavikdbavishi has joined #openstack-infra16:11
cdentfungi: so far good, will keep an eye on it16:11
fungithe webui is responding fine for me at the moment16:11
fungibut i know it's been in and out over the past few days16:11
openstackgerritMerged openstack-infra/project-config master: Halve ovh-bhs1 max-servers temporarily  https://review.openstack.org/61974916:11
*** roman_g has joined #openstack-infra16:12
fungigtema: ^ hopefully we'll find out useful information if the job timeouts dry up after that goes into effect in the next half hour or so16:12
gtemathanks fungi16:13
*** adriancz has quit IRC16:19
*** bobh has joined #openstack-infra16:19
evrardjpfungi: good for me now16:28
evrardjpfungi: interesting to know that stackalytics is causing issues :p16:30
evrardjpnow go eat some moar turkey! ;)16:30
evrardjphahaha16:30
evrardjpI promise I am not jealous there! (oh god I am starving!)16:31
fungiwell, i don't know for certain it's the fault of stackalytics querying, just testing a theory16:32
evrardjpaspiers: anything real good on the changelog?16:32
aspiersprobably :)16:32
evrardjpIf people don't come complaining it means it was a successful fix :)16:32
evrardjpaspiers: oh /files api endpoint that's kinda neat16:33
evrardjp learning gerrit at the same time :D16:33
fungiwell, still far from a fix since the stackalytics operators are already used to rotating to new accounts because their socket handling doesn't clean up after itself and leaks eventually result in us refusing their connections anyway16:33
fungiif this seems to have helped we need to get up with them and find out what they've changed recently which could be contributing to the problem16:34
evrardjpshouldn't stackalytics be hosted by infra, when we think of it?16:34
evrardjpnot sure what 'stackalytics operators' is :)16:35
*** bobh has quit IRC16:44
*** kjackal has quit IRC16:44
*** rpittau has quit IRC16:45
fungistill mirantis folks as far as i know16:48
fungiand yeah, memory utilization has stabilized after the plummet from when i added that block rule16:49
fungievrardjp: we've tried to work with them to take over hosting and collaborate on development in the past and it never quite happened16:49
fungithere were people willing to contribute (in some cases already wrote specs/patches) to fixing some of the inherent design issues with it, but they didn't seem to want to give up control of the project nor did they seem to have time to review any such overhaul16:50
fungiwe ran a stackalytics.openstack.org instance under infra team control for a while, which is what led to us discovering some of its problems like lack of persistent analysis state (leading it to re-query and reanalyze everything each time it was restarted) and lack of on-the-fly reconfiguration (leading to a need for frequent restarts for simple things like changing someone's employer affiliation)16:52
fungii suppose we could have decided to fork it, but the people involved on this end lost interest after so much radio silence from the stackalytics maintainers and operators16:54
*** jchhatba_ has quit IRC16:54
*** ginopc has quit IRC17:01
gtemaanother job failed with timeout in devstack setup in ovh-bhs117:01
*** jpich has quit IRC17:07
*** dtantsur is now known as dtantsur|afk17:07
*** gtema has quit IRC17:13
*** roman_g has quit IRC17:13
*** slaweq has quit IRC17:16
fungii wish gtema has linked to the log so i could check whether that ran before our utilization dropped in that region17:18
fungier, had linked17:18
openstackgerritsebastian marcet proposed openstack-infra/openstackid-resources master: Fixed method getPresentationsBySummitAndRole Added Selection Plan to Presentation Serializer  https://review.openstack.org/61977517:19
*** jamesmcarthur has quit IRC17:31
openstackgerritMerged openstack-infra/openstackid-resources master: Fixed method getPresentationsBySummitAndRole Added Selection Plan to Presentation Serializer  https://review.openstack.org/61977517:31
fungilooks like we whittled the in use nodecount down to ~80 as of 17:00 utc, so if we see a lot of unanticipated timeouts from jobs starting in that region after that time then it's likely for some other reason than cpu contention17:32
*** jpena is now known as jpena|off17:33
fungialso gerrit jvm memory utilization seems to be holding steady still since the firewall rule went in at ~16:00z17:34
*** emine has joined #openstack-infra17:35
*** Adri2000 has quit IRC17:38
*** emine__ has quit IRC17:38
*** jmccrory has quit IRC17:38
*** cmurphy has quit IRC17:38
*** Adri2000 has joined #openstack-infra17:39
*** jmccrory has joined #openstack-infra17:39
*** cmurphy has joined #openstack-infra17:44
fungithough digging into the gerrit ssh log, i'm unsure the two are connected now since the last connection from stackalytics-bot-2 terminated at 14:37:43z17:44
fungiand memory utilization was still climbing steadily for ~1.5 hours after that17:45
*** roman_g has joined #openstack-infra17:48
fungihuh, looking at gerrit's javamelody http stats, someone cloned neutron directly from gerrit's java-based git service 53 times17:53
*** robcresswell has joined #openstack-infra17:56
fungimore than one somebody, looks like17:56
fungian ip address assigned to pure storage did it 22 times, an ip address assigned to intel did it 16 times, 15 from one for citrix17:57
fungi11 from ibm17:58
fungiaverage size of the request was 30mb17:58
logan-sounds like 3rd party ci17:59
fungiit does indeed, but i doubt this is out of the ordinary either since it seems to be the same order of magnitude for a variety of organizations17:59
fungimean time to complete each of those requests was 45 seconds18:00
*** derekh has quit IRC18:00
fungianyway, also not a smoking gun but good to know we have rather a lot of third-party ci systems cloning neutron directly from gerrit's jgit/jetty18:01
fungiwhich is notoriously under-performant18:02
fungii count 15 distinct addresses from what look like probably 10 different organizations18:02
fungisome more prolific than others mind you18:02
fungiover the past ~12 hours18:03
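A sketch of the log spelunking described above; the access log path and combined log format are assumptions. Git's smart HTTP protocol fetches through the git-upload-pack endpoint, which makes a convenient marker for clone/fetch traffic:

    # Tally neutron clone/fetch requests per client address from the Apache log.
    from collections import Counter

    LOG = '/var/log/apache2/gerrit-access.log'   # hypothetical path

    counts = Counter()
    with open(LOG) as fh:
        for line in fh:
            if 'openstack/neutron' in line and 'git-upload-pack' in line:
                client_ip = line.split()[0]      # first field in combined log format
                counts[client_ip] += 1

    for addr, hits in counts.most_common(15):
        print(hits, addr)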
*** roman_g has quit IRC18:04
logan-yeah, I think the jenkins gerrit plugin does that18:05
fungiand that's just cloning neutron. i see similar patterns hitting nova, cinder, ironic18:05
fungithe usual suspects for 3ptyci18:06
*** ralonsoh has quit IRC18:08
fungiaccording to the apache logs there's massive amounts of data being queried about18:09
fungigerrit reviewer accounts by some address at mcgill university18:09
fungi34298 in the past 12 hours from bodiddley.ece.mcgill.ca18:10
*** fuentess has quit IRC18:11
*** jamesmcarthur has joined #openstack-infra18:12
fungi147794 queries from that address in total over the last 12 hours18:12
fungiseems to have started at 2018-11-17 06:01:11 utc and performed as many as 330k requests a day to gerrit with various queries for broad ranges of change dates18:15
fungiget requests like changes/?q=after:%7B2011-07-01%2000:00:00.000%7D%20AND%20before:%7B2011-07-01%2023:59:59.999%7D&o=ALL_COMMITS&o=ALL_REVISIONS&o=ALL_FILES&S=018:15
fungiincrementing day at a time and then re-querying for details on specific changes (presumably returned by the earlier queries)18:17
fungisome script/tool using python-requests18:17
funginot gertty, as that identifies itself with a specific user agent string18:18
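A sketch of the kind of day-by-day crawl those requests suggest, written with python-requests as the user agent implies (this is not claimed to be the actual tool). The dates and options mirror the query above; Gerrit prefixes its JSON responses with ")]}'", and the S= offset plus the _more_changes flag drive pagination:

    # Crawl one day's changes with full commit/revision/file details.
    import json
    import requests

    BASE = 'https://review.openstack.org/changes/'
    query = 'after:{2011-07-01 00:00:00.000} AND before:{2011-07-01 23:59:59.999}'
    opts = ['ALL_COMMITS', 'ALL_REVISIONS', 'ALL_FILES']

    start = 0
    while True:
        resp = requests.get(
            BASE,
            params=[('q', query)] + [('o', o) for o in opts] + [('S', start)],
            timeout=60,
        )
        resp.raise_for_status()
        changes = json.loads(resp.text[len(")]}'"):])   # strip Gerrit's XSSI prefix
        for change in changes:
            print(change['_number'], change['subject'])
        if not changes or not changes[-1].get('_more_changes'):
            break
        start += len(changes)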
logan-ouch18:18
*** bobh has joined #openstack-infra18:18
fungithe exciting things you find when you go spelunking in access logs18:19
fungithough that's only second place for the most requests today18:20
fungitop offender is some anonymous ip address in aws18:20
*** bobh has quit IRC18:21
fungiit too seems to be crawling change numbers18:22
fungiwith a user agent string of "Default User Agent"18:22
*** ykarel|away has quit IRC18:22
fungistarted 2018-11-22 23:25:21 utc by checking /robots.txt before continuing to crawl18:24
*** jamesmcarthur has quit IRC18:25
*** cdent has quit IRC18:25
*** bobh has joined #openstack-infra18:28
*** e0ne has joined #openstack-infra18:31
*** d0ugal has quit IRC18:41
*** bobh has quit IRC18:42
*** d0ugal has joined #openstack-infra18:57
*** roman_g has joined #openstack-infra19:00
*** electrofelix has quit IRC19:06
*** e0ne has quit IRC19:08
*** bhavikdbavishi has quit IRC19:12
*** roman_g has quit IRC19:19
*** roman_g has joined #openstack-infra19:24
*** armstrong has quit IRC19:25
*** jtomasek has quit IRC19:40
*** rfolco has quit IRC19:40
*** e0ne has joined #openstack-infra19:42
*** roman_g has quit IRC19:43
*** david-lyle has quit IRC19:47
*** witek has quit IRC19:54
*** emine has quit IRC19:58
*** witek has joined #openstack-infra19:59
*** bobh has joined #openstack-infra20:05
*** bobh has quit IRC20:09
*** e0ne has quit IRC20:20
*** jamesmcarthur has joined #openstack-infra20:25
*** robcresswell has quit IRC20:25
*** gfidente has quit IRC20:25
*** jamesmcarthur has quit IRC20:30
*** slaweq has joined #openstack-infra20:40
*** kjackal has joined #openstack-infra20:53
*** dave-mccowan has joined #openstack-infra20:54
*** xek__ has quit IRC21:01
*** xek__ has joined #openstack-infra21:02
*** xek__ has quit IRC21:03
*** xek__ has joined #openstack-infra21:04
*** xek__ has quit IRC21:05
*** xek has joined #openstack-infra21:06
*** nsmeds has joined #openstack-infra21:08
*** xek has quit IRC21:09
*** xek has joined #openstack-infra21:09
*** xek has quit IRC21:11
*** xek_ has joined #openstack-infra21:11
*** dave-mccowan has quit IRC21:11
*** xek_ has quit IRC21:12
*** slaweq has quit IRC21:21
*** pcaruana has quit IRC21:46
*** jamesmcarthur has joined #openstack-infra22:17
*** jamesmcarthur has quit IRC22:18
*** jamesmcarthur has joined #openstack-infra22:18
*** lbragstad has joined #openstack-infra22:53
*** dhill_ has quit IRC22:56
*** kjackal has quit IRC22:57
*** jamesmcarthur has quit IRC23:04
*** eandersson has joined #openstack-infra23:10
*** slaweq has joined #openstack-infra23:18
*** xek has joined #openstack-infra23:25
*** rossella_s has quit IRC23:26
*** slaweq has quit IRC23:27
*** jamesmcarthur has joined #openstack-infra23:30
*** jamesmcarthur has quit IRC23:49
*** jamesmcarthur has joined #openstack-infra23:51

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!