Monday, 2020-11-30

*** tosky has quit IRC00:39
*** cshen has joined #openstack-ansible01:11
*** cshen has quit IRC01:16
*** macz_ has joined #openstack-ansible01:31
*** macz_ has quit IRC01:35
*** cshen has joined #openstack-ansible02:12
*** cshen has quit IRC02:16
openstackgerritMerged openstack/openstack-ansible-os_magnum master: Updated from OpenStack Ansible Tests  https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/75553702:25
*** stduolc has joined #openstack-ansible03:02
stduolchi, I need some help, I'm trying to set up a development environment. I get an error like this: https://gist.github.com/stdhi/a33a47f129f68ca02a2dfd251798895f03:03
*** stduolc has quit IRC03:06
*** stduolc has joined #openstack-ansible03:10
stduolcZZZzzz...03:10
*** macz_ has joined #openstack-ansible03:32
*** macz_ has quit IRC03:36
*** cshen has joined #openstack-ansible04:12
*** cshen has quit IRC04:16
*** pto has joined #openstack-ansible04:34
*** chandan_kumar has joined #openstack-ansible04:37
*** chandan_kumar is now known as chandankumar04:37
*** pto has quit IRC04:39
*** ianychoi__ has joined #openstack-ansible04:57
*** ianychoi_ has quit IRC04:59
*** evrardjp has quit IRC05:33
*** evrardjp has joined #openstack-ansible05:33
ThiagoCMCjrosser, I finally have my VXLAN networks with MTU=1500 inside my Instances! I had to re-deploy everything from scratch. I believe that I have now the most perfect `/etc/openstack_deploy` ever!  LOL06:07
*** macz_ has joined #openstack-ansible06:09
*** cshen has joined #openstack-ansible06:12
*** macz_ has quit IRC06:13
*** cshen has quit IRC06:17
*** miloa has joined #openstack-ansible06:32
*** viks____ has joined #openstack-ansible06:41
*** SiavashSardari has joined #openstack-ansible06:53
*** cshen has joined #openstack-ansible07:00
*** cshen has quit IRC07:05
stduolchi, I have six virtual machines and I want to deploy a development cluster. Where can I find a tutorial?07:11
*** pto has joined #openstack-ansible07:18
*** pto has quit IRC07:18
*** pto has joined #openstack-ansible07:18
*** sep has joined #openstack-ansible07:21
*** pto_ has joined #openstack-ansible07:25
*** pto_ has quit IRC07:26
*** pto_ has joined #openstack-ansible07:27
*** pto has quit IRC07:29
*** macz_ has joined #openstack-ansible07:45
*** macz_ has quit IRC07:49
*** fanfi has joined #openstack-ansible07:54
*** stduolc has quit IRC07:55
*** pto_ has quit IRC07:58
SiavashSardarimorning guys, is there a problem with the gerrit and launchpad integration? I created a bug and uploaded a patch for it, but it seems the bug status didn't get updated.07:58
*** stduolc has joined #openstack-ansible07:58
SiavashSardaristduolc you can start here https://docs.openstack.org/project-deploy-guide/openstack-ansible/ussuri/08:00
noonedeadpunkSiavashSardari: no idea, maybe the integration got broken after the gerrit update...08:00
SiavashSardarimore detailed examples are covered here https://docs.openstack.org/openstack-ansible/ussuri/user/index.html08:00
noonedeadpunkworth asking infra about it though...08:00
SiavashSardarinoonedeadpunk thanks, just a newbie question, how can I ask infra? '=D08:02
noonedeadpunkeither in #opendev or in #openstack-infra08:04
SiavashSardarioh, OK. Thank you08:04
*** andrewbonney has joined #openstack-ansible08:05
fanfiHello, is there anyone who can help me, please? I would like to install os-octavia but I fail on TASK [os_octavia : iptables rules]... https://pastebin.com/EcTwULg208:08
noonedeadpunkfanfi: have you defined br-lbaas network in openstack-user-config?08:09
stduolcthanks08:10
fanfinot sure :) I just used the config from the docs.. - network:08:11
fanfi         container_bridge: "br-lbaas"08:11
fanfi         container_type: "veth"08:11
fanfi         container_interface: "eth14"08:11
fanfi         host_bind_override: "lb-veth-ovrd"08:11
fanfi         ip_from_q: "lbaas"08:11
fanfi         type: "flat"08:11
fanfi         net_name: "lbaas"08:11
fanfi         group_binds:08:11
fanfi           - neutron_linuxbridge_agent08:11
fanfi           - octavia-worker08:11
fanfi           - octavia-housekeeping08:11
fanfi           - octavia-health-manager08:11
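Reassembled from the paste above, the provider network stanza fanfi is using would look like this in openstack_user_config.yml; the content is exactly what was pasted, only the line-by-line chat wrapping is undone:

```yaml
# openstack_user_config.yml (fragment) - br-lbaas provider network,
# reassembled from the chat paste above; addressing is deployment-specific
- network:
    container_bridge: "br-lbaas"
    container_type: "veth"
    container_interface: "eth14"
    host_bind_override: "lb-veth-ovrd"
    ip_from_q: "lbaas"
    type: "flat"
    net_name: "lbaas"
    group_binds:
      - neutron_linuxbridge_agent
      - octavia-worker
      - octavia-housekeeping
      - octavia-health-manager
```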
noonedeadpunkhm...08:15
*** cshen has joined #openstack-ansible08:16
noonedeadpunkwere you re-defining octavia_provider_network_name variable?08:16
*** pto has joined #openstack-ansible08:17
fanfino...but I will check again08:18
fanfiin octavia.yml i have only define octavia-infra_hosts08:19
*** cshen has quit IRC08:22
*** pcaruana has joined #openstack-ansible08:22
*** cshen has joined #openstack-ansible08:23
noonedeadpunkcan you launch this debug? http://paste.openstack.org/show/800529/08:24
fanfihttp://paste.openstack.org/show/800532/08:28
noonedeadpunkok, so this list is empty for some reason....08:29
fanfii am not sure why...and do you have idea how I can fix it ?08:31
fanfiplease :)08:34
noonedeadpunkcan you also debug octavia_provider_network_name ?08:34
fanfilooks like empty08:36
noonedeadpunkas it seems you did override it, otherwise you'd get `"VARIABLE IS NOT DEFINED!"` - I just realized that octavia_provider_network_name, as used in the test playbook provided, should not be defined08:36
*** tosky has joined #openstack-ansible08:37
noonedeadpunkso you should get `"octavia_provider_network_name": "VARIABLE IS NOT DEFINED!"`08:37
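The paste.openstack.org link with the actual debug has since expired, so the exact task can't be recovered; based on the selectattr expression quoted later in this log, it was presumably a one-off playbook along these lines (the host pattern and task layout are assumptions, the variable names follow the os_octavia role):

```yaml
# one-off debug playbook (reconstruction - the original paste has expired);
# host pattern and structure are assumptions, variables follow os_octavia
- hosts: octavia_all[0]
  gather_facts: false
  tasks:
    - name: Show provider networks matching octavia_provider_network_name
      debug:
        msg: >-
          {{ provider_networks
             | map(attribute='network')
             | selectattr('net_name', 'defined')
             | selectattr('net_name', 'equalto', octavia_provider_network_name)
             | list }}
```

An empty list here is what noonedeadpunk observed above; an undefined `octavia_provider_network_name` would instead abort with "VARIABLE IS NOT DEFINED!".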
fanfiwell, it could be... but I don't know how I did it08:40
jrossermorning08:47
noonedeadpunkщ/09:00
noonedeadpunko/09:00
fanfinoonedeadpunk selectattr('net_name', 'equalto', octavia_provider_network_name)|list": "VARIABLE IS NOT DEFINED!" ...I had it defined in user variables :(09:05
noonedeadpunkwell since you removed it I think role should work now09:06
fanfiansible is working on it ;)09:06
*** pto has quit IRC09:11
*** pto has joined #openstack-ansible09:12
*** pto has quit IRC09:23
*** rpittau|afk is now known as rpittau09:27
*** arxcruz|rover is now known as arxcruz|202109:28
*** macz_ has joined #openstack-ansible09:45
*** macz_ has quit IRC09:50
*** pto has joined #openstack-ansible09:51
*** pto has quit IRC09:54
*** pto has joined #openstack-ansible09:55
openstackgerritMerged openstack/openstack-ansible-os_cinder stable/ussuri: Set correct permissions for rootwrap.d  https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/76278909:56
*** csmart has joined #openstack-ansible09:57
fanfinoonedeadpunk Thank you for your support. The playbook execution succeeded10:02
fanfinow ...I will go try to deploy os-magnum :)10:03
openstackgerritMerged openstack/openstack-ansible-os_cinder stable/train: Set correct permissions for rootwrap.d  https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/76279010:26
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_magnum master: Add docs for suggested cluster template and debugging hints  https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/76462710:32
noonedeadpunkjrosser: any idea what could possible go wrong here? https://zuul.opendev.org/t/openstack/build/b58d8decd7cc493d8e1a06b93b9b261f/log/logs/host/octavia-api.service.journal-00-06-32.log.txt#977 ?10:41
noonedeadpunkconsidering it was ok some time ago.....10:41
noonedeadpunkand breaks only debian...10:41
jrosserweird10:45
jrosserRequest path is / and it does not require keystone authentication process_request /openstack/venvs/octavia-22.0.0.0b2.dev21/lib/python3.7/site-packages/octavia/common/keystone.py:7710:45
jrosserConnection reset by peer [core/writer.c line 306] during GET / (172.29.236.100)10:45
jrosserit almost looks like it's trying to get to / on 172.29.236.100 which would be horizon, not keystone10:46
noonedeadpunkhm, I think it should be trying to reach 172.29.236.101 as well?10:46
noonedeadpunkas haproxy is there?10:47
jrosseroh of course10:47
jrosser.100 is external isnt it10:47
noonedeadpunkbut for focal it logs the same https://zuul.opendev.org/t/openstack/build/ff18df595e83450baec6e3096bc71c2f/log/logs/host/octavia-api.service.journal-23-13-38.log.txt#99310:48
noonedeadpunkwhich is passing tests...10:48
jrosserhttps://zuul.opendev.org/t/openstack/build/b58d8decd7cc493d8e1a06b93b9b261f/log/logs/host/octavia-api.service.journal-00-06-32.log.txt#41-4410:48
noonedeadpunk:(10:49
noonedeadpunkyeah....10:49
*** pto has quit IRC10:49
jrossernot sure it's related but memcached_servers = 172.29.236.100:11211 is probably not totally correct10:50
noonedeadpunkwhy not?10:50
noonedeadpunkthey are not going through haproxy10:51
jrosseroh right yes this is metal isnt it10:51
noonedeadpunkyep it is metal10:51
jrosserthe internet suggests these errors are from uwsgi10:59
noonedeadpunkyep, they are from uwsgi 100%11:04
noonedeadpunkbut not sure I really understand them... sounds like it's worth spawning an aio...11:04
noonedeadpunkbut upgrade jobs are fixed :)11:05
openstackgerritMerged openstack/openstack-ansible-os_magnum master: Reduce number of processes on small systems  https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/76063311:06
openstackgerritMerged openstack/openstack-ansible-os_magnum master: Drop magnum distro CI jobs  https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/76052511:06
*** luksky has joined #openstack-ansible11:13
openstackgerritMerged openstack/openstack-ansible master: Bump SHAs for master  https://review.opendev.org/c/openstack/openstack-ansible/+/76458911:23
noonedeadpunkjrosser: I think I will do RC on top of ^ what do you think?11:23
noonedeadpunkand release in 2 weeks?11:26
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible master: Added Openstack Adjutant role deployment  https://review.opendev.org/c/openstack/openstack-ansible/+/75631011:29
*** pto has joined #openstack-ansible11:36
*** pto has joined #openstack-ansible11:37
*** pto has quit IRC11:38
*** pto has joined #openstack-ansible11:39
*** jbadiapa has joined #openstack-ansible11:40
jrossernoonedeadpunk: sounds good - nothing like a deadline to get things merged :)11:42
jrosserbut i guess there is relatively small number of outstanding things now11:43
noonedeadpunk:P11:43
noonedeadpunkyeah, I think there's nothing which could overload us with backporting11:43
jrosserwe are making progress with zun but seems docker download rate limit breaks the CI11:43
*** fanfi has quit IRC11:44
jrosserit uses cirros and nginx docker images for the tempest tests11:44
noonedeadpunkoh, well, except I didn't implement api-threads for all roles :(11:44
noonedeadpunkwill revise this topic during this week....11:45
noonedeadpunk(today)11:45
noonedeadpunkuh :(11:45
*** macz_ has joined #openstack-ansible11:46
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_neutron master: Reduce number of processes on small systems  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/76464411:49
*** macz_ has quit IRC11:51
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_nova master: Reduce number of processes on small systems  https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/76464611:53
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_octavia master: Reduce number of processes on small systems  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/76464711:55
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_panko master: Reduce number of processes on small systems  https://review.opendev.org/c/openstack/openstack-ansible-os_panko/+/76464811:55
*** rfolco has joined #openstack-ansible11:56
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_placement stable/train: Reduce number of processes on small systems  https://review.opendev.org/c/openstack/openstack-ansible-os_placement/+/76464911:56
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_senlin master: Reduce number of processes on small systems  https://review.opendev.org/c/openstack/openstack-ansible-os_senlin/+/76465011:57
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_sahara master: Reduce number of processes on small systems  https://review.opendev.org/c/openstack/openstack-ansible-os_sahara/+/76465111:59
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_swift master: Reduce number of processes on small systems  https://review.opendev.org/c/openstack/openstack-ansible-os_swift/+/76465212:02
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_trove master: Reduce number of processes on small systems  https://review.opendev.org/c/openstack/openstack-ansible-os_trove/+/76465312:03
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_trove master: Reduce number of processes on small systems  https://review.opendev.org/c/openstack/openstack-ansible-os_trove/+/76465312:04
*** mgariepy has quit IRC12:08
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_adjutant master: Reduce number of processes on small systems  https://review.opendev.org/c/openstack/openstack-ansible-os_adjutant/+/76465412:08
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_adjutant master: Trigger uwsgi restart  https://review.opendev.org/c/openstack/openstack-ansible-os_adjutant/+/76465512:12
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_adjutant master: Define condition for the first play host one time  https://review.opendev.org/c/openstack/openstack-ansible-os_adjutant/+/76465612:17
admin0morning12:20
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_placement master: Reduce number of processes on small systems  https://review.opendev.org/c/openstack/openstack-ansible-os_placement/+/76465812:20
*** SiavashSardari has quit IRC12:22
*** tosky has quit IRC12:24
*** SiavashSardari has joined #openstack-ansible12:24
*** tosky has joined #openstack-ansible12:25
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_senlin master: Define condition for the first play host one time  https://review.opendev.org/c/openstack/openstack-ansible-os_senlin/+/76465912:25
*** MickyMan77 has joined #openstack-ansible12:34
MickyMan77hi,12:34
MickyMan77which is the latest stable version of openstack-ansible for use on CentOS 8?12:35
noonedeadpunkwell, it's still ussuri 21.2.012:35
MickyMan77I have one farm that is deployed with version 21.0.1, can I re-deploy it with 21.2.0?12:36
noonedeadpunkyep, you can either re-deploy or upgrade12:37
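For a minor upgrade within a release series, the usual OSA sequence is roughly the following, run from the deployment host; paths assume a standard /opt/openstack-ansible checkout, and the release's own upgrade notes should be checked first:

```shell
# minor upgrade sketch - standard openstack-ansible checkout assumed
cd /opt/openstack-ansible
git fetch --all --tags
git checkout 21.2.0                  # the target tag
scripts/bootstrap-ansible.sh         # refresh roles to the matching SHAs
cd playbooks
openstack-ansible setup-hosts.yml
openstack-ansible setup-infrastructure.yml
openstack-ansible setup-openstack.yml
```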
*** sshnaidm|off is now known as sshnaidm12:38
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_senlin master: Simplify service creation  https://review.opendev.org/c/openstack/openstack-ansible-os_senlin/+/76466112:47
*** miloa has quit IRC12:53
*** macz_ has joined #openstack-ansible13:08
noonedeadpunkjrosser: btw I think this change https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/410681 is valid nowadays13:10
jrosseroh yes thats matching the rest of the services defaults/main.yml13:11
*** macz_ has quit IRC13:12
*** mgariepy has joined #openstack-ansible13:16
*** mgariepy has quit IRC13:18
*** mgariepy has joined #openstack-ansible13:21
*** stduolc has quit IRC13:28
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_magnum master: Add docs for suggested cluster template and debugging hints  https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/76462713:33
openstackgerritGeorgina Shippey proposed openstack/openstack-ansible-galera_server master: Use mysql user instead of root  https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/76444913:34
guilhermespnice docs jrosser !13:35
jrosserguilhermesp: it's your paste :) i just made a patch from it!13:36
guilhermesp+2 !13:36
jrosseri've not tested the ansible path for creating the image and coe template13:36
jrosserbut it looks like we should have everything in os_magnum now to be able to put the data in user_variables.yml completely13:37
kleiniI plan to extend my single infra node with a second one, currently running 21.1.0. Would it be possible to set up the second infra node with Ubuntu 20.04, while the first one remains on 18.04? I'm already considering removing the internal/external configured VIP and letting keepalived manage it, and I know about the guides for upgrading xenial to bionic. Does anything else come to your mind that could be an issue?13:37
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_magnum master: Add docs for suggested cluster template and debugging hints  https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/76462713:42
jrosserkleini: there are two things to consider, first is your internal and external VIP becoming managed by keepalived rather than statically assigned on the single infra host13:46
jrosserand second would be that the python wheels built by the repo server are specific to the OS distribution of the repo server13:46
jrosserso building on bionic and deploying those wheels onto focal is not going to go well, as there is a lot of linkage to C libraries like openssl13:47
kleiniSo, I need to disable repo container on the bionic node, so the one on the focal node is used, right?13:48
jrosserfor the second infra host (and all of it's containers) you should override https://github.com/openstack/ansible-role-python_venv_build/blob/master/defaults/main.yml#L121 to point to the focal repo server13:49
jrosserand there might be some weirdness with repo contents replication between repo servers too, as they're both behind haproxy and unless you do something about it they won't have an identical set of wheels in each one13:49
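The override jrosser points at is the build-target mapping in the python_venv_build role; a sketch of pinning wheel builds to the focal repo host in user_variables.yml might look like the following. The hostname is hypothetical and the exact key layout must be checked against the role's defaults/main.yml for your branch:

```yaml
# user_variables.yml (sketch) - pin wheel builds to the focal repo host;
# "infra2-repo-container" is a placeholder; verify the dict structure in
# ansible-role-python_venv_build defaults/main.yml for your branch
venv_build_targets:
  x86_64:
    ubuntu:
      focal: infra2-repo-container
```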
kleinieven if I shut down the repo container on my first infra host?13:51
jrosserwell this depends what your plan is13:51
kleiniupgrading the first infra host after the second is running with focal13:51
jrosserif this is step 1 on a migration to focal then shutting down the first repo server is fine, as you'd not need the bionic wheels any more13:52
kleinibut I don't want to deploy the new and second host with bionic and have then the need to upgrade it later13:52
jrossersure, the only thing that you won't be able to do is re-deploy anything further on bionic compute hosts for example13:53
kleinithen I would turn off repo container on focal node and switch repo container on bionic node back on13:53
jrosserwell not really, i was just going to say13:53
jrosserthe repo server which builds the wheels is selected algorithmically by ansible13:53
kleiniand yes, deployment then needs to pick the correct target hosts according to active repo containers13:53
jrossernot by the load balancer13:54
kleinioh, okay. I thought, it is the load balancer and my limits on the OSA commands13:54
jrosseryes, two things to fix - 1) build on the right repo server, set some vars for this, 2) install computes and things from the right repo server, shut down the one you don't want13:55
jrossernormally this situation would only exist for a short time in the middle of an OS upgrade13:55
jrosserso after doing the infra nodes you'd move on to do the computes13:55
kleiniand I have dedicated network nodes13:56
kleiniso maybe too complex if something happens; it's maybe better for me as a less experienced OSA user to add the second infra node with bionic and then later do the more complex upgrade from bionic to focal13:57
jrosserthey will be running at least the l3 agent in a venv, which would require wheels for the current OS13:57
*** d34dh0r53 has joined #openstack-ansible13:58
jrossergenerally starting the OS upgrade with whichever infra/repo host is the default for wheel builds is best13:58
kleinithanks for your advice13:58
*** redrobot has joined #openstack-ansible13:59
jrosseri think i'd also do whatever minor upgrade to the latest point of your current release first too14:01
kleinithis would indeed help with base images problems14:02
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_magnum master: Add docs for suggested cluster template and debugging hints  https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/76462714:10
*** cshen has quit IRC14:30
*** spatel has joined #openstack-ansible14:40
spateljamesdenton: morning (it's Monday so you must be busy ;) )14:42
jamesdentonhi spatel14:42
spatelI reached a max of 700kpps on both SRIOV/DPDK VMs (I was able to reach 1mpps but with some drops)14:42
jamesdentonan improvement! been following along with your thread on the ML14:43
spatellook like my guest VM kernel is bottleneck14:43
spatelYes, I changed my UDP profile and now I can hit 700kpps without a single drop14:43
spatelon a standard virtio VM the result is 200kpps without drops14:44
spateljamesdenton: did you use testpmd on guest VM during your Trex test?14:44
jamesdentonnot that i recall, no14:44
jamesdentoni am not confident that my test strategy was all that efficient, TBH14:45
spateljamesdenton: i am curious even you use DPDK on host but guest is still using kernel to process packet so how people saying they hit million packet :)14:45
jamesdentonwith regards to trex14:45
spatelI am seeing CPU Interrupt hitting almost 80k to 90k/interrupt per sec on vmstat output14:46
spatelbased on that result i think kernel is bottleneck in VM14:47
*** d34dh0r53 has quit IRC14:48
*** fanfi has joined #openstack-ansible14:49
MickyMan77Hi all, Why do I get this error msg, "Database schema file with version 140 doesn't exist" at the task "os_cinder : Perform a cinder DB sync" -> http://paste.openstack.org/show/800543/14:52
*** d34dh0r53 has joined #openstack-ansible14:59
jrosserMickyMan77: i think you may have mixed up versions somewhere? db schema 140 for cinder was only created 2 months ago in this patch https://github.com/openstack/cinder/blob/master/cinder/db/sqlalchemy/migrate_repo/versions/140_create_project_default_volume_type.py, but you have a ussuri version venv here /openstack/venvs/cinder-21.1.015:01
MickyMan77jrosser: how do I get around it ? deploy a newer version ?15:06
jrosseri don't know - you seem to be using a newer version of cinder than would be expected for a ussuri deployment15:13
jrosseri would check that /etc/ansible/roles/os_cinder SHA matches the one specified in /opt/openstack-ansible/ansible-role-requirements.yml15:13
*** klamath_atx has quit IRC15:18
*** klamath_atx has joined #openstack-ansible15:18
*** SiavashSardari has quit IRC15:24
openstackgerritMerged openstack/openstack-ansible-os_magnum master: Use openstack_service_*uri_proto vars by default  https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/41068115:33
fanfihey folks, what do I need to configure to deploy os-magnum? magnum-infra_hosts and?15:33
kleinimagnum requires barbican and Heat besides the standard services like image, volume, compute and so on15:34
fanfidoes it have to be part of user_variables.yml?15:35
jrossershould not be necessary to have barbican15:35
jrossermagnum can optionally use many things like octavia, but they're not mandatory15:36
fanfiis there any example of a minimal config for an os-magnum deployment?15:36
jrosserfanfi: for these questions it is always best to refer back to how the all-in-one works, then you can find out yourself15:37
jrosserhere is the additional config we add to the AIO for osa/magnum CI jobs https://github.com/openstack/openstack-ansible/blob/master/etc/openstack_deploy/conf.d/magnum.yml.aio15:38
jrosserand here would be a set of user_variables which are used for osa/magnum CI tests https://github.com/openstack/openstack-ansible/blob/master/tests/roles/bootstrap-host/templates/user_variables_magnum.yml.j215:38
jrosseryou should be able to take those and adapt to your deployment15:39
fanfiI have it like this but with 3 nodes ....but it doesn't work for me :(15:39
jrosserif you could paste any relevant output to paste.openstack.org then we can see what is happening15:40
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_magnum master: Add docs for suggested cluster template and debugging hints  https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/76462715:44
fanfihttp://paste.openstack.org/show/800553/15:44
jrosserthat looks like a typo/bad yaml formatting in openstack_user_config.yml15:45
jrosseryes the indentation is incorrect in /etc/openstack_deploy/conf.d/magnum.yml15:46
fanfioch ...yes15:47
fanfisorry for my stupid question  :(15:47
jrosserno worries :)15:48
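For reference, the conf.d stanza from the AIO example linked above is just a host group mapping, and as the exchange shows, YAML indentation matters; a minimal correctly indented version (host name and IP are placeholders) looks like:

```yaml
# /etc/openstack_deploy/conf.d/magnum.yml - minimal sketch;
# "infra1" and the IP are placeholders for your own infra host
magnum-infra_hosts:
  infra1:
    ip: 172.29.236.101
```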
*** nurdie has joined #openstack-ansible15:50
*** pto has quit IRC15:51
MickyMan77jrosser:  How do I check the SHA of /etc/ansible/roles/os_cinder ?15:55
jrossercd /etc/ansible/roles/os_cinder15:56
jrossergit log15:56
jrosserlook at the SHA of the first commit you see15:56
jrosserwell actually this is the version of the cinder service itself though isnt it?15:57
MickyMan77git log = "commit d01cdadeb99dd9b0ec6a5f5896ca31af2e363c81"15:59
MickyMan77and the /opt/openstack-ansible/ansible-role-requirements.yml --> version: d01cdadeb99dd9b0ec6a5f5896ca31af2e363c8115:59
*** macz_ has joined #openstack-ansible16:01
jrosserok thats ok16:05
jrosserbut you have an error from the cinder service itself rather than the ansible role16:06
jrosseri seem to remember you switching a lot between master/stable branches a while ago, which would result in this sort of thing16:06
admin0fanfi, ThiagoCMC and I have been trying to get at least one successful magnum/k8s setup on osa .. no results yet .. if you want to join in the test, we welcome it16:06
*** d34dh0r53 has quit IRC16:09
MickyMan77jrosser: I thought I had removed all settings etc etc when I moved from master to stable...16:10
jrosseradmin0: i have written a documentation patch for magnum, would be nice if you could see if it makes sense https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/76462716:10
fanfiadmin0 of course, I'm interested in joining you16:13
fanfii am just deploying the os-magnum16:14
*** d34dh0r53 has joined #openstack-ansible16:16
-openstackstatus- NOTICE: The Gerrit service on review.opendev.org is being restarted quickly to troubleshoot high load and poor query caching performance, downtime should be less than 5 minutes16:20
MickyMan77jrosser: I found an issue.. on one farm the control node has this folder, "/openstack/venvs/cinder-21.1.0", but on the farm that has the problem it looks like this...16:20
MickyMan77"openstack/venvs/cinder-21.1.0.dev119"16:20
MickyMan77but I also have "cinder-21.1.0"16:21
admin0jrosser, was this tested on 21.1 or 21.2 branch ?16:22
admin0so that i can replicate/use the same ?16:22
jrosserthe SHA of magnum is given in the patch16:23
jrosserit is just me writing up the notes that guilhermesp left here in pastes16:23
MickyMan77first I ran the master version, "that did not work", so I moved over to 21.0.1 and later to version 21.1.016:25
*** pto has joined #openstack-ansible16:28
*** pto has quit IRC16:28
*** pto has joined #openstack-ansible16:29
MickyMan77jrosser : can it work if I just add the file "140_create_project_default_volume_type.py"?16:29
*** pto has quit IRC16:37
admin0can anyone who has an AIO of 21.2.0 confirm that you can create volumes, but not mount them16:41
admin0so that if you are facing the same, we can open it as a bug16:41
*** mgariepy has quit IRC16:48
ThiagoCMCadmin0, I'm basically ready to give Magnum another try!16:54
ThiagoCMCJust created a template with: `openstack coe cluster template create k8s-ultra-template --image fedora-coreos-32.20201104.3.0 --keypair default --external-network intranet-1 --dns-nameserver 1.1.1.1 --master-flavor m1.small --flavor r1.large --docker-volume-size 8 --network-driver flannel --docker-storage-driver overlay2 --coe kubernetes`16:55
ThiagoCMCBTW, fresh installed (yesterday) OSA Ussuri on top of Ubuntu 20.04.1, fully upgraded.16:56
ThiagoCMCMy Ubuntu image is also available to build templates... Gonna try it too16:57
ThiagoCMCTrying to launch a Kubernetes cluster with: `openstack coe cluster create k8s-ultra-cluster --cluster-template k8s-ultra-template --node-count 1` - accepted, let's see!16:58
ThiagoCMCCREATE_FAILED right away...  :-(17:01
ThiagoCMCMagnum error: ERROR: The Parameter (fixed_network_name) was not defined in template.17:03
ThiagoCMCHmmm17:03
ThiagoCMCNew error17:03
ThiagoCMCI might be facing this: https://answers.launchpad.net/ubuntu/+source/magnum/+question/69367417:10
*** cshen has joined #openstack-ansible17:11
*** cshen has quit IRC17:15
jrosserMickyMan77: "openstack/venvs/cinder-21.1.0.dev119" means at some point you deployed stable/ussuri rather than a specific tag release17:22
jrosserI think that somewhere along the line you have deployed a master or victoria version of cinder17:23
jrosserperhaps trying to re-run the cinder playbook with -e venv_rebuild=yes to fully rebuild the virtualenv would be a good idea17:24
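The rebuild jrosser suggests is just the normal playbook run with the extra var, from the deployment host (standard /opt/openstack-ansible checkout assumed):

```shell
# re-run the cinder playbook, forcing the service venv to be rebuilt from scratch
cd /opt/openstack-ansible/playbooks
openstack-ansible os-cinder-install.yml -e venv_rebuild=yes
```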
*** cshen has joined #openstack-ansible17:30
*** rpittau is now known as rpittau|afk17:42
*** andrewbonney has quit IRC17:59
*** mgariepy has joined #openstack-ansible18:05
admin0ThiagoCMC, can you try with the docs on what jrosser pasted18:10
jrosserlooking at the patch linked from ThiagoCMC launchpad question i'm not sure i can see the linked 'fix' for that in the ussuri branch of the magnum code18:11
ThiagoCMCSo, the Ussuri branch still have that problem, right?18:15
ThiagoCMCadmin0, can you share it again, please?  =P18:16
admin0ThiagoCMC, https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/76462718:21
admin0recommended in the end of that is this: container_infra_prefix=registry.public.mtl1.vexxhost.net/magnum/18:21
jrosserThiagoCMC: i do not know why the linked patch was applied to master then backported to train and not ussuri18:22
jrosserthat would be a question for the magnum developers, or perhaps apply it locally and see what happens18:23
ThiagoCMCadmin0, thanks! jrosser, I'll give it a try then, after trying the new doc18:28
*** gyee has joined #openstack-ansible18:28
openstackgerritMerged openstack/openstack-ansible stable/ussuri: Bump SHAs for stable/ussuri  https://review.opendev.org/c/openstack/openstack-ansible/+/76459018:33
openstackgerritMerged openstack/openstack-ansible master: Set service region for masakari  https://review.opendev.org/c/openstack/openstack-ansible/+/76184218:34
*** dave-mccowan has quit IRC18:36
ThiagoCMCAha! One small bit of progress! To avoid that "fixed_network_name" ERROR, I had to change the `os_distro` metadata from just "coreos" to "fedora-coreos"! Now I'm not seeing that error anymore and Heat was called!18:40
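The os_distro change ThiagoCMC describes can be applied to an existing image with the openstack CLI; the image name here is the one from the template earlier in the log:

```shell
# set the os_distro image property magnum inspects when picking its driver;
# image name taken from ThiagoCMC's cluster template above
openstack image set --property os_distro=fedora-coreos fedora-coreos-32.20201104.3.0
```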
ThiagoCMCI still have to try that big Cluster Template from the docs. My simple template doesn't create the Kubernetes nodes (just the master).18:41
*** dave-mccowan has joined #openstack-ansible18:42
ThiagoCMCI can't use `--master-lb-enabled` if I don't have any LBaaS, right?18:43
ThiagoCMCWhat is a good value for "boot_volume_type=<your volume type>" in `--labels`?18:44
ThiagoCMCI have OSA+Ceph, Nova uses `vms` pool by default. Cinder has "RBD" under "Type"... I guess that I answered my own question... lol18:45
ThiagoCMCWhat about this: `container_infra_prefix=<docker-registry-without-rate-limit>` ?18:46
ThiagoCMCDo I have to change it?18:46
ThiagoCMCBTW, I'm talking about: https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/764627/5/doc/source/index.rst#14118:47
jrosseryou need to specify a docker registry18:58
ThiagoCMCDo you have a working example?18:59
jrossersorry that’s not very helpful18:59
ThiagoCMCI have no idea what to type there lol18:59
jrosserdockerhub just imposed rate limits recently18:59
jrosserif there was a single universal answer then I would have put it in the documentation :(19:00
jrosserif you look back through the irc log then you’ll see that guilhermesp used a registry they host at vexxhost19:00
jrosserbut I didn’t think it was fair to immortalise that into documentation as it’s someone’s actual deployment19:01
ThiagoCMCWe might need an Ansible playbook to build this local "Docker Registry", like we do with Repos... :-P19:02
ThiagoCMCI'm sad that I won't be able to test this...  :-(19:02
ThiagoCMCI mean, now.  lol19:02
jrosserI’m sure none will19:02
guilhermespyep jrosser I agree we should make the registry up to the user.19:02
ThiagoCMCguilhermesp, could I use yours to give it a try?19:03
guilhermespsure go ahead19:03
ThiagoCMCAwesome! What is the syntax for `container_infra_prefix= XXX_vexhost` ?19:04
ThiagoCMCYou can PM me if you want  =P19:04
guilhermespcontainer_infra_prefix=registry.public.mtl1.vexxhost.net/magnum/19:04
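(Editor's note: a hedged sketch of how that label slots into a full template. The image, keypair, flavor and network names below are placeholders, not values confirmed in this channel; the `container_infra_prefix` value and the "fedora-coreos" `os_distro` requirement are the ones discussed above. Needs a live OpenStack cloud to actually run.)

```shell
# Sketch: Magnum cluster template that pulls container images from a
# registry prefix instead of rate-limited Docker Hub. Placeholder names
# (image, keypair, network, flavors) must be adapted to your deployment.
openstack coe cluster template create k8s-template \
  --coe kubernetes \
  --image fedora-coreos-32 \            # image must carry os_distro=fedora-coreos
  --keypair mykey \
  --external-network public \
  --flavor m1.medium \
  --master-flavor m1.medium \
  --docker-volume-size 20 \
  --labels container_infra_prefix=registry.public.mtl1.vexxhost.net/magnum/
```

Note the trailing slash on the prefix: Magnum concatenates it directly with the image name.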
kleiniDocker Hub pull rate limits may quickly prevent download of further container images. You're allowed 100 pulls per 6 hours. With Kubernetes you reach that limit very quickly. The problem can be mitigated by using a local pull-through registry cache. An easy example is documented here: https://docs.docker.com/registry/recipes/mirror/19:04
ThiagoCMCThank you!!!19:04
ThiagoCMCkleini, if I want to give a try with dockerhub, what's the syntax for the container_infra_prefix label?19:05
ThiagoCMCGuys, it would be nice to document the correct syntax and point out the limitations19:06
ThiagoCMCIt would be nice to also share how to build a local Docker Registry for this! I think that I could make a small Ansible playbook to deploy it in a VM outside of OSA...19:07
*** cshen has quit IRC19:17
*** pcaruana has quit IRC19:23
jrosserwell - we are not here to document the magnum project itself, there is tons of stuff already https://docs.openstack.org/magnum/latest/user/index.html#19:31
*** cshen has joined #openstack-ansible19:34
ThiagoCMCjrosser, sure, makes sense...  =P19:38
ThiagoCMCBTW, I had to remove the `--master-lb-enabled`, do you think that it should work without it?19:39
jrosserkleini: does that pull through cache still work now that the upstream rate limit is on the metadata, not the images themselves?19:39
ThiagoCMCI'm not sure if it's related to Octavia, or now...19:39
ThiagoCMC*nor not19:39
ThiagoCMCGot it: https://docs.openstack.org/magnum/latest/user/  lol19:41
jrosseroctavia is to do with ingress_controller iirc19:42
ThiagoCMCYep, sure... I don't have it, so, I'm not creating the Magnum template with --master-lb-enabled.19:42
ThiagoCMCI'm trying this: https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/764627/5/doc/source/index.rst - So far, I'm seeing the same problem that I saw before, it creates only the Kubernetes Master and no Kubernetes Nodes.19:43
ThiagoCMCIt's CREATING_IN_PROGRESS for about 25 minutes already19:43
jrosserthis is a typical failure condition19:44
*** tosky has quit IRC19:45
jrossermagnum api -> magnum conductor -> heat -> some vm -> cloud-init -> heat-container-agent: you've got to just follow the flow of how this works from beginning to end19:45
jrossertrying lots of different options for the cluster template won't be productive as there are tons of them19:46
ThiagoCMCI'm seeing that in your doc, there is only "openstack coe cluster template create" but no example of how to use that template with "openstack coe cluster create ..."19:46
ThiagoCMCSo, I'm guessing one does that via Horizon  =P19:46
ThiagoCMCAny easy way to avoid this typical failure condition?19:47
jrosseropenstack coe cluster create mycluster --cluster-template mytemplate --node-count <N> --master-count <M>19:47
jrosser^ creating the cluster from the template is the easy bit19:47
jrosserno not really, like I say this is sooooo complicated that you've just got to work through logically from start to finish what's going on under the hood in magnum19:48
jrossereach of those things is making a log file19:48
jrosserthe 'loop is closed' by the heat agent / cloud-init stuff in the VM making a callback to the public heat API endpoint that everything has completed ok19:49
jrosserso you can start from there perhaps19:49
ThiagoCMCI see... Ok!19:49
jrosserthe important thing to see here with magnum is that....19:51
jrosser1) it provisions a load of VMs with heat19:51
jrosser2) it drops a whole load of shell scripts with cloud-init / heat-container-agent like these https://github.com/openstack/magnum/tree/master/magnum/drivers/common/templates/kubernetes/fragments19:51
jrosserthose shell scripts are run and provision the k8s cluster, then 'phone home' to the heat API to say they are done19:52
jrosserall of the 'labels' that you set on the cluster template are like a set of variables passed to those shell scripts to customise the deployment inside the VM19:53
ThiagoCMCThis is indeed very complicated!19:56
ThiagoCMCMight be easier to just launch an empty stack with Heat and use some Ansible playbook to deploy Kubernetes and be done with it, no Magnum...19:57
*** dave-mccowan has quit IRC19:59
*** dave-mccowan has joined #openstack-ansible20:02
jrosserurgh new pip resolver has broken the neutron role tests https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/75875120:05
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-tests master: Apply OSA global-requirements-pins during functional tests  https://review.opendev.org/c/openstack/openstack-ansible-tests/+/76482420:26
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_neutron master: Updated from OpenStack Ansible Tests  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/75875120:27
kleinijrosser: that pull through cache at least mitigates our problems in CI. we change docker.io/library/node:10 into registry-cache.domain.tld/library/node:10 and we can locally always pull the image even if fetching new images from Docker Hub is currently rate limited for us20:36
jrosseri was looking at the documentation for it20:36
*** spatel has quit IRC20:37
jrosser"When a pull is attempted with a tag, the Registry checks the remote to ensure if it has the latest version of the requested content." <- i was wondering if that was a metadata call to the upstream registry which would count against the rate limit20:38
kleinimaybe it counts. I don't know. If the image already exists in the cache, the cached version is delivered.20:42
kleinihttp://paste.openstack.org/show/800571/ <- first image does not exist in cache and pull gets a rate limit from docker.io, second image is already cached locally20:44
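(Editor's note: kleini's approach rewrites image references to point at the cache explicitly. An alternative, also from the Docker mirror recipe, is to register the cache as a daemon-wide mirror; the hostname below is kleini's placeholder. This only applies to images pulled from Docker Hub by short name, so fully-qualified references like a Magnum `container_infra_prefix` bypass it.)

```json
// /etc/docker/daemon.json on each client host; restart dockerd afterwards
{
  "registry-mirrors": ["https://registry-cache.domain.tld"]
}
```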
*** tosky has joined #openstack-ansible20:48
admin0so magnum .. :( .. what about lbaas and  trove ? are those working good ?20:49
ThiagoCMCadmin0, I gave up on Magnum today. Never tried LBaaS, nor Trove...  =P20:53
admin0lbaas is octavia20:55
ThiagoCMCMagnum is still too over complicated. It would be easier to just launch an empty stack with Heat and call some regular Ansible playbook (kubespray.io?) to deploy Kubernetes on top of it.20:56
ThiagoCMCYep, I know that LBaaS is Octavia but, I've never tried it before.20:56
ThiagoCMCDoes it depend on virtual machines to run the LBaaS services?20:57
*** rfolco has quit IRC20:59
kleinido not underestimate a K8s deployment. Tools like kubespray and k3s make a usable deployment simple, but these deployments often do not meet the requirements you will have in production. Magnum does a pretty good job there. I never had a Magnum-built Kubernetes cluster having issues in production. With kubespray, k3s and kind we have had a lot of issues in production.21:00
ThiagoCMCkleini, thanks for your feedback! I would actually love to have Magnum working here...  =P21:00
*** vesper11 has quit IRC21:01
ThiagoCMCBut it's too complicated now.21:01
ThiagoCMCAnd hard to troubleshoot21:01
*** spatel has joined #openstack-ansible21:02
admin0kleini, i don't want to give up :)21:03
ThiagoCMCadmin0, you can't give up!   :-P21:04
admin0i will not21:04
ThiagoCMC^_^21:04
spatelfolks, asciinema is awesome tool :)21:06
spatelJust created movie of my Trex load-test - https://asciinema.org/a/Zy8LsXU1B31DaeCqcqi5nqdDU21:07
admin0yep21:07
spatelnow i am going to add it in my all blog :)21:09
spatelso easy to document21:09
*** luksky has quit IRC21:17
*** luksky has joined #openstack-ansible21:18
*** cshen has quit IRC21:34
*** mathlin has joined #openstack-ansible21:37
*** yann-kaelig has joined #openstack-ansible21:51
*** yann-kaelig has quit IRC21:52
*** cshen has joined #openstack-ansible21:52
-openstackstatus- NOTICE: The Gerrit service on review.opendev.org is being restarted quickly to make further query caching and Git garbage collection adjustments, downtime should be less than 5 minutes22:35
*** spatel has quit IRC22:45
*** nurdie has quit IRC22:46
*** nurdie has joined #openstack-ansible23:02
*** nurdie has quit IRC23:07
*** luksky has quit IRC23:14
*** jbadiapa has quit IRC23:15

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!