*** tosky has quit IRC | 00:39 | |
*** cshen has joined #openstack-ansible | 01:11 | |
*** cshen has quit IRC | 01:16 | |
*** macz_ has joined #openstack-ansible | 01:31 | |
*** macz_ has quit IRC | 01:35 | |
*** cshen has joined #openstack-ansible | 02:12 | |
*** cshen has quit IRC | 02:16 | |
openstackgerrit | Merged openstack/openstack-ansible-os_magnum master: Updated from OpenStack Ansible Tests https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/755537 | 02:25 |
*** stduolc has joined #openstack-ansible | 03:02 | |
stduolc | hi, I need some help, I'm trying to set up a development environment. I get an error like this: https://gist.github.com/stdhi/a33a47f129f68ca02a2dfd251798895f | 03:03 |
*** stduolc has quit IRC | 03:06 | |
*** stduolc has joined #openstack-ansible | 03:10 | |
stduolc | ZZZzzz... | 03:10 |
*** macz_ has joined #openstack-ansible | 03:32 | |
*** macz_ has quit IRC | 03:36 | |
*** cshen has joined #openstack-ansible | 04:12 | |
*** cshen has quit IRC | 04:16 | |
*** pto has joined #openstack-ansible | 04:34 | |
*** chandan_kumar has joined #openstack-ansible | 04:37 | |
*** chandan_kumar is now known as chandankumar | 04:37 | |
*** pto has quit IRC | 04:39 | |
*** ianychoi__ has joined #openstack-ansible | 04:57 | |
*** ianychoi_ has quit IRC | 04:59 | |
*** evrardjp has quit IRC | 05:33 | |
*** evrardjp has joined #openstack-ansible | 05:33 | |
ThiagoCMC | jrosser, I finally have my VXLAN networks with MTU=1500 inside my Instances! I had to re-deploy everything from scratch. I believe that I have now the most perfect `/etc/openstack_deploy` ever! LOL | 06:07 |
*** macz_ has joined #openstack-ansible | 06:09 | |
*** cshen has joined #openstack-ansible | 06:12 | |
*** macz_ has quit IRC | 06:13 | |
*** cshen has quit IRC | 06:17 | |
*** miloa has joined #openstack-ansible | 06:32 | |
*** viks____ has joined #openstack-ansible | 06:41 | |
*** SiavashSardari has joined #openstack-ansible | 06:53 | |
*** cshen has joined #openstack-ansible | 07:00 | |
*** cshen has quit IRC | 07:05 | |
stduolc | hi, I have six virtual machines and I want to deploy a development cluster, where can I find a tutorial? | 07:11 |
*** pto has joined #openstack-ansible | 07:18 | |
*** pto has quit IRC | 07:18 | |
*** pto has joined #openstack-ansible | 07:18 | |
*** sep has joined #openstack-ansible | 07:21 | |
*** pto_ has joined #openstack-ansible | 07:25 | |
*** pto_ has quit IRC | 07:26 | |
*** pto_ has joined #openstack-ansible | 07:27 | |
*** pto has quit IRC | 07:29 | |
*** macz_ has joined #openstack-ansible | 07:45 | |
*** macz_ has quit IRC | 07:49 | |
*** fanfi has joined #openstack-ansible | 07:54 | |
*** stduolc has quit IRC | 07:55 | |
*** pto_ has quit IRC | 07:58 | |
SiavashSardari | morning guys, is there a problem with gerrit and launchpad integration? I created a bug and uploaded a patch for it, but it seems the bug status didn't get updated. | 07:58 |
*** stduolc has joined #openstack-ansible | 07:58 | |
SiavashSardari | stduolc you can start here https://docs.openstack.org/project-deploy-guide/openstack-ansible/ussuri/ | 08:00 |
noonedeadpunk | SiavashSardari: no idea, maybe the integration got broken after the gerrit update... | 08:00 |
SiavashSardari | more detailed examples are covered here https://docs.openstack.org/openstack-ansible/ussuri/user/index.html | 08:00 |
noonedeadpunk | worth asking infra about it though... | 08:00 |
SiavashSardari | noonedeadpunk thanks, just a newbie question, how can I ask infra? '=D | 08:02 |
noonedeadpunk | either in #opendev or in #openstack-infra | 08:04 |
SiavashSardari | oh, OK. Thank you | 08:04 |
*** andrewbonney has joined #openstack-ansible | 08:05 | |
fanfi | Hello, is there anyone who can help me please? I would like to install os-octavia but I'm failing on TASK [os_octavia : iptables rules]... https://pastebin.com/EcTwULg2 | 08:08 |
noonedeadpunk | fanfi: have you defined br-lbaas network in openstack-user-config? | 08:09 |
stduolc | thanks | 08:10 |
fanfi | not sure :) I just used the config from the docs.. - network: | 08:11 |
fanfi | container_bridge: "br-lbaas" | 08:11 |
fanfi | container_type: "veth" | 08:11 |
fanfi | container_interface: "eth14" | 08:11 |
fanfi | host_bind_override: "lb-veth-ovrd" | 08:11 |
fanfi | ip_from_q: "lbaas" | 08:11 |
fanfi | type: "flat" | 08:11 |
fanfi | net_name: "lbaas" | 08:11 |
fanfi | group_binds: | 08:11 |
fanfi | - neutron_linuxbridge_agent | 08:11 |
fanfi | - octavia-worker | 08:11 |
fanfi | - octavia-housekeeping | 08:11 |
fanfi | - octavia-health-manager | 08:11 |
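IRC strips leading whitespace, so the pasted entry above lost its indentation. With indentation restored, that same block would normally sit under provider_networks in openstack_user_config.yml roughly like this - values exactly as pasted, layout assumed from the OSA Octavia setup docs:

    - network:
        container_bridge: "br-lbaas"
        container_type: "veth"
        container_interface: "eth14"
        host_bind_override: "lb-veth-ovrd"
        ip_from_q: "lbaas"
        type: "flat"
        net_name: "lbaas"
        group_binds:
          - neutron_linuxbridge_agent
          - octavia-worker
          - octavia-housekeeping
          - octavia-health-manager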
noonedeadpunk | hm... | 08:15 |
*** cshen has joined #openstack-ansible | 08:16 | |
noonedeadpunk | were you re-defining octavia_provider_network_name variable? | 08:16 |
*** pto has joined #openstack-ansible | 08:17 | |
fanfi | no...but I will check again | 08:18 |
fanfi | in octavia.yml I have only defined octavia-infra_hosts | 08:19 |
*** cshen has quit IRC | 08:22 | |
*** pcaruana has joined #openstack-ansible | 08:22 | |
*** cshen has joined #openstack-ansible | 08:23 | |
noonedeadpunk | can you launch this debug? http://paste.openstack.org/show/800529/ | 08:24 |
fanfi | http://paste.openstack.org/show/800532/ | 08:28 |
noonedeadpunk | ok, so this list is empty for some reason.... | 08:29 |
fanfi | I'm not sure why... do you have any idea how I can fix it? | 08:31 |
fanfi | please :) | 08:34 |
noonedeadpunk | can you also debug octavia_provider_network_name ? | 08:34 |
fanfi | looks like empty | 08:36 |
noonedeadpunk | as it seems you did override it, otherwise you'd get `"VARIABLE IS NOT DEFINED!"` - I just realized that octavia_provider_network_name, as used in the test playbook provided, should not be defined | 08:36 |
*** tosky has joined #openstack-ansible | 08:37 | |
noonedeadpunk | so you should get `"octavia_provider_network_name": "VARIABLE IS NOT DEFINED!"` | 08:37 |
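A minimal way to reproduce that check from the deploy host is an ad-hoc debug like the one below - a sketch, assuming the standard OSA inventory groups:

    cd /opt/openstack-ansible/playbooks
    # should print "VARIABLE IS NOT DEFINED!" when there is no override in user_variables.yml
    ansible octavia_all -m debug -a "var=octavia_provider_network_name"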
fanfi | well, it could be... but I don't know how that happened | 08:40 |
jrosser | morning | 08:47 |
noonedeadpunk | щ/ | 09:00 |
noonedeadpunk | o/ | 09:00 |
fanfi | noonedeadpunk selectattr('net_name', 'equalto', octavia_provider_network_name)|list": "VARIABLE IS NOT DEFINED!" ...I had it defined in user variables :( | 09:05 |
noonedeadpunk | well since you removed it I think role should work now | 09:06 |
fanfi | ansible is working on it ;) | 09:06 |
*** pto has quit IRC | 09:11 | |
*** pto has joined #openstack-ansible | 09:12 | |
*** pto has quit IRC | 09:23 | |
*** rpittau|afk is now known as rpittau | 09:27 | |
*** arxcruz|rover is now known as arxcruz|2021 | 09:28 | |
*** macz_ has joined #openstack-ansible | 09:45 | |
*** macz_ has quit IRC | 09:50 | |
*** pto has joined #openstack-ansible | 09:51 | |
*** pto has quit IRC | 09:54 | |
*** pto has joined #openstack-ansible | 09:55 | |
openstackgerrit | Merged openstack/openstack-ansible-os_cinder stable/ussuri: Set correct permissions for rootwrap.d https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/762789 | 09:56 |
*** csmart has joined #openstack-ansible | 09:57 | |
fanfi | noonedeadpunk Thank you for your support. Playbook execution succeeded | 10:02 |
fanfi | now... I will go try to deploy os-magnum :) | 10:03 |
openstackgerrit | Merged openstack/openstack-ansible-os_cinder stable/train: Set correct permissions for rootwrap.d https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/762790 | 10:26 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_magnum master: Add docs for suggested cluster template and debugging hints https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/764627 | 10:32 |
noonedeadpunk | jrosser: any idea what could possible go wrong here? https://zuul.opendev.org/t/openstack/build/b58d8decd7cc493d8e1a06b93b9b261f/log/logs/host/octavia-api.service.journal-00-06-32.log.txt#977 ? | 10:41 |
noonedeadpunk | considering it was ok some time ago..... | 10:41 |
noonedeadpunk | and breaks only debian... | 10:41 |
jrosser | weird | 10:45 |
jrosser | Request path is / and it does not require keystone authentication process_request /openstack/venvs/octavia-22.0.0.0b2.dev21/lib/python3.7/site-packages/octavia/common/keystone.py:77 | 10:45 |
jrosser | Connection reset by peer [core/writer.c line 306] during GET / (172.29.236.100) | 10:45 |
jrosser | it almost looks like it's trying to get to / on 172.29.236.100 which would be horizon, not keystone | 10:46 |
noonedeadpunk | hm, I think it should be trying to reach 172.29.236.101 as well? | 10:46 |
noonedeadpunk | as haproxy is there? | 10:47 |
jrosser | oh of course | 10:47 |
jrosser | .100 is external isnt it | 10:47 |
noonedeadpunk | but for focal it logs the same https://zuul.opendev.org/t/openstack/build/ff18df595e83450baec6e3096bc71c2f/log/logs/host/octavia-api.service.journal-23-13-38.log.txt#993 | 10:48 |
noonedeadpunk | which is passing tests... | 10:48 |
jrosser | https://zuul.opendev.org/t/openstack/build/b58d8decd7cc493d8e1a06b93b9b261f/log/logs/host/octavia-api.service.journal-00-06-32.log.txt#41-44 | 10:48 |
noonedeadpunk | :( | 10:49 |
noonedeadpunk | yeah.... | 10:49 |
*** pto has quit IRC | 10:49 | |
jrosser | not sure it's related but memcached_servers = 172.29.236.100:11211 is probably not totally correct | 10:50 |
noonedeadpunk | why not? | 10:50 |
noonedeadpunk | they are not going through haproxy | 10:51 |
jrosser | oh right yes this is metal isnt it | 10:51 |
noonedeadpunk | yep it is metal | 10:51 |
jrosser | the internet suggests these errors are from uwsgi | 10:59 |
noonedeadpunk | yep, they are from uwsgi 100% | 11:04 |
noonedeadpunk | but not sure I really understand them... sounds like it's worth spawning an AIO... | 11:04 |
noonedeadpunk | but upgrade jobs are fixed :) | 11:05 |
openstackgerrit | Merged openstack/openstack-ansible-os_magnum master: Reduce number of processes on small systems https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/760633 | 11:06 |
openstackgerrit | Merged openstack/openstack-ansible-os_magnum master: Drop magnum distro CI jobs https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/760525 | 11:06 |
*** luksky has joined #openstack-ansible | 11:13 | |
openstackgerrit | Merged openstack/openstack-ansible master: Bump SHAs for master https://review.opendev.org/c/openstack/openstack-ansible/+/764589 | 11:23 |
noonedeadpunk | jrosser: I think I will do RC on top of ^ what do you think? | 11:23 |
noonedeadpunk | and release in 2 weeks? | 11:26 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Added Openstack Adjutant role deployment https://review.opendev.org/c/openstack/openstack-ansible/+/756310 | 11:29 |
*** pto has joined #openstack-ansible | 11:36 | |
*** pto has joined #openstack-ansible | 11:37 | |
*** pto has quit IRC | 11:38 | |
*** pto has joined #openstack-ansible | 11:39 | |
*** jbadiapa has joined #openstack-ansible | 11:40 | |
jrosser | noonedeadpunk: sounds good - nothing like a deadline to get things merged :) | 11:42 |
jrosser | but i guess there is relatively small number of outstanding things now | 11:43 |
noonedeadpunk | :P | 11:43 |
noonedeadpunk | yeah, I think there's nothing which can overload us with backporting | 11:43 |
jrosser | we are making progress with zun but it seems the docker download rate limit breaks the CI | 11:43 |
*** fanfi has quit IRC | 11:44 | |
jrosser | it uses cirros and nginx docker images for the tempest tests | 11:44 |
noonedeadpunk | oh, well, except I didn't implement api-threads for all roles :( | 11:44 |
noonedeadpunk | will revise this topic during this week.... | 11:45 |
noonedeadpunk | (today) | 11:45 |
noonedeadpunk | uh :( | 11:45 |
*** macz_ has joined #openstack-ansible | 11:46 | |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_neutron master: Reduce number of processes on small systems https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/764644 | 11:49 |
*** macz_ has quit IRC | 11:51 | |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_nova master: Reduce number of processes on small systems https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/764646 | 11:53 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_octavia master: Reduce number of processes on small systems https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/764647 | 11:55 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_panko master: Reduce number of processes on small systems https://review.opendev.org/c/openstack/openstack-ansible-os_panko/+/764648 | 11:55 |
*** rfolco has joined #openstack-ansible | 11:56 | |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_placement stable/train: Reduce number of processes on small systems https://review.opendev.org/c/openstack/openstack-ansible-os_placement/+/764649 | 11:56 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_senlin master: Reduce number of processes on small systems https://review.opendev.org/c/openstack/openstack-ansible-os_senlin/+/764650 | 11:57 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_sahara master: Reduce number of processes on small systems https://review.opendev.org/c/openstack/openstack-ansible-os_sahara/+/764651 | 11:59 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_swift master: Reduce number of processes on small systems https://review.opendev.org/c/openstack/openstack-ansible-os_swift/+/764652 | 12:02 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_trove master: Reduce number of processes on small systems https://review.opendev.org/c/openstack/openstack-ansible-os_trove/+/764653 | 12:03 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_trove master: Reduce number of processes on small systems https://review.opendev.org/c/openstack/openstack-ansible-os_trove/+/764653 | 12:04 |
*** mgariepy has quit IRC | 12:08 | |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_adjutant master: Reduce number of processes on small systems https://review.opendev.org/c/openstack/openstack-ansible-os_adjutant/+/764654 | 12:08 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_adjutant master: Trigger uwsgi restart https://review.opendev.org/c/openstack/openstack-ansible-os_adjutant/+/764655 | 12:12 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_adjutant master: Define condition for the first play host one time https://review.opendev.org/c/openstack/openstack-ansible-os_adjutant/+/764656 | 12:17 |
admin0 | morning | 12:20 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_placement master: Reduce number of processes on small systems https://review.opendev.org/c/openstack/openstack-ansible-os_placement/+/764658 | 12:20 |
*** SiavashSardari has quit IRC | 12:22 | |
*** tosky has quit IRC | 12:24 | |
*** SiavashSardari has joined #openstack-ansible | 12:24 | |
*** tosky has joined #openstack-ansible | 12:25 | |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_senlin master: Define condition for the first play host one time https://review.opendev.org/c/openstack/openstack-ansible-os_senlin/+/764659 | 12:25 |
*** MickyMan77 has joined #openstack-ansible | 12:34 | |
MickyMan77 | hi, | 12:34 |
MickyMan77 | which is the latest stable version of openstack-ansible for use on CentOS 8? | 12:35 |
noonedeadpunk | well, it's still ussuri 21.2.0 | 12:35 |
MickyMan77 | I have one farm that is deployed with version 21.0.1, can I re-deploy it with 21.2.0? | 12:36 |
noonedeadpunk | yep, you can either re-deploy or upgrade | 12:37 |
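A minor upgrade within the same series is roughly the sequence below - a sketch only, check the OSA Ussuri minor upgrade guide for the exact steps for your deployment:

    cd /opt/openstack-ansible
    git fetch --all --tags
    git checkout 21.2.0
    ./scripts/bootstrap-ansible.sh
    cd playbooks
    openstack-ansible setup-hosts.yml
    openstack-ansible setup-infrastructure.yml
    openstack-ansible setup-openstack.yml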
*** sshnaidm|off is now known as sshnaidm | 12:38 | |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_senlin master: Simplify service creation https://review.opendev.org/c/openstack/openstack-ansible-os_senlin/+/764661 | 12:47 |
*** miloa has quit IRC | 12:53 | |
*** macz_ has joined #openstack-ansible | 13:08 | |
noonedeadpunk | jrosser: btw I think this change https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/410681 is valid nowadays | 13:10 |
jrosser | oh yes thats matching the rest of the services defaults/main.yml | 13:11 |
*** macz_ has quit IRC | 13:12 | |
*** mgariepy has joined #openstack-ansible | 13:16 | |
*** mgariepy has quit IRC | 13:18 | |
*** mgariepy has joined #openstack-ansible | 13:21 | |
*** stduolc has quit IRC | 13:28 | |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_magnum master: Add docs for suggested cluster template and debugging hints https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/764627 | 13:33 |
openstackgerrit | Georgina Shippey proposed openstack/openstack-ansible-galera_server master: Use mysql user instead of root https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/764449 | 13:34 |
guilhermesp | nice docs jrosser ! | 13:35 |
jrosser | guilhermesp: it's your paste :) I just made a patch from it! | 13:36 |
guilhermesp | +2 ! | 13:36 |
jrosser | i've not tested the ansible path for creating the image and coe template | 13:36 |
jrosser | but it looks like we should have everything in os_magnum now to be able to put the data in user_variables.yml completely | 13:37 |
kleini | I plan to extend my single infra node with a second one, currently running 21.1.0. Would it be possible to set up the second infra node with Ubuntu 20.04, while the first one remains on 18.04? I'm already considering removing the statically configured internal/external VIPs and letting keepalived manage them, and I know about the guides for upgrading xenial to bionic. Does anything else come to mind that could be an issue? | 13:37 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_magnum master: Add docs for suggested cluster template and debugging hints https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/764627 | 13:42 |
jrosser | kleini: there are two things to consider, first is your internal and external VIP becoming managed by keepalived rather than statically assigned on the single infra host | 13:46 |
jrosser | and second would be that the python wheels built by the repo server are specific to the OS distribution of the repo server | 13:46 |
jrosser | so building on bionic and deploying those wheels onto focal is not going to go well, as there is a lot of linkage to C libraries like openssl | 13:47 |
kleini | So, I need to disable repo container on the bionic node, so the one on the focal node is used, right? | 13:48 |
jrosser | for the second infra host (and all of its containers) you should override https://github.com/openstack/ansible-role-python_venv_build/blob/master/defaults/main.yml#L121 to point to the focal repo server | 13:49 |
jrosser | and there might be some weirdness with repo contents replication between repo servers too, as they're both behind haproxy and unless you do something about it they won't have an identical set of wheels in each one | 13:49 |
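A sketch of that kind of override in user_variables.yml - the container name is hypothetical and the exact variable structure should be verified against the linked defaults/main.yml for your release:

    venv_build_targets:
      x86_64:
        ubuntu:
          '20.04': infra2_repo_container-xxxxxxxx   # hypothetical focal repo container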
kleini | even if I shut down the repo container on my first infra host? | 13:51 |
jrosser | well this depends what your plan is | 13:51 |
kleini | upgrading the first infra host after the second is running with focal | 13:51 |
jrosser | if this is step 1 on a migration to focal then shutting down the first repo server is fine, as you'd not need the bionic wheels any more | 13:52 |
kleini | but I don't want to deploy the new second host with bionic and then need to upgrade it later | 13:52 |
jrosser | sure, the only thing that you won't be able to do is re-deploy anything further on bionic compute hosts for example | 13:53 |
kleini | then I would turn off repo container on focal node and switch repo container on bionic node back on | 13:53 |
jrosser | well not really, i was just going to say | 13:53 |
jrosser | the repo server which builds the wheels is selected algorithmically by ansible | 13:53 |
kleini | and yes, deployment then needs to pick the correct target hosts according to active repo containers | 13:53 |
jrosser | not by the load balancer | 13:54 |
kleini | oh, okay. I thought it was the load balancer and my limits on the OSA commands | 13:54 |
jrosser | yes, two things to fix - 1) build on the right repo server, set some vars for this, 2) install computes and things from the right repo server, shut down the one you don't want | 13:55 |
jrosser | normally this situation would only exist for a short time in the middle of an OS upgrade | 13:55 |
jrosser | so after doing the infra nodes you'd move on to do the computes | 13:55 |
kleini | and I have dedicated network nodes | 13:56 |
kleini | so maybe too complex if something happens, and it is maybe better for me as a more inexperienced OSA user to add the second infra node with bionic and then later do the more complex upgrade from bionic to focal | 13:57 |
jrosser | they will be running at least the l3 agent in a venv, which would require wheels for the current OS | 13:57 |
*** d34dh0r53 has joined #openstack-ansible | 13:58 | |
jrosser | generally starting the OS upgrade with whichever infra/repo host is the default for wheel builds is best | 13:58 |
kleini | thanks for your advice | 13:58 |
*** redrobot has joined #openstack-ansible | 13:59 | |
jrosser | i think i'd also do whatever minor upgrade to the latest point of your current release first too | 14:01 |
kleini | this would indeed help with base images problems | 14:02 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_magnum master: Add docs for suggested cluster template and debugging hints https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/764627 | 14:10 |
*** cshen has quit IRC | 14:30 | |
*** spatel has joined #openstack-ansible | 14:40 | |
spatel | jamesdenton: morning (it's Monday so must be busy ;) ) | 14:42 |
jamesdenton | hi spatel | 14:42 |
spatel | I reached max 700kpps on both SRIOV/DPDK VMs (I was able to reach 1mpps but with some drops) | 14:42 |
jamesdenton | an improvement! been following along with your thread on the ML | 14:43 |
spatel | looks like my guest VM kernel is the bottleneck | 14:43 |
spatel | Yes I changed my UDP profile and now I can get 700kpps without a single drop | 14:43 |
spatel | on a standard virtio VM the result is 200kpps without drops | 14:44 |
spatel | jamesdenton: did you use testpmd on guest VM during your Trex test? | 14:44 |
jamesdenton | not that i recall, no | 14:44 |
jamesdenton | i am not confident that my test strategy was all that efficient, TBH | 14:45 |
spatel | jamesdenton: i am curious - even if you use DPDK on the host, the guest still uses the kernel to process packets, so how are people saying they hit a million packets :) | 14:45 |
jamesdenton | with regards to trex | 14:45 |
spatel | I am seeing CPU interrupts hitting almost 80k to 90k per second in vmstat output | 14:46 |
spatel | based on that result I think the kernel is the bottleneck in the VM | 14:47 |
*** d34dh0r53 has quit IRC | 14:48 | |
*** fanfi has joined #openstack-ansible | 14:49 | |
MickyMan77 | Hi all, Why do I get this error msg, "Database schema file with version 140 doesn't exist" at the task "os_cinder : Perform a cinder DB sync" -> http://paste.openstack.org/show/800543/ | 14:52 |
*** d34dh0r53 has joined #openstack-ansible | 14:59 | |
jrosser | MickyMan77: i think you may have mixed up versions somewhere? db schema 140 for cinder was only created 2 months ago in this patch https://github.com/openstack/cinder/blob/master/cinder/db/sqlalchemy/migrate_repo/versions/140_create_project_default_volume_type.py, but you have a ussuri version venv here /openstack/venvs/cinder-21.1.0 | 15:01 |
MickyMan77 | jrosser: how do I get around it ? deploy a newer version ? | 15:06 |
jrosser | i don't know - you seem to be using a newer version of cinder than would be expected for a ussuri deployment | 15:13 |
jrosser | i would check that /etc/ansible/roles/os_cinder SHA matches the one specified in /opt/openstack-ansible/ansible-role-requirements.yml | 15:13 |
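To confirm what actually ended up in that venv, something like the following (paths as in the paste above, run on the cinder API host/container) would show whether the installed cinder is newer than Ussuri - a sketch:

    /openstack/venvs/cinder-21.1.0/bin/pip show cinder
    /openstack/venvs/cinder-21.1.0/bin/cinder-manage db version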
*** klamath_atx has quit IRC | 15:18 | |
*** klamath_atx has joined #openstack-ansible | 15:18 | |
*** SiavashSardari has quit IRC | 15:24 | |
openstackgerrit | Merged openstack/openstack-ansible-os_magnum master: Use openstack_service_*uri_proto vars by default https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/410681 | 15:33 |
fanfi | hey folks, what do I need to configure to deploy os-magnum? magnum-infra_hosts and ? | 15:33 |
kleini | magnum requires barbican and Heat besides the standard services like image, volume, compute and so on | 15:34 |
fanfi | it must be part of user_variables.yml ? | 15:35 |
jrosser | should not be necessary to have barbican | 15:35 |
jrosser | magnum can optionally use many things like octavia, but they're not mandatory | 15:36 |
fanfi | is there any example of a minimal config for an os-magnum deployment? | 15:36 |
jrosser | fanfi: for these questions it is always best to refer back to how the all-in-one works, then you can find out yourself | 15:37 |
jrosser | here is the additional config we add to the AIO for osa/magnum CI jobs https://github.com/openstack/openstack-ansible/blob/master/etc/openstack_deploy/conf.d/magnum.yml.aio | 15:38 |
jrosser | and here would be a set of user_variables which are used for osa/magnum CI tests https://github.com/openstack/openstack-ansible/blob/master/tests/roles/bootstrap-host/templates/user_variables_magnum.yml.j2 | 15:38 |
jrosser | you should be able to take those and adapt to your deployment | 15:39 |
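Adapted from the AIO example above, the conf.d side of it is small - a sketch with a hypothetical infra host name and IP:

    # /etc/openstack_deploy/conf.d/magnum.yml
    magnum-infra_hosts:
      infra1:
        ip: 172.29.236.11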
fanfi | I have it like this but with 3 nodes... but it doesn't work for me :( | 15:39 |
jrosser | if you could paste any relevant output to paste.openstack.org then we can see what is happening | 15:40 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_magnum master: Add docs for suggested cluster template and debugging hints https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/764627 | 15:44 |
fanfi | http://paste.openstack.org/show/800553/ | 15:44 |
jrosser | that looks like a typo/bad yaml formatting in openstack_user_config.yml | 15:45 |
jrosser | yes the indentation is incorrect in /etc/openstack_deploy/conf.d/magnum.yml | 15:46 |
fanfi | och ...yes | 15:47 |
fanfi | sorry for my stupid question :( | 15:47 |
jrosser | no worries :) | 15:48 |
*** nurdie has joined #openstack-ansible | 15:50 | |
*** pto has quit IRC | 15:51 | |
MickyMan77 | jrosser: How do I check the SHA of /etc/ansible/roles/os_cinder ? | 15:55 |
jrosser | cd /etc/ansible/roles/os_cinder | 15:56 |
jrosser | git log | 15:56 |
jrosser | look at the SHA of the first commit you see | 15:56 |
jrosser | well actually this is the version of the cinder service itself though isnt it? | 15:57 |
MickyMan77 | git log = "commit d01cdadeb99dd9b0ec6a5f5896ca31af2e363c81" | 15:59 |
MickyMan77 | and the /opt/openstack-ansible/ansible-role-requirements.yml --> version: d01cdadeb99dd9b0ec6a5f5896ca31af2e363c81 | 15:59 |
*** macz_ has joined #openstack-ansible | 16:01 | |
jrosser | ok thats ok | 16:05 |
jrosser | but you have an error from the cinder service itself rather than the ansible role | 16:06 |
jrosser | i seem to remember you having a lot of switching between master/stable branches a while ago which would result in this sort of thing | 16:06 |
admin0 | fanfi, ThiagoCMC and I have been trying to do at least one successful magnum/k8s setup on OSA .. no results yet .. if you want to join in the test, we welcome it | 16:06 |
*** d34dh0r53 has quit IRC | 16:09 | |
MickyMan77 | jrosser: I thought I removed all settings etc. when I moved from master to stable... | 16:10 |
jrosser | admin0: i have written a documentation patch for magnum, would be nice if you could see if it makes sense https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/764627 | 16:10 |
fanfi | admin0 of course, I'm interested in joining you | 16:13 |
fanfi | I am just deploying os-magnum now | 16:14 |
*** d34dh0r53 has joined #openstack-ansible | 16:16 | |
-openstackstatus- NOTICE: The Gerrit service on review.opendev.org is being restarted quickly to troubleshoot high load and poor query caching performance, downtime should be less than 5 minutes | 16:20 | |
MickyMan77 | jrosser: I found an issue.. on one farm the control node has this folder: "/openstack/venvs/cinder-21.1.0" but on the farm that has the problem it looks like this... | 16:20 |
MickyMan77 | "openstack/venvs/cinder-21.1.0.dev119" | 16:20 |
MickyMan77 | but I also have "cinder-21.1.0" | 16:21 |
admin0 | jrosser, was this tested on 21.1 or 21.2 branch ? | 16:22 |
admin0 | so that i can replicate/use the same ? | 16:22 |
jrosser | the SHA of magnum is given in the patch | 16:23 |
jrosser | it is just me writing up the notes that guilhermesp left here in pastes | 16:23 |
MickyMan77 | first I ran the master version, which did not work, so I moved over to 21.0.1 and later to version 21.1.0 | 16:25 |
*** pto has joined #openstack-ansible | 16:28 | |
*** pto has quit IRC | 16:28 | |
*** pto has joined #openstack-ansible | 16:29 | |
MickyMan77 | jrosser : would it work if I just add the file "140_create_project_default_volume_type.py"? | 16:29 |
*** pto has quit IRC | 16:37 | |
admin0 | can anyone who has an AIO of 21.2.0 confirm that you can create volumes but not mount them | 16:41 |
admin0 | so that if you are facing the same, we can open it as a bug | 16:41 |
*** mgariepy has quit IRC | 16:48 | |
ThiagoCMC | admin0, I'm basically ready to give Magnum another try! | 16:54 |
ThiagoCMC | Just created a template with: `openstack coe cluster template create k8s-ultra-template --image fedora-coreos-32.20201104.3.0 --keypair default --external-network intranet-1 --dns-nameserver 1.1.1.1 --master-flavor m1.small --flavor r1.large --docker-volume-size 8 --network-driver flannel --docker-storage-driver overlay2 --coe kubernetes` | 16:55 |
ThiagoCMC | BTW, fresh installed (yesterday) OSA Ussuri on top of Ubuntu 20.04.1, fully upgraded. | 16:56 |
ThiagoCMC | My Ubuntu image is also available to build templates... Gonna try it too | 16:57 |
ThiagoCMC | Trying to launch a Kubernetes cluster with: `openstack coe cluster create k8s-ultra-cluster --cluster-template k8s-ultra-template --node-count 1` - accepted, let's see! | 16:58 |
ThiagoCMC | CREATE_FAILED right away... :-( | 17:01 |
ThiagoCMC | Magnum error: ERROR: The Parameter (fixed_network_name) was not defined in template. | 17:03 |
ThiagoCMC | Hmmm | 17:03 |
ThiagoCMC | New error | 17:03 |
ThiagoCMC | I might be facing this: https://answers.launchpad.net/ubuntu/+source/magnum/+question/693674 | 17:10 |
*** cshen has joined #openstack-ansible | 17:11 | |
*** cshen has quit IRC | 17:15 | |
jrosser | MickyMan77: "openstack/venvs/cinder-21.1.0.dev119" means at some point you deployed stable/ussuri rather than a specific tag release | 17:22 |
jrosser | I think that somewhere along the line you have deployed a master or victoria version of cinder | 17:23 |
jrosser | perhaps trying to re-run the cinder playbook with -e venv_rebuild=yes to fully rebuild the virtualenv would be a good idea | 17:24 |
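That re-run would look roughly like this from the deploy host:

    cd /opt/openstack-ansible/playbooks
    openstack-ansible os-cinder-install.yml -e venv_rebuild=yes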
*** cshen has joined #openstack-ansible | 17:30 | |
*** rpittau is now known as rpittau|afk | 17:42 | |
*** andrewbonney has quit IRC | 17:59 | |
*** mgariepy has joined #openstack-ansible | 18:05 | |
admin0 | ThiagoCMC, can you try with the docs on what jrosser pasted | 18:10 |
jrosser | looking at the patch linked from ThiagoCMC launchpad question i'm not sure i can see the linked 'fix' for that in the ussuri branch of the magnum code | 18:11 |
ThiagoCMC | So, the Ussuri branch still has that problem, right? | 18:15 |
ThiagoCMC | admin0, can you share it again, please? =P | 18:16 |
admin0 | ThiagoCMC, https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/764627 | 18:21 |
admin0 | recommended in the end of that is this: container_infra_prefix=registry.public.mtl1.vexxhost.net/magnum/ | 18:21 |
jrosser | ThiagoCMC: i do not know why the linked patch was applied to master then backported to train and not ussuri | 18:22 |
jrosser | that would be a question for the magnum developers, or perhaps apply it locally and see what happens | 18:23 |
ThiagoCMC | admin0, thanks! jrosser, I'll give it a try then, after trying the new doc | 18:28 |
*** gyee has joined #openstack-ansible | 18:28 | |
openstackgerrit | Merged openstack/openstack-ansible stable/ussuri: Bump SHAs for stable/ussuri https://review.opendev.org/c/openstack/openstack-ansible/+/764590 | 18:33 |
openstackgerrit | Merged openstack/openstack-ansible master: Set service region for masakari https://review.opendev.org/c/openstack/openstack-ansible/+/761842 | 18:34 |
*** dave-mccowan has quit IRC | 18:36 | |
ThiagoCMC | Aha! Some progress! To avoid that "fixed_network_name" ERROR, I had to change the `os_distro` metadata from just "coreos" to "fedora-coreos"! Now I'm not seeing that error anymore and Heat was called! | 18:40 |
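For reference, that metadata change is a one-liner against the existing image (image name taken from the template command above):

    openstack image set --property os_distro=fedora-coreos fedora-coreos-32.20201104.3.0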
ThiagoCMC | I still have to try that big Cluster Template from the docs. My simple template doesn't create the Kubernetes nodes (just the master). | 18:41 |
*** dave-mccowan has joined #openstack-ansible | 18:42 | |
ThiagoCMC | I can't use `--master-lb-enabled` if I don't have any LBaaS, right? | 18:43 |
ThiagoCMC | What is a good value for "boot_volume_type=<your volume type>" in `--labels`? | 18:44 |
ThiagoCMC | I have OSA+Ceph, Nova uses `vms` pool by default. Cinder has "RBD" under "Type"... I guess that I answered my own question... lol | 18:45 |
ThiagoCMC | What about this: `container_infra_prefix=<docker-registry-without-rate-limit>` ? | 18:46 |
ThiagoCMC | Do I have to change it? | 18:46 |
ThiagoCMC | BTW, I'm talking about: https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/764627/5/doc/source/index.rst#141 | 18:47 |
jrosser | you need to specify a docker registry | 18:58 |
ThiagoCMC | Do you have a working example? | 18:59 |
jrosser | sorry that’s not very helpful | 18:59 |
ThiagoCMC | I have no idea what to type there lol | 18:59 |
jrosser | dockerhub just imposed rate limits recently | 18:59 |
jrosser | if there was a single universal answer then I would have put it in the documentation :( | 19:00 |
jrosser | if you look back through the irc log then you’ll see that guilhermesp used a registry they host at vexxhost | 19:00 |
jrosser | but I didn’t think it was fair to immortalise that into documentation as it’s someone’s actual deployment | 19:01 |
ThiagoCMC | We might need an Ansible playbook to build this local "Docker Registry", like we do with Repos... :-P | 19:02 |
ThiagoCMC | I'm sad that I won't be able to test this... :-( | 19:02 |
ThiagoCMC | I mean, now. lol | 19:02 |
jrosser | I’m sure none will | 19:02 |
guilhermesp | yep jrosser I agree we should make the registry up to the user. | 19:02 |
ThiagoCMC | guilhermesp, could I use yours to give it a try? | 19:03 |
guilhermesp | sure go ahead | 19:03 |
ThiagoCMC | Awesome! What is the syntax for `container_infra_prefix= XXX_vexhost` ? | 19:04 |
ThiagoCMC | You can PM me if you want =P | 19:04 |
guilhermesp | container_infra_prefix=registry.public.mtl1.vexxhost.net/magnum/ | 19:04 |
kleini | Docker Hub pull rate limits may quickly prevent download of further container images. You're allowed 100 pulls per 6 hours. With Kubernetes you reach that limit very quickly. The problem can be mitigated by using a local pull-through registry cache. An easy example for that is documented here: https://docs.docker.com/registry/recipes/mirror/ | 19:04 |
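The minimal form of that recipe is a single registry container run in proxy mode, then pointing docker at it - a sketch, hostnames are placeholders:

    docker run -d -p 5000:5000 --restart=always --name registry-mirror \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      registry:2
    # clients then either rewrite image names to use the mirror host, or set in
    # /etc/docker/daemon.json:
    #   { "registry-mirrors": ["http://registry-cache.domain.tld:5000"] }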
ThiagoCMC | Thank you!!! | 19:04 |
ThiagoCMC | kleini, if I want to give a try with dockerhub, what's the syntax for the container_infra_prefix label? | 19:05 |
ThiagoCMC | Guys, it would be nice to document the correct syntax and point out the limitations | 19:06 |
ThiagoCMC | It would be nice to also share how to build a local Docker Registry for this! I think that I could make a small Ansible playbook to deploy it in a VM outside of OSA... | 19:07 |
*** cshen has quit IRC | 19:17 | |
*** pcaruana has quit IRC | 19:23 | |
jrosser | well - we are not here to document the magnum project itself, there is tons of stuff already https://docs.openstack.org/magnum/latest/user/index.html# | 19:31 |
*** cshen has joined #openstack-ansible | 19:34 | |
ThiagoCMC | jrosser, sure, makes sense... =P | 19:38 |
ThiagoCMC | BTW, I had to remove the `--master-lb-enabled`, do you think that it should work without it? | 19:39 |
jrosser | kleini: does that pull through cache still work now that the upstream rate limit is on the metadata, not the images themselves? | 19:39 |
ThiagoCMC | I'm not sure if it's related to Octavia, or not... | 19:39 |
ThiagoCMC | Got it: https://docs.openstack.org/magnum/latest/user/ lol | 19:41 |
jrosser | octavia is to do with ingress_controller iirc | 19:42 |
ThiagoCMC | Yep, sure... I don't have it, so, I'm not creating the Magnum template with --master-lb-enabled. | 19:42 |
ThiagoCMC | I'm trying this: https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/764627/5/doc/source/index.rst - So far, I'm seeing the same problem that I saw before, it creates only the Kubernetes Master and no Kubernetes Nodes. | 19:43 |
ThiagoCMC | It's CREATING_IN_PROGRESS for about 25 minutes already | 19:43 |
jrosser | this is a typical failure condition | 19:44 |
*** tosky has quit IRC | 19:45 | |
jrosser | magnum api->magnum conductor->heat->some vm->cloud-init->heat-container-agent you've got to just follow the flow of how this works from beginning to end | 19:45 |
jrosser | trying lots of different options for the cluster template won't be productive as there are tons of them | 19:46 |
ThiagoCMC | I'm seeing that in your doc there is only the "openstack coe cluster template create" but no example of how to use that template with "openstack coe cluster create ..." | 19:46 |
ThiagoCMC | So, I'm guessing that one using Horizon =P | 19:46 |
ThiagoCMC | Any easy way to avoid this typical failure condition? | 19:47 |
jrosser | openstack coe cluster create mycluster --cluster-template mytemplate --node-count <N> --master-count <M> | 19:47 |
jrosser | ^ creating the cluster from the template is the easy bit | 19:47 |
jrosser | no not really, like I say this is sooooo complicated that you've just got to work through logically from start to finish what's going on under the hood in magnum | 19:48 |
jrosser | each of those things is making a log file | 19:48 |
jrosser | the 'loop is closed' by the heat agent / cloud-init stuff in the VM making a callback to the public heat API endpoint that everything has completed ok | 19:49 |
jrosser | so you can start from there perhaps | 19:49 |
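In practice that means walking down from the cluster to the Heat stack and then into the VM - a sketch of the usual checks, names are placeholders:

    openstack coe cluster show mycluster                  # status_reason often has the first hint
    openstack stack list                                  # find the stack magnum created
    openstack stack resource list --nested-depth 2 <stack_id> | grep -vi complete
    # then on the master VM itself:
    #   tail /var/log/cloud-init-output.log
    #   journalctl -u heat-container-agent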
ThiagoCMC | I see... Ok! | 19:49 |
jrosser | the important thing to see here with magnum is that.... | 19:51 |
jrosser | 1) it provisions a load of VMs with heat | 19:51 |
jrosser | 2) it drops a whole load of shell scripts with cloud-init / heat-container-agent like these https://github.com/openstack/magnum/tree/master/magnum/drivers/common/templates/kubernetes/fragments | 19:51 |
jrosser | those shell scripts are run and provision the k8s cluster, then 'phone home' to the heat API to say they are done | 19:52 |
jrosser | all of the 'labels' that you set on the cluster template are like a set of variables passed to those shell scripts to customise the deployment inside the VM | 19:53 |
ThiagoCMC | This is indeed very complicated! | 19:56 |
ThiagoCMC | Might be easier to just launch an empty stack with Heat and use some Ansible playbook to deploy Kubernetes and be done with it, no Magnum... | 19:57 |
*** dave-mccowan has quit IRC | 19:59 | |
*** dave-mccowan has joined #openstack-ansible | 20:02 | |
jrosser | urgh new pip resolver has broken the neutron role tests https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/758751 | 20:05 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-tests master: Apply OSA global-requirements-pins during functional tests https://review.opendev.org/c/openstack/openstack-ansible-tests/+/764824 | 20:26 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_neutron master: Updated from OpenStack Ansible Tests https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/758751 | 20:27 |
kleini | jrosser: that pull through cache at least mitigates our problems in CI. we change docker.io/library/node:10 into registry-cache.domain.tld/library/node:10 and we can locally always pull the image even if fetching new images from Docker Hub is currently rate limited for us | 20:36 |
jrosser | i was looking at the documentation for it | 20:36 |
*** spatel has quit IRC | 20:37 | |
jrosser | "When a pull is attempted with a tag, the Registry checks the remote to ensure if it has the latest version of the requested content." <- i was wondering if that was a metadata call to the upstream registry which would count against the rate limit | 20:38 |
kleini | maybe it counts. I don't know. If the image already exists in the cache, the cached version is delivered. | 20:42 |
kleini | http://paste.openstack.org/show/800571/ <- first image does not exist in cache and pull gets a rate limit from docker.io, second image is already cached locally | 20:44 |
*** tosky has joined #openstack-ansible | 20:48 | |
admin0 | so magnum .. :( .. what about lbaas and trove ? are those working good ? | 20:49 |
ThiagoCMC | admin0, I gave up on Magnum today. Never tried LBaaS, nor Trove... =P | 20:53 |
admin0 | lbaas is octavia | 20:55 |
ThiagoCMC | Magnum is still too over complicated. It would be easier to just launch an empty stack with Heat and call some regular Ansible playbook (kubespray.io?) to deploy Kubernetes on top of it. | 20:56 |
ThiagoCMC | Yep, I know that LBaaS is Octavia but, I've never tried it before. | 20:56 |
ThiagoCMC | Does it depend on virtual machines to run the LBaaS services? | 20:57 |
*** rfolco has quit IRC | 20:59 | |
kleini | do not underestimate a K8s deployment. Tools like kubespray and k3s make a basic deployment simple, but these deployments often do not meet the requirements you will have in production. Magnum does a pretty good job at that. I never had a Magnum-deployed Kubernetes cluster having issues in production. With kubespray, k3s and kind we have a lot of issues in production. | 21:00 |
ThiagoCMC | kleini, thanks for your feedback! I would actually love to have Magnum working here... =P | 21:00 |
*** vesper11 has quit IRC | 21:01 | |
ThiagoCMC | But it's too complicated now. | 21:01 |
ThiagoCMC | And hard to troubleshoot | 21:01 |
*** spatel has joined #openstack-ansible | 21:02 | |
admin0 | kleini, i don't want to give up :) | 21:03 |
ThiagoCMC | admin0, you can't give up! :-P | 21:04 |
admin0 | i will not | 21:04 |
ThiagoCMC | ^_^ | 21:04 |
spatel | folks, asciinema is awesome tool :) | 21:06 |
spatel | Just created movie of my Trex load-test - https://asciinema.org/a/Zy8LsXU1B31DaeCqcqi5nqdDU | 21:07 |
admin0 | yep | 21:07 |
spatel | now I am going to add it to all my blog posts :) | 21:09 |
spatel | so easy to document | 21:09 |
*** luksky has quit IRC | 21:17 | |
*** luksky has joined #openstack-ansible | 21:18 | |
*** cshen has quit IRC | 21:34 | |
*** mathlin has joined #openstack-ansible | 21:37 | |
*** yann-kaelig has joined #openstack-ansible | 21:51 | |
*** yann-kaelig has quit IRC | 21:52 | |
*** cshen has joined #openstack-ansible | 21:52 | |
-openstackstatus- NOTICE: The Gerrit service on review.opendev.org is being restarted quickly to make further query caching and Git garbage collection adjustments, downtime should be less than 5 minutes | 22:35 | |
*** spatel has quit IRC | 22:45 | |
*** nurdie has quit IRC | 22:46 | |
*** nurdie has joined #openstack-ansible | 23:02 | |
*** nurdie has quit IRC | 23:07 | |
*** luksky has quit IRC | 23:14 | |
*** jbadiapa has quit IRC | 23:15 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!