Monday, 2020-11-09

cloudxtinyAnyone know if we can use openvswitch in the ironic_compute container?00:37
cloudxtinyseems to be failing to start due to "'/dev/hugepages': No such file or directory" for me00:38
*** macz_ has joined #openstack-ansible00:39
*** tosky has quit IRC00:41
*** macz_ has quit IRC00:44
cloudxtinysorted it :-)00:52
*** spatel has joined #openstack-ansible00:54
ThiagoCMCCan someone help me with this error: http://paste.openstack.org/show/799811/ <- I reinstalled the Controllers but left the Compute Nodes; now they're all throwing this error, and I can't see anything wrong from Horizon's Admin UI... How do I fix this?00:56
ThiagoCMCcloudxtiny, cool!  :-P00:56
*** spatel has quit IRC01:23
*** macz_ has joined #openstack-ansible01:30
*** macz_ has quit IRC01:34
*** fresta_ has joined #openstack-ansible01:52
*** fresta has quit IRC01:53
*** cloudxtiny has quit IRC02:46
*** macz_ has joined #openstack-ansible03:54
*** macz_ has quit IRC03:58
*** pto has joined #openstack-ansible04:06
*** pto has quit IRC04:10
*** fresta_ has quit IRC04:24
*** fresta has joined #openstack-ansible04:32
*** evrardjp has quit IRC05:33
*** evrardjp has joined #openstack-ansible05:33
*** miloa has joined #openstack-ansible06:36
*** viks____ has joined #openstack-ansible06:57
noonedeadpunko/07:34
*** pto has joined #openstack-ansible07:46
*** pto has quit IRC07:47
*** pto has joined #openstack-ansible07:48
*** andrewbonney has joined #openstack-ansible08:10
*** cshen has joined #openstack-ansible08:16
*** rpittau|afk is now known as rpittau08:30
ptoIs there more configuration to murano other than defining murano-infra_hosts: *infrastructure_hosts?08:47
ptoGetting an error with os-horizon:  HORIZON_CONFIG['legacy_static_settings'] = LEGACY_STATIC_SETTINGS NameError: name 'HORIZON_CONFIG' is not defined08:51
openstackgerritDmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_glance stable/stein: Do not symlink ceph libraries for distro path  https://review.opendev.org/76081808:51
noonedeadpunkpto: um, not sure how it's related to murano08:53
noonedeadpunkah08:54
noonedeadpunkit's murano dashboard08:54
ptoYep08:54
noonedeadpunkI think it's plugin issue. it needs `from openstack_dashboard.settings import HORIZON_CONFIG`08:55
noonedeadpunkactually I think murano has been pretty much unmaintained for several years...08:55
ptoOk. I was just testing it, im gonna skip that part08:56
noonedeadpunkso adding this import somewhere here https://opendev.org/openstack/murano-dashboard/src/branch/master/muranodashboard/local/local_settings.d/_50_murano.py#L49 or at the beginning of the file08:57
ptoIn the osa config file?08:59
noonedeadpunkno in murano dashboard plugin09:01
noonedeadpunkin horizon virtualenv09:01
noonedeadpunkit's not smth we can/should overwrite09:01
noonedeadpunkbut needs patching in murano-dashboard itself09:01
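
For context, a minimal sketch of the patch noonedeadpunk is describing, assuming the stock murano-dashboard plugin file inside the horizon virtualenv (the exact venv path varies by release):

    # muranodashboard/local/local_settings.d/_50_murano.py
    # Added near the top of the file so the later
    # HORIZON_CONFIG['legacy_static_settings'] assignment can resolve the name.
    from openstack_dashboard.settings import HORIZON_CONFIG
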
jrossermorning09:02
noonedeadpunkjrosser: hey!09:04
jrosserhello09:04
jrossertook some time away last week, needed a rest09:04
noonedeadpunkwas wondering if you saw this http://eavesdrop.openstack.org/irclogs/%23openstack-ansible/%23openstack-ansible.2020-11-06.log.html#t2020-11-06T14:05:50 and if you have any thoughts09:04
ptonoonedeadpunk: Is the murano dashboard in the horizon container?09:05
noonedeadpunkpto: yep09:05
noonedeadpunkjrosser: yeah, can totally understand this09:05
jrossernoonedeadpunk: yes i did see it, and i think the biggest thing would be getting good CI09:06
jrosserlike currently we find all the weird cases across debian/ubuntu/centos/.... where the repos all change randomly and stuff09:06
noonedeadpunkI'm also a bit afraid that role will get into weird shape sooner or later09:06
noonedeadpunkin the sense that they were looking to split it into small parts...09:07
jrosserhumm yes and we just undid splitting up roles for galera because it was a massive pain09:08
jrosseri guess in a way it would be a bit like ceph-ansible has many roles09:09
noonedeadpunkyeah, but I'm not sure about that, as having it that way feels reasonable as well, especially for our use case. And I'm not sure we have the resources to adapt to this either...09:10
noonedeadpunkand honestly... feels like pulling collections will be such a pain in some time...09:11
noonedeadpunkas their size is going to grow a lot with this approach09:11
jrossermaybe it's OK to take that role and use it for the basis of stuff in the collection09:12
jrosserif we can use it in OSA or not is another question09:12
jrosseror if anyone has time to work on that too09:12
noonedeadpunkwell agree09:12
noonedeadpunkbut doesn't make much sense to work on them separately afterwards as well...09:13
noonedeadpunkand judging by how we struggled with the new version of rabbit (an upstream patch wasn't approved for a while), it felt like we were one of the biggest users....09:14
jrosseri expect we push the rabbit version forward more aggressively than others09:14
jrosserjust stick with the distro package and the version will be older09:15
noonedeadpunkwell yes09:15
jrosserit is a difficult choice, if we don't use a role from the collection then we duplicate work but keep in control09:16
jrosserbut if we do use it then things might turn out not so good, like ceph-ansible09:16
noonedeadpunkyeah, have super mixed feelings about that as well09:17
ptonoonedeadpunk: I think I will skip murano - I don't have a clue what needs to be fixed in the config file.09:24
*** sshnaidm_ is now known as sshnaidm|rover09:24
ptoWhat is the status of Masakari? Is it stable in ussuri?09:24
noonedeadpunkpto: iirc it is09:25
noonedeadpunkwell, using it from train09:25
noonedeadpunkexcept in U you will need to handle pacemaker installation on your own09:26
noonedeadpunkI have https://github.com/noonedeadpunk/ansible-pacemaker-corosync but I think it's not doing pacemaker-remote at the moment09:27
ptoIs masakari like vitrage?09:30
ptoAuto healing of a dead compute? Or did I misunderstand something?09:31
noonedeadpunkmasakari auto evacuates instances from failed compute09:34
ptonoonedeadpunk: Is it complicated to get working?09:42
noonedeadpunkno, not at all09:42
noonedeadpunkin case you have pacemaker+corosync cluster working:)09:42
noonedeadpunkthe role I referenced does it great, but has a limit of 16 nodes; beyond that you need to use pacemaker-remote09:43
ptonoonedeadpunk: I have never tried to deploy a pacemaker cluster09:43
ptoHave you ever looked at the Vitrage project?09:43
noonedeadpunkah, is it the project backed by Nokia?09:45
noonedeadpunkI think it's pretty different09:45
noonedeadpunkas what masakari does is check the pacemaker cluster status, and in case one compute host is down it tries to issue a nova host-evacuate command. It also has a processmonitor to check processes and restart a service if something is down, and an instancemonitor to ensure that a VM with a specific tag is always online, booting it again if it goes down09:47
noonedeadpunkeventually the pacemaker cluster is needed only for the hostmonitor09:47
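
As a rough illustration of the hostmonitor flow noonedeadpunk describes, the manual equivalent would look something like this (a sketch only; the host name is hypothetical and the exact CLI syntax depends on your client versions):

    # fence the failed hypervisor out of scheduling (hypothetical host name)
    openstack compute service set --disable failed-compute-01 nova-compute
    # rebuild its instances on other hosts -- this is the call masakari automates
    nova host-evacuate failed-compute-01
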
ptoCorrect, the vitrage project is backed by Nokia. I have used their commercial openstack for some time in my previous job. It uses vitrage combined with zabbix, and if a host goes down, zabbix informs vitrage and it will evacuate the host09:53
* noonedeadpunk loves zabbix09:53
noonedeadpunkyeah, but it's kind of a way more complicated flow than with masakari imo)09:54
ptonoonedeadpunk: tbh, i think masakari looks a little complicated to begin with.09:55
ptoNot much usable documentation09:55
noonedeadpunkalso a nice thing I was using with masakari is the reserved host. So we had a disabled host with 0 VMs, set to the reserved state in masakari. In case of a host failure, that host is enabled in nova, unmarked from reservation, and all VMs move to it09:55
noonedeadpunkso you can be kind of sure you will have enough resources to spawn everything09:56
noonedeadpunkwhile I agree on the poor documentation, it works pretty reliably and is straightforward09:56
ptoThat makes good sense. I spent a lot of time planning compute resources in my previous job (telco). Almost everything depended on SR-IOV and hard-pinned CPU cores09:57
ptonoonedeadpunk: The os cluster will have 50+ computes, will this fly with pacemaker+corosync?10:07
noonedeadpunkyes, but as I said, it needs pacemaker-remote. also you can split up pacemaker clusters into groups and add them as different segmentation groups in masakari as well10:11
ptonoonedeadpunk: Cool. Thanks for helping. I think i will skip this part for now, as I have plenty of other stuff to look at. But it will be on the todo with high priority10:13
fanfiGuys, could somebody help me please. When I try to create a new image I get the following error and I can't find any error in the log files. :( ...Image creation failed: Unable to establish connection to http://172.16.1.91:9292/v2/images/b333b5df-29bb-40a7-adba-0fbbcc10d759/file        : ('Connection aborted.', BrokenPipeError(32, 'Broken pipe'))10:18
fanfiGlance API works10:18
fanfihere is the command debug output https://pastebin.com/7tVeXWSN10:18
noonedeadpunkI'm not sure if it's the case, but you can try setting glance_use_uwsgi: false in user-variables and re-run os-glance-install.yml playbook10:21
jrosserwould also be worthwhile checking the glance service logs, as those errors are just from the client10:21
noonedeadpunkbut I think it was some different issue here...10:21
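
A minimal sketch of the override noonedeadpunk suggests, assuming the standard OSA deploy-host layout:

    # add the override, then re-run the glance playbook
    echo 'glance_use_uwsgi: false' >> /etc/openstack_deploy/user_variables.yml
    cd /opt/openstack-ansible/playbooks
    openstack-ansible os-glance-install.yml
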
noonedeadpunkfanfi: btw, logs are in journald10:22
fanfithx, I will try it.10:22
noonedeadpunkas it seems from the logs that you don't use import, which would fail with uwsgi, so yeah, this seems really different10:25
ptonoonedeadpunk: Does the pacemaker run on ubuntu?10:52
noonedeadpunkwell I run it on ubuntu10:52
*** tosky has joined #openstack-ansible11:09
ptonoonedeadpunk: I have removed the murano setting, but the os-horizon keeps failing with the same error. I have removed the container using lxc-containers-destroy.yml. What else needs to be removed? ansible_facts?11:16
noonedeadpunkum, no, the host from the inventory11:43
noonedeadpunkwe have /scripts/inventory-manage.py11:43
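
A sketch of that cleanup, assuming inventory-manage.py's usual list/remove options:

    cd /opt/openstack-ansible
    ./scripts/inventory-manage.py -l          # list hosts/containers in the inventory
    ./scripts/inventory-manage.py -r <host>   # remove the stale entry noonedeadpunk refers to
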
openstackgerritDmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: Use a consolidated gate queue for integrated jobs  https://review.opendev.org/66075111:48
openstackgerritDmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: Use a consolidated gate queue for integrated jobs  https://review.opendev.org/66075111:50
fanfi@noonedeadpunk it works. I added the setting to user-variables, then I saw a wrong authentication in the log :) ...and that was easy to fix :) thanks11:51
noonedeadpunkjrosser: have you ever seen https://review.opendev.org/#/c/584857/5 ?:)11:54
ptonoonedeadpunk: I have removed the horizon container including files, and the facts files, and this time removed it from the inventory too. Still failing on horizon. The configs are the ones that worked on the run before murano was activated12:04
*** rfolco has joined #openstack-ansible12:04
openstackgerritDmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: Fix upgrade jobs for bind-to-mgmt  https://review.opendev.org/75846112:08
*** cloudxtiny has joined #openstack-ansible12:14
cloudxtinyHello12:14
cloudxtinyit seems the gnocchi_service_setup_host variable is getting duplicated in /etc/ansible/roles/os_gnocchi/defaults/main.yml, so database setup fails12:15
cloudxtinyis that file created dynamically? So I must have set something up wrong12:19
*** yasemind has quit IRC12:21
*** ericzolf has joined #openstack-ansible12:24
noonedeadpunkI don't see it being duplicated.....12:25
noonedeadpunkwhat version are you using?12:25
cloudxtinynoonedeadpunk how do I tell the version? I think I pulled the latest12:26
noonedeadpunkwhen you've pulled openstack/openstack-ansible you probably did checkout to some version (it's not required but most likely)12:28
noonedeadpunkand /etc/ansible/roles/os_gnocchi/defaults/main.yml is not created dynamically, it's taken from https://opendev.org/openstack/openstack-ansible-os_gnocchi/src/branch/master/defaults/main.yml12:29
cloudxtinyI cloned master :-(12:36
cloudxtinygit clone -b master https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible12:36
cloudxtinyI was following this tutorial -----> https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/deploymenthost.html12:37
noonedeadpunkok, I don't see this option being duplicated in master12:37
cloudxtinyhumm. I should have selected the latest release version.12:37
cloudxtinythanks12:37
noonedeadpunkcan you comment out the no_log statement in /etc/ansible/roles/os_gnocchi/tasks/db_setup.yml and try running the role one more time? (if it's the task where it failed)12:38
cloudxtinyyeah I commented out line 75 and it worked12:39
cloudxtiny#gnocchi_db_setup_host: "{{ ('galera_all' in groups) | ternary(groups['galera_all'][0], 'localhost') }}"12:40
cloudxtinyseems my version was trying to use the galera node as the utility node for setting up the database12:41
noonedeadpunkoh my12:41
noonedeadpunkyou're right and I'm blind12:41
cloudxtiny:-) . Just happy to help out :-)12:42
openstackgerritDmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_gnocchi master: Remove dublicated gnocchi_db_setup_host  https://review.opendev.org/76191012:42
noonedeadpunkthat's super useful12:42
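
For the record, the workaround cloudxtiny applied was commenting out the second, duplicated definition in /etc/ansible/roles/os_gnocchi/defaults/main.yml (the same line review 761910 then removed):

    # second definition of the variable; commented out so DB setup no longer
    # runs against the galera node itself
    #gnocchi_db_setup_host: "{{ ('galera_all' in groups) | ternary(groups['galera_all'][0], 'localhost') }}"
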
*** cloudxtiny has quit IRC12:43
noonedeadpunkmerging https://review.opendev.org/#/c/760818/ would be super cool12:43
*** cloudxtiny has joined #openstack-ansible12:44
jrosserdone12:46
jrossernoonedeadpunk: i've never seen that actually needed in a cert, the IP: {{ internal_vip }} SAN12:48
jrosserbut then again i've never seriously tried to make the selfsigned stuff actually be trusted12:48
jrosserthe patch would suggest that there wasn't even internal DNS pointing to the VIP12:49
noonedeadpunkyeah, me neither.12:49
openstackgerritGeorgina Shippey proposed openstack/openstack-ansible-galera_server master: Ability to take mariadb backups using mariabackup  https://review.opendev.org/75526112:50
jrosserthat pattern is used in some of the ops repo stuff though12:51
noonedeadpunkactually the patch is almost 3 years old, but it still feels interesting... maybe worth taking into consideration during the ssl redesign12:52
jrosseryes, it would certainly be good if when a selfsigned cert was made it used a CA12:53
jrosserthat patch also does create a CA cert, which the commit message doesn't really talk about12:53
openstackgerritMerged openstack/openstack-ansible master: Bump ansible version to 2.10.3  https://review.opendev.org/76144312:53
jrosserbut in a way that's really the most useful thing it does12:53
*** spatel has joined #openstack-ansible12:55
*** spatel has quit IRC12:59
cloudxtinyAre the ceilometer APIs not exposed via haproxy?13:00
cloudxtinycan't see them here: https://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/group_vars/haproxy/haproxy.yml13:01
cloudxtinyNevermind.13:03
noonedeadpunkyeah, ceilometer does not have an API....13:08
cloudxtinyyeah, just realised that's what gnocchi is for now :-)13:11
cloudxtinynoonedeadpunk similar issue for Aodh as well  ----> "/etc/ansible/roles/os_aodh/defaults/main.yml" [Modified] line 5813:41
*** d34dh0r53 has quit IRC13:46
openstackgerritDmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_aodh master: Remove dublicated aodh_db_setup_host  https://review.opendev.org/76192213:46
noonedeadpunk /o\13:46
cloudxtiny:-)13:47
noonedeadpunkwell, good that you've picked up master :)13:47
cloudxtinyhappy to help13:47
*** d34dh0r53 has joined #openstack-ansible13:52
*** rh-jelabarre has joined #openstack-ansible13:54
*** spatel has joined #openstack-ansible13:59
ptoI know it's a little OT, but has anyone here tested OpenStack Migrate (https://os-migrate.github.io/os-migrate/index.html)?14:02
noonedeadpunkwe do upgrades, so I have no idea why this even exists, unless you're stuck on newton and want ussuri in 1 step14:04
noonedeadpunkas you kind of need a lot of extra hardware here14:04
ptoI know, but there is no plausible upgrade path from queens to ussuri14:05
openstackgerritMerged openstack/openstack-ansible-os_glance stable/stein: Do not symlink ceph libraries for distro path  https://review.opendev.org/76081814:05
*** cloudnull has quit IRC14:11
openstackgerritDmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible stable/stein: Bump stable/stein for last release  https://review.opendev.org/76193114:13
ThiagoCMCMorning guys! How can I make one of my haproxy nodes, the "main one"?14:14
noonedeadpunkwell it's up to keepalived to decide where ip will be spawned14:16
noonedeadpunkit has weights14:16
*** cloudnull has joined #openstack-ansible14:18
noonedeadpunkso haproxy is main where vip is spawned14:18
ThiagoCMCI know... I tried to "systemctl status haproxy.service" but it came back14:19
ThiagoCMCBut, no worries... Thanks!14:20
ThiagoCMCCurrently, I have a bigger problem... lol14:21
noonedeadpunkkeepalived has a haproxy check script, which brings haproxy back when keepalived is active14:21
ThiagoCMCOh, ok14:21
noonedeadpunkand keepalived will tend to return to the same master as it has higher weight set explicitly14:21
ThiagoCMCGot it!14:21
ThiagoCMCNice!14:21
ThiagoCMCI forgot about keepalived...14:22
noonedeadpunkwe have some variables for that I think but can't instantly recall14:22
ThiagoCMCNo problem14:22
ThiagoCMCSo, another issue... I reinstalled all controllers (fresh deployment) but I kept the compute nodes; now they're all throwing the following error: http://paste.openstack.org/show/799811/14:23
ThiagoCMCAny idea about how to clean it up?14:23
spatelThiagoCMC: just adjust the priority and you'll be able to decide which will be primary all the time.14:23
-spatel- [root@infra-lxb-1 conf.d]# cat /etc/keepalived/keepalived.conf | grep priority14:23
-spatel- priority 10014:23
ThiagoCMCspatel, awesome!!!  :-D14:23
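
These appear to be the overrides noonedeadpunk couldn't recall; a sketch assuming the keepalived priority variables shipped with recent OSA releases (names may differ per branch):

    # /etc/openstack_deploy/user_variables.yml
    haproxy_keepalived_priority_master: 100   # node that should normally hold the VIP
    haproxy_keepalived_priority_backup: 20    # the other haproxy nodes
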
ThiagoCMCI tried to "nova-manage cell_v2 delete_host --cell_uuid <CELL_UUID> --host <HOST>" but it didn't solve the above issue...14:25
spatelThiagoCMC: looks like a nova-placement related issue.14:25
ThiagoCMCyep14:25
ThiagoCMCI'm wondering if there is a way to clean it up without re-deploying the whole thing, again.14:25
spatelyou need to delete your compute nodes from placement and let them re-create new UUID14:26
ThiagoCMCAny docs about how to achieve that?14:26
spatelhttp://paste.openstack.org/show/799827/14:27
spateltry deleting <compute_node>; do it on 1 node and restart nova-compute on the compute node, and it should re-create a new UUID14:28
ThiagoCMCTrying it now14:30
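
The paste contents aren't preserved, but the flow spatel describes is roughly this, assuming the osc-placement CLI plugin is installed (UUID is a placeholder):

    openstack resource provider list                    # find the stale compute's provider UUID
    openstack resource provider delete <provider-uuid>  # drop the stale record
    # then, on the compute node, re-register with a fresh UUID:
    systemctl restart nova-compute
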
openstackgerritDmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible stable/stein: Switch to stable/stein for EM  https://review.opendev.org/76193714:37
noonedeadpunkjrosser: when you have enough time it would be great to take a look at https://review.opendev.org/#/c/756313/14:39
noonedeadpunkI can set the reversed backend, as the integrated repo fails14:39
noonedeadpunkbut I checked that it was passing on patchset 914:39
openstackgerritDmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: Added Openstack Adjutant role deployment  https://review.opendev.org/75631014:41
*** gshippey has joined #openstack-ansible14:49
fanfifolks, could you help me please again :) ...on the compute node, when I try to start a new instance I get the following error https://pastebin.com/1r4hjbwH   aborted: Unable to update attachment.(Bad or unexpected response from the storage volume backend API: Driver initialize connection failed (error: Unexpected error while running command14:51
fanfiif I run the command manually... it works14:52
fanfiand the package python3-ceph-argparse is already installed.14:53
noonedeadpunkthat's interesting14:54
jrosserdistro install :(14:54
fanfiyes, it's distro14:55
noonedeadpunkah....14:55
noonedeadpunkthen that explains a lot14:55
fanfi:( ...but how can I fix it?14:56
noonedeadpunkdo you have these patches? https://review.opendev.org/#/q/status:merged+topic:ceph_client_distro14:56
noonedeadpunkI'm wondering also what python nova is using...14:58
jrosserfanfi: i'm interested to know why you prefer the distro install?14:58
jrossernoonedeadpunk: this is kind of zero test coverage really, centos + distro + ceph14:59
noonedeadpunkoh, it's centos14:59
jrosseri *think* so15:00
noonedeadpunkmeh15:00
jrosserfanfi: can you confirm which OS you are using?15:00
noonedeadpunkit's likely that nova is py2 then15:00
fanfijrosser I do not have any preferences... I thought it was quicker and better :( but it was probably a wrong idea15:01
fanfiyes centos815:01
noonedeadpunkon centos 8 it should be py3...15:01
fanfiyes15:01
openstackgerritJames Denton proposed openstack/openstack-ansible-os_tempest master: Allow deployer to skip default resource creation  https://review.opendev.org/73389215:02
fanfiit's better to reinstall OSA and use the source install method15:04
fanfi?15:04
noonedeadpunkwell, it's better to use source method, yes:)15:04
noonedeadpunkI'm really not sure why this error happens. It's worth looking at what python nova is actually using, as it might be different from the system python, and thus missing libraries...15:06
noonedeadpunkthis shouldn't actually be the case, but I don't really see why this issue might happen, so it needs a deeper look15:06
spatelfanfi: yes, source is the best way to go; I am running a pretty big cloud on CentOS using source and so far so good15:06
fanfiah....ok :) I will go reinstall the environment15:08
openstackgerritDmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: Fix upgrade jobs for bind-to-mgmt  https://review.opendev.org/75846115:09
noonedeadpunkat least source is better tested and more reliable for sure15:10
ThiagoCMCJust curious about something... Have you guys ever tried to run the `neutron-*-agents` inside an LXD/LXC container? The command: `ip netns exec qdhcp-BLAH bash -i` returns: "mount of /sys failed: Operation not permitted" and I'm wondering what exactly I have to allow at the LXD host... (just for fun)  :-P15:10
noonedeadpunkThiagoCMC: mgariepy did I think15:11
ThiagoCMCCool!15:11
noonedeadpunkbut iirc he said it's broken now :(15:11
ThiagoCMCI have all my Ceph OSDs and KVM hosts inside LXD containers, and now it's time to do the same with the Network Nodes!  lol15:12
ThiagoCMCOh, okay...15:12
noonedeadpunkwell there was a solid performance penalty and some things were weird, so we decided to move networking out of containers by default a while ago15:12
ThiagoCMCYep, I know!15:13
noonedeadpunkbut I think it's technically possible and I guess worth fixing anyway15:14
ThiagoCMCI'll give this another shot!15:14
mgariepydmsimard, https://ara.recordsansible.org/ cert expired ?15:14
dmsimard:(15:18
jamesdentonjrosser Any thoughts on this requirements failure? https://review.opendev.org/#/c/588372/15:18
mgariepythe let's encrypt bot needs some monitoring ;p15:19
noonedeadpunkI think GitPython is not on openstack available requirements file15:19
dmsimardmgariepy: fixed ty15:19
mgariepydmsimard, thanks15:20
noonedeadpunkwe can use what is in https://opendev.org/openstack/requirements/src/branch/master/global-requirements.txt15:20
noonedeadpunkoh, hm, it's in it15:20
openstackgerritDmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: Use parallel git clone  https://review.opendev.org/58837215:22
*** nurdie has joined #openstack-ansible15:22
jamesdentonnoonedeadpunk thx15:23
noonedeadpunkwhat I don't like here is location of the library....15:25
*** yann-kaelig has joined #openstack-ansible15:28
ThiagoCMCnoonedeadpunk, the "security.nesting=true" did the trick with the "ip netns"! Now, checking other problems...  :-D15:33
noonedeadpunkwould be great to know the full list of steps to recover that path :)15:34
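
For anyone retracing it, the step ThiagoCMC mentions is roughly this (a sketch; the container name is hypothetical):

    # allow the container to mount /sys and create network namespaces
    lxc config set network-node-01 security.nesting true
    lxc restart network-node-01   # restart so the setting takes effect
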
spateljamesdenton: I have successfully deployed ovs+dpdk using OSA (I have some draft patches which I am going to submit for CentOS-8 support).15:36
spatelnoonedeadpunk: do we have OVS+DPDK CI job for validation?15:37
openstackgerritSiavash Sardari proposed openstack/openstack-ansible-os_neutron master: Remove securitygoup section due to duplication in agents config file  https://review.opendev.org/76195415:37
*** nurdie has quit IRC15:41
jamesdentonspatel that's great to hear!15:42
openstackgerritSiavash Sardari proposed openstack/openstack-ansible-os_neutron master: Remove securitygoup section due to duplication in agents config file  https://review.opendev.org/76181515:42
spateljamesdenton: Now i am testing SR-IOV + DPDK combine deployment. (where i will add VF as a DPDK interface to solve my bonding issue)15:43
*** nurdie has joined #openstack-ansible15:45
jamesdentonwas it not enough to create an openvswitch bond?15:47
*** rpittau is now known as rpittau|bbl15:52
jrosserjamesdenton: i wonder if it is becasue there is no constraint supplied?15:53
spateljamesdenton: in my case I have only 2x10G NICs, and if I assign both NICs to dpdk then I don't have any mgmt NIC to ssh into the host.15:53
jamesdentonspatel gotcha.15:54
spatelBest practice is having multiple NICs and a dedicated NIC for dpdk, but in my case I have a blade center and I can't add a NIC to it.15:54
*** cloudnull has quit IRC15:55
spatelI am thinking I am going to create multiple VFs using SR-IOV and assign those VFs from both physical NICs to br-mgmt and OVS to bond them. For br-mgmt I will use LinuxBridge and for br-vlan I will use OVS+DPDK15:56
spateleach VF has its own PCI bus ID, so I will use that to build the bond inside OVS15:57
spatelI need to create a very smart PXE boot to handle all this stuff without human intervention15:58
jamesdentonindeed15:58
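
A sketch of the VF setup spatel outlines, assuming sysfs-based SR-IOV; the interface names and VF count are hypothetical:

    # create VFs on both physical NICs
    echo 4 > /sys/class/net/eno1/device/sriov_numvfs
    echo 4 > /sys/class/net/eno2/device/sriov_numvfs
    # each VF gets its own PCI address, which is what the OVS+DPDK bond consumes
    lspci | grep -i 'virtual function'
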
spateljamesdenton: do we have zuul CI job to validate ovs+dpdk code?16:00
spatelI don't think so but still asking :)16:00
jamesdentonwe do not.16:00
spatelno worry.16:00
*** cloudnull has joined #openstack-ansible16:02
ThiagoCMCAny idea about this error: "neutron-linuxbridge-agent ..... ERROR neutron.agent.linux.ip_lib [req-...] Device brq3cdcc787-c4 cannot be used as it has no MAC address" ?16:06
ThiagoCMCSafe to ignore?  lol16:07
*** jgwentworth is now known as melwitt16:12
cloudxtinyhummm... the ironic install seems to be dependent on swift. I am not setting up swift. Is there any way around that?16:16
jrossercloudxtiny: https://github.com/openstack/openstack-ansible-os_ironic/blob/master/defaults/main.yml#L107-L11616:17
jrosserthe defaults/main.yml of all of these ansible roles contains the bits you can tweak16:18
cloudxtinysweet thanks16:18
cloudxtinyFor this "This requires http_root and ..."16:22
cloudxtinycan I just use the repo container?16:22
jrosseras far as i can see you don't need to do anything, this is all automatic https://github.com/openstack/openstack-ansible-os_ironic/blob/master/templates/ironic.conf.j2#L66-L6916:24
jrosseri think it will make a web server in the ironic container itself, but i've never deployed this16:25
jrosserjamesdenton: ^ you've done this I think?16:25
*** pto has quit IRC16:25
jamesdentonlooking16:25
jamesdentonwith ironic_enable_web_server_for_images it creates a local http server, yes. no need for swift. i am not sure i have tested it with multi-node16:27
jrossernot sure if that uses the loadbalancer or if the local dhcp server will just point next-server to its local http server....16:28
jrosseri.e. each container does its own thing and the one which does the dhcp wins16:28
jamesdentonright, that's what i would want to verify.16:29
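
The swift-free path boils down to one override; a sketch using the variable jamesdenton names (it comes from the os_ironic defaults linked above):

    # /etc/openstack_deploy/user_variables.yml
    ironic_enable_web_server_for_images: true   # serve deploy images from ironic's own http server
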
*** klamath_atx has joined #openstack-ansible16:30
jrosserewwww https://github.com/openstack/openstack-ansible-os_ironic/blob/master/files/dhcpd.conf#L3916:32
jamesdentonwhen ironic_standalone16:34
jamesdentonbut yeah, that needs some work16:35
ThiagoCMCjamesdenton, hey buddy! Sorry to ask again... lol  - Do you know if it would be ok to ignore that "Device brq3cdcc787-c4 cannot be used as it has no MAC address" error?16:53
jamesdentonwhat are the circumstances of the error?16:53
ThiagoCMCWhen I create an L3 Router, it shows that. AND, it's running inside an LXD container (it's actually working - has connectivity). Just curious if you see this message often too, or not.16:55
ThiagoCMCSorry, ERROR is actually from neutron-linuxbridge-agent and neutron-dhcp-agent, not l3.16:56
jamesdentoncan you send a paste of 'ip link show brq3cdcc787-c4' ?16:56
ThiagoCMCHere: http://paste.openstack.org/show/799834/16:57
*** klamath_atx has quit IRC17:02
jamesdentonhttps://opendev.org/openstack/neutron/src/branch/master/neutron/agent/linux/ip_lib.py#L955-L96717:04
jamesdentonmight be safe to ignore if it's then going back and setting it up17:04
ThiagoCMCHmm... Cool, thanks!17:11
*** rpittau|bbl is now known as rpittau17:22
*** viks____ has quit IRC17:25
*** cloudxtiny has quit IRC17:32
openstackgerritJames Denton proposed openstack/openstack-ansible-os_ironic master: Update Inspector listener address and network  https://review.opendev.org/76066017:35
ThiagoCMCBTW, OSA Ussuri is also affected by: https://bugs.launchpad.net/neutron/+bug/188728117:50
openstackLaunchpad bug 1887281 in neutron "[linuxbridge] ebtables delete arp protect chain fails" [Medium,In progress] - Assigned to Lukas Steiner (steinerlukas)17:50
ThiagoCMCI manually applied the patch, ERROR gone!  :-P17:50
jamesdentonIs this CentOS specific?17:55
jamesdentonnoonedeadpunk re: uwsgi for neutron. I am not able to replicate the failure locally with centos distro install17:58
*** ericzolf has quit IRC17:59
ThiagoCMCIt was happening with Ubuntu 20.04 as well.18:01
ThiagoCMCI've never used CentOS in my life!   :-P18:02
jamesdentongood to know, thank you18:02
jamesdentonhah18:02
ThiagoCMCNP!18:02
*** andrewbonney has quit IRC18:09
ThiagoCMCWheee! My latest OSA deployment is finally working! It's unique! 1- Controllers are QEMU VMs; 2- Ceph OSDs are LXD Containers; 3- Compute Nodes are LXD; 4- Network Nodes are also LXD Containers!18:27
ThiagoCMCI'm not seeing any performance issue with the Neutron Agents as LXD containers!   :-D18:27
*** gyee has joined #openstack-ansible18:30
*** miloa has quit IRC18:33
ThiagoCMCCheck it out! https://imgur.com/a/D5JckcD <- This Ubuntu QEMU/KVM Instance (OSA Ussuri) is running inside a bare-metal LXD Container!  LOL18:35
ThiagoCMCAlso its L3 Router!18:35
mgariepynice18:38
mgariepyThiagoCMC, where is your blog ?18:38
mgariepy;p18:38
* jrosser still has WIP lxd roles for OSA....18:42
*** nsmeds has joined #openstack-ansible18:43
jrosserThiagoCMC: did you get anywhere with qdrouterd?18:43
*** rpittau is now known as rpittau|afk18:45
*** yann-kaelig has quit IRC18:47
ThiagoCMCmgariepy, I don't have any blog!  (facepalm) LOL18:55
ThiagoCMCjrosser, it failed but, I'm willing to try again. The OS Services could not authenticate against qdrouterd (auth failed)18:56
jrossermaybe something good to poke at in an AIO build18:59
ThiagoCMCHmmm... That sounds like a good idea!19:00
ThiagoCMCI still have to work on enabling Erasure Coding in my Ceph pools for OSA, and better understand the Cinder integration with Ceph; then I'll give qdrouterd another shot.19:02
ThiagoCMCThis is a cloud in my basement and I only have 48TB; with replica 3, I'm down to 16TB, which sucks19:03
ThiagoCMCI might enable EC and Compression lol19:04
mgariepyThiagoCMC, for EC, i can give you some hint if you want.19:11
ThiagoCMCAwesome!!!19:12
ThiagoCMCI have a few questions before getting there =P19:12
ThiagoCMCBTW! I have a couple of questions for the Ceph masters here! Under "admin/hypervisors", I can see Ceph as "Local Storage" and it shows up as 48T, BUT because of replica 3 I don't actually have 48T available. Is there any way to tell OpenStack what "ceph df" shows?19:12
ThiagoCMCAnd, when using Ceph, is there any way to still use the local storage of each Compute Node?19:13
mgariepyyou can do different things. like having nova use ephemeral storage on your computes, and use ceph in cinder via volume19:13
ThiagoCMCGreat!19:14
mgariepybut you won't be able to use the same hdd/ssd as both at the same time ;P19:14
ThiagoCMCSure  :-D19:14
mgariepyas for the df stuff i'm not sure haha19:14
ThiagoCMCI have a good 1T NVMe on each compute node doing nothing... It would make sense to use them, but they don't show up anywhere!19:15
mgariepyThiagoCMC, https://github.com/openstack/openstack-ansible-os_nova/blob/master/templates/nova.conf.j2#L237-L24019:20
mgariepytldr, if you don't have nova_libvirt_images_rbd_pool your local storage will be used.19:21
mgariepyif formatted and mounted in the correct place.19:21
mgariepylike: /var/lib/nova19:21
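
Putting mgariepy's tl;dr together, a sketch for using the idle NVMe as ephemeral storage (device name hypothetical; leave nova_libvirt_images_rbd_pool unset so nova stays on local disk):

    mkfs.ext4 /dev/nvme0n1
    echo '/dev/nvme0n1 /var/lib/nova ext4 defaults 0 2' >> /etc/fstab
    mount /var/lib/nova   # nova's instance files now land on the NVMe
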
openstackgerritMerged openstack/openstack-ansible-os_manila master: Start using uWSGI role  https://review.opendev.org/70493519:31
ThiagoCMCmgariepy, so, do I have to upload the same image, twice, to Glance? One RAW for Ceph and another QCOW2 for "regular"?19:32
mgariepyit's possible to configure cinder caching for the images19:41
mgariepyThiagoCMC, https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html19:42
mgariepyit works pretty well. I didn't have issues with that.19:43
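
The linked doc boils down to a few cinder.conf options; a sketch (the backend section name and the project/user IDs are placeholders):

    [DEFAULT]
    cinder_internal_tenant_project_id = <project-uuid>
    cinder_internal_tenant_user_id = <user-uuid>

    [rbd]
    image_volume_cache_enabled = True
    image_volume_cache_max_size_gb = 200
    image_volume_cache_max_count = 50
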
ThiagoCMCThanks!19:58
ThiagoCMCAbout EC pools, what do you think of this post: https://themeanti.me/technology/2018/08/23/ceph_erasure_openstack.html ?19:59
*** tosky has quit IRC20:17
*** tosky has joined #openstack-ansible20:28
*** cshen has quit IRC20:29
*** cshen has joined #openstack-ansible20:37
djhankbHey folks - is anyone able to help me figure out my cloudkitty installation? I was able to get containers built with VENVs, but UWSGI is bombing out trying to find the 'encodings' module, which I thought was a builtin... http://paste.openstack.org/show/799846/20:53
*** klamath_atx has joined #openstack-ansible20:55
*** klamath_atx has quit IRC21:01
*** rfolco has quit IRC21:28
*** klamath_atx has joined #openstack-ansible22:10
*** klamath_atx has quit IRC22:15
*** nurdie has quit IRC22:36
*** klamath_atx has joined #openstack-ansible22:38
ThiagoCMCadmin0, hey buddy! I'm curious about something, in your FAQ: "https://www.openstackfaq.com/openstack-ansible-ceph/" - you configured the "cinder_backends:" 3 times (for example, at conf.d/cinder.yml) but, in the documentation "https://docs.openstack.org/openstack-ansible/latest/user/ceph/full-deploy.html" it's actually declared just once, in user_variables.yml. Why did you do it 3 times? Is there any22:45
ThiagoCMCdifference between the two ways?22:45
*** klamath_atx has quit IRC22:48
*** spatel has quit IRC22:49
ThiagoCMCIn my environment, I ended up with both! But, from what I'm seeing, only the values from "user_variables-ceph.yml" are going to cinder_volumes' cinder.conf file. So, I'm not sure if the "container_vars:" under "conf.d/cinder.yml" is being ignored, or not.  :-P23:05
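
For reference, the single-declaration form from the full-deploy doc ThiagoCMC cites looks roughly like this (a sketch; pool and user names are illustrative):

    # /etc/openstack_deploy/user_variables.yml
    cinder_backends:
      rbd:
        volume_driver: cinder.volume.drivers.rbd.RBDDriver
        rbd_pool: volumes
        rbd_ceph_conf: /etc/ceph/ceph.conf
        rbd_user: cinder
        volume_backend_name: rbd
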
*** cshen has quit IRC23:06
*** tosky has quit IRC23:33
*** nurdie has joined #openstack-ansible23:47
*** nurdie has quit IRC23:51
*** cshen has joined #openstack-ansible23:55
