Monday, 2020-12-07

*** spatel has quit IRC00:01
*** luksky has quit IRC00:04
*** MickyMan77 has quit IRC00:04
*** MickyMan77 has joined #openstack-ansible00:05
*** tosky has quit IRC00:20
*** jamesdenton has quit IRC00:48
*** jamesden_ has joined #openstack-ansible00:49
*** cp- has joined #openstack-ansible00:52
openstackgerritMerged openstack/openstack-ansible-os_keystone master: Add openstack-ansible-uw_apache focal job  https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/75412301:05
*** watersj has joined #openstack-ansible01:10
openstackgerritlikui proposed openstack/openstack-ansible master: Update doc8 version  https://review.opendev.org/c/openstack/openstack-ansible/+/76569501:19
*** watersj has quit IRC01:55
*** spatel has joined #openstack-ansible03:29
*** spatel has quit IRC03:32
*** vesper11 has quit IRC03:34
*** vesper11 has joined #openstack-ansible03:35
*** d34dh0r53 has joined #openstack-ansible03:59
*** jamesden_ has quit IRC04:34
*** jamesdenton has joined #openstack-ansible04:35
*** vesper has joined #openstack-ansible04:36
*** vesper11 has quit IRC04:37
*** raukadah is now known as chandankumar05:17
*** evrardjp has quit IRC05:33
*** evrardjp has joined #openstack-ansible05:33
*** chandankumar is now known as chkumar|ruck06:09
*** SiavashSardari has joined #openstack-ansible06:12
*** chkumar|ruck is now known as chkumar|rover06:14
*** chkumar|rover is now known as chkumar|ruck06:35
*** pto has joined #openstack-ansible07:01
*** miloa has joined #openstack-ansible07:21
*** luksky has joined #openstack-ansible07:40
*** sep has joined #openstack-ansible07:50
*** kleini has joined #openstack-ansible08:06
*** cshen has joined #openstack-ansible08:12
*** shyamb has joined #openstack-ansible08:14
*** pcaruana has joined #openstack-ansible08:15
openstackgerritAndrew Bonney proposed openstack/openstack-ansible master: Ensure kuryr repo is available within CI images  https://review.opendev.org/c/openstack/openstack-ansible/+/76576508:27
*** andrewbonney has joined #openstack-ansible08:28
*** mensis has joined #openstack-ansible08:28
openstackgerritAndrew Bonney proposed openstack/openstack-ansible-os_zun master: Update zun role to match current requirements  https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/76314108:30
*** stduolc1 has quit IRC08:36
*** rpittau|afk is now known as rpittau08:46
*** shyamb has quit IRC09:01
*** shyamb has joined #openstack-ansible09:05
noonedeadpunkmornings09:06
noonedeadpunkjrosser: how do you think - should I branch galera-client repo or should we retire it?09:19
noonedeadpunkI think worth branching as no time for retirement...09:20
noonedeadpunkah, and we need deprecation of it not retirement...09:22
jrosseryeah we shouldnt retire10:05
noonedeadpunkjust forgot about difference between these 2 options at the moment of writing10:05
noonedeadpunkseems zun super close10:06
*** mensis has quit IRC10:06
andrewbonneyHopefully just a battle with the Docker rate limits remaining :)10:06
jrosseryes it is really close10:06
jrosserthere seem to be some things that could be better in the zun tempest tests but that's not an OSA problem10:07
jrosseri think andrewbonney is reasonably happy with the patches now and we will look at testing in the lab rather than AIO next10:08
noonedeadpunkand tempest passes10:10
noonedeadpunkagree, it's really great work10:10
jrosserstrange fail on centos there10:12
noonedeadpunkit's currently running tempest10:12
noonedeadpunkah well that's where it failed10:13
noonedeadpunkand just a single test failed10:15
jrossererror while evaluating conditional ((groups['oslomsg_rpc_all'] | length) < 1): 'dict object' has no attribute 'oslomsg_rpc_all'10:15
jrosserwhich is kind of true i think because not sure where oslomsg_rpc_all really is defined10:16
jrosserhttps://zuul.opendev.org/t/openstack/build/584c607c6b534f8aaa74dc59f00f4f87/log/job-output.txt#1323610:16
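The conditional failure above can be sketched as a guard that defaults the missing group to an empty list — a hypothetical fix (the task name and message are illustrative; the real change depends on where `oslomsg_rpc_all` should actually be defined):

```yaml
# Hypothetical guard: during upgrades the 'oslomsg_rpc_all' group may be
# missing from the inventory, so default it to an empty list before
# measuring its length instead of dereferencing a missing key.
- name: Fail early when no RPC messaging hosts are defined
  fail:
    msg: "No oslomsg_rpc hosts found in inventory"
  when: (groups['oslomsg_rpc_all'] | default([]) | length) < 1
```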
noonedeadpunkwell it's upgrade jobs....10:16
noonedeadpunkand we got things broken on U kind of...10:17
jrosseryes i think the role really is unusable on U10:18
noonedeadpunkso we probably want to backport things to U as well?10:19
andrewbonneyI haven't tested with U yet, but hopefully it'll backport cleanly10:20
jrosserwe'd need the kuryr fix backporting as well wouldnt we?10:21
andrewbonneyThey're all pulling things from master at the moment (admittedly non-ideal) as they're pinned within the role10:22
jrosserin order to keep the versions controlled properly for the victoria release i think we'll need another section here https://github.com/openstack/openstack-ansible/blob/master/playbooks/defaults/repo_packages/openstack_services.yml#L347-L35110:27
noonedeadpunkoh, yes10:28
andrewbonneyOk, I'll add that10:28
noonedeadpunkWorth adjusting https://review.opendev.org/c/openstack/openstack-ansible/+/765765 - it's failing currently anyway10:29
noonedeadpunklet's just wait for 763141 to pass first10:29
noonedeadpunknah, centos has failed again10:30
noonedeadpunkmaybe we need smth like `tempest_run_concurrency: 0` in https://opendev.org/openstack/openstack-ansible/src/branch/master/tests/roles/bootstrap-host/templates/user_variables_zun.yml.j210:31
jrosserinteresting ansible docs don't seem to mention playbook defaults in variable precedence10:31
*** spatel has joined #openstack-ansible10:35
*** spatel has quit IRC10:40
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-galera_client master: Deprecate openstack-ansible-galera_client role  https://review.opendev.org/c/openstack/openstack-ansible-galera_client/+/76577910:41
openstackgerritMerged openstack/openstack-ansible master: Update doc8 version  https://review.opendev.org/c/openstack/openstack-ansible/+/76569510:47
*** yann-kaelig has joined #openstack-ansible11:04
*** tosky has joined #openstack-ansible11:22
openstackgerritAndrew Bonney proposed openstack/openstack-ansible-os_zun master: Update zun role to match current requirements  https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/76314111:26
admin0morning \o11:40
*** shyamb has quit IRC11:46
*** shyamb has joined #openstack-ansible11:48
*** mensis has joined #openstack-ansible11:50
*** masterpe has joined #openstack-ansible11:53
masterpeafternoon11:53
*** rfolco has joined #openstack-ansible11:54
*** yann-kaelig has quit IRC12:00
jrossero/ hello12:03
admin0masterpe, thank you for the config .. i made some progress ..  br-lbaas had to be in compute node also, else the agent complained and died:  2020-12-06 20:59:44.974 86303 ERROR neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Bridge br-lbaas for physical network lbaas does not exist. Agent terminated!12:07
admin0so after creating the agent in all , amphora is up ..12:07
admin0but its in its own bridge like an isolated island12:07
admin0will be doing 1 more test to confirm12:07
*** odyssey4me has joined #openstack-ansible12:15
masterpeadmin0: so you created the br-lbaas bridge yourself via netplan?12:29
noonedeadpunkjrosser: should I wait for zun stuff to land before branching? I think it's pretty important to wait for it...12:35
*** priteau has quit IRC12:40
*** shyamb has quit IRC12:47
*** spatel has joined #openstack-ansible12:52
*** cshen has quit IRC12:53
jrossernoonedeadpunk: centos fails due to missing multipathd config and service not being enabled, just looking at that here12:56
*** spatel has quit IRC12:57
*** shyamb has joined #openstack-ansible12:57
jrosserso either that gets fixed today or we can make centos8 nv and backport a fix, should probably try not to wait too much longer12:57
jrosserit looks like there is a similar thing in the nova role where debuntu gets multipathd installed but centos does not12:58
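The debuntu/centos gap described above could be closed with tasks along these lines — a hedged sketch, not the actual role change (task names and the fact variable check are assumptions; `mpathconf --enable` is the RHEL-family way to generate a default `/etc/multipath.conf`):

```yaml
# Hypothetical tasks to bring CentOS in line with the Debian/Ubuntu path:
# install the multipath tooling, generate a default config, and make sure
# the service is enabled and running.
- name: Install multipath tools (RedHat family)
  package:
    name: device-mapper-multipath
  when: ansible_facts['os_family'] == 'RedHat'

- name: Create a default multipath configuration
  command: mpathconf --enable
  args:
    creates: /etc/multipath.conf
  when: ansible_facts['os_family'] == 'RedHat'

- name: Enable and start multipathd
  service:
    name: multipathd
    state: started
    enabled: true
  when: ansible_facts['os_family'] == 'RedHat'
```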
noonedeadpunkYeah was also thinking of making it nv just to merge13:06
noonedeadpunkbtw, I've placed patches to revive monasca roles - mensis said he has fully working roles for U, and today pinged me about it :(13:07
noonedeadpunkbut I think we should leave it for W or branch afterwards13:07
noonedeadpunkit's not smth I will be waiting for13:07
admin0masterpe, this is what i have so far: https://gist.github.com/a1git/27e47c7ebcd6f65566663226ba4019eb13:09
*** mgariepy has quit IRC13:14
*** mgariepy has joined #openstack-ansible13:19
*** priteau has joined #openstack-ansible13:27
*** shyamb has quit IRC13:34
*** shyamb has joined #openstack-ansible13:35
openstackgerritAndrew Bonney proposed openstack/openstack-ansible master: Ensure kuryr repo is available within CI images  https://review.opendev.org/c/openstack/openstack-ansible/+/76576513:40
*** CeeMac has joined #openstack-ansible13:44
*** miloa has quit IRC14:10
*** shyamb has quit IRC14:18
*** owalsh has quit IRC14:19
mgariepyjrosser, https://review.opendev.org/c/openstack/nova/+/75876114:25
admin0masterpe, yes .. i created the bridges myself .. otherwise neutron complained the bridge was not found even when the octavia playbooks were not run14:38
admin0sorry for late reply .. was eating14:38
*** owalsh has joined #openstack-ansible14:42
*** owalsh has quit IRC14:47
*** cshen has joined #openstack-ansible14:49
openstackgerritDmitriy Rabotyagov proposed openstack/ansible-role-systemd_service master: Add possibility to configure systemd sockets  https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/76319414:49
openstackgerritDmitriy Rabotyagov proposed openstack/ansible-role-systemd_service master: Add option to create systemd native service overrides  https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/76581514:51
*** owalsh has joined #openstack-ansible14:58
*** spatel has joined #openstack-ansible15:11
openstackgerritDmitriy Rabotyagov proposed openstack/ansible-role-systemd_service master: Add option to create systemd native service overrides  https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/76581515:14
*** SiavashSardari has quit IRC15:44
*** ajg20 has joined #openstack-ansible15:44
*** ajg20 has quit IRC15:56
*** jamesdenton has quit IRC15:58
*** jamesdenton has joined #openstack-ansible15:59
admin0masterpe, can you show me the brctl output of an amphora instance you have, so i can see how exactly the bridge connections are made16:03
admin0how exactly it's supposed to work/connect16:04
noonedeadpunkjrosser: feels we have new resolver? https://b50ef5623c6036d09535-7575ab0330f7da34328cde6b7d597146.ssl.cf2.rackcdn.com/763194/5/check/openstack-ansible-linters/703ce8e/job-output.txt16:07
noonedeadpunkbut can't reproduce locally...16:07
jrosserhrrm we fixed this in one place, which was for the neutron functional tests i remember16:10
jrosserhttps://github.com/openstack/openstack-ansible-tests/commit/aaecefaee91d468f6f96cac26bc8a119d5fd2dca16:10
jrosseroh16:10
jrosserright so because the systemd_service role won't be tested by using python_venv_build i expect we need a different fix16:11
noonedeadpunkI'm not sure linters does use it though16:11
noonedeadpunkbut at this point I see no pip upgrade in log...16:12
jrosserthis is a bionic node so will be running ancient pip 9.x16:13
jrosserbut i guess the error comes from pip in the tox env and i'm not sure how that is getting there16:17
noonedeadpunkwell on sandbox I'm getting 20.0.2 pip16:18
noonedeadpunkbut it's the same as I had system-wise16:18
jrosserthis looks kind of messed up https://github.com/openstack/ansible-role-systemd_service/blob/master/tox.ini16:19
jrosserseems really arbitrary which tox envs use upper constraints and which dont16:19
jrosserwhat i was going to suggest was putting another -c in here https://github.com/openstack/ansible-role-systemd_service/blob/master/tox.ini#L3716:21
jrosserpointing to this https://github.com/openstack/openstack-ansible/blob/master/global-requirement-pins.txt16:22
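jrosser's suggestion above would look roughly like this in the role's `tox.ini` — an illustrative fragment, not the merged patch (the constraints URLs and env var name are assumptions standing in for the repo's actual values):

```ini
# Sketch: add a second -c so every tox env picks up the OSA global pins
# (which can hold back pip/virtualenv) alongside the OpenStack upper
# constraints, instead of only some envs being constrained.
[testenv]
deps =
    -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
    -chttps://opendev.org/openstack/openstack-ansible/raw/branch/master/global-requirement-pins.txt
    -r{toxinidir}/test-requirements.txt
```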
noonedeadpunkwell we would need to do this everywhere then16:22
noonedeadpunkas neutron also fails now16:23
jrosserthat seems to indicate the CI image is different16:23
jrosseror whatever tox uses to build the venv is now different16:23
jrosserthe/its16:23
noonedeadpunkwell, error is different there..16:23
jrosserso we have a very new version of virtualenv, only 5 hours old16:24
jrosseryes thats it "Bump pip to 20.3.1, setuptools to 51.0.0 and wheel to 0.36.1 - by @gaborbernat. (#2029)"16:25
noonedeadpunkuh...16:27
noonedeadpunkwell, probably we should just ensure tox version somewhere in https://opendev.org/openstack/openstack-ansible-tests/src/branch/master/test-ansible-env-prep.sh16:28
jrosserwell when i say "thats it" i can see how the new resolver is getting there16:28
jrosserperhaps not seeing why you cant reproduce it16:28
*** macz_ has joined #openstack-ansible16:31
*** fresta has quit IRC16:36
*** fresta has joined #openstack-ansible16:37
*** macz_ has quit IRC16:41
admin0the amphora instance is up ..one lb is in pending_create, without any logs16:43
*** macz_ has joined #openstack-ansible16:44
*** fresta has quit IRC16:45
*** fresta has joined #openstack-ansible16:47
*** fresta has quit IRC16:47
openstackgerritJonathan Rosser proposed openstack/ansible-role-systemd_service master: Use upper-constraints for all tox environments  https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/76583116:54
guilhermespadmin0: are your octavia containers able to reach the amphoras? you can see the status of amphoras by "openstack loadbalancer amphora list"16:54
jrossernoonedeadpunk: looks like integrated repo linters also fail https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_f33/765765/2/check/openstack-ansible-linters/f336a98/job-output.txt16:57
noonedeadpunkuh16:57
noonedeadpunkbut they failed differently16:58
jrosseryeah they did16:58
*** rpittau is now known as rpittau|afk16:58
admin0guilhermesp, i see 1 in booting phase16:59
admin0for hours16:59
guilhermespand the instance is active and running? ports allocated and etc?17:00
guilhermespthis sounds to me that octavia health manger is not able to contact the amphoras17:00
jrosseradmin0: did you wire br-lbass to the underlying interface?17:01
admin0i had them via netplan17:01
jrosserand you see that with brctl? because your paste earlier only showed one member on the bridge i think?17:02
admin0you mean for every amphora created, i have to manually add a connection between that bridge and br-lbaas ?17:02
jrosserneutron will plug each amphora onto the bridge17:02
jrosserbut if the bridge does not connect off the host then they won't be able to talk to the controllers17:03
*** fresta has joined #openstack-ansible17:03
guilhermespon computes, you should have this configuration `physical_interface_mappings = <provider_network>:<bond>`17:05
admin0let me go step by step .. just like we create br-mgmt br-storage etc, i have br-lbaas in every node ( controllers, compute,network)17:05
guilhermespif the bonding is correctly wired, neutron should create the bridges and you should be able to reach them through octavia containers17:05
admin0guilhermesp, there is where i am lost i think .. in the 1st step ..    who creates br-lbaas ?17:07
admin0i do right ?17:07
admin0on all controllers, compute and network node17:07
guilhermespyes, but i dont think you need it on computes. The only reason to create the br-lbaas on controllers is to make lxc able to create the bridges for the containers. So octavia containers are able to connect to ports on computes/network nodes through your bonding17:09
guilhermespi suggest you to test your link first between controllers and computes17:10
admin0guilhermesp, when i did not create in computes, it happend as soon as i ran neutron playbook:  ERROR neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Bridge br-lbaas for physical network lbaas does not exist. Agent terminated!17:10
guilhermespprobably your linux bridge agents on computes are mapping to br-lbaas, which doesn't exist afaik17:11
*** klamath_atx has quit IRC17:11
jrosserbr-lbaas needs to exist on the computes for a flat network type17:11
jrossernot for vlan17:11
admin0do we need vlan or do we need flat ?17:12
jrosser:) I said many times it is up to you17:12
jrosseryou can choose, then you make config to suit which one you want17:12
admin0yes, but i need just one that works .. and i have not been able to make any work17:12
admin0say we chose flat17:12
admin0because its easier for operators like me to have one extra  br-lbaas just like br-mgmt etc17:12
admin0that makes it easy17:13
admin0so if we do flat, the IP on br-lbaas is also unrouted right ? like br-mgmt ?17:13
admin0so far, so good on that .. ..   create br-lbaas on all, just treat it like br-mgmt, give it an unrouted ip range on /22 .. that i did17:13
jrosserthere is no IP on br-lbaas17:14
* jrosser not sure thats what you mean?17:14
admin0the docs have 172.29.232.0/22 as an ip range for br-lbaas if i recall17:15
admin0so we don't need ips on br-lbaas ? it is just a bridge like br-vlan ?17:18
jrossercorrect, on a compute node it is a L2 bridge that allows the amphora to talk to the controller17:19
jrosserfor a flat network17:19
jrosserif you make it vlan none of this is needed at all and it's just a regular neutron provider network17:19
admin0https://developer.rackspace.com/docs/private-cloud/rpc/master/rpc-octavia-internal/octavia-install-guide/17:19
admin0for a vlan based one, can you give me a few pointers ? like how it would look . gist/pastebin if possible17:20
admin0i have br-vlan and lets assume we can use vlan200 for this17:20
jrosserhere is an example https://satishdotpatel.github.io//openstack-ansible-octavia/17:24
jrosserthere is no 'right way' to do this, you just need to have connectivity from the amphora to the containers on the controllers17:25
jrossereveryone has a slightly different approach in some way because all deployments are a little different17:25
admin0does the br-lbaas have an ip ?17:27
admin0in vlan example ?17:27
jrosserno, the IP are on eth14 in the octavia containers17:28
jrosserthe bridge is just for LXC to plug those onto17:28
admin0ok17:28
spateladmin0: In my case i do have IP on br-lbaas but that bridge isn't attached to any physical interface17:31
spatelreason i put ip just to do ping test, its not required to have IP17:32
spatelMy octavia working great without any issue the way i have deployed.17:33
admin0so suppose my vlan is on eth4 .. and i want to use vlan20, in that case, in controllers, i create br-lbaas which is on top of eth4.20 ?17:33
admin0or is it just an empty bridge with nothing underlying it ?17:33
admin0ok17:35
admin0empty bridge .. got it17:35
admin0i will try :)17:35
admin0rebuilding again17:35
spateladmin0: check out this thread also - http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017598.html17:35
spateljust create the br-lbaas bridge file but don't attach it, and when you run my veth script it will map it to br-vlan and the lbaas VLAN which you're going to specify in the veth script.17:37
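The veth mapping described here can be sketched with plain iproute2 commands — a hypothetical, root-required illustration (interface names and VLAN id 27 are taken from the examples in this discussion, not from spatel's actual script):

```shell
# Sketch: connect the otherwise-empty br-lbaas to br-vlan over a veth
# pair, tagging lbaas traffic with VLAN 27 on the way to the trunk.
ip link add v-lbaas type veth peer name v-lbaas-trunk
# trunk-facing end becomes a port of br-vlan
ip link set v-lbaas-trunk master br-vlan up
ip link set v-lbaas up
# VLAN subinterface tags frames leaving br-lbaas with id 27
ip link add link v-lbaas name v-lbaas.27 type vlan id 27
ip link set v-lbaas.27 master br-lbaas up
```

Frames forwarded out of br-lbaas via `v-lbaas.27` get tagged, cross the veth pair, and appear on br-vlan as ordinary tagged traffic for the physical trunk.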
admin0ok17:37
admin0give me a few hrs to rebuild and come back :)17:37
spatelcool!17:38
jrossernoonedeadpunk: perhaps we pin virtualenv back for now just here https://github.com/openstack/openstack-ansible-tests/blob/bb630f047f4807451fa4ecb4ab2539458f659b95/run_tests_common.sh#L7917:40
* jrosser heads out17:43
*** gyee has joined #openstack-ansible17:54
noonedeadpunkyeah, good idea17:58
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-tests master: Bump virtualenv to version prior to 20.2.2  https://review.opendev.org/c/openstack/openstack-ansible-tests/+/76583918:01
jrosserI also could not reproduce the linters error here in a vm18:07
*** andrewbonney has quit IRC18:17
admin0spatel, so net_name and ip_from_q have no issues whether it's named lbaas or octavia ?18:35
admin0you have named it octavia, diff from all other examples18:35
admin0but i could not find how it will be used later on18:35
*** mmercer has joined #openstack-ansible18:36
spateltruly speaking i have no idea what ip_from_q: "octavia" does, i just got that directly from the OSA official example18:36
spateljrosser: or noonedeadpunk may know about that18:37
admin0i think i can replicate your setup .. will use vlan 27 as well :D18:38
admin0just to make sure i am not making any mistake18:38
admin0spatel, so if the br-lbaas does not need any ips, then lbaas: 172.27.40.0/24  -- is not also required right ?18:42
admin0sorry .. what is this for : octavia_container_network_name: lbaas_address18:44
admin0will copy for now and worry about it later :)18:46
*** cshen has quit IRC18:46
spateloctavia_container_network_name: lbaas_address  its related to  lbaas: 172.27.40.0/2418:48
admin0your exact copy failed because you have octavia in net-name and i_from_q while you mentioned lbaas on top18:51
admin0matching those 3 made it work18:51
admin0i also found that not having horizon_enable_neutron_lbaas: False will break horizon18:52
jrosseradmin0: would be great if you can patch/fix that horizon issue18:55
admin0let me make this work first .. been on it for 2+ weeks fulltime now18:56
admin0but most of the time was spent rebuilding platform when it broke18:56
jrosserimho go slower and debug18:56
jrosserthis does not just magically work by running with osa defaults, kind of similar to magnum18:57
spateladmin0: you are right about i_from_q, let me fix my documentation19:00
spateli do have lbaas in production19:00
spateljrosser: what is ip_from_q ?19:02
spateladmin0: i have fixed my blog19:04
admin0been with osa for 8 years now .. i kind of expect things to run smoothly when its included ..19:17
admin0the defaults just work flawlessly .. so the expectation is high19:17
*** cshen has joined #openstack-ansible19:21
jrosserspatel: it tells the dynamic inventory which ip range to take addresses from when assigning them to container interfaces “ip from queue”19:25
jrosserso for octavia this will typically only need 3, and the rest of the cidr should be covered by “used_ips”19:26
jrosserthat used_ips range can be given to neutron as the set it’s allowed to use for the amphora19:26
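jrosser's explanation of `ip_from_q` and `used_ips` can be sketched as an `openstack_user_config.yml` fragment — all values here are illustrative (the /24 follows spatel's example; group names and `eth14` follow the container interface mentioned earlier in the discussion):

```yaml
# Sketch: the inventory takes container addresses from the 'lbaas' queue;
# everything inside 'used_ips' is held back so neutron can hand those
# addresses to the amphorae instead.
cidr_networks:
  lbaas: 172.27.40.0/24

used_ips:
  - "172.27.40.50,172.27.40.254"

global_overrides:
  provider_networks:
    - network:
        container_bridge: "br-lbaas"
        container_type: "veth"
        container_interface: "eth14"
        ip_from_q: "lbaas"
        type: "raw"
        net_name: "lbaas"
        group_binds:
          - octavia-worker
          - octavia-housekeeping
          - octavia-health-manager
```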
spateldoes it matter if i change ip_from_q: foo ?19:27
jrosserfor an existing deployment maybe?19:27
jrossernot sure19:27
admin0what i did was created the br-lbaas via netplan, then used the script from spatel to connect it to vlan2719:31
admin0now setting up the rest of the infra/openstack so that i have a clean system to test this19:31
jrosserif you did put an ip on the bridge your first validation could be to check the bridges all can ping each other19:33
jrosserif not, no matter, but check eth14<>eth14 in your octavia containers after setup_hosts.yml19:34
admin0i only am using 1 controller for this19:35
admin0i mean this test setup is c1 + n1 + h1  ( controller1, network node1,  hypervisor1)19:35
jrosserok, no problem. you can always add ip temporarily on n1/h1 to check the underlying connectivity from the container19:36
admin0so far, i have only created the bridge in the controller19:39
admin0do i need to create the bridge in the network also ?19:39
jrosseroh err no sorry19:41
jrossernot for vlan19:41
admin0ok19:43
*** luksky has quit IRC19:54
*** luksky has joined #openstack-ansible19:54
spateljrosser: thanks for explanation19:56
spotzjrosser: We've released Victoria no?20:02
jrossernot yet :/20:02
spotzjrosser: hehe good thing I asked and didn't go patch docs:)20:02
jrosserbranching real soon for rc20:02
spotzOk I'll keep an eye out and patch on release:)20:04
spateljrosser: +1 for Victoria release. (hope tomorrow)20:09
*** jamesdenton has quit IRC20:15
*** jamesdenton has joined #openstack-ansible20:15
jrosserspatel: it will be a release candidate first20:26
jrossereveryone testing ++20:26
spateljrosser: but we can keep rolling out minor release until reach stable right ?20:26
jrosserrc1, rc2.....20:27
jrosserthe release proper when happy20:27
jrosserseems to be confusion here20:27
jrossercurrently there is no stable/victoria branch for osa, that will be done very soon20:27
jrosserthe tag will be a release candidate, not a 'proper release'20:28
spatelI am ok to deploy rc1 and then when stable come out i can do quick minor upgrade to stable20:28
spateljrosser: that is doable right?20:29
jrossersure20:29
jrosserit's just a tag on the same branch either way20:29
spatelperfect!20:29
*** nurdie__ has quit IRC20:58
*** watersj has joined #openstack-ansible21:04
watersjwhat network switches are you using and how do you manage the various VLANs (and/or vxlans) you create?21:13
*** luksky has quit IRC21:14
admin0watersj, what i do is tag a block of vlans in the switch .. like 3000-3025 etc .. and that would be enough for all current/future needs21:17
*** jamesdenton has quit IRC21:18
admin0mgmt, vxlan, storage, replication = 4 already ..21:18
admin0and some for the external networks21:18
watersjadmin0, trunk access port for your infra and nova nodes21:18
*** jamesdenton has joined #openstack-ansible21:18
watersjmakes sense, what about ironic provisioning21:19
admin0not using ironic (yet)21:22
admin0would love a good howto on it21:22
admin0some switches allow tagged vlans with ports on some default vlan21:22
admin0so ssh goes on the untagged on, rest all are tagged21:22
watersjwe are going to have multiple customers that will have dynamically allocated baremetal. Can't have all ports tagged for those nodes, but nova nodes and infra sure21:25
*** luksky has joined #openstack-ansible21:33
*** watersj has quit IRC21:42
spatelironic required some special PXE vlan etc.. to make it work21:47
*** sshnaidm has quit IRC21:49
*** csmart has quit IRC22:02
*** odyssey4me has quit IRC22:02
*** johnsom has quit IRC22:02
*** jhesketh has quit IRC22:06
*** jhesketh has joined #openstack-ansible22:08
*** sshnaidm has joined #openstack-ansible22:08
*** mensis has quit IRC22:08
*** tobberydberg has quit IRC22:14
*** tobberydberg has joined #openstack-ansible22:19
*** pcaruana has quit IRC22:21
*** cshen has quit IRC22:28
*** johnsom has joined #openstack-ansible22:30
*** csmart has joined #openstack-ansible22:31
*** rfolco has quit IRC22:34
*** spatel has quit IRC22:40
*** rfolco has joined #openstack-ansible22:43
*** rfolco has quit IRC22:48
*** ThiagoCMC has joined #openstack-ansible23:09
*** ThiagoCMC has quit IRC23:34
*** ThiagoCMC has joined #openstack-ansible23:36
*** cshen has joined #openstack-ansible23:45
*** cshen has quit IRC23:49
*** luksky has quit IRC23:51
*** spatel has joined #openstack-ansible23:57

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!