Tuesday, 2019-07-16

*** enriquetaso has quit IRC00:15
*** lixiaoy1 has joined #openstack-cinder00:21
*** brinzhang_ has joined #openstack-cinder00:34
*** brinzhang has quit IRC00:37
*** TxGirlGeek has joined #openstack-cinder00:48
*** Liang__ has joined #openstack-cinder00:58
*** imacdonn has quit IRC01:15
*** zhengMa has joined #openstack-cinder01:16
*** imacdonn has joined #openstack-cinder01:16
*** spatel has joined #openstack-cinder01:27
<openstackgerrit> hjy proposed openstack/cinder master: Add MacroSAN cinder driver  https://review.opendev.org/612311  01:35
*** baojg has joined #openstack-cinder01:42
*** ruffian_sheep has joined #openstack-cinder01:49
*** tkajinam has quit IRC02:23
*** tkajinam has joined #openstack-cinder02:24
*** hemna has quit IRC02:30
*** whoami-rajat has joined #openstack-cinder02:42
<jungleboyj> rosmaita hemna_: Ugh, they changed the hotel policy.  I thought your hotel would be ok, but it isn't.  So I had to switch to the Staybridge Suites Raleigh Durham Airport  02:43
<jungleboyj> https://www.irccloud.com/pastebin/IbeKi1pF/  02:44
*** ruffian_sheep has quit IRC02:52
*** ruffian_sheep has joined #openstack-cinder03:01
*** logan- has quit IRC03:11
*** logan- has joined #openstack-cinder03:14
*** psachin has joined #openstack-cinder03:26
*** Kuirong has quit IRC03:31
*** Kuirong has joined #openstack-cinder03:33
*** ruffian_sheep has quit IRC03:35
*** TxGirlGeek has quit IRC03:59
*** udesale has joined #openstack-cinder04:09
*** pcaruana has joined #openstack-cinder05:08
*** zhengMa has left #openstack-cinder05:27
*** spatel has quit IRC05:28
*** udesale has quit IRC05:39
*** udesale has joined #openstack-cinder05:40
*** udesale has quit IRC05:42
*** vishalmanchanda has joined #openstack-cinder05:55
*** udesale has joined #openstack-cinder05:57
*** jmlowe has quit IRC06:07
*** jmlowe has joined #openstack-cinder06:10
*** ruffian_sheep has joined #openstack-cinder06:13
*** ruffian_sheep34 has joined #openstack-cinder06:23
*** ruffian_sheep has quit IRC06:26
*** ruffian_sheep34 is now known as ruffian_sheep06:28
*** dpawlik has joined #openstack-cinder06:40
<ruffian_sheep> whoami-rajat: Hi, I have corrected the driver per the reviewer's suggestions.  06:42
<ruffian_sheep> https://review.opendev.org/#/c/612311/  06:42
<ruffian_sheep> whoami-rajat: But there are some errors in test_volume_boot_pattern of cinder-tempest-dsvm-lvm-lio-barbican  06:43
<ruffian_sheep> whoami-rajat: I remember this was a problem before?  06:44
<whoami-rajat> ruffian_sheep: Hi, yes, I'm seeing that in other patches too; for now it should be fixed by a recheck  06:44
<ruffian_sheep> whoami-rajat: Is the recheck command I sent correct?  06:45
*** nikeshm has joined #openstack-cinder  06:45
<whoami-rajat> ruffian_sheep: yes. Did you check the driver checklist sean mentioned in the review?  06:51
<whoami-rajat> ruffian_sheep: the macrosan CI doesn't seem to have run a single time on this patch  06:57
*** rcernin has quit IRC  07:00
<ruffian_sheep> whoami-rajat: Yes, I saw it. Many points have been resolved according to that document. Is there anything else I missed? I temporarily shut down the CI environment; currently I run the corresponding tests in the CI environment before submitting the driver. I can execute the run-* command to trigger it if needed  07:01
<whoami-rajat> ruffian_sheep: you need to add an entry here: https://github.com/openstack/cinder/blob/master/doc/source/reference/support-matrix.ini  07:04
<ruffian_sheep> Ok, does the corresponding rst file need to be modified?  07:06
<whoami-rajat> it's an .ini file; it lets users know which features your driver supports: https://docs.openstack.org/cinder/latest/reference/support-matrix.html  07:07
<whoami-rajat> ruffian_sheep: could you explain why there are so many sleep calls in the driver? Most of the methods are synchronized, so there won't be race conditions (if that is what the sleep is for)  07:10
*** Alon_KS has quit IRC07:15
*** tosky has joined #openstack-cinder07:21
<ruffian_sheep> whoami-rajat: Please wait a moment; I will confirm the reason for this part with the developers and give you an answer. Also, do I need to modify the support-matrix.rst file as well?  07:21
*** spatel has joined #openstack-cinder  07:24
<whoami-rajat> ruffian_sheep: I don't think so; the .ini file is included in the rst file, so there isn't any need for changes in the rst file.  07:24
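A support-matrix.ini entry pairs a driver section with a per-driver line under each feature section. A rough illustration — the section and option names below only indicate the file's shape and are not the actual MacroSAN entry:

```ini
; Hypothetical fragment of doc/source/reference/support-matrix.ini
[driver.macrosan]
title=MacroSAN Storage Driver (iSCSI)

; each feature section then gains one line per driver, e.g.:
[operation.supported]
driver.macrosan=complete
```

The rendered support matrix at docs.openstack.org is generated from this file, which is why the .ini entry (and not the .rst) needs the change.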
*** sahid has joined #openstack-cinder  07:25
<ruffian_sheep> whoami-rajat: Ok, the developers are in a meeting. I will reply to you after verifying the reason. ;)  07:25
<ruffian_sheep> ;)  07:26
<whoami-rajat> ruffian_sheep: sure  07:26
*** spatel has quit IRC  07:29
<ruffian_sheep> whoami-rajat: Just confirmed: it seems there was a bug here a long time ago, so this sleep was added to work around it. We are now considering deleting this part and making a new submission after verification.  07:37
*** helenafm has joined #openstack-cinder  07:41
*** udesale has quit IRC  07:46
*** udesale has joined #openstack-cinder  07:46
<whoami-rajat> ruffian_sheep: hmm, so you're planning to fix it in the current implementation?  07:52
<ruffian_sheep> whoami-rajat: yep, after testing  07:53
<whoami-rajat> ruffian_sheep: great.  07:53
<ruffian_sheep> whoami-rajat: Can I ask something extra? I am trying to use --subunit to generate HTML result files instead of generating them via py files, with a command like: tox -e all -- '^(?=.*volume)(?!.*test_volume_boot_pattern).*' --concurrency=1 --subunit. Where will this HTML file be generated?  07:54
<ruffian_sheep> I don't see the specific usage in the link, so I wanted to ask here.  07:57
<whoami-rajat> ruffian_sheep: I'm not very familiar with the tempest code; you can ask the same in #openstack-qa. AFAIK tempest uses stestr internally to run its tests, so you can search for it here: https://stestr.readthedocs.io/en/latest/MANUAL.html  08:09
<ruffian_sheep> ok  08:10
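The filter string in that tox command is a Python regular expression that stestr matches against full test names: the positive lookahead keeps tests containing "volume" and the negative lookahead excludes the flaky test_volume_boot_pattern. A small sketch of how it selects tests (the test names below are illustrative, not an actual tempest listing):

```python
import re

# Same filter as in the tox command above: require "volume" somewhere in
# the test name, but exclude anything under test_volume_boot_pattern.
pattern = re.compile(r'^(?=.*volume)(?!.*test_volume_boot_pattern).*')

tests = [
    "tempest.api.volume.test_volumes_get.VolumesGetTest.test_create_delete",
    "tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_basic",
    "tempest.api.compute.servers.test_servers.ServersTest.test_update",
]

# Only the first name survives: it mentions "volume" and is not part of
# test_volume_boot_pattern; the compute test lacks "volume" entirely.
selected = [t for t in tests if pattern.match(t)]
```

The --subunit flag only changes the output stream format; turning that stream into HTML is a separate post-processing step, which is what the stestr manual linked above covers.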
*** tkajinam has quit IRC08:25
<dpawlik> Hello everyone, small question: can I create a volume and specify the desired backend cluster? E.g. I want a volume created on a specific ceph cluster, but I don't want to create a new cinder type. Can I do that using the --hint param?  08:44
<dpawlik> cc jungleboyj  08:44
*** ruffian_sheep has quit IRC  08:46
*** lixiaoy1 has quit IRC  08:53
*** ruffian_sheep has joined #openstack-cinder  08:54
<whoami-rajat> dpawlik: if you have existing volume(s) in that backend, you can use the scheduler hints (--hint) option to create a volume in that backend cluster: https://docs.openstack.org/cinder/latest/cli/cli-cinder-scheduling.html  08:54
<dpawlik> whoami-rajat: thanks for the reply.  09:06
<dpawlik> I'm trying a query like this:  09:06
<dpawlik> cinder create 10 --name DP-test-ceph --hint os-vol-host-attr:host=rbd:volumes@ceph_cluster_1  09:06
<dpawlik> but it doesn't work  09:06
<dpawlik> So I need to create one volume on the cluster I want  09:07
<dpawlik> and then use e.g. openstack volume create --hint same_host=ceph_1_cluster --size 10 my-base-volume  09:09
<dpawlik> hmm  09:09
<dpawlik> it's partially a solution, but I was hoping there would be some direct way to spawn on a specified cluster  09:10
<dpawlik> thanks whoami-rajat  09:10
<dpawlik> btw, whoami-rajat, is that available in cinderv2 or does it require cinderv3?  09:14
<whoami-rajat> dpawlik: --hint doesn't provide the flexibility to use any attribute; it currently supports only the same_host and different_host hints, AFAIK.  09:19
<whoami-rajat> dpawlik: it was implemented in v2, so it should work with v2  09:20
*** Liang__ has quit IRC  09:20
<dpawlik> k, thx  09:30
<whoami-rajat> dpawlik: np  09:34
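Per the scheduling doc whoami-rajat links, the same_host hint takes the UUID of an existing volume on the desired backend, not a backend or cluster name (which is why passing ceph_1_cluster directly does not work). An illustrative, untested session with a made-up UUID:

```
# 1. Find a volume already living on the desired ceph cluster:
#      openstack volume list
#    suppose it returns 5f8d0000-0000-0000-0000-0000000000e1
#    hosted on rbd:volumes@ceph_cluster_1

# 2. Schedule the new volume onto the same backend:
openstack volume create --size 10 \
    --hint same_host=5f8d0000-0000-0000-0000-0000000000e1 \
    DP-test-ceph
```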
<openstackgerrit> Xuan Yandong proposed openstack/cinderlib master: Bump openstackdocstheme to 1.20.0  https://review.opendev.org/670990  09:35
*** boxiang has joined #openstack-cinder09:41
<whoami-rajat> geguileo: Hi  09:52
<geguileo> whoami-rajat: hi!  09:52
<whoami-rajat> geguileo: I was implementing a cinder blueprint to make volume types mandatory, and the cinder gate was failing due to cinderlib tests; we discussed this, as I remember, right?  09:54
<geguileo> whoami-rajat: I remember  09:56
<whoami-rajat> geguileo: the intention of my patch was to make the (12 failing) tests use volume types. Do we still need coverage of the no-volume-type cases, since they oppose what I'm trying to do in cinder?  09:57
<geguileo> whoami-rajat: If Cinder is going to enforce having volume types, then it's not enough to change the tests  09:58
<geguileo> whoami-rajat: we have to change cinderlib as well to always create a volume type  09:58
<geguileo> whoami-rajat: or to have a default volume type for all volumes that don't have one  09:58
<geguileo> so it should be a change in the library itself, not the tests  09:58
<geguileo> btw you can do Depends-On between Cinder and cinderlib  09:59
<whoami-rajat> geguileo: yes, I was confused about that; I'm really not familiar with how things work for cinderlib. So I don't need to propose a new spec to change the cinderlib code, right?  10:00
<geguileo> whoami-rajat: no, there's no need  10:00
<geguileo> if there's one in Cinder you should reference it  10:00
<geguileo> since that's the one driving the change in cinderlib  10:01
<whoami-rajat> geguileo: ok, thanks.  10:03
<whoami-rajat> IIUC cinderlib uses the cinder.conf file to get the backend conf and then calls the respective driver's do_setup from cinder to configure the backend; the backend object in cinderlib contains the volume_type info  10:03
<geguileo> whoami-rajat: cinderlib doesn't use cinder.conf, but cinderlib's functional tests do (if we don't set the config via another method)  10:04
<geguileo> whoami-rajat: the backend doesn't contain a volume_type  10:06
<geguileo> volume_type is something volumes have, not backends (at least right now)  10:06
<geguileo> jungleboyj: I believe I will attend, though it's not settled yet  10:07
<whoami-rajat> geguileo: ok, I will make the appropriate changes in the create volume flow. Thanks for the help.  10:16
<geguileo> whoami-rajat: thank you for working on this  10:16
<whoami-rajat> geguileo: np :)  10:17
*** tosky__ has joined #openstack-cinder10:17
*** tosky has quit IRC10:17
<geguileo> whoami-rajat: does the code you have for cinder automatically create a DB entry for the default volume type?  10:17
*** tosky__ is now known as tosky  10:18
<whoami-rajat> geguileo: yes  10:18
<whoami-rajat> geguileo: https://review.opendev.org/#/c/639180/23/cinder/db/sqlalchemy/migrate_repo/versions/132_create_default_volume_type.py  10:19
<geguileo> whoami-rajat: OK, then you'll need to use it in the create volume flow like you say, but also create it in memory for the other, non-DBMS persistence plugins in cinderlib  10:20
*** jojoda has joined #openstack-cinder10:20
*** udesale has quit IRC10:29
<whoami-rajat> geguileo: ok, I need to understand cinderlib first. :) thanks.  10:33
*** jojoda has quit IRC10:48
<geguileo> whoami-rajat: oh, and you'll have to change the delete_volume method in cinderlib/persistence/dbms.py so it doesn't delete the volume_type if it's the default  10:49
*** jojoda has joined #openstack-cinder10:58
*** brinzhang has joined #openstack-cinder10:59
*** davidsha has joined #openstack-cinder11:01
<geguileo> whoami-rajat: I have created a bug for the patch you are working on: https://bugs.launchpad.net/cinderlib/+bug/1836724  11:02
<openstack> Launchpad bug 1836724 in cinderlib "Cannot snapshot volumes with volume types" [Undecided,New]  11:02
<whoami-rajat> geguileo: oh, that would have been a bummer. But why does the delete_volume call delete the associated volume type?  11:02
<geguileo> whoami-rajat: because in cinderlib we don't have soft-deletes  11:03
*** brinzhang_ has quit IRC11:03
<geguileo> all deletes are hard-deletes, as we won't have cron jobs to remove soft-deleted rows  11:03
<geguileo> that way the library self-cleans when you delete volumes  11:03
<geguileo> well, volumes, snapshots, etc.  11:04
<whoami-rajat> geguileo: ok, understood.  11:05
<whoami-rajat> geguileo: if I create 5 volumes with the same type and delete one volume, does it clear the volume type entry from the db?  11:05
<whoami-rajat> geguileo: what would happen to the 4 remaining volumes associated with that type?  11:05
<geguileo> whoami-rajat: yes: https://opendev.org/openstack/cinderlib/src/branch/master/cinderlib/persistence/dbms.py#L298-L316  11:06
<geguileo> the other 4 remaining volumes would most likely have issues  11:07
<geguileo> or the delete would fail if we are enforcing integrity on the DB  11:07
<whoami-rajat> geguileo: thanks for the bug; I think I need to split up my patch: 1) for the bug, 2) for adding the volume type flow  11:08
<geguileo> whoami-rajat: that would be great  11:08
<whoami-rajat> geguileo: isn't this a big issue, or is cinderlib designed for particular use cases?  11:08
<geguileo> whoami-rajat: what's a big issue? the bug you found?  11:09
<geguileo> the bug you found IS A VERY BIG ISSUE  11:09
<geguileo> because cinderlib cannot properly work with QoS or extra specs as it is :-(  11:09
<whoami-rajat> geguileo: no, the remaining volumes having issues when the volume type is gone  11:09
<geguileo> whoami-rajat: not for cinderlib, because cinderlib doesn't reuse volume types  11:10
<geguileo> each volume has its own volume type  11:10
<geguileo> and that's probably the easiest solution for the change you want to make  11:10
<geguileo> if there is no volume type, create a new one with empty stuff  11:10
<geguileo> it's not the most efficient, but it's the easiest to code without bugs  11:11
<whoami-rajat> geguileo: oh, so cinderlib makes a new volume type entry for each volume created with a volume type, hmm  11:13
<geguileo> whoami-rajat: yes, because it doesn't expose the concept of volume types to users  11:14
<geguileo> users define the extra specs and the qos specs, and it's cinderlib's job to figure out what to do with them  11:14
*** sahid has quit IRC11:14
<whoami-rajat> geguileo: also, is the cinderlib code associated in any way with the default_volume_type config option in cinder.conf?  11:14
<geguileo> in this case it creates a VolumeType OVO, and if you are using the DBMS persistence plugin, then it creates the appropriate DB entries  11:14
<geguileo> whoami-rajat: no, it doesn't use it  11:15
<geguileo> whoami-rajat: because in Cinder that volume type (until now) had to be manually created by the installation tool  11:15
<whoami-rajat> geguileo: that is a great way to take 'what is important' as input from the user.  11:16
<whoami-rajat> geguileo: so my current implementation follows: use the volume type the user provides > else the default volume type from cinder.conf > else the __DEFAULT__ type present in the db  11:18
<geguileo> whoami-rajat: it's not as efficient as Cinder, since we'll have a lot more data in the DB, but it seemed like the best solution at the time  11:18
<whoami-rajat> geguileo: I think I just need to handle the last part in cinderlib  11:19
<geguileo> whoami-rajat: for cinderlib I would just create the volume type in the DBMS persistence plugin whenever you are saving the volume  11:20
<geguileo> instead of having an 'if'  11:20
<geguileo> that would solve the problem afaik  11:20
<whoami-rajat> geguileo: do we need to create a volume type for every volume created with no type? Can't we reuse the __DEFAULT__ type already present in the db?  11:23
<geguileo> whoami-rajat: you can  11:23
<geguileo> whoami-rajat: then you have to load its ID in https://opendev.org/openstack/cinderlib/src/branch/master/cinderlib/persistence/dbms.py#L48  11:23
<geguileo> and use it in an else here: https://opendev.org/openstack/cinderlib/src/branch/master/cinderlib/persistence/dbms.py#L214  11:24
<geguileo> and update the if by adding ` and volume.volume_type_id != self.DEFAULT_TYPE_ID` here: https://opendev.org/openstack/cinderlib/src/branch/master/cinderlib/persistence/dbms.py#L298  11:25
<geguileo> That should also work and be more efficient  11:25
<geguileo> whoami-rajat: that actually seems like the best approach  11:26
<geguileo> as long as the Cinder driver code does not start assuming it always has a volume type  11:27
<geguileo> because in that case you need to make more parts of cinderlib aware of the 'default type' concept  11:27
<whoami-rajat> geguileo: oh, seems like you fixed it already, hah  11:28
<geguileo> XD  11:28
<whoami-rajat> geguileo: thanks for the walkthrough of the cinderlib code (that would have taken me hours to find); I'll try your approach and do a bit of testing to be sure.  11:28
<geguileo> whoami-rajat: no problem  11:29
<geguileo> whoami-rajat: yeah, getting to understand cinderlib's code is non-trivial  11:29
<geguileo> whoami-rajat: you can set Depends-On in your Cinder patch against the patch you submit to cinderlib to confirm that it actually works with the new code ;-)  11:30
*** tesseract has joined #openstack-cinder11:30
<whoami-rajat> geguileo: yes, I started the journey into cinderlib to fix my cinder change, but it seems I will end up making important changes, so I want to make sure cinderlib is also in a healthy state; hence the testing :)  11:32
<geguileo> whoami-rajat: sounds good to me :-)  11:33
<whoami-rajat> geguileo: again, thanks for all the insights. I will start some coding now.  11:34
<geguileo> whoami-rajat: np, and good luck :-)  11:34
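The approach geguileo describes — reuse the shared __DEFAULT__ type on save, and exempt it from the hard-delete that normally removes each volume's private type — can be sketched as two small helpers. This is a hypothetical illustration of the logic, not the real cinderlib dbms.py code; the names and UUID are made up:

```python
# Hypothetical sketch of the default-type handling discussed above for
# cinderlib's DBMS persistence plugin. In cinderlib every volume normally
# gets its own volume type, which is hard-deleted with the volume, so the
# shared default type needs special-casing in both paths.
DEFAULT_TYPE_ID = "00000000-0000-0000-0000-000000000001"  # loaded once at init


def volume_type_id_for(requested_type_id):
    """On save: fall back to the shared default type when none is given."""
    return requested_type_id if requested_type_id else DEFAULT_TYPE_ID


def should_delete_volume_type(volume_type_id):
    """On delete: hard-delete per-volume types, but never the default."""
    return volume_type_id is not None and volume_type_id != DEFAULT_TYPE_ID
```

The second helper mirrors the extra `and volume.volume_type_id != self.DEFAULT_TYPE_ID` condition geguileo suggests adding to the delete path.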
*** lpetrut has joined #openstack-cinder11:57
*** boxiang has quit IRC12:17
*** lpetrut has quit IRC12:28
*** ruffian_sheep has quit IRC12:30
*** lpetrut has joined #openstack-cinder12:32
*** Luzi has joined #openstack-cinder12:32
*** raghavendrat has joined #openstack-cinder12:35
*** mchlumsky has joined #openstack-cinder12:48
*** udesale has joined #openstack-cinder12:52
*** helenafm has quit IRC12:52
*** tesseract has quit IRC12:53
*** helenafm has joined #openstack-cinder12:53
*** tesseract has joined #openstack-cinder12:55
*** lpetrut has quit IRC12:57
*** mchlumsky has quit IRC12:58
*** raghavendrat has quit IRC12:58
*** mchlumsky has joined #openstack-cinder12:59
*** laurent\ has quit IRC12:59
<jungleboyj> geguileo: Cool.  Please let me know when you know for sure.  13:08
<jungleboyj> Hope to see you there.  13:08
<geguileo> jungleboyj: will do :-)  13:08
*** laurent\ has joined #openstack-cinder13:17
*** eharney has quit IRC13:22
*** hemna has joined #openstack-cinder13:27
*** brinzhang has quit IRC13:28
*** vishalmanchanda has quit IRC13:32
*** helenafm has quit IRC13:34
*** ganso has quit IRC13:34
*** helenafm has joined #openstack-cinder13:36
*** jrubenst has joined #openstack-cinder13:37
*** Kuirong has quit IRC13:38
*** trident has quit IRC13:38
*** trident has joined #openstack-cinder13:39
*** enriquetaso has joined #openstack-cinder13:43
*** Luzi has quit IRC13:43
*** ganso has joined #openstack-cinder13:44
*** carloss has joined #openstack-cinder13:45
*** TxGirlGeek has joined #openstack-cinder13:48
*** psachin has quit IRC13:52
*** TxGirlGeek has quit IRC14:03
*** TxGirlGeek has joined #openstack-cinder14:04
*** TxGirlGeek has quit IRC14:08
*** TxGirlGeek has joined #openstack-cinder14:10
*** TxGirlGeek has quit IRC14:10
*** eharney has joined #openstack-cinder14:23
*** dpawlik has quit IRC14:36
*** nikeshm has quit IRC14:52
<hemna> geguileo: ping  15:05
<hemna> geguileo: is there a way to not change the iscsi scan mode to manual and leave it as auto?  15:06
<hemna> re: https://review.opendev.org/#/c/455394/  15:06
<hemna> have customers that are losing their iscsi volumes on reboot  15:06
<geguileo> hemna: isn't Nova making calls to os-brick to ensure that the attachments are there?  15:07
<hemna> so they have a volume attached to the compute host for local storage too  15:07
<hemna> it's a non-openstack volume from the same SAN backend  15:07
<geguileo> hemna: oooooh, I see  15:08
<hemna> so when an openstack instance is stood up on that host, the mode is changed to manual  15:08
<geguileo> so OpenStack works, but it's messing with their other volumes  15:08
<hemna> then reboot = ouch  15:08
<hemna> yup  15:08
<geguileo> there's a simple hack they can do XD  15:08
<hemna> yeah, I think that was one of the concerns I had with a previous attempt at changing it to manual  15:09
<hemna> a few years ago  15:09
<geguileo> hemna: there was no manual mode a few years ago; I added the feature to Open-iSCSI myself... XD  15:09
<hemna> I had seen a patch trying to force it a while ago  15:09
<geguileo> this is not the manual mode of not connecting to the SAN; it's not doing scans  15:09
<hemna> so how can we disable it for this case?  15:10
<hemna> so it doesn't get changed back again  15:10
<hemna> or is there a workaround for this at all?  15:10
<geguileo> I wouldn't disable it, since then they may have leftovers from OpenStack  15:10
<geguileo> they can just go ahead and add the target with a hostname instead of the IP  15:10
<geguileo> I believe that will work  15:11
<hemna> ?  15:11
<geguileo> assuming the backend is not using discovery  15:11
<hemna> I think most backends use discovery to find the targets  15:11
<geguileo> Most don't  15:11
<geguileo> because it's a terrible idea  15:11
<geguileo> at least last time I checked  15:12
<geguileo> it used to introduce a single point of failure  15:12
<hemna> I think we need a setting to allow someone to configure the compute host to not force the scan mode to manual in this case  15:12
<hemna> and they can live with the unexpected vols  15:12
<geguileo> though maybe we changed it recently  15:12
<hemna> as their compute host is unusable after reboot  15:12
<geguileo> for your customer: they can add the IPs of the SAN to /etc/hosts  15:12
<geguileo> add the target using the given hostnames  15:13
<geguileo> and then os-brick won't touch it  15:13
<geguileo> afaik  15:13
<hemna> add the target using the given hostnames?  15:13
<hemna> I'm not following that  15:13
<geguileo> the volumes they are using outside of openstack are added to the initiator using the IP of the SAN as the target, right?  15:14
<hemna> I'm not sure  15:14
<hemna> most likely  15:14
<hemna> it's a dell unity driver  15:15
<geguileo> so, if they add names in /etc/hosts for those IPs, such as sanip1 and sanip2  15:15
<geguileo> then they add the target portal to the iSCSI initiator using the sanip1 and sanip2 names instead of the actual IPs  15:16
<geguileo> that way os-brick should not reuse those sessions, and will create new ones using the IPs  15:16
<hemna> ok, I'll give that feedback to the customer and see if they can do that  15:18
<geguileo> I hope it works for them  15:19
<hemna> me too :)  15:19
<hemna> I still think we might want an option to not set that to manual  15:25
<hemna> as others will run into the same exact problem  15:25
<hemna> the default was that we never touched that prior to that patch  15:26
<hemna> now it's defaulted to manual always  15:26
<geguileo> no, only to manual if the iSCSI initiator supports it  15:27
<hemna> right, so in this case the customer doesn't want that ever set  15:28
<hemna> due to this problem  15:28
<geguileo> it was not that the default was not to touch it  15:28
<geguileo> it's that the feature didn't exist  15:28
<geguileo> and it was the automatic scans that were introducing unwanted devices in the system and messing things up  15:28
<hemna> either way, it was always auto prior to the feature being added, right?  15:28
<geguileo> yes  15:29
<hemna> ok, so the default was always auto  15:29
<geguileo> and it was a mess for all customers  15:29
<hemna> and after that os-brick patch (and the open-iscsi feature) we changed it to manual  15:29
<hemna> which is now causing the issue  15:29
<hemna> which causes host volumes to vanish upon reboot  15:30
<geguileo> yeah, but the issue happens in what, 1% of the customers?  15:30
<geguileo> whereas leaving automatic scans on messes up 100% of the customers  15:31
<hemna> I wouldn't say 1%  15:32
*** e0ne has joined #openstack-cinder  15:32
<hemna> I can see a lot of folks using volumes on their compute hosts for host storage  15:33
<hemna> outside of openstack  15:33
<geguileo> I haven't seen any so far, but I have seen almost hundreds of cases of iSCSI leftover devices  15:34
<geguileo> I'll see if I can think of a solution that automatically detects this situation and creates a different session in os-brick, leaving the other one as it is  15:34
<hemna> so, if we aren't going to allow a conf setting for this, then we need to document the workaround  15:38
<hemna> I can see this being a common enough issue that folks will run into it  15:38
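The /etc/hosts workaround geguileo describes would look roughly like the fragment below. All IPs, hostnames, and the IQN are made up for illustration, and the commands are untested — the idea is just that the non-OpenStack node records are keyed by hostname-based portals, so they stay distinct from the IP-based portals os-brick creates and switches to manual scanning:

```
# /etc/hosts on the compute host: give the SAN portals stable names
10.0.0.10   sanip1
10.0.0.11   sanip2

# Re-add the non-OpenStack target records via those hostnames:
iscsiadm -m node -T iqn.2000-01.com.example:target0 -p sanip1:3260 -o new
iscsiadm -m node -T iqn.2000-01.com.example:target0 -p sanip1:3260 --login

# keep automatic startup for this record only, so it survives reboot
iscsiadm -m node -T iqn.2000-01.com.example:target0 -p sanip1:3260 \
    -o update -n node.startup -v automatic
```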
*** eharney has quit IRC15:46
*** eharney has joined #openstack-cinder15:46
*** helenafm has quit IRC15:52
<geguileo> hemna: yeah, if os-brick cannot be smart about it and create a different session (which I don't know if we can do yet), then we definitely need to document this  15:59
*** vishalmanchanda has joined #openstack-cinder16:06
*** hemna has quit IRC16:22
*** mmethot has quit IRC16:25
<jungleboyj> pots: You around?  16:26
*** tesseract has quit IRC16:33
*** TxGirlGeek has joined #openstack-cinder16:40
*** atahardjebbar has joined #openstack-cinder16:43
*** enriquetaso has quit IRC16:43
*** enriquetaso has joined #openstack-cinder16:44
*** hemna has joined #openstack-cinder16:44
*** mmethot has joined #openstack-cinder16:47
*** hemna has quit IRC16:49
*** sapd1_x has joined #openstack-cinder16:50
<geguileo> hemna_: the solution I proposed won't work, sorry.  I'll keep thinking about how to resolve the issue (besides going back to an earlier iscsid version)  16:58
*** sapd1_x has quit IRC17:04
*** ag-47 has joined #openstack-cinder17:06
*** ganso has quit IRC17:06
*** ganso has joined #openstack-cinder17:08
*** udesale has quit IRC17:09
*** davidsha has quit IRC17:11
*** atahardjebbar has quit IRC17:13
*** hemna has joined #openstack-cinder17:14
*** TxGirlGeek has quit IRC17:17
*** hemna has quit IRC17:18
*** e0ne has quit IRC17:20
*** chhagarw has joined #openstack-cinder17:24
*** hemna has joined #openstack-cinder17:34
*** hemna has quit IRC17:38
*** ag-47 has quit IRC17:46
*** TxGirlGeek has joined #openstack-cinder17:53
*** hemna has joined #openstack-cinder17:54
*** hemna has quit IRC17:59
*** tosky has quit IRC18:01
*** eharney has quit IRC18:02
*** eharney has joined #openstack-cinder18:06
*** dpawlik has joined #openstack-cinder18:10
*** chhagarw has quit IRC18:20
*** hemna has joined #openstack-cinder18:37
*** eharney has quit IRC19:08
<pots> jungleboyj: Hi Jay  19:19
<jungleboyj> pots: Hey buddy.  Just saw your e-mail.  Thanks for the quick response.  It makes sense to me.  19:19
<jungleboyj> Fingers crossed that that is the issue.  19:20
<pots> I'm pretty sure it is; I am setting up a system to verify.  19:20
<pots> While I've got you, can you take a look at https://review.opendev.org/#/c/639870/ ?  19:21
<jungleboyj> Ok.  Cool.  Thank you for continuing to support us!  19:21
<jungleboyj> pots: Looking now.  19:21
<pots> Also, I want to re-enable the driver & CI for the dothill driver; do I need a blueprint for that?  19:21
* jungleboyj laughs  19:22
<jungleboyj> Really?  19:22
<pots> Just to make things complicated, I'm renaming the underlying driver from dothill to stx/Seagate  19:22
* jungleboyj feels like there is a story there.  19:22
<jungleboyj> How is that going to impact the Lenovo driver?  19:23
<pots> I am changing the class references in the Lenovo and other drivers that inherit from the dothill driver in the same patch.  19:24
<pots> I can imagine various alternative approaches, like introducing a separate driver first and then changing the Lenovo driver in a later patch, but I have this deadline coming up... :)  19:25
<jungleboyj> Ok, so I just approved your other patch.  Sorry for the bikeshedding there.  Those were both mistakes I made, so I merged it.  19:25
<jungleboyj> For some reason I always thought it was MMSA.  Sorry about that.  19:26
<pots> awesome, thanks.  19:26
<jungleboyj> So, what is your deadline for the dothill driver?  19:26
<pots> I think we have a July 22-ish deadline for new drivers / features and Python 3.7 compliance, right?  19:26
<jungleboyj> Yes.  19:27
<jungleboyj> Ok, so you are shooting to meet that deadline.  19:27
<jungleboyj> Do you have the patch up for the dothill update?  19:28
<pots> oh yeah, and what a headache that Python 3.7 thing is.  I thought it would be as easy as upgrading my CI to bionic, where Python 3.7 is available, but the bionic kernels crash with a nested virtualization issue.  19:28
<jungleboyj> :-(  Ugh.  I am sorry.  19:28
<pots> not yet; I was hoping to get it tested with py37 first, but I'll post it as a WIP so you can see it.  19:28
<jungleboyj> You will add a job to your 3rd-party CI for it?  19:29
<pots> I'm curious whether you are keeping a tally of successful py37 CIs so we can maybe learn from each other.  19:29
<pots> yes, there'll be a "seagate-ci" user posting comments like the other two.  19:29
<jungleboyj> Not yet, I haven't had a chance to go look at results.  19:30
<pots> how are you evaluating compliance? Are you just grepping logs for version numbers or something?  19:30
<jungleboyj> That was my plan at the moment.  19:31
<pots> ok.  I may have 1-2 bug fixes, but I assume those can go in shortly after the deadline?  19:33
<jungleboyj> Yeah.  19:33
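Renaming a shared base driver while updating the drivers that inherit from it, as pots describes for dothill → stx/Seagate, can be sketched like this. The class names below are purely illustrative, not the actual cinder driver classes:

```python
# Hypothetical sketch: rename a base driver class and update subclasses
# in the same patch, optionally keeping the old name as an alias so any
# stray references keep working until they are cleaned up.
class STXBaseDriver(object):  # formerly the dothill common driver
    VENDOR = 'Seagate'

    def vendor_name(self):
        return self.VENDOR


# Inheriting drivers just switch their base class reference:
class LenovoDriver(STXBaseDriver):
    VENDOR = 'Lenovo'


# Backward-compatible alias for the old class name:
DotHillDriver = STXBaseDriver
```

Doing the rename and the subclass updates in one patch keeps the tree consistent at every commit, which matters with a merge deadline like the July 22 one mentioned above.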
*** eharney has joined #openstack-cinder20:06
*** dasp has quit IRC20:45
*** dasp has joined #openstack-cinder20:47
*** pcaruana has quit IRC21:01
*** dpawlik has quit IRC21:15
*** jrubenst has quit IRC21:24
*** harlowja has joined #openstack-cinder21:57
<harlowja> an out-of-the-blue question, not really openstack-related, I know, but do any redhat folks here know any of the people in https://github.com/storaged-project ? (for a non-openstack question)  21:57
*** whoami-rajat has quit IRC22:31
*** rcernin has joined #openstack-cinder22:36
*** tkajinam has joined #openstack-cinder22:53
*** vishalmanchanda has quit IRC23:05
<openstackgerrit> Eric Harney proposed openstack/cinder master: Compress images uploaded to Glance  https://review.opendev.org/668943  23:09
*** carloss has quit IRC23:17
<openstackgerrit> Merged openstack/cinder master: Update support matrix entries for MSA and Lenovo arrays.  https://review.opendev.org/639870  23:22
*** hemna has quit IRC23:24
*** brinzhang has joined #openstack-cinder23:49

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!