Wednesday, 2011-05-18

00:00 <termie> comstud: it's pretty old code
00:01 <comstud> now i'm seeing a.. supposedly unrelated error in tests
00:01 <comstud> getting a NoMoreAddresses for CloudTestCase.test_associate_disassociate_address
00:01 <comstud> but
00:02 <termie> that happens if you use the real queue, usually
00:02 <comstud> only when I run the tests the whole way through
00:02 <comstud> when i run the test individually, it's fine
00:02 <comstud> but if i go back to trunk, the tests run the whole way through
00:03 <termie> comstud: if you switch nova.tests.fake_flags to have fake_rabbit=False they don't
00:03 <comstud> it's there
00:03 <termie> fake_rabbit is True in trunk
00:03 <termie> you need to flip the flag to test against a real rabbitmq
00:03 <comstud> er
00:04 <comstud> ok
00:05 <comstud> i have fake_rabbit = True in both places here
00:05 <comstud> 1 fails, 1 does not
00:05 <termie> right
00:05 <comstud> changing to false
00:05 <termie> but i am telling you that in trunk it fails with False, so presumably if you made it act more like the real rabbit it might also fail
00:05 <comstud> aha.
00:06 <comstud> that is what i've done, essentially
00:06 <comstud> fakerabbit's Backend.consume() is not an iterator like real rabbit
00:06 <comstud> fakerabbit tries to pull out of a queue, call callback, and then raises StopIteration
00:06 <comstud> which is completely wrong
00:07 <termie> i would argue slightly wrong since an entire project has been able to successfully use it for a year ;)
00:07 <comstud> haha
00:07 <comstud> sure :)
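
What comstud is describing: the real carrot backend's consume() behaves like a generator that keeps yielding messages, while the fake one delivers at most one message and then raises StopIteration, so any loop driving it terminates immediately. A minimal standalone sketch of that difference; FakeBackend and its method names are hypothetical, not nova's actual fakerabbit module:

# Hypothetical FakeBackend, not nova's actual fakerabbit module: the point is
# the contrast between a one-shot consume() that raises StopIteration and a
# generator-style consume() that keeps yielding like a real broker client.
import queue


class FakeBackend(object):
    def __init__(self):
        self._queue = queue.Queue()

    def publish(self, message):
        self._queue.put(message)

    def consume_one_shot(self, callback):
        # Roughly the behaviour being complained about: deliver at most one
        # message, then stop the caller's iteration entirely.
        if not self._queue.empty():
            callback(self._queue.get())
        raise StopIteration

    def consume_forever(self, callback, limit=None):
        # Closer to the real backend: a generator that keeps yielding, so an
        # iterconsume()-style loop does not terminate after one message.
        delivered = 0
        while limit is None or delivered < limit:
            message = self._queue.get()  # blocks until something is published
            callback(message)
            delivered += 1
            yield message
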
00:08 <comstud> now a different test just hangs with real rabbit
00:08 <comstud> hrmph
00:08 <termie> i don't get hangs with the real rabbit, only a couple failing tests
00:08 <termie> (in trunk)
00:08 <comstud> this is related to my changes with the Topic consumers
00:09 <comstud> instead of calling fetch() every 0.1s
00:09 <comstud> i'm just calling iterconsume() which iterates forever with limit=None
00:10 <comstud> using that with 3 consumers at once
00:10 <comstud> obviously, that was never used before
00:10 <comstud> anywhere in the code
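
The change described above swaps a 0.1-second fetch() polling loop for a long-lived iterconsume() generator, one greenthread per consumer. A hedged sketch of the two styles, assuming a carrot-like consumer object; the actual nova wrapper classes and flags are not reproduced here:

# `consumer` stands in for a carrot/kombu-style consumer object; the exact
# nova wrapper classes, flags, and signatures are assumptions.
import eventlet


def poll_style(consumer, interval=0.1):
    # Old approach: wake up every `interval` seconds and drain whatever is queued.
    while True:
        consumer.fetch(enable_callbacks=True)
        eventlet.sleep(interval)


def iterconsume_style(consumer):
    # New approach: block inside the broker client's own generator, which with
    # limit=None iterates forever and fires callbacks as messages arrive.
    for _message in consumer.iterconsume(limit=None):
        pass


# e.g. one greenthread per consumer, for the three topic consumers mentioned:
# for consumer in consumers:
#     eventlet.spawn_n(iterconsume_style, consumer)
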
00:11 <comstud> well
00:11 <termie> which test is hanging?
00:11 <comstud> test_ajax_console
00:11 <termie> all those tests are kind of funny
00:11 <comstud> i had it hang before, also, when i thought i'd fixed fakerabbit properly
00:11 <termie> whenever i mess with sync/async bits those ones are the ones that complain
00:12 <comstud> what's annoying is that i'm pretty positive the rpc.py code is correct
00:12 <comstud> hehe
00:12 <comstud> I need a test for the test.
00:13 <termie> probably worth looking at what those tests are doing
00:13 <comstud> most of this is completely foreign to me at this point
00:13 <termie> hehe, that just means you'll be an expert once you fix it
00:13 <comstud> going to push up the fakerabbit fixes here
00:13 <comstud> yeah, really.
00:15 <comstud> well, pushed what I think is the correct fakerabbit fixes into lp:~cbehrens/nova/rpc-improvements
00:15 <comstud> i'll have to spend more time on the tests tomorrow
00:16 <termie> i'll look it over and see whether anything catches my eye
00:16 <comstud> thanks
00:17 <comstud> taking a break for now
02:56 <vishy> comstud, termie: fixed the bug lp:~vishvananda/nova/rpc-improvements
02:56 <vishy> consumers weren't ever getting removed
02:58 <comstud> looking
02:59 <comstud> ahh, good call
03:11 <comstud> thanks for the assist.  like the changes
03:11 <comstud> have a test that's failing yet tho
03:12 <comstud> GreenletExit exception
03:12 <comstud> looks like you try to catch it tho
03:12 * comstud looks
03:15 <comstud> ah, not here
03:15 * comstud tries this
03:23 <comstud> fixes.  just need to catch it for csetthread.wait().. when service shuts down
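
The fix comstud lands on: when the service shuts down it kills the greenthread running the consumer loop, and the subsequent wait() on that thread surfaces GreenletExit, which just needs to be swallowed. A minimal sketch assuming an eventlet greenthread; the csetthread name comes from the chat, the surrounding service code is illustrative:

# csetthread is assumed to be the eventlet greenthread driving the consumer
# loop; the nova Service code around it is not shown here.
import eventlet
import greenlet


def start_consumer_loop(consume_forever):
    # Spawn the loop; keep the GreenThread so the service can stop it later.
    return eventlet.spawn(consume_forever)


def stop_consumer_loop(csetthread):
    csetthread.kill()          # raises GreenletExit inside the greenthread
    try:
        csetthread.wait()      # may re-raise GreenletExit during shutdown
    except greenlet.GreenletExit:
        pass                   # expected when stopping; nothing else to clean up
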
03:25 <comstud> http://paste.openstack.org/show/1366/
03:25 <comstud> this one looks interesting
03:27 <comstud> termie made reference to this, i think
03:43 <comstud> ah
03:43 <comstud> this report_interval stuff again, and tests relying on this
04:16 <comstud> time to learn mox i guess
05:01 <vishy> comstud: yeah that is just because we changed how service loads the consumers but not the test
05:02 <comstud> yeah
05:02 <comstud> i'm struggling to modify the tests 'the right way'
05:03 <comstud> i got something to work, but it's a hack
05:04 <comstud> ok, i think i *just* figured it out
05:11 <comstud> ok, well, side effect is fixing lp783720
05:11 <pyhole> LP Bug #783720 in OpenStack Compute (nova): "services don't listen for rpc calls when report_interval is 0" [Status: In Progress, Assignee: Chris Behrens] https://bugs.launchpad.net/bugs/783720
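
lp783720 is the bug being fixed as a side effect: if consumer setup is tied to the periodic state-reporting loop, a service started with report_interval=0 never listens for RPC calls. A schematic sketch of the fix pattern only, not nova's actual Service class; create_consumer/consume_in_thread are hypothetical stand-ins for however the topic consumer really gets started:

# Schematic only, not nova.service.Service.
import eventlet


class Service(object):
    def __init__(self, conn, topic, report_interval=0):
        self.conn = conn
        self.topic = topic
        self.report_interval = report_interval

    def start(self):
        # Listen on the RPC topic unconditionally...
        self.conn.create_consumer(self.topic, self).consume_in_thread()
        # ...and only gate the periodic state reporting on report_interval,
        # so report_interval=0 no longer means "never consume".
        if self.report_interval:
            eventlet.spawn_n(self._report_state_loop)

    def _report_state_loop(self):
        while True:
            self.report_state()
            eventlet.sleep(self.report_interval)

    def report_state(self):
        """Heartbeat to the DB in the real service; omitted here."""
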
05:13 <comstud> good, all tests pass now
05:15 <vishy> comstud: nice
05:16 <vishy> comstud: hopefully that gives us a big performance boost :)
05:16 <comstud> that's what I'm hoping
05:16 <comstud> unfortunately h48 was split up into 4 zones today and doesn't have the compute nodes set up
05:16 <comstud> so I can't test yet
05:16 <comstud> hopefully tomorrow
05:17 <comstud> ok, connection pool size is a flag now
05:21 <vishy> and now termie can finish up his multicall code
05:21 <vishy> and i can finish my no-db-messaging :)
05:21 *** blamar__ has quit IRC
05:25 <comstud> no idea what either of those are, but cool ;)
05:25 <comstud> linked this branch to 771512, also, and updated comments there
05:26 <comstud> lp771512
05:26 <pyhole> LP Bug #771512 in OpenStack Compute (nova): "Timeout from API with 50 Simultaneous Builds" [Status: In Progress, Assignee: Chris Behrens] https://bugs.launchpad.net/bugs/771512
05:26 <comstud> eventlet patch helps, but ultimately this is probably the real fix.
05:26 <comstud> hopefully.
05:29 <comstud> IMO, 'nova boot' still takes too long to get a response from the API... I need to investigate where the delay is
05:35 <comstud> vishy: thanks again for the help with the tests
05:35 <comstud> want to pound on this before merge prop.. hopefully tomorrow
05:35 <comstud> i'm outtie for now
05:35 * comstud & poof
05:37 <soren> comstud: Are you really gone?
05:38 <comstud> almost
05:38 <comstud> :)
05:38 <vishy> comstud: np.. see you tomorrow
05:38 <comstud> soren: almost
05:38 <soren> comstud: Just saw your comment above.
05:38 <soren> comstud: "eventlet patch helps, but ultimately this is probably the real fix."
05:38 <comstud> yeah
05:39 <soren> comstud: does that mean you don't need the patch applied?
05:39 <comstud> i don't know yet
05:39 <comstud> until I can stress test it
05:39 <comstud> unfortunately the environment I was using is currently down
05:39 <comstud> waiting on ant for that
05:40 <comstud> i suspect that it'll eliminate the requirement for the eventlet patch
05:40 <comstud> even though we agree the eventlet issue is certainly a bug
05:40 <soren> comstud: So your new patch reduces the time for each connection?
05:40 <soren> comstud: Oh, ok.
05:40 <comstud> lp:~cbehrens/nova/rpc-improvements
05:40 <comstud> implements connection pooling
05:40 <comstud> which will put a cap on # of connections to rabbit
05:41 <comstud> and will re-use connections
05:41 <soren> comstud: I see, ok.
05:41 <comstud> so all of the pounding of connect() should go away
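
A hedged sketch of the connection pooling described above: cap the number of RabbitMQ connections with a flag-controlled pool and re-use them instead of calling connect() per message. Connection, the flag name, and the default size are stand-ins rather than the branch's exact code; eventlet.pools.Pool is the documented way to build such a pool:

# Connection and rpc_conn_pool_size are assumptions, not nova's exact names.
import eventlet.pools

rpc_conn_pool_size = 30  # the "connection pool size is a flag now" knob


class ConnectionPool(eventlet.pools.Pool):
    def __init__(self, connection_cls, max_size):
        self.connection_cls = connection_cls
        super(ConnectionPool, self).__init__(max_size=max_size)

    def create(self):
        # Called by eventlet.pools.Pool when the pool is empty but under
        # max_size, so at most max_size connections to rabbit ever exist.
        return self.connection_cls()


# Usage sketch: check a pooled connection out, publish, and return it for
# re-use instead of reconnecting per message:
#
#   pool = ConnectionPool(Connection, rpc_conn_pool_size)
#   with pool.item() as conn:
#       conn.publish(topic, message)
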
05:41 <soren> comstud: I'll build eventlet packages with the patch nonetheless.
05:41 <comstud> alrighty
05:41 <soren> comstud: Thanks. Have a good night :)
05:42 <comstud> no problem :)  you too.
05:42 <comstud> or
05:42 <comstud> have a good day
05:42 <comstud> :)
05:42 * comstud & gone for reals
11:29 <BK_man> comstud:
15:37 <bcwaldon> How long should it take for Hudson to pick up a MP that is set to "Approved" and merge it in?
15:37 <jk0> usually only a few minutes
15:37 <jk0> but I've seen it take much longer
15:37 <bcwaldon> That's what I thought. I have one in glance that was approved an hour ago and hasn't been picked up yet
15:38 <bcwaldon> https://code.launchpad.net/~rackspace-titan/glance/api-results-filtering/+merge/61056
15:38 <jk0> ah, the glance one I'm not sure about
15:38 <jk0> but nova is usually pretty quick
15:38 <bcwaldon> I'd assume it is the same system, but I have no idea
15:39 <jk0> same hudson, but something could be backed up
15:40 *** zaitcev has joined #openstack-dev
15:42 <soren> http://jenkins.openstack.org/ seems to be acting up.
15:44 <soren> Should be on its way back
15:44 <soren> bcwaldon: ^ This is why your mp wasn't picked up.
15:44 <bcwaldon> gotcha
15:44 *** openstackjenkins has joined #openstack-dev
15:45 <soren> openstackjenkins: Welcome back.
15:45 <openstackjenkins> soren did you mean me? Unknown command 'Welcome'
15:45 <openstackjenkins> Use !jenkinshelp to get help!
15:48 <comstud> yawns
16:02 <openstackjenkins> Project swift build #262: SUCCESS in 27 sec: http://jenkins.openstack.org/job/swift/262/
16:02 <openstackjenkins> * Tarmac: Editing the install docs based on testing in the internal Swift training course this week.
16:02 <openstackjenkins> * Tarmac: Adding container stats collector, unit tests, and refactoring some of the stats code.  There will have to be changes to both the swift and rackswift conf files before this can be released.  Please DO NOT approve this branch for merge until glange's stats stuff is all ready to go.  gracias.
16:25 <bcwaldon> soren: weird build error on glance, can you check it out?
17:44 <openstackjenkins> Project nova build #912: SUCCESS in 2 min 45 sec: http://jenkins.openstack.org/job/nova/912/
17:44 <openstackjenkins> Tarmac: Added missing xenhost plugin. This was causing warnings to pop up in the compute logs during periodic_task runs. It must have not been bzr add'd when this code was merged.
18:24 <openstackjenkins> Project nova build #913: SUCCESS in 2 min 41 sec: http://jenkins.openstack.org/job/nova/913/
18:24 <openstackjenkins> Tarmac: Implements a basic mechanism for pushing notifications out to interested parties. The rationale for implementing notifications this way is that the responsibility for them shouldn't fall to Nova. As such, we simply will be pushing messages to a queue where another worker entirely can be written to push messages around to subscribers.
18:52 <comstud> vishy around?
18:53 <vishy> comstud: aye
18:53 <comstud> running into a fun flags problem
18:53 <comstud> we have FLAGS.host for hostname
18:53 <vishy> is it the one ant reported?
18:54 <comstud> yeah
18:54 <comstud> but
18:54 <comstud> i know the problem
18:54 <comstud> i just don't know that it's solvable easily without renaming a flag
18:54 <comstud> problem is --host is matching --host_state_interval
18:54 <comstud> after --host_state_interval is defined in compute
18:54 *** mszilagyi has joined #openstack-dev
18:55 <vishy> weird, matching how?
18:55 <comstud> the ParseNewFlags stuff in nova/flags.py is only passing in the new flags in compute to match
18:55 <comstud> 'host=' is not in the list
18:55 <comstud> so when it sees --host option, it's matching the new --host_state_interval
18:55 <comstud> i think because gflags allows you to shortcut flag names?
18:56 <johan___> oh, i hate that
18:57 <johan___> getopt has similar behavior
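
The behaviour johan___ and comstud are pointing at is Python's getopt prefix matching (gflags parses long options with getopt underneath): any unambiguous abbreviation of a long option is accepted, so when only the new flags are handed to the parser, --host resolves to --host_state_interval. A short runnable illustration:

# When the parser only knows about --host_state_interval, the stdlib getopt
# module accepts --host as an unambiguous abbreviation of it.
import getopt

opts, _ = getopt.getopt(['--host=foo'], '', ['host_state_interval='])
print(opts)  # [('--host_state_interval', 'foo')]  <- --host silently matched

# With both long options registered, --host matches exactly and the clash goes away.
opts, _ = getopt.getopt(['--host=foo'], '', ['host_state_interval=', 'host='])
print(opts)  # [('--host', 'foo')]
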
18:58 <vishy> eew
19:00 <vishy> wow just noticed there is a typo in gflags
19:00 <vishy> er nova/flags.py
19:01 *** jmeredit has quit IRC
19:04 <vishy> comstud: yes looks like it uses getopt underneath
19:05 <comstud> yeah
19:05 <comstud> so
19:05 <comstud> short term fix is to rename host_state_interval
19:05 <comstud> or easy fix
19:05 <comstud> or rename 'host' to 'hostname'
19:06 <comstud> but that would potentially break configs, so
19:06 <comstud> so probably the former.. although it'd be nice to change this behavior in general
19:09 <vishy> comstud:
19:09 *** jmeredit has joined #openstack-dev
19:09 <vishy> does this fix it: http://pastie.org/1923091 ?
19:10 <comstud> trying
19:11 <comstud> seems so, off hand
19:12 <comstud> ah
19:12 <comstud> nevermind, no it doesn't
19:12 <vishy> darn :)
19:12 <comstud> hehe :)
19:12 <vishy> i was hoping reparsing would work
19:13 <comstud> atm, nova/flags.py is like a foreign language to me
19:15 <comstud> very easy to reproduce
19:15 <comstud> bin/nova-compute --host=foo --help
19:16 <vishy> i'll get termie on it
19:20 <vishy> -        except gflags.UnrecognizedFlagError:
19:20 <vishy> +        except (gflags.UnrecognizedFlagError, gflags.IllegalFlagValue):
19:20 <vishy> will stop the error at least...
19:21 <comstud> ahh
19:21 <comstud> another thing is patching getopt.long_has_args
19:21 <comstud> but that's kinda ugly
19:21 <comstud> maybe
19:22 * comstud looks at that
19:23 <termie> HEYO
19:23 <vishy> termie: so due to getopt allowing short matching
19:23 <vishy> --host is matching --host_state_interval
19:25 <vishy> which is causing issues...
19:25 <termie> that is only theoretically because we aren't passing in the full set of flags when we are "ParseNewFlags"ing?
19:25 <vishy> that was my guess
19:25 <comstud> yes
19:25 <comstud> that's correct
19:27 <termie> k, taking a look, seems relatively easy to fix
19:29 <vishy> termie: my pastie above was a first attempt at reparsing if any flag was dirty (includes a fix for misnamed __is_dirty), but the first parse actually fails because the types were different
19:29 <vishy> so i thought about catching gflags.IllegalFlagValue as well as UnrecognizedFlagError, but it seems a little hacky
19:29 <termie> i don't think that matters
19:29 <vishy> so --host_state_interval is an int
19:30 <vishy> so bin/nova-compute --host=foo actually crashes
19:30 <termie> i don't think that stuff is going to apply
19:30 <termie> it won't get the wrong flag anymore
19:31 <comstud> if we want to disallow partial name matching:
19:31 <comstud> http://paste.openstack.org/show/1368/
19:31 <comstud> that seems to work
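
The paste itself isn't preserved, but "patching getopt.long_has_args" to disallow partial name matching could look roughly like the sketch below: wrap the stdlib helper so only exact long-option names are accepted. This is a hedged guess at the idea, not the contents of the paste, and not the fix that was merged (termie's reparse change discussed next is what landed):

# Hedged illustration only; not comstud's paste and not the merged fix.
import getopt

_orig_long_has_args = getopt.long_has_args


def _exact_long_has_args(opt, longopts):
    # Refuse getopt's prefix matching: only exact long-option names pass.
    if opt not in [o.rstrip('=') for o in longopts]:
        raise getopt.GetoptError('option --%s not recognized' % opt, opt)
    return _orig_long_has_args(opt, longopts)


getopt.long_has_args = _exact_long_has_args
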
19:31 <termie> http://pastie.org/1923197
19:31 <vishy> termie: k
19:32 <termie> you guys sound like you are trying to fix a different problem
19:32 *** dprince has quit IRC
19:32 <termie> the issue is that we are only checking the new flags when we reparse, not all the flags
19:32 <comstud> yeah, that's the real issue.
19:32 <termie> that patch makes it check all the flags, it will no longer match host_state_interval because host will be in there
19:32 <termie> anyway, try that patch?
19:32 <comstud> trying
19:32 <termie> and/or tell me how you are testing
19:33 <termie> seems to fix it for me
19:33 <termie> ./bin/nova-compute --host=foo --help crashes before it
19:33 <termie> but not after
19:34 <comstud> seems to
19:35 <comstud> looks good to me
19:36 <vishy> yes
19:36 <vishy>      def ClearDirty(self):
19:36 <vishy> -        self.__dict__['__is_dirty'] = []
19:36 <vishy> +        self.__dict__['__dirty'] = []
19:36 <comstud> yeah, got that in here too
19:36 <vishy> that fix cuts out about half of the reparses
19:36 <termie> i'll add that in and shoot a merge prop, or should i just send my changes?
19:36 <comstud> go for it
19:36 <comstud> you get the bug #?
19:37 <termie> nope
19:37 <jk0> lp784743
19:37 <termie> vish, is the whole patch from http://pastie.org/1923091 necessary?
19:37 <pyhole> LP Bug #784743 in OpenStack Compute (nova): "IllegalFlagValue: flag --host_state_interval" [Status: New, Assignee: None] https://bugs.launchpad.net/bugs/784743
19:37 <comstud> there ya go
19:37 <termie> vishy: or just the bottom part?
19:38 <comstud> just the bottom part is what I have
19:38 <termie> vishy: i don't understand the purpose of the top part
19:39 <comstud> it was another attempt at fixing it
19:39 <comstud> that didn't work
19:39 <vishy> termie: no just the is_dirty part
19:39 <comstud> :)
19:39 <termie> k, running tests then pushing
19:39 <vishy> the other stuff was just me trying to get it to reparse if it matched part of a dirty flag
19:44 <vishy> i guess we should probably add a test for this as well
19:53 <termie> https://code.launchpad.net/~termie/nova/flags_reparse/+merge/61468
20:04 <vishy> termie: http://pastie.org/1923368
20:05 *** dprince has joined #openstack-dev
20:17 <termie> pushedd
20:17 <termie> upgedating differator
20:22 <comstud> thnx
20:23 <termie> mtaylor: mist01 (that random ip address) is back up now
20:23 <vishy> comstud: we added a test
20:26 <termie> mtaylor: and if you could give me access to look at the job configs, my user on jenkins is termie
20:26 *** Tv has joined #openstack-dev
20:28 <termie> mtaylor: and access info to the servers
20:30 *** dprince has quit IRC
20:39 <comstud> i'm not understanding how that test passes, though
20:39 <comstud> particularly this line:
20:39 <comstud> 101         self.assert_(self.global_FLAGS.duplicate_answer_long, 60)
20:39 <ironcamel> vishy: you still there?
20:39 <ironcamel> vishy: i think you may be able to help me. i keep getting cloud-setup: failed 15/30: up 28.84. request failed
20:40 <ironcamel> in the console.log file, of a vm instance
20:40 <ironcamel> cloud-setup: failed 15/30: up 28.84. request failed
20:40 <ironcamel> wget: server returned error: HTTP/1.1 500 Internal Server Error
20:40 <comstud> ah
20:40 <ironcamel> know where cloud-setup script is?
20:40 <vishy> comstud: the first parse actually matches the long version
20:40 *** johnpur has quit IRC
20:41 <comstud> yeah
20:41 <comstud> got it
20:41 <vishy> comstud: but the reparse fixes it
20:41 <comstud> would have been more obvious if the test checked for '60'
20:41 <comstud> for me
20:41 <comstud> :)
20:41 <comstud> but yeah, okay
20:41 <comstud> got it.
20:41 <vishy> comstud: doesn't it?
20:42 <comstud> the string 60, i mean
20:42 <comstud> since the flag is a string
20:42 <vishy> oh that line is a typo
20:43 <vishy> it is supposed to be the string
20:43 <vishy> it is passing because it is assert_ instead of assertEqual
20:43 <comstud> aha.
20:44 <comstud> duh
20:45 <comstud> updated comment
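
The quoted test line passes no matter what, because assert_ (the old alias for assertTrue) only checks the truthiness of its first argument and treats the 60 as a failure message. A small self-contained illustration of the vacuous check versus the equality check the conversation says was intended; the flag name comes from the quoted line, everything else is a stand-in:

# Illustrative only; the dict stands in for the real FlagValues object.
import unittest


class FlagsReparseSketch(unittest.TestCase):
    def test_duplicate_flag_value(self):
        global_FLAGS = {'duplicate_answer_long': '60'}
        # assert_/assertTrue(x, msg) only checks the truthiness of x; the 60
        # is treated as the failure message, so this passes for wrong values too.
        self.assertTrue(global_FLAGS['duplicate_answer_long'], 60)
        # The check that was intended: compare against the string '60'.
        self.assertEqual(global_FLAGS['duplicate_answer_long'], '60')


if __name__ == '__main__':
    unittest.main()
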
20:46 <vishy> ironcamul: yes that means that metadata is failing
20:46 <vishy> ironcamel: ^^
20:46 <ironcamel> vishy: any suggestions on how to track this down?
20:46 <vishy> ironcamel: you should check nova-api and see if there is an error message, it is returning 500
20:47 <vishy> if not you might need to add a log.exception somewhere
20:47 <ironcamel> nova-api is fine
20:47 <ironcamel> we are adding support for using arbitrary glance servers
20:47 <ironcamel> by passing an imageRef when you create a server
20:48 <ironcamel> we got it to the point that the server is created with the image from the external image server
20:48 <ironcamel> but networking doesn't seem to be set up
20:48 <ironcamel> creating instances via the default glance server works fine
20:48 <ironcamel> starting DHCP for Ethernet interface eth0  [  OK  ]
20:48 <ironcamel> cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id
20:48 <ironcamel> wget: server returned error: HTTP/1.1 500 Internal Server Error
20:49 <ironcamel> ^^^ is what is in console.log of the vm
20:49 <uvirtbot> ironcamel: Error: "^^" is not a valid command.
20:49 *** agarwalla has quit IRC
20:50 <vishy> ironcamel: i understand
20:50 <vishy> nova-api is what responds to metadata
20:51 <vishy> if you don't see an error message
20:51 <vishy> then try adding the following
20:51 <vishy> to nova/api/ec2/metadatarequesthandler.py
20:51 <vishy> on line 74
20:51 <vishy> try:
20:53 <ironcamel> ec2? that seems strange, since i am working mostly inside the openstack api
20:53 <ironcamel> but i will take your word for it :)
20:53 <vishy> ironcamel: http://pastie.org/1923677
20:54 <vishy> you should get a log in nova-api with that patch in with the exception that is causing metadata to fail
20:54 <ironcamel> vishy: i will try that. thanks!
20:55 <vishy> (you will have to restart nova-api after patching)
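
vishy's pastie isn't preserved here, but the debugging patch he describes amounts to wrapping the metadata handler body in try/except and logging the exception behind the 500. A hedged reconstruction with assumed names (LOG, get_metadata, the webob usage follow common nova patterns of the time, not the exact file):

# The shape of the patch, not the pastie itself; names are assumptions.
import logging

import webob
import webob.exc

LOG = logging.getLogger('nova.api.ec2.metadata')


def handle_metadata_request(get_metadata, remote_address):
    try:
        meta_data = get_metadata(remote_address)
    except Exception:
        # The added line: surface the real failure in the nova-api log instead
        # of only handing cloud-setup an opaque 500.
        LOG.exception('Failed to get metadata for ip: %s', remote_address)
        raise webob.exc.HTTPInternalServerError()
    if meta_data is None:
        raise webob.exc.HTTPNotFound()
    return webob.Response(body=str(meta_data))
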
20:57 *** heckj has joined #openstack-dev
21:02 <ironcamel> vishy: your patch was right on target. shows us exactly what is wrong.
21:09 <vishy> cool
21:09 <vishy> we should probably have that patch in there actually
21:12 <ironcamel> vishy: ok, we will keep that in there
21:14 <vishy> i'm going to propose it
21:15 <comstud> interesting merge failure on that flags fix
21:15 <termie> yeah
21:15 <termie> we were just talking about it
21:21 *** bcwaldon has quit IRC
21:24 <ironcamel> vishy: do you have an opinion about this patch http://pastie.org/1923818
21:26 *** jmeredit has quit IRC
21:27 <ironcamel> the reason for that patch is that in osapi, we want to support image_id's that are not ints. we are going to support hrefs.
21:37 <vishy> ironcamel: hmm
21:38 <throughnothing> its not an ideal patch, but i dont think its very ideal for all servers to talk to the ec2 api even if they were never created using ec2?
21:38 <vishy> ironcamel: well metadata has to be set somehow
21:39 <throughnothing> yeah
21:39 <vishy> ironcamel: seems like that needs to be changed inside of image_ec2_id
21:39 <vishy> throughnothing: images will be switched to uuids and/or url + uuid anyway
21:40 <vishy> throughnothing: so there will need to be some translation in ec2 to make it work
21:41 <vishy> ultimately ec2 will just have to provide some mapping
21:41 <vishy> to ec2 style ids
21:42 <throughnothing> ok
21:43 <throughnothing> vishy: so is this a little better: http://pastie.org/1923923
21:43 <throughnothing> essentially the same logic just moved to that function
21:45 <vishy> throughnothing: I'm not sure i like the ec2 metadata service ever returning anything but an ec2_id
21:46 <throughnothing> currently what happens if you pass in an osapi/glance id ?
21:46 <vishy> it converts it
21:46 <vishy> if glance is not using ids
21:46 <vishy> then we need to maintain a mapping in ec2_api of glance ids to ec2 ids
21:47 <vishy> or else ec2_api is broken
21:48 <ironcamel> vishy: mapping how? in the db?
21:48 <vishy> if this is just for an internal hack and you never want to support ec2 then the above is fine
21:48 <vishy> ironcamel: yes we'll need to have a db mapping
21:48 <ironcamel> nope, this is a global hack for everyone to enjoy
21:49 <vishy> ironcamel: this is something that has to be done anyway, because we are switching to UUIDs for vms and volumes as well
21:49 <vishy> there is even a blueprint about it :)
21:49 <vishy> https://blueprints.launchpad.net/nova/+spec/ec2-id-compatibilty
21:49 *** jmeredit has joined #openstack-dev
21:49 <ironcamel> vishy: i don't think this breaks the ec2 api. the patch behaves as before if the image_id is an int
21:50 <ironcamel> if it is an osapi style href, then it would behave differently. right?
21:50 <vishy> well sure, but basically it means that images aren't compatible across apis
21:51 <vishy> my guess is that euca-describe-images would be broken if you had one of these images in
21:51 <vishy> unless you are only changing the image_id in the instance record
21:51 <ironcamel> we are changing it in the db as well
21:51 <vishy> but you aren't changing it in glance?
21:52 <vishy> ironcamel: is this for improvements to snapshotting?
21:52 <ironcamel> in glance? our changes allow glance to support both id's and hrefs
21:52 <vishy> ironcamel: right, so if i request all images from glance, will some come back with hrefs?
21:52 <ironcamel> vishy: no, this is to support images from arbitrary glance servers
21:52 <ironcamel> vishy: yes
21:52 <throughnothing> our changes are trying to allow you to create a server using the osapi using an HREF to a glance server that is not the default configured glance server
21:52 <jk0> vishy: mind taking another quick peek at https://code.launchpad.net/~johannes.erdfelt/nova/lp784134/+merge/61282 ?
21:53 <vishy> throughnothing: ah i see
21:53 <vishy> so it only changes it in the instance record
21:53 <vishy> hmm perhaps we should make metadata just return a dummy image id...
21:53 <vishy> i'm concerned that this will break some things but I don't really see another method until we have a mapping
21:53 <throughnothing> that actually sounds fine to me
21:54 <throughnothing> returning a dummy id
21:54 <vishy> maybe have it return ami-FFFFFFFF ?
21:54 <throughnothing> but what is the condition on which we return that?
21:54 <vishy> actually it is small
21:54 <vishy> ami-00000000
21:54 <vishy> is maybe better
21:54 <vishy> i guess if it can't parse it
21:55 <throughnothing> ok, so the same ValueError logic should be ok for now
21:55 <ironcamel> that could work
21:55 <vishy> yeah, put a todo in there that it can come out when we have mapping in place
21:55 <vishy> so if you launch from an external server through the ec2 api it will create a mapping
21:56 <throughnothing> so the only diff here is rather than returning the raw 'image_id' if it cant parse, then we return something at least formatted like an ec2id
21:56 <vishy> throughnothing: i think that is the most compatible solution
21:56 <throughnothing> vishy: sounds good to me, thanks for your help!
21:56 <vishy> it means bundle-image will probably be broken
21:57 <ironcamel> sorry, what does bundle-image do?
22:01 *** bcwaldon has joined #openstack-dev
22:01 <vishy> it is the way of creating a new image from a running instance
22:02 <vishy> it usually checks its parent ami's
22:02 <throughnothing> ah
22:02 <throughnothing> so it'll only be broken for instances using the new href support
22:02 <vishy> ironcamel: does your code handle ami style images?
22:03 <ironcamel> vishy: we are testing on ami style images
22:03 <ironcamel> on libvirt
22:04 <vishy> how does it handle the separate kernel and ramdisk from another server?
22:04 <ironcamel> and it seems to be working
22:04 <vishy> are ramdisk_id and kernel_id still forced to be ints?
22:04 <ironcamel> that is a good question
22:05 <ironcamel> i think we left them as ints
22:05 <vishy> and if so, is there some logic somewhere to make sure they come from the other server?
22:05 <throughnothing> vishy: kernel and ramdisks are ints
22:05 <ironcamel> because the glance_client instance is created based on the imageRef
22:05 <throughnothing> they are assumed (and for now required) to be on the same glance server
22:05 <vishy> ah ok
22:05 <vishy> cool, sounds like a neat patch
22:05 <ironcamel> and it knows how to handle ramdisk ids
22:06 <throughnothing> vishy: its finally working today with that last ec2 patch :)
22:06 <ironcamel> vishy: we hope so. since you feel that way, i think you should get to review it :)
22:06 <vishy> yay!
22:06 <vishy> :p
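
Putting the agreed "ValueError logic" together: format integer image ids as EC2-style ids, and fall back to the dummy ami-00000000 (with a TODO) when the native id is an href that can't be parsed, until a real glance-id/ec2-id mapping exists. The real patch isn't shown in the log; image_ec2_id is the helper named in the conversation, the body below is illustrative:

# Hedged sketch of the discussed fallback, not the actual patch.
def image_ec2_id(image_id, prefix='ami'):
    """Return an EC2-style id even when the native image id is an href."""
    try:
        return '%s-%08x' % (prefix, int(image_id))
    except ValueError:
        # TODO: drop this once a glance-id <-> ec2-id mapping table exists;
        # until then, non-integer ids (hrefs) get a dummy but well-formed id.
        return '%s-00000000' % prefix


print(image_ec2_id(42))                                        # ami-0000002a
print(image_ec2_id('http://glance.example.com/images/abc123'))  # ami-00000000
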
