*** m8 has quit IRC | 00:37 | |
CIA-89 | menesis * r121717 zope.app.applicationcontrol/ (19 files in 5 dirs): Conform to repository policy | 00:38 |
CIA-89 | menesis * r121718 zope.app.applicationcontrol/ (LICENSE.txt COPYRIGHT.txt): Add license and copyright | 00:38 |
CIA-89 | menesis * r121719 zope.app.applicationcontrol/ (CHANGES.txt setup.py): Remove unneeded dependencies | 00:38 |
CIA-89 | jim * r121720 /Sandbox/J1m/customdoctests/ (4 files in 2 dirs): | 00:38 |
CIA-89 | Added tests for parser. | 00:38 |
CIA-89 | (In doing so, realized that overriding ps2 doesn't work | 00:38 |
CIA-89 | and it's not worth fixing.) | 00:38 |
*** sp0cksbeard has quit IRC | 00:44 | |
*** J1m has quit IRC | 01:11 | |
*** sm has quit IRC | 01:18 | |
*** davetoo has joined #zope | 01:26 | |
*** davetoo has left #zope | 01:27 | |
*** menesis has quit IRC | 01:30 | |
*** gqlewis has joined #zope | 01:46 | |
*** MrTango has quit IRC | 01:58 | |
*** evilbungle has quit IRC | 02:00 | |
*** mr_jolly has quit IRC | 02:07 | |
*** lcarvalho has quit IRC | 02:27 | |
*** supton has quit IRC | 02:27 | |
*** lane_ has joined #zope | 02:28 | |
*** CIA-107 has joined #zope | 02:29 | |
*** gqlewis has quit IRC | 02:32 | |
*** tiwula has quit IRC | 02:32 | |
*** CIA-89 has quit IRC | 02:32 | |
*** gqlewis has joined #zope | 02:32 | |
*** mr_jolly has joined #zope | 02:50 | |
*** daMaestro has quit IRC | 03:14 | |
*** webmaven has quit IRC | 03:14 | |
*** River_Rat has joined #zope | 03:32 | |
*** Spanktar has quit IRC | 03:34 | |
*** dayne has joined #zope | 03:34 | |
*** RiverRat has quit IRC | 03:35 | |
*** supton has joined #zope | 03:50 | |
*** sm has joined #zope | 03:51 | |
*** mr_jolly has quit IRC | 03:55 | |
*** dayne has quit IRC | 03:59 | |
*** lane_ has quit IRC | 04:00 | |
*** supton has quit IRC | 04:04 | |
*** davetoo has joined #zope | 04:06 | |
*** supton has joined #zope | 04:08 | |
*** supton has quit IRC | 04:10 | |
*** sm has quit IRC | 04:12 | |
*** mr_jolly has joined #zope | 04:21 | |
*** mr_jolly has left #zope | 04:29 | |
*** River-Rat has joined #zope | 04:37 | |
*** River_Rat has quit IRC | 04:40 | |
*** davisagli has quit IRC | 04:44 | |
*** davisagli has joined #zope | 04:45 | |
*** allisterb has joined #zope | 05:20 | |
*** allisterb has quit IRC | 05:20 | |
*** davisagli has quit IRC | 05:35 | |
*** davisagli has joined #zope | 05:36 | |
*** gqlewis has quit IRC | 05:40 | |
*** rump has joined #zope | 05:41 | |
*** gqlewis has joined #zope | 05:42 | |
*** gqlewis has quit IRC | 05:58 | |
*** allisterb has joined #zope | 06:20 | |
*** allisterb has quit IRC | 06:20 | |
*** supton has joined #zope | 06:27 | |
*** supton has quit IRC | 06:55 | |
*** davetoo has left #zope | 07:03 | |
*** ThePing has joined #zope | 07:56 | |
*** ThePing has left #zope | 07:56 | |
*** ccomb has joined #zope | 08:21 | |
*** hever has joined #zope | 08:24 | |
*** supton has joined #zope | 08:39 | |
*** digitalmortician has quit IRC | 08:48 | |
*** zagy has joined #zope | 08:56 | |
*** wosc has joined #zope | 08:56 | |
*** __mac__ has joined #zope | 09:01 | |
*** tisto has joined #zope | 09:02 | |
*** supton has quit IRC | 09:04 | |
*** menesis has joined #zope | 09:25 | |
*** slackrunner has joined #zope | 09:28 | |
*** agroszer has joined #zope | 09:30 | |
*** digitalmortician has joined #zope | 09:36 | |
*** avoinea has joined #zope | 09:46 | |
*** alexpilz1 has quit IRC | 09:55 | |
*** Wu has joined #zope | 09:58 | |
*** River-Rat is now known as RiverRat | 09:59 | |
*** ccomb has quit IRC | 10:02 | |
*** planetzopebot has quit IRC | 10:08 | |
*** shastry has quit IRC | 10:09 | |
*** planetzopebot has joined #zope | 10:09 | |
*** shastry has joined #zope | 10:10 | |
*** rump has quit IRC | 10:14 | |
CIA-107 | tlotze 2 * r121721 zc.buildout/ (16 files in 3 dirs): removed awareness of multiple Python interpreters, making code simpler to test and getting rid of some tests with errors | 10:29 |
CIA-107 | icemac * r121722 zc.ssl/ (CHANGES.txt src/zc/ssl/tests.py): | 10:29 |
CIA-107 | - Using Python's ``doctest`` module instead of deprecated | 10:29 |
CIA-107 | ``zope.testing.doctest``. | 10:29 |
CIA-107 | icemac * r121723 zc.ssl/ (bootstrap.py COPYRIGHT.txt LICENSE.txt): Conform to repository policy. | 10:29 |
CIA-107 | icemac * r121724 zc.table/ (6 files in 2 dirs): | 10:29 |
CIA-107 | - Using Python's ``doctest`` module instead of deprecated | 10:29 |
CIA-107 | ``zope.testing.doctest``. | 10:29 |
CIA-107 | - Removed deprecated slugs for ZPKG and ZCML. | 10:29 |
CIA-107 | icemac * r121725 zc.table/ (12 files in 2 dirs): Conform to repository policy. | 10:29 |
CIA-107 | icemac 1.0 * r121726 zc.ssl/ (bootstrap.py COPYRIGHT.txt LICENSE.txt): Conform to repository policy. | 10:29 |
CIA-107 | icemac * r121727 zc.tokenpolicy/ (COPYRIGHT.txt setup.py LICENSE.txt): Conform to repository policy. | 10:29 |
CIA-107 | icemac * r121728 zc.testbrowser/ (7 files in 2 dirs): Conform to repository policy. | 10:29 |
CIA-107 | wosc wosc-test-stacking * r121729 zope.component/ (NOTES.txt src/zope/component/tests.py): Add test that list valued lookups are not affected by stackable | 10:29 |
CIA-107 | wosc wosc-test-stacking * r121730 zope.component/NOTES.txt: Update todo list | 10:29 |
*** goschtl has joined #zope | 10:30 | |
*** alexpilz has joined #zope | 10:34 | |
*** fredvd has joined #zope | 10:39 | |
*** humanfromearth has joined #zope | 10:47 | |
*** alexpilz has quit IRC | 10:49 | |
*** alexpilz has joined #zope | 10:50 | |
*** sylvain has joined #zope | 10:52 | |
*** evilbungle has joined #zope | 11:05 | |
*** evilbungle has quit IRC | 11:05 | |
*** humanfromearth has left #zope | 11:08 | |
*** alexpilz has quit IRC | 11:19 | |
*** alexpilz has joined #zope | 11:19 | |
*** mr_jolly has joined #zope | 11:26 | |
*** lukasg|4tw has joined #zope | 11:43 | |
CIA-107 | adamg * r121731 zope.wineggbuilder/master.cfg: oops, I missed a previous commit? add ZTK 1.1 | 11:48 |
CIA-107 | hannosch * r121732 Products.ZCatalog/ (setup.py CHANGES.txt): Prepare Products.ZCatalog 2.13.14. | 11:48 |
CIA-107 | hannosch * r121733 /Products.ZCatalog/tags/2.13.14: Tagged Products.ZCatalog 2.13.14. | 11:48 |
CIA-107 | hannosch * r121734 Products.ZCatalog/ (setup.py CHANGES.txt): vb | 11:48 |
CIA-107 | hannosch 2.13 * r121735 Zope/ (doc/CHANGES.rst versions.cfg): Products.ZCatalog = 2.13.14 | 11:48 |
CIA-107 | hannosch * r121736 Zope/ (doc/CHANGES.rst versions.cfg): Products.ZCatalog = 2.13.14 | 11:48 |
*** alga has joined #zope | 11:56 | |
*** TomBlockley has joined #zope | 11:57 | |
lukasg|4tw | I've got a worker instance for use with zc.async that dies as soon as it tries to dispatch jobs in the queue (it just returns to the prompt when started in fg) | 11:57 |
lukasg|4tw | Does anyone know where in the ZODB zc.async persists its queue, so I can clean out the offending job? | 11:57 |
*** fredvd is now known as fredvd|away | 12:00 | |
*** mitchell`off is now known as mitchell` | 12:04 | |
*** evilbungle has joined #zope | 12:04 | |
kosh | sorry not something I have ever used | 12:08 |
*** JT has quit IRC | 12:08 | |
*** JT has joined #zope | 12:09 | |
*** eperez has joined #zope | 12:16 | |
*** gqlewis has joined #zope | 12:16 | |
*** ccomb has joined #zope | 12:24 | |
*** gqlewis has quit IRC | 12:24 | |
*** teix has joined #zope | 12:37 | |
CIA-107 | hannosch * r121737 Products.ZCatalog/ (2 files in 2 dirs): Fixed BooleanIndex' items method so the ZMI browse view works. | 12:41 |
*** mr_jolly has left #zope | 12:55 | |
*** tisto is now known as tisto|away | 13:19 | |
*** j-w has joined #zope | 13:20 | |
*** mejo has joined #zope | 13:25 | |
mejo | hello | 13:25 |
mejo | i'm trying to debug a zope ConflictError. | 13:25 |
mejo | i already found out, that it's possible to access the related object via the hexcode in event.log | 13:26 |
mejo | several times, i found something like 'from ZODB.utils import p64' and then 'print app._p_jar[p64(0xHEX)]' in the debug python shell | 13:27 |
mejo | now my problem is, that app doesn't exist: | 13:27 |
mejo | NameError: name 'app' is not defined | 13:27 |
mejo | i'm running zope 2.10.13. the search results from google are all around 2007. maybe 'app' has been renamed? | 13:28 |
mejo | or is there another way in zope2.10 to access the object via hexcode? | 13:28 |
*** davisagli has quit IRC | 13:31 | |
*** davisagli has joined #zope | 13:33 | |
betabug | mejo: I think you have to import app from somewhere | 13:37 |
mejo | yes, but from where? | 13:39 |
mejo | or is app the module that produced the ConflictError? | 13:39 |
betabug | hmmm, dunno, never used this stuff... I'd look on wiki.zope.org/Zope2 | 13:40 |
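For the record, the pattern mejo is after looks roughly like this in a Zope 2.10 debug session; a minimal sketch, assuming the instance is started with `bin/zopectl debug` (which binds `app` automatically, otherwise `Zope2.app()` creates it) and using a made-up oid:

```python
# Minimal sketch (assumptions: Zope 2.10, Python 2; the oid 0x63 is made up).
import Zope2
from ZODB.utils import p64

app = Zope2.app()            # not needed inside ``bin/zopectl debug``, which binds ``app`` already

oid = 0x63                   # hypothetical oid copied out of an event.log conflict entry
obj = app._p_jar[p64(oid)]   # load the persistent object behind that oid
print repr(obj)
if hasattr(obj, 'getPhysicalPath'):
    # plain BTrees have no path of their own; their containing object might
    print '/'.join(obj.getPhysicalPath())
```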
betabug | the ConflictError is produced when 2 request try to write to the same object | 13:40 |
betabug | assuming it's a "write conflict error" | 13:40 |
betabug | if it's a "read conflict error", just ignore it | 13:40 |
mejo | it's a database conflict error | 13:41 |
mejo | and it happens regularly | 13:41 |
mejo | sometimes it has unresolved conflicts. | 13:42 |
betabug | hmmm, what do you mean "database conflict error"? | 13:42 |
mejo | and twice, the zope instance even froze. | 13:42 |
betabug | hmmm, you'll have to find out which kind of objects get altered by what methods | 13:42 |
mejo | 2011-05-19T10:42:20 INFO ZPublisher.Conflict ReadConflictError at /VirtualHostBase/http/test.tdomain.de:80/knowit/VirtualHostRoot/url_decoder: database read conflict error (oid 0x63, class BTrees._OOBTree.OOBTree) (16 conflicts (0 unresolved) since startup at Thu May 19 10:03:08 2011) | 13:43 |
betabug | it's a "read conflict error" | 13:43 |
betabug | so no worries about this one | 13:44 |
mejo | and this one: | 13:45 |
mejo | 2011-05-19T10:43:00 INFO ZPublisher.Conflict ConflictError at /VirtualHostBase/http/test.tdomain.de:80/knowit/VirtualHostRoot/url_decoder: database conflict error (oid 0x65, class BTrees._OOBTree.OOBTree, serial this txn started with 0x038e6cbb81084c44 2011-05-19 10:35:30.241972, serial currently committed 0x038e6cc3019ad8aa 2011-05-19 10:43:00.376141) (18 conflicts (0 unresolved) since startup at Thu May 19 10:03:08 2011 | 13:45 |
betabug | dunno, it doesn't say "write", but it got resolved just fine | 13:45 |
mejo | unfortunately some of them don't get resolved | 13:47 |
mejo | and as written, the instance already froze twice | 13:47 |
mejo | no logs at freeze time, only lots of ConflictErrors some minutes before | 13:48 |
betabug | ok, but running after the conflict errors which *got* resolved won't help you | 13:48 |
mejo | i see | 13:49 |
betabug | it's more likely that due to the resolving of conflict errors zope ran out of threads - and that's what you see as "frozen" | 13:49 |
mejo | still i don't know how to run after conflict errors at all | 13:49 |
betabug | "due" in the sense of "it takes longer to process the request" | 13:49 |
betabug | you read the code and try to see why it fails | 13:50 |
mejo | ;-) | 13:50 |
betabug | find out which objects are involved, which requests get stuck | 13:51 |
betabug | and then you'll narrow it down to the methods that take so long to run and/or that alter objects | 13:51 |
mejo | yes, but to find out the objects, i need to translate the OID hexcode into object name | 13:51 |
betabug | I never use OID for anything | 13:52 |
mejo | the event.log only mentions the oid | 13:52 |
betabug | I suggest you look in the Control_Panel debug info what your long running requests are doing | 13:52 |
betabug | or - if the thing is already stuck - install DeadLockDebugger and have it dump the traces | 13:52 |
mejo | thanks a lot for your help! | 13:53 |
betabug | no problem, glad to help | 13:53 |
mejo | i'll see what i find out and report back ;-9 | 13:53 |
betabug | no problem | 13:54 |
*** ccomb has quit IRC | 13:55 | |
mejo | sorry to ask again: | 13:56 |
mejo | how long may it take when zope runs "out of threads"? | 13:56 |
mejo | and is there anything we can do about it? | 13:56 |
betabug | hmm, it's depending on your code | 13:56 |
betabug | if you have (as is standard) 4 threads configured and you have 4 people clicking on a link that each takes 120 seconds to process, then you're "closed" for 120 seconds | 13:57 |
betabug | now you could say "we need more than 4 threads!!", but the real problem is "why does something take 120 seconds?" | 13:57 |
mejo | any disadvantages of increasing the number of threads? | 13:57 |
betabug | yes, increasing the number of threads doesn't solve the problem | 13:58 |
mejo | maybe it's related to heavy mysql transactions | 13:58 |
betabug | it just postpones it a little | 13:58 |
mejo | we heavily use the mysql database adapter | 13:58 |
betabug | then you'd go and isolate them and either cache them or rewrite to find another solution (e.g. request and store, retrieve later) | 13:58 |
betabug | mejo: I'm working on a similar case for a customer right now, a big, old site and they get a deadlock after a few hours | 14:00 |
betabug | but it's a different cause at the bottom :-) | 14:01 |
mejo | betabug: the problem with debug info at Control_Panel is that it has no history | 14:05 |
mejo | so once the instance hangs, i'm no longer able to check which connections do exist | 14:05 |
betabug | once it hangs you need DeadLockDebugger | 14:06 |
mejo | in most situations, we don't have old connections at the debug info | 14:06 |
mejo | ok, I'll take a look | 14:06 |
betabug | but you can do a cronjob to write the debug output to disk every x minutes | 14:06 |
mejo | that's a good idea. | 14:06 |
mejo | just fetch with wget, or is there an easy solution to access it within a python script? | 14:07 |
betabug | I'd use lynx/wget/curl | 14:08 |
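For completeness, the same cronjob idea as a rough Python 2 sketch instead of wget/curl; the Control_Panel debug URL and the credentials below are assumptions and will differ per setup:

```python
# Rough Python 2 equivalent of the wget cronjob (assumptions: the DebugInfo URL
# below is the usual Control_Panel location but may differ, credentials are
# placeholders; run this from cron every few minutes).
import time
import urllib2

URL = 'http://localhost:8080/Control_Panel/DebugInfo/manage_main'
USER, PASSWORD = 'admin', 'secret'

password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, URL, USER, PASSWORD)
opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(password_mgr))

# append a timestamped snapshot of the debug page to a log file on disk
snapshot = opener.open(URL).read()
logfile = open('/var/log/zope-debuginfo.log', 'a')
logfile.write('==== %s ====\n%s\n' % (time.ctime(), snapshot))
logfile.close()
```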
*** fredvd|away is now known as fredvd | 14:14 | |
*** J1m has joined #zope | 14:16 | |
*** J1m has quit IRC | 14:33 | |
*** dayne has joined #zope | 14:36 | |
kosh | betabug: you know what is funny is that on my sites when I tested them at 500 concurrent requests to 4 zope servers behind nginx and about 100K requests total I got no conflict errors for reading of any kind | 14:42 |
kosh | and write conflict errors seem absurdly rare in all my systems | 14:42 |
betabug | kosh: you have different code :-) | 14:43 |
kosh | yup | 14:43 |
kosh | I just wonder what kind of code people write that causes it so I can try even harder to avoid those issues | 14:43 |
kosh | one thing I found long ago though is that testing if data is different before I write it seriously sped up my zope apps and made conflict errors far less common | 14:43 |
kosh | so if two people try to write to the same object but try to write to different parts of it things pretty much just work | 14:44 |
kosh | but that is also rare as hell to have happen | 14:44 |
kosh | a simple way I can think of to cause conflicts is to keep a log of something in a basic python object | 14:45 |
kosh | so you write a tuple to a list on every page view or something like that | 14:45 |
kosh | the entire list would get resaved each time and be massively likely to cause conflicts and cripple the site speed wise after a while | 14:45 |
*** wosc has left #zope | 14:46 | |
betabug | kosh: that's the situation on one site I'm debugging | 14:49 |
betabug | we're talking a huge list, 7000 objects | 14:49 |
*** Wu has quit IRC | 14:52 | |
kosh | change it to an OOBTree | 14:53 |
kosh | if you need to keep order then use a full timestamp as a key for insertion | 14:53 |
kosh | that way you won't have any collisions and also don't need to check before insertion | 14:53 |
kosh | and it will naturally order | 14:53 |
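A minimal sketch of the timestamp-keyed OOBTree pattern kosh describes; the class and attribute names are made up, only the insert-instead-of-rewrite idea is the point:

```python
# Sketch of conflict-friendlier logging (``HitLog`` and ``hits`` are hypothetical;
# in Zope this would live on a persistent object).
import time
from BTrees.OOBTree import OOBTree

class HitLog(object):

    def __init__(self):
        self.hits = OOBTree()

    def record(self, path):
        # A full timestamp as the key: naturally ordered, collisions are very
        # unlikely, and each insert only touches a couple of BTree buckets
        # instead of re-pickling one big list on every page view.
        self.hits[time.time()] = (path,)

    def since(self, timestamp):
        # BTrees iterate in key order, so a range query is already chronological.
        return list(self.hits.items(timestamp))
```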
kosh | however I have also seen systems have huge problems when you call out to an external db with bad queries | 14:55 |
kosh | so if you have a few poor mysql queries that take 30 seconds to run, that can deadlock things pretty fast | 14:55 |
betabug | well, I need order and I need to re-order | 14:59 |
betabug | so I wrote a little "TreeList" object | 14:59 |
betabug | now is the task to integrate it and then to move the data | 14:59 |
kosh | how did you make it so you can change order and also add efficiently? | 14:59 |
betabug | well, changing order is not efficient right now | 15:11 |
betabug | but it's happening rarely | 15:11 |
betabug | efficiency is the underlying btree as storage | 15:11 |
betabug | integer keys | 15:11 |
betabug | so with inserts and changing order I have to renumber | 15:11 |
betabug | now thinking about migration strategies | 15:13 |
kosh | well the plus side is people screw up just as badly with a relational db as they do with zope | 15:17 |
betabug | yeah, they do with everything | 15:17 |
kosh | have seen queries that go from hours to less than a second by properly changing the query and indexing | 15:17 |
*** dayne has quit IRC | 15:23 | |
kosh | betabug: if you indexed them with floats you would be able to change the number to a float between the ones on either side of what you wanted | 15:27 |
kosh | betabug: that way a minimum of changes are needed to reorder | 15:27 |
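A sketch of that float-key trick, assuming a plain OOBTree holding float keys (names and values are made up):

```python
# Sketch: float keys in an OOBTree so a reorder touches one key, not all of them.
from BTrees.OOBTree import OOBTree

items = OOBTree()
items[1.0] = 'first'
items[2.0] = 'second'
items[3.0] = 'third'

# move 'third' between 'first' and 'second': give it a key halfway between its
# new neighbours instead of renumbering everything after the insert point
items[(1.0 + 2.0) / 2.0] = 'third'
del items[3.0]

print list(items.values())   # ['first', 'third', 'second']
# caveat: repeated halving eventually exhausts float precision, at which point
# a one-off renumbering pass is needed
```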
betabug | yeah, thought about that | 15:27 |
betabug | but it said somewhere that FOBTree is only available from a certain ZODB version on, so I didn't investigate | 15:28 |
betabug | maybe too lazy | 15:28 |
betabug | the funny part was learning to mimic a python list :-) | 15:28 |
kosh | you could have done an OOBTree and that would have worked | 15:32 |
betabug | hmm, right | 15:35 |
betabug | I was too lazy | 15:35 |
betabug | but inserting/reordering is happening only in 2 rare cases, so I don't mind so much | 15:36 |
betabug | while time is a big point right now | 15:36 |
betabug | the thing I'm wondering is if I try to migrate on-the-fly or if I go through all the objects | 15:36 |
betabug | on-the-fly could take a long time, with visitors giving up on those requests | 15:37 |
kosh | migration though should be fairly fast | 15:38 |
kosh | 7000 inserts into an IOBTree should be damn fast | 15:38 |
betabug | yes, but the heavy object loading would still be there | 15:38 |
betabug | it's what bogs the app down right now | 15:39 |
kosh | so very late at night when nobody is using it do a migration | 15:39 |
betabug | yeah, probably | 15:40 |
*** slackrunner has quit IRC | 15:50 | |
CIA-107 | janwijbrand * r121738 megrok.chameleon/ (3 files in 3 dirs): update changelog | 15:52 |
CIA-107 | janwijbrand * r121739 megrok.chameleon/ (CHANGES.txt README.txt): update readme and changelog | 15:52 |
CIA-107 | janwijbrand * r121740 megrok.chameleon/src/megrok/chameleon/README.txt: update another readme | 15:52 |
CIA-107 | janwijbrand * r121741 megrok.chameleon/ (buildout.cfg CHANGES.txt setup.py): reflect the RC-ness of z3c.pt and Chameleon in the version number for megrok.chameleon | 15:52 |
*** dayne has joined #zope | 15:56 | |
*** J1m has joined #zope | 15:59 | |
kosh | betabug: however you should be able to do thousands of records per second usually pretty easily | 16:05 |
kosh | hail evil J1m! | 16:06 |
*** dayne has quit IRC | 16:09 | |
*** sp0cksbeard has joined #zope | 16:14 | |
*** superdupersheep has joined #zope | 16:19 | |
superdupersheep | does anyone know if zope 2.11 is compatible with a 64 bit Python 2.4? | 16:20 |
*** pjfd4 has joined #zope | 16:24 | |
*** tisto|away is now known as tisto | 16:26 | |
*** davisagli has quit IRC | 16:29 | |
*** davisagli has joined #zope | 16:30 | |
kosh | superdupersheep: should be | 16:30 |
superdupersheep | we run a *huge* site | 16:30 |
kosh | superdupersheep: I have been running zope with 64bit python long before 2.4 | 16:30 |
kosh | so test it | 16:30 |
kosh | you would not deploy to a live site anyways | 16:30 |
superdupersheep | well, it was tested, it looked good, then in production it fell apart | 16:31 |
superdupersheep | everything "works", but certain sites fall to bits after several hours | 16:31 |
kosh | I have not seen any difference between 32bit zope and 64bit zope on any tests and all my production systems are currently 64bit | 16:31 |
kosh | and those systems all started as 32bit, same database and everything | 16:31 |
kosh | well what falls to bits? what errors do you have? | 16:32 |
superdupersheep | no errors per se (or at least, very little in the logs) - the symptom is that Zope just gets very, very slow | 16:32 |
superdupersheep | it seems to be linked to memory consumption, even though the machine has *tons* of RAM and doesn't seem to be CPU bound at all | 16:32 |
superdupersheep | when i say a huge site i mean a very sizable university site :) | 16:33 |
J1m | huh, why am I evil? | 16:33 |
kosh | J1m: no reason at all, but it is funnier than hello | 16:33 |
J1m | ah | 16:34 |
kosh | superdupersheep: so is the memory consumption pushing the machine into swap? | 16:34 |
superdupersheep | nope. | 16:34 |
kosh | why do you think it is memory consumption then? | 16:34 |
superdupersheep | well, debugging things like this with zope is basically impossible | 16:35 |
kosh | why do you think that? | 16:35 |
superdupersheep | because i have no diagnostic tools to introspect what the hell Zope's up to | 16:35 |
superdupersheep | i have the system's diagnostic tools | 16:35 |
superdupersheep | that's about it | 16:36 |
kosh | actually there are a lot of diagnostic tools available, you may not understand them though | 16:36 |
kosh | for example if you can find out which pages are causing problems you can find out what is causing the stalls | 16:36 |
kosh | if it is object loads it would show up in your cache page as a large load when that page is pulled | 16:36 |
*** __mac__ has quit IRC | 16:36 | |
kosh | if it is an external db that is stalling you would at least know which page is causing it | 16:36 |
superdupersheep | i should elaborate a bit more, before you go any further | 16:37 |
* betabug bets 20c on the p-word | 16:37 | |
superdupersheep | we have many 32bit machines that are in production, with the same version of python, same version of zope | 16:37 |
superdupersheep | they don't suffer from any slowdown in any way | 16:37 |
kosh | what os are you running on? how did you setup python for 64bit? did you compile all your c extensions or just copy them over? | 16:38 |
kosh | also the problem could still be something else | 16:38 |
superdupersheep | kosh: i'm not that stupid :) | 16:38 |
kosh | like if you have a postgres server that zope talks to that could be screwed up on your 64bit system from some config error | 16:38 |
superdupersheep | nope, no postgres involved. | 16:38 |
kosh | well, substitute anything else for that also, like mysql, db2, oracle etc | 16:39 |
superdupersheep | all built as 64 bit | 16:39 |
kosh | computers are not random, if some pages are super slow and others are fine then it needs to be isolated about what those pages are doing that is so different | 16:39 |
kosh | maybe a library they need was forgotten | 16:39 |
kosh | what os? | 16:39 |
superdupersheep | rhel6 | 16:40 |
superdupersheep | (not using system python, obviously) | 16:40 |
superdupersheep | if a library was missing you'd expect it to not work immediately, rather than work fine for a period of time, then suddenly get slow | 16:40 |
kosh | getting slow after a period of time makes very little sense unless memory is running out | 16:41 |
kosh | how is zope setup? do you have zope talking to zeo servers? how are your caches setup? | 16:41 |
superdupersheep | talking to zeo servers; client cache for zeo 512MB, 120,000 objects | 16:43 |
kosh | I assume that zeo is on another machine? | 16:44 |
superdupersheep | yes. that machine isn't short on RAM, has basically 0 load averages, doesn't appear to be working hard at all | 16:45 |
*** j-w has quit IRC | 16:45 | |
* kosh tries to remember how the cache flipping things worked for slowing things down | 16:46 | |
kosh | do you have any warnings in your error log about cache flipping or conflict errors? | 16:46 |
*** MrTango has joined #zope | 16:47 | |
kosh | is it only specific areas that get slow or any area? | 16:47 |
kosh | also if you use something like chrome to view a page, use inspect while on a fast page and then change the url to a slow page - the network tab will tell you about each resource being loaded in | 16:48 |
kosh | maybe something else is stalling the page | 16:48 |
kosh | I have had that happen before where a page got slow for some other reason | 16:48 |
kosh | if a page is slow and you refresh it then is it fast again or still slow? | 16:48 |
superdupersheep | once the instance starts to become slow, that's it. i can't restart the instance in the middle of the day, so i just rewrite it off to another zope instance on another machine | 16:49 |
superdupersheep | i just push it off to one of the 32bit machines, boom, everything's fine | 16:49 |
kosh | if you are load balancing multiple instances why can't you just restart the instance? | 16:49 |
superdupersheep | i don't see any bad looking entries in my zeo.log | 16:49 |
superdupersheep | they're not load-balanced in that way | 16:50 |
superdupersheep | there's 4 x machines, each running 8 x instances of Zope | 16:50 |
betabug | what products are you using on those zopes? | 16:50 |
superdupersheep | and here's where it gets dirty... | 16:50 |
superdupersheep | silva | 16:50 |
superdupersheep | they are exclusively silva | 16:50 |
kosh | just as a test for your 64bit systems try setting the cache-size for zeo to 0 | 16:50 |
kosh | but leave the regular cache-size outside of it | 16:50 |
superdupersheep | what's your thinking on that one? | 16:51 |
kosh | it will make sure that it is not a local cache issue or cache flipping causing any issues | 16:51 |
kosh | if your local network is fast it should still work fine | 16:51 |
kosh | I know in the case of zeo + zope on the same machine it is faster to make no zeo cache | 16:51 |
superdupersheep | local network is brutally fast as they're actually on blades in the same bladecenter | 16:52 |
*** goschtl has quit IRC | 16:52 | |
kosh | also if you could upgrade, zope 2.13 would probably run vastly faster for you, I get about 3x the performance out of zeo with 2.13 | 16:52 |
*** goschtl has joined #zope | 16:52 | |
kosh | so try just getting rid of the zeo cache on the zope clients | 16:52 |
kosh | it sure seems strange that once it gets slow all the pages get slow with no load average | 16:53 |
J1m | kosh, zeo caches haven | 16:53 |
kosh | J1m: what? | 16:53 |
J1m | kosh, zeo caches haven't "flipped" in a long time. | 16:53 |
*** Arfrever has joined #zope | 16:53 | |
J1m | Not since ZODB 3.2. | 16:53 |
kosh | J1m: he is using an older zope and I did not remember when that was done | 16:54 |
J1m | probably zope 2.8 | 16:54 |
superdupersheep | zope 2.11 actually | 16:54 |
kosh | I just remember I used to see that but all my current systems have been upgraded to 2.13 which is much faster than previous versions | 16:55 |
kosh | and I almost have all my code ready to convert my sites to blobs | 16:55 |
superdupersheep | the reason i suggest memory consumption is an issue is that the slowness often manifests once an instance grows larger than 3G | 16:55 |
kosh | then I want to test zlibstorage on the stuff remaining | 16:55 |
betabug | anybody got a script handy to "migrate all objects of a certain meta_type" in a zodb? | 16:55 |
J1m | superdupersheep, that's interesting. | 16:56 |
J1m | 3G is a magic number for 32 bits. | 16:56 |
superdupersheep | obviously on a 32bit machine that isn't possible - it's moving to 64bit that's allowed that to happen | 16:56 |
superdupersheep | J1m: exactly. | 16:56 |
kosh | I have had zopes use more memory than that though without any issues | 16:56 |
kosh | have had them up to 7G without slowdowns | 16:56 |
superdupersheep | kosh: i'm very pleased for you. | 16:56 |
superdupersheep | :) | 16:56 |
J1m | We use 64 bit machines but our processes are all < 3G because it's too expensive for us to add more than 4G/core. | 16:57 |
kosh | just that it seems strange that it would slow down at that point | 16:57 |
*** zagy has quit IRC | 16:57 | |
kosh | I mostly use lots of rackspace cloudservers now and just get more machines | 16:57 |
superdupersheep | we're a big deployment, in a UK university | 16:58 |
kosh | but on a test system I have 8G of ram | 16:58 |
J1m | gotta go. Good luck. | 16:58 |
superdupersheep | our Data.fs is 13G, our existing 4 32bit servers have 16GB of RAM each | 16:58 |
superdupersheep | they use pretty much all of that ALL the time | 16:58 |
J1m | But if you have 8 processes per machine and only 16G, how do you get >3G. | 16:59 |
J1m | ? | 16:59 |
superdupersheep | we don't on the 32bit machines | 16:59 |
kosh | how many cpus do they have? how many zopes is each one running? how many threads does each zope have? | 16:59 |
J1m | oh | 16:59 |
J1m | sorry | 16:59 |
* J1m goes away now | 16:59 | |
superdupersheep | :) | 16:59 |
kosh | you could still try setting the zeo cache to 0 anyways on a test and then hit the hell out of it and see if that helps | 17:00 |
superdupersheep | 8 zopes per machine, 8 cores, dunno how many threads offhand | 17:00 |
superdupersheep | new machine is set up almost exactly the same, except as i said, for a 64 bit python build | 17:00 |
kosh | well if you have 8 zopes on a system you would normally run 1-2 zserver-threads on each one | 17:01 |
kosh | if you use the default of 4 that can actually slow things down depending on how you load balance | 17:02 |
kosh | also it chews up a lot more ram that could be used to make the system faster | 17:02 |
superdupersheep | we've got 4 on each, i just checked | 17:03 |
superdupersheep | they aren't the problem | 17:03 |
superdupersheep | they run perfectly fine - bit slow on occasion but we've got caching to deal with that | 17:03 |
kosh | they cause memory to be used up much faster but can't really speed things up since the threads can't truly run concurrently | 17:03 |
kosh | and your load balancers should be distributing the load anyways | 17:03 |
superdupersheep | i've not seen the 32bit machines exhibit massive slowness like the 64 bit machine does | 17:03 |
kosh | it is very strange | 17:04 |
kosh | especially if once it gets slow it stays slow | 17:05 |
kosh | that is why I am suggesting various ways that you normally speed the system up in the first place | 17:05 |
mejo | does a simple solution exist to log the access times of zope queries? | 17:08 |
kosh | however zope itself has worked with 64bit since python supported 64bit, which is a long time now | 17:08 |
superdupersheep | seriously, that's not the problem | 17:08 |
mejo | i mean the time zope (2.10) takes to return the requested page? | 17:09 |
superdupersheep | the system is ridiculously fast *when it works* | 17:09 |
CIA-107 | janwijbrand * r121742 megrok.chameleon/ (src/megrok/chameleon/README.txt README.txt): fixup REST syntax and small updates now that do again depend on z3c.pt | 17:09 |
CIA-107 | janwijbrand * r121743 megrok.chameleon/ (CHANGES.txt setup.py): Preparing release 1.0rc1 | 17:09 |
CIA-107 | janwijbrand * r121744 /megrok.chameleon/tags/1.0rc1: Tagging 1.0rc1 | 17:09 |
CIA-107 | janwijbrand * r121745 megrok.chameleon/ (CHANGES.txt setup.py): Back to development: 1.0rc2 | 17:09 |
superdupersheep | then boom, slowdown, dead | 17:09 |
kosh | mejo: not that I can think of | 17:09 |
kosh | superdupersheep: so something has to cause it to suddenly go dead like that and it might not even be zope, maybe silva has some very strange bug | 17:10 |
kosh | on a test system though I would still kill the zeo cache and change the zserver threads to 2 and run a test with that | 17:11 |
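The two knobs mentioned here live in zope.conf; a fragment along these lines is what that test would look like, with the server address and cache numbers as placeholders rather than recommendations:

```
# zope.conf fragment (sketch; address and numbers are placeholders)

# with several instances per machine, 1-2 ZServer threads each
zserver-threads 2

<zodb_db main>
    mount-point /
    # in-memory object cache per connection (number of objects) - left alone
    cache-size 120000
    <zeoclient>
        server 127.0.0.1:8100
        # ZEO client cache; 0 for the "no zeo cache" test
        cache-size 0
    </zeoclient>
</zodb_db>
```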
superdupersheep | god knows. the fact is this kind of crap is almost impossible to debug in any logical way, and you end up playing "change the components one by one until it stops" without any real understanding of what is *actually* happening | 17:11 |
kosh | it is not something I have EVER run into | 17:13 |
kosh | when my systems slowed down or stalled out it was specific pages that caused it and once the pages were isolated it was fairly easy to fix | 17:13 |
kosh | usually pages that loaded FAR too many objects or pages that waited for some external connection of some kind | 17:13 |
kosh | like if a page has something in it for zope to load some remote resource, if that resource takes too long or times out you can stall things out really rapidly | 17:14 |
kosh | however those issues I have run into on just about everything and none of them seemed to make debugging those faster | 17:14 |
kosh | there are some nice tools I have used that will check zope memory usage after every page load and I would just run wget over an entire site with that running and immediately isolate pages that had too many loads | 17:15 |
kosh | having the system get slow and stay slow is just very strange to ever have happen | 17:16 |
kosh | especially at 3GB, that is beyond strange | 17:16 |
superdupersheep | urgh | 17:17 |
*** supton has joined #zope | 17:17 | |
superdupersheep | times like these i hate being a sysadmin | 17:17 |
*** Wu has joined #zope | 17:20 | |
kosh | just recently I actually had to debug postgres locking up | 17:21 |
kosh | no errors in the log | 17:21 |
kosh | but it would not accept any connections | 17:21 |
kosh | it would run for a few hours and then just stop | 17:21 |
kosh | http://code.google.com/p/zope-memory-readings/ I have also used that and it worked without issues for me | 17:24 |
superdupersheep | i've looked at that, but to be honest with you, i don't see memory spikes | 17:25 |
superdupersheep | i see a smooth increase in memory usage to a point (at which point everything is still fine), then at some unspecified time, SLOWNESS AND DEATH | 17:25 |
superdupersheep | it's stable for a long time before it falls apart | 17:25 |
kosh | on a test I would probably try to crawl the entire site and see if it is a specific url that kills it after that point | 17:26 |
kosh | it just makes no real sense to suddenly get slow at a certain memory usage | 17:26 |
*** lcarvalho has joined #zope | 17:27 | |
*** supton has quit IRC | 17:28 | |
*** ViicT has joined #zope | 17:28 | |
lcarvalho | Does anyone have any documentation about REST in Zope2.12 ? | 17:28 |
lcarvalho | Actually I am trying to send a PUT method request to the Zope server with a PATH_INFO which does not exist. | 17:29 |
lcarvalho | such request is handled by WebDAV.NullResource | 17:30 |
lcarvalho | I would like to know if there is any extension or anything which could provide a REST architecture over WebDAV in Zope2.12 | 17:30 |
*** slackrunner has joined #zope | 17:31 | |
lcarvalho | I do not want to publish objects for REST methods. I want to handle paths which do not exist instead of delegating them to WebDAV. | 17:31 |
kosh | no idea about that one | 17:32 |
betabug | hmmm, last I heard, zope did webdav | 17:36 |
betabug | but I didn't understand the question, to be honest | 17:36 |
kosh | see you weirdos later | 17:50 |
*** sm has joined #zope | 17:53 | |
lcarvalho | betabug: I will explain again | 17:56 |
lcarvalho | imagine that you have a zope server running at localhost:8080 | 17:57 |
lcarvalho | so if you sent a PUT request to such server | 17:57 |
lcarvalho | with a PATH_INFO pointing to a resource which does not exist in your Zope instance | 17:57 |
lcarvalho | it will be handled by WebDAV | 17:57 |
lcarvalho | correct? | 17:57 |
lcarvalho | I would like to be able to add a way to handle such REST requests | 17:58 |
lcarvalho | it can be PUT DELETE GET POST basically | 17:58 |
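One hook worth mentioning here: when a PUT targets a path that does not exist, webdav.NullResource asks the parent container for a `PUT_factory` before falling back to its defaults, so overriding that on a folder class is one way to intercept such requests. A minimal sketch with made-up class names; note it only covers the PUT case from the list above:

```python
# Sketch of a PUT_factory hook (``MyFolder`` is a made-up class; the hook itself
# is what webdav.NullResource consults when a PUT targets a not-yet-existing path).
from OFS.Folder import Folder
from OFS.Image import File

class MyFolder(Folder):

    def PUT_factory(self, name, typ, body):
        # name: last path element of the missing resource
        # typ:  Content-Type of the request
        # body: the uploaded body
        if typ and typ.startswith('text/'):
            # OFS File objects know how to store a PUT body themselves
            return File(name, '', body)
        # returning None falls back to the default NullResource behaviour
        return None
```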
*** sm__ has joined #zope | 18:01 | |
*** sm has quit IRC | 18:04 | |
*** sm__ is now known as sm | 18:04 | |
mejo | betabug: we didn't find the reason for the freezes yet, but the DeadLockDebugger is a great help nevertheless. We'll keep an eye on requests/queries which put the instance under heavy load and hope to find the reasons soon. Thanks a lot for your help. | 18:09 |
betabug | mejo: no problem, glad to be of help! | 18:09 |
betabug | with the client where I had a similar problem, I seem to have resolved it just now :-) | 18:10 |
mejo | great to hear! | 18:12 |
mejo | bye and take care | 18:13 |
betabug | yeah, really glad I could nail that one down | 18:14 |
betabug | hf there | 18:14 |
*** __mac__ has joined #zope | 18:15 | |
*** pjfd2 has joined #zope | 18:15 | |
*** pjfd4 has quit IRC | 18:17 | |
*** digitalmortician has quit IRC | 18:17 | |
*** agroszer has quit IRC | 18:27 | |
*** mejo has quit IRC | 18:31 | |
*** sylvain has quit IRC | 18:32 | |
*** allisterb has joined #zope | 18:34 | |
*** pjfd4 has joined #zope | 18:37 | |
*** pjfd2 has quit IRC | 18:40 | |
*** tiwula has joined #zope | 19:02 | |
*** fredvd is now known as fredvd|cooking | 19:07 | |
*** eperez has quit IRC | 19:14 | |
*** Spanktar has joined #zope | 19:17 | |
*** mitchell` is now known as mitchell`off | 19:20 | |
CIA-107 | yuppie * r121746 zopetoolkit/ (ztk-versions.cfg zopeapp-versions.cfg): - set svn:eol-style | 19:21 |
CIA-107 | yuppie * r121747 zopetoolkit/ (zopeapp.cfg zopeapp-sources.cfg ztk.cfg ztk-sources.cfg): - moved sources to separate files | 19:21 |
CIA-107 | yuppie * r121748 zopetoolkit/ztk-sources.cfg: - added 2 more sources | 19:21 |
CIA-107 | yuppie * r121749 Zope/sources.cfg: - synced sources.cfg with versions.cfg, using the new ztk-sources.cfg | 19:21 |
CIA-107 | yuppie * r121750 CMF/sources.cfg: - removed packages that are now part of Zope/trunk/sources.cfg | 19:21 |
CIA-107 | yuppie * r121751 Zope/sources.cfg: - made it easier to reuse sources.cfg | 19:21 |
CIA-107 | yuppie * r121752 CMF/sources.cfg: - added new zopeapp-sources.cfg | 19:21 |
CIA-107 | yuppie * r121753 CMF/buildout-zope213.cfg: - updated Zope2 versions | 19:21 |
CIA-107 | yuppie 2.2 * r121754 CMF/ (buildout-zope212.cfg buildout-zope213.cfg): - updated Zope2 versions | 19:21 |
*** lukasg|4tw has quit IRC | 19:24 | |
*** hever has quit IRC | 19:26 | |
*** hever has joined #zope | 19:31 | |
*** daMaestro has joined #zope | 19:36 | |
*** TresEquis has joined #zope | 19:36 | |
*** avoinea has left #zope | 19:47 | |
*** alexpilz has quit IRC | 20:21 | |
*** evilbungle has quit IRC | 20:22 | |
*** TomBlockley has quit IRC | 20:31 | |
superdupersheep | if anyone is still here from earlier, we think we solved our slow zope problem | 20:32 |
superdupersheep | and it turns out that i'm stupid, and had our ESX VM set up with 1 vCPU instead of 4. | 20:32 |
* superdupersheep hangs head in shame, backs slowly out of the room | 20:33 | |
*** superdupersheep has quit IRC | 20:33 | |
*** hever has quit IRC | 20:36 | |
*** Wu has quit IRC | 20:44 | |
*** davisagli has quit IRC | 21:01 | |
*** giampaolo has joined #zope | 21:02 | |
*** davisagli has joined #zope | 21:03 | |
*** evilbungle has joined #zope | 21:04 | |
*** tisto has quit IRC | 21:08 | |
CIA-107 | jim * r121755 /Sandbox/J1m/customdoctests/ (7 files in 2 dirs): Added spidermonkey-support test and readied for release. | 21:20 |
CIA-107 | jim * r121756 /zc.customdoctests: Support for alternate example languages in doctests | 21:20 |
CIA-107 | jim * r121757 / (zc.customdoctests/trunk Sandbox/J1m/customdoctests): Support for alternate example languages in doctests | 21:20 |
CIA-107 | jim * r121758 /zc.customdoctests/tags/0.1.0: tag | 21:20 |
CIA-107 | jim 0.1.0 * r121759 zc.customdoctests/setup.py: *** empty log message *** | 21:20 |
CIA-107 | jim * r121760 zc.customdoctests/src/zc/customdoctests/spidermonkey.txt: Fixed example. Sigh. | 21:20 |
CIA-107 | jim * r121761 zc.customdoctests/src/zc/customdoctests/spidermonkey.txt: doc fix. | 21:20 |
CIA-107 | jim 0.1.0 * r121762 zc.customdoctests/src/zc/customdoctests/spidermonkey.txt: *** empty log message *** | 21:20 |
*** __mac__ has quit IRC | 21:24 | |
*** webmaven has joined #zope | 21:25 | |
*** lcarvalho has quit IRC | 21:31 | |
*** zagy has joined #zope | 21:38 | |
*** sm has quit IRC | 21:46 | |
*** alecm has quit IRC | 21:47 | |
*** alecm has joined #zope | 21:50 | |
*** alecm has joined #zope | 21:50 | |
*** teix has left #zope | 22:03 | |
*** zagy1 has joined #zope | 22:10 | |
*** zagy has quit IRC | 22:10 | |
*** slackrunner has quit IRC | 22:46 | |
*** zagy1 has quit IRC | 22:52 | |
*** ViicT has quit IRC | 22:56 | |
*** fredvd|cooking has quit IRC | 23:12 | |
*** __mac__ has joined #zope | 23:18 | |
*** J1m has quit IRC | 23:25 | |
*** __mac__ has quit IRC | 23:27 | |
*** Arfrever has quit IRC | 23:42 | |
*** davisagli has quit IRC | 23:50 | |
*** davisagli has joined #zope | 23:51 | |
*** pjfd4 has quit IRC | 23:55 |
Generated by irclog2html.py 2.15.1 by Marius Gedminas - find it at mg.pov.lt!