mikke63
Newbie
Joined: 01-Sep-2011
Location: Norway
Posts: 19
Topic: Bad CRC32 in GZIP stream
Posted: 27-Mar-2012 at 5:20am
Sometimes (when fetching larger amounts of data) I receive this exception:

IdeaBlade.EntityModel.EntityServerException: Bad CRC32 in GZIP stream. (actual(98D6417E)!=expected(7989FED6)) ---> Ionic.Zlib.ZlibException: Bad CRC32 in GZIP stream. (actual(98D6417E)!=expected(7989FED6))
   at IdeaBlade.EntityModel.RemoteEntityServerProxyBase.CheckConnection(Exception pException)
   at IdeaBlade.EntityModel.EntityServerProxy.ExecFunc[T](Func`1 func, Boolean funcWillHandleException)
   at IdeaBlade.EntityModel.EntityServerProxy.ExecuteOnServer[T](Func`1 func, Boolean funcWillHandleException)
   at IdeaBlade.EntityModel.EntityServerProxy.Fetch(SessionBundle bundle, IEntityQuerySurrogate query)
   at IdeaBlade.EntityModel.EntityManager.AsyncFetchWorker(AsyncEventArgs asyncArgs)

The actual and expected CRC values change each time, even when fetching the exact same set of records. Also, a fetch of a large amount of data occasionally completes successfully, i.e. the exception appears to occur at random.

Where do I start fiddling to fix this problem? I'm using DevForce 6.1.6, and the application is Silverlight 4.
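For readers unfamiliar with the GZIP format: the CRC32 in the message is a checksum over the uncompressed data, stored in the stream's 8-byte trailer, and "actual != expected" means the decompressed bytes no longer agree with the checksum the sender wrote. A minimal Python sketch (not DevForce code, just an illustration of the mechanism) showing how a corrupted payload produces exactly this kind of error:

```python
import gzip
import struct
import zlib

payload = b"example entity data " * 1000
blob = gzip.compress(payload)

# A gzip stream ends with an 8-byte trailer: the CRC32 of the
# uncompressed data, then the uncompressed length (mod 2**32).
crc, size = struct.unpack("<II", blob[-8:])
assert crc == zlib.crc32(payload)

# Overwrite the stored CRC so the recomputed checksum can no longer
# match it; the receiver then sees "actual != expected", just like
# the Ionic.Zlib message in the stack trace above.
corrupted = blob[:-8] + struct.pack("<II", crc ^ 0xFF, size)
try:
    gzip.decompress(corrupted)
except OSError as exc:  # gzip.BadGzipFile on Python 3.8+
    print(exc)
```

The same end-of-stream check is what Ionic.Zlib performs, which is why any corruption of the data in transit surfaces as a CRC mismatch rather than a more descriptive error.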
DenisK
IdeaBlade
Joined: 25-Aug-2010
Posts: 715
Posted: 27-Mar-2012 at 5:26pm
Hi mikke63,
Unfortunately, this is the first time we've seen this exception, so we don't have specific advice on where to look. The underlying problem is likely data loss during large data transmissions, which would eventually surface as the mismatched CRC32 check.
I agree that DevForce should handle this more gracefully.
How much data are you fetching when this happens?
And is there a pattern to the data types you fetch? For example, does it happen only with large binary data and not with other data types?
Any other info you can provide will be helpful as well.
stephenmcd1
DevForce MVP
Joined: 27-Oct-2009
Location: Los Angeles, CA
Posts: 166
Posted: 28-Mar-2012 at 4:31pm
We've seen this exception as well. It happens very rarely but I'd say that I see it at least a few times a week in normal development.
We try not to load too much data at once (mostly because we continually ran into StackOverflowExceptions during serialization - but we are on an older version from before you guys introduced the alternate serialization method). I know that in a few cases our app will load a relatively large amount of data, and I think (this is all from memory; I'll try to pay more attention next time I see it) that's when I've seen this CRC error.
In the worst case, we have 5 parallel InvokeServerMethod calls that each return between 300 KB and 2 MB of data, for a total of about 4.5 MB. (Note: these are the sizes Fiddler reports, which I believe are after gzip compression, in which case the actual data was likely much bigger.)
As for the type of data that gets loaded, I can't think of anything special. It's mostly just text and numeric fields.
If you'd like any more info from me, let me know.
DenisK
IdeaBlade
Joined: 25-Aug-2010
Posts: 715
Posted: 29-Mar-2012 at 4:15pm
Thanks for the info stephenmcd1. It's much appreciated.
I'd like to wait and see what mikke63 adds before deciding what more info I need.
mikke63
Newbie
Joined: 01-Sep-2011
Location: Norway
Posts: 19
Posted: 03-Apr-2012 at 1:55am
Hi guys, sorry for the delay. I've been playing around with the database and Fiddler for a few days.

The CRC error appears when fetching a project structure from SQL Server. A project in this context consists of metadata plus a number of data records containing binary data, stored in varbinary fields. For "normal-sized" projects this works well, both storing and fetching the data. When fetching a project, the complete structure is loaded in one single query.

Lately I've been testing with one specific project that is somewhat larger than "normal" in this context. It has data records with binary fields of up to 4 MB each, and with all the data records plus the metadata, the project adds up to 140 MB in the database. Using Fiddler I can see that this is returned in an HTTP response of about 40 MB.

I haven't found a defined size limit where the fetch starts to fail. Also, for this specific project it doesn't fail consistently: sometimes the project loads fine a number of times, and then it raises the CRC error on the next try. Fiddler also shows that the 40 MB HTTP response is returned in full even when the CRC error is raised. This leads me to assume that truncation isn't the problem, but rather corruption of the data during sending, transmission, or receiving.

I'll continue testing to see if I can spot anything that correlates with when the fetch succeeds or fails, for instance other traffic on the server. Please let me know if you'd like me to collect other info using Fiddler or other tools.
DenisK
IdeaBlade
Joined: 25-Aug-2010
Posts: 715
Posted: 04-Apr-2012 at 3:30pm
Hi mikke63 and stephenmcd1,
Thank you for the info. If you already have an isolated repro solution, that would help. If not, perhaps you could send me an edmx containing only the entity types involved. We'll try to do more testing ourselves here. You can PM me.
stevef
Newbie
Joined: 15-Jul-2011
Location: NY
Posts: 9
Posted: 18-Apr-2012 at 5:30pm
I too am experiencing this (on 6.1.6), but only occasionally. In fact, it just happened in one session (using IE 9). And once it happened several times in a row, yet doing the same operation on another machine worked just fine. It occurred in code that reads entities and related entities in a parallel coroutine. All the data is made up of strings, ints, and doubles; no varbinaries.
MrTouya
Newbie
Joined: 24-Sep-2010
Location: Long Island, NY
Posts: 7
Posted: 30-Apr-2012 at 9:19am
Hi guys, I am experiencing the exact same problem, except that it is consistent. It looks like it has something to do with the amount of data being sent over the wire, and we can replicate it every time. I am just populating a grid with standard data: if I set my row limit to 50K it works fine; if I double it, I get that error each and every time. Have you been able to pinpoint the issue yet? Thanks, Stephane
mikke63
Newbie
Joined: 01-Sep-2011
Location: Norway
Posts: 19
Posted: 03-May-2012 at 6:42am
Hi, sorry for being out of the loop for a while. We have reworked our code to work around the problem.

As mentioned previously, we saw the "gzip" exception only when fetching large project trees containing many binary data columns of up to 4 MB each. Originally we fetched the complete project tree, including the binary nodes, in one bulk query. That was easy and simple, and ensured we had all the data at once.

What we do now is exclude the binary data rows from the original fetch, dramatically reducing the total data size, and then fetch each binary data row on demand when the GUI requests it. We had to rewrite some of the (Silverlight) code to allow for async fetches, but the overall experience is good. Since this rewrite we haven't seen the "gzip" exception at all.

We haven't done any systematic or conclusive tests, but overall it appears to us that it is the total size of the query result that causes the error.

Mikael
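The split-fetch workaround described above is a standard lazy-loading pattern. A language-neutral sketch of the idea in Python (the names ProjectNode, load_project, and fetch_blob are invented for illustration; the real implementation would be DevForce queries in C#): the initial query returns only lightweight metadata, and each large blob is fetched and cached on first access.

```python
# Hypothetical sketch of the on-demand fetch pattern; none of these
# names are DevForce APIs.

class ProjectNode:
    """Metadata for one project row, with its binary payload loaded lazily."""

    def __init__(self, node_id, name, fetch_blob):
        self.node_id = node_id
        self.name = name
        self._fetch_blob = fetch_blob  # callable: node_id -> bytes
        self._blob = None

    @property
    def blob(self):
        # Fetch the large varbinary payload only on first access, then
        # cache it, so the initial project query stays small.
        if self._blob is None:
            self._blob = self._fetch_blob(self.node_id)
        return self._blob


def load_project(metadata_rows, fetch_blob):
    # metadata_rows deliberately excludes the binary columns.
    return [ProjectNode(r["id"], r["name"], fetch_blob) for r in metadata_rows]
```

The trade-off is more round trips in exchange for keeping any single response, and any single decompression buffer, small.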
stephenmcd1
DevForce MVP
Joined: 27-Oct-2009
Location: Los Angeles, CA
Posts: 166
Posted: 05-Mar-2013 at 11:10am
This is an old thread but I wanted to update it with some information that I recently found when debugging this. We started getting this exception a lot and so I was tasked with tracking it down. I found one case where I was easily able to reproduce it - so I broke into the debugger to try to figure out what was going on.
One thing to note: the exception we get is slightly different from the one in the original post. We get:
Ionic.Zlib.ZlibException: Bad CRC32 in GZIP trailer. (actual(7185B222)!=expected(6F8279F2))
instead of
Ionic.Zlib.ZlibException: Bad CRC32 in GZIP stream. (actual(98D6417E)!=expected(7989FED6))
I'm not sure whether that's just because the message has changed a bit in the last year, or whether the test case I'm using manifests with a slightly different message. Or maybe they are two completely unrelated errors.
The short answer is that, in our case, this was always an OutOfMemoryException that got incorrectly and misleadingly reported as the bad CRC error. It's possible that there are other causes of the CRC error, but in our app I'm very confident that the many times we saw it were all OOM.
Now for the long description, for those who are curious. The order of events goes something like this:
- The GZipMessageEncoderFactory is busy decompressing a big web response as the result of a query (ReadMessage() is calling DecompressBuffer()).
- While decompressing the data into a MemoryStream, we reach a point where the MemoryStream needs to be resized. The resize fails with an OutOfMemoryException from MemoryStream.EnsureCapacity(); presumably the runtime couldn't find a contiguous block of memory big enough for the large byte array.
- Since the code in DecompressBuffer executes inside a using() statement, the Dispose method of the GZipStream gets called (even though we just ran out of memory - not a good time for more code to be running :-( ).
- GZipStream.Dispose() closes its base stream, which is a ZlibBaseStream.
- ZlibBaseStream.Close() calls ZlibBaseStream.finish().
- ZlibBaseStream.finish() computes the CRC values, finds that they don't match, and throws the all-too-familiar 'Bad CRC32' error. At this point, that makes a certain amount of sense: the stream never received all the data, so of course the CRCs don't match. But since the data didn't arrive because we ran out of memory for our buffers, the error is misleading.
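The effect described in the last step can be reproduced outside .NET: if decompression stops before the stream's trailer is reached (simulated here by truncating the input rather than by an OutOfMemoryException), the reported error describes the stream/checksum symptom, not the real cause. A small Python illustration:

```python
import gzip

payload = b"query result row " * 5000
blob = gzip.compress(payload)

# Cut the compressed stream off partway, so decompression can never
# reach the CRC32/length trailer at the end of the stream.
truncated = blob[: len(blob) // 2]

try:
    gzip.decompress(truncated)
except EOFError as exc:
    # The error describes the symptom (the stream ended early), not
    # why the data stopped arriving; Ionic.Zlib's CRC error in the
    # scenario above is misleading in the same way.
    print(exc)
```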
Hopefully that information helps somebody else. It was certainly driving us crazy for a while. The downside of this is that we still have an OutOfMemory condition to deal with....but that is a whole other issue.
stevef
Newbie
Joined: 15-Jul-2011
Location: NY
Posts: 9
Posted: 05-Mar-2013 at 5:08pm
Stephen, that's very useful info, thanks! How are you planning on dealing with the OOM condition?
stephenmcd1
DevForce MVP
Joined: 27-Oct-2009
Location: Los Angeles, CA
Posts: 166
Posted: 25-Mar-2013 at 9:57am
As it stands now, we don't really have a good solution for OOM issues. We're always trying to cut down on how much data we need to bring back at one time. We've also run into high memory load when making a large number of queries at the same time; in that case, it seems that spinning up so many new threads to service all the requests in a short period made things worse, and then we'd be very likely to run into the GZIP errors.
I wish I had a better answer...
GeorgeB
Groupie
Joined: 03-May-2010
Posts: 66
Posted: 01-May-2013 at 5:46am
Hi
We're also starting to see OOM issues, as well as the "Bad CRC32 in GZIP trailer" error.
I had a random freeze happening in a Silverlight app, so I added an error logger, and it reports these same issues.
Any pointers to alleviate this would help. I'm certainly not pulling a lot of data, just pulling it very often.
Kr
George
kimj
IdeaBlade
Joined: 09-May-2007
Posts: 1391
Posted: 01-May-2013 at 7:58am
We haven't made any progress in fixing this within the product.
There are a couple of different things you can try though:
1) Remove the GZipMessageEncoding from the CustomBinding and use either standard binary or text message encoding, and use IIS Compression for outbound messages (such as query results).
2) Swap out the GZipMessageEncoding for a custom encoder. We can make the source for our encoder available, but it's based on an SDK sample from Microsoft published a few years ago. With a custom encoder you can choose your own compression library.
Personally, if I were experiencing this problem regularly in production, I would take a look at the first option.
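For the first option, the IIS side can be as simple as enabling dynamic compression in web.config. A minimal sketch (assuming IIS 7+ with the dynamic compression module installed; the application/soap+msbin1 MIME type is what WCF binary message encoding typically emits, but verify it against your own bindings):

```xml
<system.webServer>
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  <httpCompression>
    <dynamicTypes>
      <!-- WCF binary-encoded messages; adjust the types for your bindings -->
      <add mimeType="application/soap+msbin1" enabled="true" />
    </dynamicTypes>
  </httpCompression>
</system.webServer>
```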
GeorgeB
Groupie
Joined: 03-May-2010
Posts: 66
Posted: 01-May-2013 at 8:48am
Hi Kim
Would it resolve the OOM as well?
Kr
George
kimj
IdeaBlade
Joined: 09-May-2007
Posts: 1391
Posted: 01-May-2013 at 10:29am
George, it's hard to say. If the extra memory allocations done by the GZipMessageEncoder are at fault, then a standard message encoder may help.