Guest
Hello everyone:
I am looking for everyone's thoughts on moving large amounts of data (actually, not
very large, but large enough that I'm throwing exceptions using the default
configurations).
We're doing a proof-of-concept on WCF in which we have a Windows Forms client
and a server. Our server is a middle tier that interfaces with our SQL Server
2005 database server.
Using the netTcpBinding (with the default config ... no special adjustments to
buffer size, buffer pool size, etc.), we invoke a call to our server, which
runs a stored procedure and returns the query result. At that point we take
the rows in the result set, "package" them into a hashtable, and return the
hashtable to the calling client.
Our original exception was a time-out exception, but with some
experimentation we've learned that wasn't the real problem ... although it
was being reported that way. It turns out it was the amount of data.
The query should return ~11,000 records from the database. From our
experimentation we've noticed we can only return 95 of the rows before we
throw an "exceeded buffer size" exception. Using the default values in our
app.config file, that size is 65,536 bytes.
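For reference, here's the kind of app.config adjustment I mean — a sketch only, with a hypothetical binding name; the 65,536 default corresponds to maxReceivedMessageSize (and maxBufferSize for buffered transfers), and the reader quotas may also need raising for large payloads:

```xml
<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <!-- "largeMessageTcp" is a made-up name; reference it from the
           endpoint's bindingConfiguration attribute -->
      <binding name="largeMessageTcp"
               maxReceivedMessageSize="10485760"
               maxBufferSize="10485760">
        <!-- raise the XML reader quotas too, or large arrays/strings
             in the serialized payload will still be rejected -->
        <readerQuotas maxArrayLength="10485760"
                      maxStringContentLength="10485760" />
      </binding>
    </netTcpBinding>
  </bindings>
</system.serviceModel>
```

The 10 MB values above are arbitrary; the point is that both the client and service configs have to agree on the receiving side's limits.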
Not that moving 11,000 records is smart, but being limited to only 64 KB in a
communication seems overly restrictive. We can change the value from the
default, but first I wanted to ask what others are doing to work with larger
amounts of data in WCF.
Are you simply "turning up" the buffer size? Using some kind of paging
technique? Some other strategy? I'm having a tough time finding answers on
this.
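One other option I've seen mentioned (again, just a sketch with a hypothetical binding name) is switching the binding's transfer mode to streamed, so the message is never buffered whole on either side:

```xml
<netTcpBinding>
  <!-- streamed transfer avoids buffering the entire message in memory;
       note the operation contract then has to expose a Stream (or Message)
       parameter/return value rather than an arbitrary serialized type -->
  <binding name="streamedTcp"
           transferMode="Streamed"
           maxReceivedMessageSize="67108864" />
</netTcpBinding>
```

That wouldn't work with the hashtable-return approach as-is, which is partly why I'm asking what strategies people actually use.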
Greatly appreciate any and all comments on this,
Thanks