Basically I would do something like:

for each table (using GetOleDbSchemaTable to get all tables)
    read the columns for this table (using GetOleDbSchemaTable with
    restrictions to get columns only for this table)
    process these columns
next

You never have all of them in memory. You just process each table in
turn...
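A minimal VB.NET sketch of this per-table loop (assuming an open OleDbConnection named cn; the restriction array for the Tables rowset is {catalog, schema, name, type}, and for Columns it is {catalog, schema, table, column}):

```vbnet
' Sketch only: cn is assumed to be an open OleDbConnection.
' Get the list of user tables.
Dim tables As DataTable = cn.GetOleDbSchemaTable( _
    OleDbSchemaGuid.Tables, _
    New Object() {Nothing, Nothing, Nothing, "TABLE"})

For Each tableRow As DataRow In tables.Rows
    Dim tableName As String = CStr(tableRow("TABLE_NAME"))

    ' Fetch only this table's columns via the restriction array.
    Dim columns As DataTable = cn.GetOleDbSchemaTable( _
        OleDbSchemaGuid.Columns, _
        New Object() {Nothing, Nothing, tableName, Nothing})

    For Each columnRow As DataRow In columns.Rows
        ' Process one column, then let the small DataTable be collected.
        Console.WriteLine("{0}.{1}", tableName, columnRow("COLUMN_NAME"))
    Next
Next
```

Each iteration holds only one table's column rows in memory, which is the point of the loop above.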
--
Patrice
<[email protected]> wrote in message (e-mail address removed)...
Thank you very much all, I will reply to each one:

Patrice, Miha:
I know the filtering mechanism well, but it does not seem to help,
because for my purposes I DO have to read ALL the records.
What I wished was not to have to store all of them in memory,
just process and dismiss.
Your suggestion would turn out to be "the" solution if you could
provide a filtering mechanism that takes N rows at a time:
the first N, then the next N, and so on.
Are you able to do that for any DBMS, and without knowing a priori
what is stored in the records? (This condition is crucial, because
I am working against several kinds of DBMSs. For instance, SAP can
run on SQL Server, Oracle, ...)

Cor:
Yes, that one is fine, but it's another piece of information.
I am scanning all the schema GUIDs to get the complete DB structure.
-tom
Miha Markic [MVP C#] wrote:

Wow, that's big.
Well, I still suggest you do filtering and process a small number of
rows at a time.
As another solution, which Patrice already mentioned, execute the
proper SQL statements to query the database metadata and you will
have full control. Of course, it is database specific then.
--
Miha Markic [MVP C#, INETA Country Leader for Slovenia]
RightHand .NET consulting & development
www.rthand.com
Blog: http://cs.rthand.com/blogs/blog_with_righthand/
Dear Miha, Patrice, ... and all,

The advantage of having a mechanism similar to the DataReader is
obvious. Assume I am getting only the very first field of the COLUMNS
schema table. If I connect to SAP and run the GetOleDbSchemaTable
command, I get a DataTable with 1 field and over half a million rows.
I would like to get 1 value at a time to process it, instead of
holding over half a million values in memory at once, which I do not
need. Further, I usually need to take and process many more columns
at the same time, and the size of the DataTable becomes enormous.
Of course with small systems this is not a big problem. But try to
work with real-world databases...
-tom
Patrice wrote:

English is not my native language either. I should just have been
more explicit.
This is the second part then: use a custom replacement for your DBMS
with a DataReader (note that a DataReader still uses a buffer).
What I wanted to convey with the first part is that the problem looks
to me like you are getting all the data at once. With restrictions
(as suggested by Miha) you can get only the rows you are interested
in, using a limited amount of memory. Though you may want to check
that the restrictions are processed server side, it looks like quite
a good solution.
--
Patrice
<[email protected]> wrote in message (e-mail address removed)...
Hi Patrice. Thanks... Hmmm,
it seems I really have a problem with my English.
Sorry!
It's clear that, once one has a DataTable, one can read a row at a
time. What I was trying to explain is that I do not want to get the
whole DataTable, but just one record at a time.
This would be the same difference as between:
DataAdapter + Fill() [gets the whole table at once]
DataReader + Read() [gets 1 record at a time]
I would like to know how to implement something similar to the
DataReader + Read() scheme for OleDbSchemaGuid tables, and I want it
to work for any OleDb connection (independently of the DBMS).
I hope my goal is clearer now.
Thank you!
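The contrast described above can be sketched like this (a minimal illustration, assuming an open OleDbConnection cn and some query text in sql):

```vbnet
' DataAdapter + Fill(): buffers the whole result set in a DataTable.
Dim table As New DataTable()
Dim adapter As New OleDbDataAdapter(sql, cn)
adapter.Fill(table)   ' all rows now sit in memory

' DataReader + Read(): exposes one record at a time.
Using cmd As New OleDbCommand(sql, cn)
    Using reader As OleDbDataReader = cmd.ExecuteReader()
        While reader.Read()
            ' Process the current record here, then move on;
            ' only one row is materialized at a time.
        End While
    End Using
End Using
```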
Patrice wrote:

...And just loop through the rows using the DataTable.Rows collection
(it would be interesting to check whether the restrictions are
applied on the result or are processed server side for your DB).
Else, if you really want a replacement, you'll have to use specific
statements for this (for example, for SQL Server you could use the
INFORMATION_SCHEMA.COLUMNS view or the appropriate stored
procedures).
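A minimal sketch of the SQL Server variant, streaming the column metadata through a DataReader so that only one row is held at a time (INFORMATION_SCHEMA.COLUMNS is SQL Server specific; other DBMSs need their own catalog queries; cn is assumed to be an open OleDbConnection):

```vbnet
Dim sql As String = _
    "SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE " & _
    "FROM INFORMATION_SCHEMA.COLUMNS ORDER BY TABLE_NAME"

Using cmd As New OleDbCommand(sql, cn)
    Using reader As OleDbDataReader = cmd.ExecuteReader()
        While reader.Read()
            ' One metadata row at a time, never the whole table.
            Console.WriteLine("{0}.{1} : {2}", _
                reader.GetString(0), reader.GetString(1), reader.GetString(2))
        End While
    End Using
End Using
```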
--
Patrice
<[email protected]> wrote in message (e-mail address removed)...
Of course Miha, but I was talking about processing rows (not fields).
It's clear that one can select columns by passing an argument to
GetOleDbSchemaTable.
My question was a different one: I want a programmatic substitute for
GetOleDbSchemaTable that allows me to consider 1 record at a time.
Thank you for your reply.
-Tom
Miha Markic [MVP C#] wrote:

You can pass an array of filter values as the last parameter.
--
Miha Markic [MVP C#, INETA Country Leader for Slovenia]
RightHand .NET consulting & development
www.rthand.com
Blog: http://cs.rthand.com/blogs/blog_with_righthand/
Hi,
If we have an OleDbConnection, it is possible to retrieve a DataTable
containing information about the database fields using the following
statement (watch out for line breaking):

OleDbConnection.GetOleDbSchemaTable(System.Data.OleDb.OleDbSchemaGuid.Columns,
New Object() {})

This, however, returns a whole DataTable, which in some cases can be
very large. For instance, if you do it with SAP you receive over half
a million rows.
My question:
I would like to do the same operation programmatically, not using the
statement GetOleDbSchemaTable, possibly taking 1 record at a time,
similarly to the DataReader mechanism.
Can anyone suggest how to achieve that?
-Tom