JCardinal
Hello, I've run into something strange. I have the following SQL parameter
(C#):
cmd.Parameters.Add(
    new System.Data.SqlClient.SqlParameter("@_rate", System.Data.SqlDbType.Float));
This is for a simple stored procedure that updates a float value in a SQL
Server 2000 (MSDE) table.
The following line of code:
cmd.Parameters["@_rate"].Value=1.33F;
Results in 1.33000004291534 being stored in the record.
It's obviously a binary rounding issue, but I can't seem to pin it down; I honestly don't think it should be doing that.
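In case the surrounding code matters, here's a stripped-down, self-contained version of what I'm doing. The connection string and the stored procedure name (usp_UpdateRate) are placeholders for this post; the parameter definition and the value assignment are exactly as above, and the widening check at the end is just me comparing the digits outside of SQL:

using System;
using System.Data;
using System.Data.SqlClient;

class RateRepro
{
    static void Main()
    {
        // NOTE: connection string and stored procedure name are made up for this post.
        using (SqlConnection conn = new SqlConnection("Server=(local);Database=MyDb;Trusted_Connection=yes;"))
        using (SqlCommand cmd = new SqlCommand("usp_UpdateRate", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;

            // Same parameter definition as above: SqlDbType.Float is SQL Server's
            // 8-byte float, which maps to a .NET double rather than a .NET float (single).
            cmd.Parameters.Add(new SqlParameter("@_rate", SqlDbType.Float));

            // Same assignment as above; this is the value that ends up stored
            // as 1.33000004291534 in the table.
            cmd.Parameters["@_rate"].Value = 1.33F;

            // Widening the same single-precision literal to double in plain C# prints
            // very similar digits, which makes me think the rounding happens on the
            // client side rather than in SQL Server itself.
            double widened = 1.33F;
            Console.WriteLine(widened.ToString("R")); // prints 1.3300000429153442

            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}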
Is it my parameter definition or...?