SHA1Managed class has different results in 2.0 vs. 1.1??


Bob

We currently have an application running on .NET 1.1. It hashes certain
data using the System.Security.Cryptography.SHA1Managed class. It worked
fine until we upgraded the app to .NET 2.0. SHA1Managed in 2.0 hashes
to a different string output when the input is exactly the same. Why would
this be the case? I thought the SHA1 algorithm is the same regardless of
the actual implementation. Here's my source code, which compiles fine in
both 1.1 and 2.0:

public static string HashThis(string salt, string password)
{
    System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
    string saltedPassword = salt + password;
    byte[] saltByte = encoding.GetBytes(saltedPassword);
    SHA1CryptoServiceProvider sha =
        new System.Security.Cryptography.SHA1CryptoServiceProvider();
    sha.ComputeHash(saltByte);
    return encoding.GetString(sha.Hash);
}


Thanks a lot for any help.
Bob
 
All right, I figured out the problem right after I sent the question. It's an
ASCII encoding issue: ASCII encoding behaves differently in 2.0 and 1.1.
The problem isn't the hashing itself.
 
Bob said:
We currently have an application running on .NET 1.1. It hashes certain
data using the System.Security.Cryptography.SHA1Managed class. It worked
fine until we upgraded the app to .NET 2.0. SHA1Managed in 2.0 hashes
to a different string output when the input is exactly the same. Why would
this be the case? I thought the SHA1 algorithm is the same regardless of
the actual implementation. Here's my source code, which compiles fine in
both 1.1 and 2.0:

public static string HashThis(string salt, string password)
{
    System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
    string saltedPassword = salt + password;
    byte[] saltByte = encoding.GetBytes(saltedPassword);
    SHA1CryptoServiceProvider sha =
        new System.Security.Cryptography.SHA1CryptoServiceProvider();
    sha.ComputeHash(saltByte);
    return encoding.GetString(sha.Hash);
}

The problem is that your code is broken - it's converting from
arbitrary binary data to a string using an ASCII encoding. What do you
expect it to do when it comes across a byte outside the ASCII range
(i.e. anything over 127)?

Here's a program which demonstrates the problem:

using System;
using System.Text;

class Test
{
    static void Main()
    {
        byte[] data = new byte[] { 140 };
        string text = Encoding.ASCII.GetString(data);
        // 2.0 substitutes '?' for bytes outside the ASCII range, so this
        // prints 63; 1.1 effectively masked the byte down to 7 bits instead.
        Console.WriteLine((int)text[0]);
    }
}

Basically, you were relying on unspecified behaviour, and it's changed.
Now as to what you can do about that - the easiest thing would probably
be to emulate the previous behaviour. The simplest way of doing that is
something like:

static string OldBytesToAscii(byte[] data)
{
    char[] c = new char[data.Length];
    for (int i = 0; i < data.Length; i++)
    {
        // Mask each byte to 7 bits, which is what the 1.1 ASCII decoder
        // effectively did with values over 127
        c[i] = (char)(data[i] & 0x7f);
    }
    return new string(c);
}
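
Plugging that into your HashThis method is then a one-line change (just a
sketch; only the final line differs):

    return OldBytesToAscii(sha.Hash);   // instead of encoding.GetString(sha.Hash)

That should reproduce the 1.1 output and keep any hashes you've already
stored valid.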

A better solution going forward is to Base64-encode binary data when you
need it in a reliable text form.
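
As a rough sketch (the method name HashThisBase64 is just for illustration,
and switching to Base64 changes the stored hash format):

using System;
using System.Security.Cryptography;
using System.Text;

static string HashThisBase64(string salt, string password)
{
    byte[] salted = Encoding.ASCII.GetBytes(salt + password);
    SHA1 sha = new SHA1Managed();
    // Convert.ToBase64String round-trips every byte value, so the text
    // form is the same on 1.1 and 2.0.
    return Convert.ToBase64String(sha.ComputeHash(salted));
}

Note that the Base64 strings won't match hashes you've already stored with
the old code, so those would need to be regenerated.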
 