You say megabyte, I say mebibyte

  • Thread starter: Grinder
Paul said:
In CPUZ, if you go to "About", there is a Register Dump. It will
dump a file "cpuz.txt". In there, you'll see something like

Dump Module #1
0 1 2 3 4 5 6 7 8 9 A B C D E F
00 80 08 07 0D 0A 02 40 00 04 50 60 00 82 08 00 01
10 0E 04 04 01 02 20 C0 00 00 00 00 28 28 28 28 40
20 70 70 40 40 00 00 00 00 00 37 46 30 28 50 00 00
30 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 85 ...

Well, you won't see those particular values, because that is
a dump of my 512MB DDR stick, while yours is SDRAM.

JEDEC has documents to decode those numbers. This doc is
just a guess, as being the one for SDRAM. I used an outside
search engine, rather than messing with the JEDEC search option.
Byte 5 (decimal) and Byte 31 (decimal), as seen on page 20,
define the capacity of each bank. Byte 5 should say 0x02 hex,
meaning there are two banks (double sided). Byte 31 should
read 0x20 hex, to indicate 128MB per bank, for a total of
256MB.

http://www.jedec.org/download/search/4_01_02_05R12.PDF

Have a look at the dump table, and see what your byte 5
and byte 31 show for the module. (Note - the above table
is oriented in hex form, so byte 31 decimal is the second row
right-most byte, as in byte 0x1F, and has a value of 0x40.)

The stuff above 3F, I didn't copy, because it has things like
the serial number and manufacturer ID.

This is what I get:

Dump Module #1
0 1 2 3 4 5 6 7 8 9 A B C D E F
00 80 08 04 0C 0A 01 40 00 01 75 54 00 80 08 00 01
10 8F 04 04 01 01 00 0E 00 00 00 00 14 0F 14 2D 20
20 15 08 15 80 00 00 00 00 00 00 00 00 00 00 00 00
30 00 00 00 00 00 00 00 00 00 00 00 00 00 00 12 AD
...

I guess maybe there's the rub--byte 5 is 1, or single-sided.
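
For anyone who wants to check their own module, here is a minimal sketch in
Python (my own illustration, assuming the cpuz.txt layout shown above and
Paul's reading of the SPD bytes: byte 5 as the bank/side count, byte 31 as a
density bitmap where bit n stands for 4 MB x 2^n, so 0x20 -> 128 MB and
0x40 -> 256 MB per bank; the linked JEDEC document is the authoritative
decode):

# Decode module size from a CPU-Z style SPD dump (Paul's 512MB DDR stick).
DUMP = """\
00 80 08 07 0D 0A 02 40 00 04 50 60 00 82 08 00 01
10 0E 04 04 01 02 20 C0 00 00 00 00 28 28 28 28 40
20 70 70 40 40 00 00 00 00 00 37 46 30 28 50 00 00
30 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 85
"""

def spd_bytes(dump):
    """Flatten the dump rows into a list of byte values; the first
    cell of each row is the offset, not data."""
    data = []
    for line in dump.strip().splitlines():
        cells = line.split()
        data.extend(int(c, 16) for c in cells[1:])
    return data

def module_size_mb(data):
    banks = data[5]              # byte 5: number of banks (sides)
    density = data[31]           # byte 31 (0x1F): bank density bitmap
    per_bank_mb = sum(4 << n for n in range(8) if density & (1 << n))
    return banks * per_bank_mb

data = spd_bytes(DUMP)
print(hex(data[5]), hex(data[31]))   # 0x2 0x40
print(module_size_mb(data), "MB")    # 2 banks x 256 MB = 512 MB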
 
Nope, I couldn't do it... Sorry Kony I have to argue again!

In the computing field, a network line rated at 1 megabit refers to
1,000,000 bits. So 8 megabits = 8,000,000 bits. There are 8 bits in a byte
so 8 megabits = 1 megabyte. Therefore 1 megabyte = 8,000,000 bits or
1,000,000 bytes. This agrees with the definition of mega, which is 1
million, or 10 to the power 6. There is no scientific definition of mega
where it means 1,024,000 - this is wrong. This definition of mega is widely
used (and dare I say accepted) by some in computing. There are even websites
that state mega = 1,024,000, or 2 to the power 10,

I guess you meant 2^20 and 1,048,576
but it is wrong. Mega is a scientific, mathematical term meaning 10 to
the power 6.

Maybe in data communications, mega means 10^6.

But in some areas of computing - RAM/"main memory", I think - it
means EXACTLY 2^20 (1,048,576).
This is because, just as 2 decimal digits can refer to 10^2 numbers,
20 bits (binary digits) can refer to 2^20 numbers, that is 1,048,576
numbers, each number referring to a memory location.
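
To put a number on that (a quick Python check, not from the original post):

# With n digits in base b you can name b**n values, so 20 address bits
# name 2**20 distinct memory locations - the "binary" megabyte - while
# the SI mega is 10**6.
address_bits = 20
print(2**address_bits)           # 1048576
print(10**6)                     # 1000000
print(2**address_bits - 10**6)   # 48576 - the gap people argue over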

Also, in a previous thread, some oddball character, "Rod Speed",
kept calling 10^6 decimal and 2^20 binary.

I managed to get some sense out of him, and it resolves some of the
issues that are repeated here.

thread: Actual hard drive space?
newsgroup: alt.comp.hardware
link: http://groups.google.co.uk/group/al...6870c9d82b38918/e9c2d3c689bf725f?hl=en&lnk=st


Here is an extract of the discussion, where we reached some kind of
agreement.

I had argued that 2^20 is not binary (for obvious reasons), binary
being 1s and 0s. And Kony spewed the same things about bytes being
binary, to which I replied as you did, that that was rubbish - that
you can count anything, bytes or fish, in any number system. Bytes,
as a physical thing (or logically presented physical thing), certainly
store values in binary, but that is another matter. Nobody got
anywhere with Kony.

But I got somewhere with Rod Speed - a horrible character. I will
quote him, since nobody should have to wade through all his shit.


(around post 46 in that thread in google archive)
jameshanley39
Regarding the terms decimal prefix and binary prefix:
I just found this article that discusses the 2^x and 10^x prefixes.
Now I realise that binary prefix and decimal prefix are official
names.

Rod Speed
Yes they are, and I said that previously, tho not so explicitly.

jameshanley39
http://en.wikipedia.org/wiki/Binary_prefix
It seems to me that it's not that the prefix is in binary, but that
it's just officially named "binary prefix" because the prefix is
"based on" multiples of 2.

Rod Speed
Thats what saying a prefix is binary means.

jameshanley39
It talks about those that "abuse SI prefixes ... using them ... in
***a binary sense***".
What they have named/titled "decimal prefix" and "binary prefix"
(just) means "prefix for decimal multiples" and "prefix for binary
multiples" respectively.
http://www.iec.ch/zone/si/si_bytes.htm
So it doesn't mean it's in the binary or decimal number system.

Rod Speed
Yes, thats why I kept referring to PREFIXES.


Rod Speed said,
"
The decimal form is mostly whats used in computing, most
obviously with cpu speeds, comms speeds, etc etc etc .
....
Its only MEMORY that has an intrinsically binary organisation
where the binary form is in fact commonly used.
"
 
<snip - same quotes as in the previous post>

Rod Speed also gave the binary values: "mega" means 1024x1024=1,048,576,
"giga" means 1024x1024x1024=1,073,741,824, and "tera" means
1024x1024x1024x1024=1,099,511,627,776.

Rod Speed updates his list somewhere, to include "hard drive capacity":

"cpu speed, comms speed, hard drive capacity"

I don't plan on reading his badly presented points again, but I think
he was saying, regarding hard drive capacities, that Windows, by
reporting the 2^x number, reports it wrongly, and so hard drive
marketers, by using the 10^x format, are correct (which also gives
them a higher number, by the way, since it is a smaller unit than the
corresponding 2^y - one reason why many consider them to be doing so
for dishonest reasons).
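
To make the "smaller unit, bigger number" point concrete, a quick Python
sketch (the 200GB figure is just an example, not from the thread):

# An advertised "200 GB" drive is 200 * 10**9 bytes. Windows divides by
# 2**30 but still labels the result "GB", so the same drive shows up as
# a smaller number.
advertised_gb = 200                    # hypothetical drive, decimal giga
total_bytes = advertised_gb * 10**9
print(total_bytes / 10**9)             # 200.0 - the number on the box
print(round(total_bytes / 2**30, 1))   # 186.3 - the number Windows reports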
 
[series of strangely linked quotes all snipped]
I don't plan on reading his badly presented points again, but I think
he was saying, regarding hard drive capacities, that Windows, by
reporting the 2^x number, reports it wrongly, and so hard drive
marketers, by using the 10^x format, are correct (which also gives
them a higher number, by the way, since it is a smaller unit than the
corresponding 2^y - one reason why many consider them to be doing so
for dishonest reasons).

I'm sorry, I couldn't follow your last post - too bitsy!

In short, this argument will run and run for years and never be concluded
because certain companies and individuals refuse to use the clearly
specified industry standard terms/abbreviations correctly. The term MB means
MegaBytes. Mega is 10^6. Mega never means 2^20. That is Mebi.

Hard disk manufacturers *correctly* advertise their hard drives in MegaBytes
(millions of bytes). Windows reports a size in MebiBytes, but labels it
*incorrectly* as MegaBytes (MB). The reason Windows is incorrect is that the
number it displays is not MegaBytes (MB), but actually MebiBytes (MiB). There
is no problem with the 'binary' calculations using 2^10, 2^20 etc, the
problem is simply that when we use these numbers, we can't label the answer
as MB.

Here is a page on the National Institute of Standards and Technology website
that explains the terms clearly and simply...

http://physics.nist.gov/cuu/Units/binary.html
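
To illustrate the labelling point, a quick Python sketch (the byte count is
hypothetical, not from the NIST page):

# The same byte count reported two ways; the value Windows displays
# corresponds to the second line, so per NIST it should carry the MiB
# label rather than MB.
size_bytes = 256_000_000                # hypothetical partition size
print(f"{size_bytes / 10**6:.1f} MB")   # 256.0 MB  (mega = 10^6)
print(f"{size_bytes / 2**20:.1f} MiB")  # 244.1 MiB (mebi = 2^20)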
 
I'm sorry, I couldn't follow your last post - too bitsy!

In short, this argument will run and run for years and never be concluded
because certain companies and individuals refuse to use the clearly
specified industry standard terms/abbreviations correctly.
The term MB means
MegaBytes. Mega is 10^6.

Except in the computer industry, which already had defined
that term, probably long before you were born.
Mega never means 2^20. That is Mebi.

Except in the computer industry. Their use was standard and
far predates 1998. Quite simply, if NIST wanted to create a
new term, it would have to apply to the new value, not the
binary one, because in this industry the binary one is taken,
reserved, unavailable for them to *assign* some value to.

That's what a standard is, the thing that came first and
that everyone used. It doesn't change because someone else
says so, dozens of years later.
 
In short, this argument will run and run for years and never be concluded

I will not assume that you are illogical. If you were, then this would
indeed never be concluded.


because certain companies and individuals refuse to use the clearly
specified industry standard terms/abbreviations correctly. The term MB means
MegaBytes. Mega is 10^6. Mega never means 2^20. That is Mebi.

The term " Mebi ", and the rest are fairly new terms, introduced by a
standards authority, for obvious reasons. And In the hope that they
will become mainstream.

The term Megabyte, to the people that used it - programmers - when
dealing with RAM, referred to - exactly - 2^20.

(Likewise the rest, obviously, as you know - kilo, mega, blah blah
blah. I am just mentioning mega alone, usually, for very beneficial
brevity.)

That covers our possible points of current disagreement.

So here is my position on SI notation; I don't know if this tallies
with yours.

Of course, originally, the terms Mega, Giga, etc. existed before
anybody used the word Byte. I guess they existed to make the
mathematical "scientific notation" easier to verbalise and think
about (in words). And so Mega, Giga, Tera, Nano, Milli, and the rest
only ever meant 10^x, where x is one of a range of positive or
negative numbers that is a multiple of 3 (oh, and centi, i.e. 10^-2,
seems to be there too!).


Given the history: first the SI notation, then the computer scientists
using Megabyte, in the context of RAM, to mean 2^20.

And then the very belated and unsuccessful attempt by a standards
authority to try to "fix the problem". Programmers working with RAM
will not change a wonderful tradition of saying Megabyte and meaning
2^20. MebiByte is a bit of a tongue twister, like Megabyte gone wrong.
It's like Peter Piper Picked... In order to say it you have to shout
out the consonants. In fact, if you got Peter Piper picking MebiBytes,
you'd really keep kids occupied.

Given that - the history anyway - I would not say as you do that
Megabyte means 10^6 bytes and only 10^6 bytes.

In fact, I remember hearing of Mebibyte in an article in Scott
Mueller's fat book. Then it became some odd news article that people
scoffed at, and I recall lots of people saying they refused to use
those words. Standards authorities are there to help. They cannot
trample on an established tradition and expect it to work out. It has
not worked out for them. People/society make words. As long as
those people are techies, and not marketers, I do not have a problem
with it. I think it's good. Techies, when they used the term, knew
what they meant, and so do techies now. And there is a logical reason
why they would use one or the other. So these terms like Mebi and Kibi
and whatever have not gained such widespread acceptance - certainly
not enough for you to take the line you are taking.

It may be a bit of a nuisance for an end user trying to figure out how
much hard drive space he has - not out of technical interest; they do
not know binary either. They simply want to cry over a few gigabytes,
to make a fuss. Screw them. They are alien retards, and it's not
their game.
And if they want to know, fine: HDD manufacturers label it with Mega
being 10^x. Smaller unit, bigger number of them. Confusion cleared up.
As has been mentioned many times already. If they cannot use Google,
then screw them again. They can pay for you to come round, and they
do.

Assembly language programmers did not get confused either, talking of
Kilobytes of RAM and meaning 2^10, not 10^3 (and where there are
still assembly language programmers, I am sure they still do not get
confused).

<snip>
 
kony said:
Except in the computer industry, which already had defined
that term, probably long before you were born.

You are wrong. The computer industry didn't define Mega. The term Mega was
clearly defined to mean 10^6 before you, I, and even Babbage were born, so the
computer industry didn't define it, they just misused it!
 
You are wrong. The computer industry didn't define Mega. The term Mega was
clearly defined to mean 10^6 before you, I, and even Babbage were born, so the
computer industry didn't define it, they just misused it!

The industry defined the value of the terms megabyte, etc.
Only AFTER they had done so and used the terms as standards
for dozens of years did people like you come along and think
the _STANDARD_ should change. That's the opposite of what a
standard is.

I agree the computer industry should not have taken the
terms mega, etc, because it is a misuse of them to have done
so in the first place - but they _did_ take these terms and
did use them to mean a specific value, to the point that it
_is_ a standard in the industry and has been for many many years.

If you or NIST wants to introduce a new term for a new value,
go right ahead and do so, but don't try to revalue the
existing standardized term.
 
The industry defined the value of the terms megabyte, etc.
Only AFTER they had done so and used the terms as standards
for dozens of years did people like you come along and think
the _STANDARD_ should change. That's the opposite of what a
standard is.

No, the _STANDARD_ was defined centuries ago. The computer industry simply
misused it for their own purposes. The term Mega was never adopted as a
standard for 2^20. Sure, the industry used it for ages, but it has never
been the _STANDARD_ interpretation of Mega.
I agree the computer industry should not have taken the
terms mega, etc, because it is a misuse of them to have done
so in the first place - but they _did_ take these terms and
did use them to mean a specific value, to the point that it
_is_ a standard in the industry and has been for many many years.

No, it is not a standard - it is a misuse of the standard term and has been
for years.
If you or NIST wants to introduce a new term for a new value,
go right ahead and do so, but don't try to revalue the
existing standardized term.

The term MiB for the value 2^20 has taken years to achieve recognition, but
it is now clearly defined, and the interim solution of misusing MB is now
superseded.
 
No, the _STANDARD_ was defined centuries ago.

Nonsense, centuries ago there was not any industry using
"gigabyte". There was only ONE widespread use of the term,
in the industry that DEFINED it.

The computer industry simply
misused it for their own purposes.

We can fairly say that when they adopted the term, their
definition was unconventional, but by the vast billions and
billions of people that then used it, it became a standard.

You don't really understand what it takes to become a
standard, do you? It does not take waiting until NIST, or
you, decide to voice an opinion.
The term Mega was never adopted as a
standard for 2^20. Sure, the industry used it for ages, but it has never
been the _STANDARD_ interpretation of Mega.

WRONG.

It was adopted, and did become the standard.
Dozens of years ago. This is incredibly obvious, so much
so that you feel you need to argue against it.
 
kony said:
Nonsense, centuries ago there was not any industry using
"gigabyte". There was only ONE widespread use of the term,
in the industry that DEFINED it.

I'm talking about the SI standard prefix Giga, meaning 10^9. It was defined
a long time ago and can be used along with any unit to indicate 1,000,000,000
units. Hence GigaByte is 1,000,000,000 bytes. When the computer industry
started to use Giga to mean something other than 1,000,000,000 they made a
mistake. This mistake has never been adopted as a standard. The value
1,073,741,824 has never been a _standard_ meaning for Giga.

This thread should be continued down the pub over several beers...
 
I'm talking about the SI standard prefix Giga, meaning 10^9. It was defined
a long time ago and can be used along with any unit to indicate 1,000,000,000
units.

It can also be used in the computer industry along with a
binary unit to mean a different value. In retrospect, maybe
the industry shouldn't have used the term giga, etc, but
they did and that's what mattered.

If they had wanted to be wacky and call it dogbyte instead
of gigabyte, it does not matter that the term dog means
something else outside of their use of the term, it only
matters what THEY mean by it.

Hence GigaByte is 1,000,000,000 bytes. When the computer industry started
to use Giga to mean something other than 1,000,000,000 they made a mistake.
This mistake has never been adopted as a standard. The value 1,073,741,824
has never been a _standard_ meaning for Giga.

It might have been bad judgement, but nevertheless it was
and is a standardized computing term (even if you don't
accept it).

This thread should be continued down the pub over several beers...

Interrupt a perfectly good trip to the pub with this?
There are much better things to talk about while drinking.
 
The term " Mebi ", and the rest are fairly new terms, introduced by a
standards authority, for obvious reasons. And In the hope that they
will become mainstream. ....
MebiByte is a bit of a tongue twister, like Megabyte gone wrong.

Moreover, "Mebibyte", "Kibibyte", etc, sound (and look) _incredibly
stupid_. I'm sure as hell not going to use them, for that reason alone.

And if nobody uses them, they will go away.

This is why it's a good idea for standards authorities to stick to
standardizing common practice, instead of attempting to invent their
own. When they attempt to do the latter, they more often than not
simply screw it up royally, and then we're left with the annoyance of
having to ignore a "standard".

-Miles
 
Miles Bader said:
Moreover, "Mebibyte", "Kibibyte", etc, sound (and look) _incredibly
stupid_. I'm sure as hell not going to use them, for that reason alone.

I agree that they sound silly, but not using a standard term because you
don't like the sound/look of them is just immature!
And if nobody uses them, they will go away.

Unfortunately for you, there are plenty using them - Ubuntu for example.
They actually look pretty good on screen - GiB looks very 'technical'.
This is why it's a good idea for standards authorities to stick to
standardizing common practice, instead of attempting to invent their
own. When they attempt to do the latter, they more often than not
simply screw it up royally, and then we're left with the annoyance of
having to ignore a "standard".

But people ignoring the standard is the exact annoyance that we have now.
Mega is a standard, defined long ago as 10^6. The computer industry can't
change that, but instead chooses to bury its head in the sand and ignore
that long standing standard and use Mega to mean 2^20 instead! Standards
have to be defined by controlling bodies with authority, they don't just
become standard because lots of people use them! If we want Mega to mean
2^20, then we will have to have the standard changed and come up with a new
standard prefix for 10^6 for the rest of the world to use - I'll let you
tell them that they will need to change just about everything mathematical
ever written!
 
Moreover, "Mebibyte", "Kibibyte", etc, sound (and look) _incredibly
stupid_.  I'm sure as hell not going to use them, for that reason alone.

And if nobody uses them, they will go away.

This is why it's a good idea for standards authorities to stick to
standardizing common practice, instead of attempting to invent their
own. When they attempt to do the latter, they more often than not
simply screw it up royally, and then we're left with the annoyance of
having to ignore a "standard".


Exactly.

Of course, the Oxford English Dictionary does just that.

One finds that technical society only ignores standards in exceptional
cases, when they have very good reason to.

That is how language develops and evolves.

There is no historical precedent for standards "authorities"
redefining technical terms against the established ways society uses
them. They do not have that authority anyway.

There are certain expressions that sound really stupid, and so people
do not use them. That is not silly/immature. An old fellow once told
me he "had a fanny" (a fanny is, of course, a "woman hole"). What he
meant was that he had a funny thing, a strange problem, with a set of
(accountancy) accounts. That expression was news to me.

People obviously did not like using that expression, it died out,
language evolved. And of course female accountants have never
casually strolled out of their offices and declared that they have a
penis.

There are organisations that want to change the way we spell, to make
it more intuitive, easier for people that cannot spell. No doubt, with
such a plan, they have written "standards" covering all commonly used
words. But those standards are not going to be adopted.

For better or worse, we do not talk like Shakespeare anymore. Society
changed it. People at the Oxford Dictionary scramble to keep up to
date with society, and they write a good dictionary.

Let mathematical society use its technical terms the way it wants.

And computer techies use theirs.

Societies decide.

In the old days of assembly language programming, most people doing
things with computers, computer scientists and others, were
mathematicians or people with a strong mathematical streak. And they
still said Megabyte knowing they meant 2^20, and Kilobyte, 2^10, etc.
For obvious reasons.

<snip>

The people writing this standard are ants and mice compared to all the
old assembly language programmers. That is their place in society.
Any "body" can write a standard like this!! We have not heard much
more than a squeak out of these people. It is not to be compared to
a standard for a technology, written by those that invented the
technology. Those people are not ants. Neither are those that wrote
the original mathematical SI notation standard prefixes. And sensibly
these standards have been widely adopted. But not completely.

I hate to defend marketers, but the fact is that they are not
technical, they have to deal with end users who are not technical,
and they are trying to sell to end users and business managers
without making them run in fear. So they use terms in a non-technical,
liberal way, and they use terms that will catch on with other idiot
non-technical people, end users. Nobody has put marketers or end users
in prison - unfortunately. But they are often pushed away by technical
society. Often, unfortunately, they are not pushed away: when (bad)
technical people have to work with them, they humour them or they make
compromises.

If one wants to talk about standards bodies having "AUTHORITY":
Ethernet's inventors have the authority to define words related to
their technology, in their standards specifications.

The differences are obvious. They are completely different kinds of
standard.


One cannot compare ignoring this standard with ignoring that standard.
Not all standards are equal.

RFCs do not have authority, though they are widely accepted. They also
only say they are "guidelines". And also, they are not specifications
(unfortunately, in the case of RFCs, one reason why they call themselves
guidelines is because they know they are crap!! And as they themselves
say, the terms are only defined the way the RFC they are in "defines",
err, uses, them).

<snip>
 
I agree that they sound silly, but not using a standard term because you
don't like the sound/look of them is just immature!

They aren't standard terms, they are an attempt to redefine
what the standard terms are and always were.

One last time - in the computer industry these terms were
developed and used as standard terms, long before NIST tried
to redefine them. The industry ignores NIST because they
are not the industry that standardized the term.

If you want your quest to succeed you have to convince the
industry to stop using the terms and use yours instead. The
manufacturers and Windows have to change first, THEN the
industry has done what it did in the first place, decided on
a standard term. No 3rd party can change the meaning. It is
irrelevant that the prefixes mean something else in the
decimal system because computers don't use the decimal
system, it is only converted to that for the benefit of the
users.

Unfortunately for you, there are plenty using them - Ubuntu for example.
They actually look pretty good on screen - GiB looks very 'technical'.

No, it looks like they can't even understand basic computer
terms that've been around for decades.
 
It is
irrelevant that the prefixes mean something else in the
decimal system because computers don't use the decimal
system, it is only converted to that for the benefit of the
users.

Oh boy.

Well, this aspect of your argument is unique to you. I would be
surprised if anybody else nodded in agreement to that. It looks like
mishmash. You said this sort of thing in the previous thread that I
mentioned.

I only point that out because, if GT picks you up on this, he looks
good. It has no bearing on the rest of what GT is saying here, or his
main argument, which is his own silliness.
 
GT said:
Standards have to be defined by controlling bodies with authority,
they don't just become standard because lots of people use them!

Hardly.

Standards authorities can help smooth the way, but they can equally well
just get in the way and botch things up. We should shun them when they
do the latter.

-Miles
 