Maria
Hi,
My understanding of "hardware support for an operation" is that
the operation does not have to be coded by either the ordinary user
or the OS; it is implemented without *wasting any CPU memory
cycles*.
However, I find it hard to understand how this can be done when
updating the valid bit, the dirty (modified) bit, and the use (reference) bit.
Let's consider each bit separately.
1- Valid bit: I understand the utility of this bit, but what exactly does
hardware support for the "valid bit" mean? Does it mean
this bit is checked once a page table entry is read and loaded into a
memory management unit (MMU) register? I mean, is there an AND gate
that checks whether this bit is one or zero, so that the CPU is interrupted if
the valid bit is zero?
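To make my mental model concrete, here is roughly the check I imagine the
hardware wiring in, written as C pseudocode (the PTE layout, the bit
positions, and the raise_page_fault() helper are all just my assumptions,
not any real MMU's design):

    #include <stdint.h>

    #define PTE_VALID 0x1u            /* assumed: valid bit is bit 0 of the entry */

    static void raise_page_fault(void)
    {
        /* stand-in for the hardware trapping to the OS page-fault handler */
    }

    /* Conceptual sketch of what I imagine the MMU does when it loads a PTE
       during translation; in real hardware this would be wired logic, not
       executed instructions. */
    uint32_t translate(uint32_t pte, uint32_t page_offset)
    {
        if (!(pte & PTE_VALID)) {
            raise_page_fault();       /* valid bit is 0: interrupt the CPU */
        }
        /* valid bit is 1: combine the frame number with the page offset */
        return (pte & ~0xFFFu) | page_offset;   /* assuming 4 KiB pages */
    }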
I searched Google desperately for a detailed hardware architecture of
the memory management unit (MMU), but in vain... any links?
It is also claimed that the OS sets all of a page table's valid bits to
zero when it allocates the page table to a process. Does this have to be an
entry-by-entry write operation?
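For what it's worth, here is how I picture the two possibilities in
software; whether it really has to be the entry-by-entry loop or can be a
single bulk write is exactly what I'm asking (the table size and bit layout
below are assumptions):

    #include <stdint.h>
    #include <string.h>

    #define PTE_VALID    0x1u
    #define NUM_ENTRIES  1024u        /* assumed number of entries per table */

    /* Possibility A: explicit entry-by-entry writes that clear the valid bit. */
    void clear_valid_bits(uint32_t *page_table)
    {
        for (unsigned i = 0; i < NUM_ENTRIES; i++)
            page_table[i] &= ~PTE_VALID;
    }

    /* Possibility B: if a freshly allocated table may simply be all zeros,
       one bulk memset also leaves every valid bit cleared. */
    void init_page_table(uint32_t *page_table)
    {
        memset(page_table, 0, NUM_ENTRIES * sizeof(uint32_t));
    }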
2- Dirty bit/use bit: these are particularly important for virtual
memory, and they are part of each page table entry. I really don't
understand how these bits are set without losing CPU cycles. I don't
believe that the DRAM has a dedicated bit line to set/reset these bits. Am I
wrong?
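Just to pin down what I mean by "part of each page table entry", here is an
illustrative 32-bit PTE layout, modeled loosely on x86; the exact bit
positions are an assumption on my part:

    #include <stdint.h>

    /* Illustrative PTE layout (loosely x86-like; positions assumed):
       bit 0 = valid/present, bit 5 = accessed (use), bit 6 = dirty,
       bits 12..31 = physical frame number. */
    #define PTE_VALID     (1u << 0)
    #define PTE_ACCESSED  (1u << 5)
    #define PTE_DIRTY     (1u << 6)

    static inline int pte_is_accessed(uint32_t pte) { return (pte & PTE_ACCESSED) != 0; }
    static inline int pte_is_dirty(uint32_t pte)    { return (pte & PTE_DIRTY) != 0; }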
In http://www.stanford.edu/class/cs140/projects/pintos/pintos_5.html ,
section 5.1.2.3, the author states:
"Most of the page table is under the control of the operating system,
but two bits in each page table entry are also manipulated by the CPU.
On any read or write to the page referenced by a PTE, the CPU sets the
PTE's accessed bit to 1; on any write, the CPU sets the dirty bit to 1.
The CPU never resets these bits to 0, but the OS may do so."
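As I read that quote, the OS-side half of the protocol would look something
like the sketch below (reusing the illustrative bit positions from above;
evict_candidate() is a hypothetical helper): the CPU sets the bits, and the
OS only reads and clears them, for example while scanning for pages to
evict.

    #include <stdint.h>

    #define PTE_VALID     (1u << 0)   /* same illustrative layout as above */
    #define PTE_ACCESSED  (1u << 5)
    #define PTE_DIRTY     (1u << 6)

    void evict_candidate(unsigned index, int needs_writeback);   /* hypothetical helper */

    /* Sketch of an OS sweep over a page table: the CPU set the accessed/dirty
       bits on references; the OS inspects them and may clear them back to 0. */
    void scan_for_victims(uint32_t *page_table, unsigned num_entries)
    {
        for (unsigned i = 0; i < num_entries; i++) {
            uint32_t pte = page_table[i];
            if (!(pte & PTE_VALID))
                continue;
            if (pte & PTE_ACCESSED) {
                /* referenced recently: clear the bit so the CPU can set it
                   again on the next access, and give the page another chance */
                page_table[i] = pte & ~PTE_ACCESSED;
            } else {
                /* not referenced since the last sweep: eviction candidate;
                   a set dirty bit means it must be written back first */
                evict_candidate(i, (pte & PTE_DIRTY) != 0);
            }
        }
    }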
In either case (read or write) we will be losing CPU memory cycles, and as
such any application's execution is slowed down, since each memory access
consumes some (unnecessary) CPU cycles to update the reference bit and
possibly the dirty bit. Any comments?
In another reference,
http://www.linuxrocket.net/index.cgi?a=MailArchiver&ma=ShowMail&Id=361273 ,
Linus Torvalds says:
"The thing is, we should always set the dirty bit either atomically
with the access (normal "CPU sets the dirty bit on write") _or_ we
should set it after the write (having kept a reference to the page)."
What does he mean by "set atomically"? And how is this done?
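My (possibly wrong) reading of "atomically with the access" is that the
dirty-bit update and the memory write are indivisible, so no other CPU can
ever observe the written data with the dirty bit still clear. In software,
the same kind of indivisibility would come from an atomic read-modify-write,
something like this (purely illustrative, not Linux's actual code; the bit
position is assumed as above):

    #include <stdatomic.h>
    #include <stdint.h>

    #define PTE_DIRTY (1u << 6)       /* assumed bit position, as above */

    /* Atomically OR the dirty bit into a PTE word: the read-modify-write is
       one indivisible operation, so no intermediate state is visible. */
    void set_dirty_atomically(_Atomic uint32_t *pte)
    {
        atomic_fetch_or(pte, PTE_DIRTY);
    }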
Many thanks for your help