Who needs a clock?

  • Thread starter: Jan Panteltje
Microcontroller with PowerNow/SpeedStep, nothing to see here.
Eye problem? ;-)

Key features

* Clockless 32-bit RISC CPU core <--------------------------------------------------------------
* ARMv5TE architecture compliant
* 32-bit ARM and 16-bit Thumb® instruction sets
* Enhanced Memory Protection Unit (MPU)
* Non-maskable interrupt
* Hardware divide
* Dual AMBA AHB-Lite interface
* Fast 32-bit MAC
* Standard synchronous interfaces
* Supports tightly-coupled memories for instruction and data
* Scan test support


Then you can also look up the pdf and read the 'advantages':
# Low power consumption (lowest-power ARM9E processor implementation)
# Low current peaks
# Ultra low electromagnetic emission

There is a discussion on this in comp.arch.fpga now.

Sure, with the ever stronger desire for lower power (say notebooks), if even
a PART of an existing processor design could use this technology, it is
worth every bit of attention.

It will not be easy to incorporate such an 'async handshake' design, perhaps;
you cannot say: this unit will execute in n clocks, so it will execute in fewer
clocks in a slower processor (from the CPU's point of view).
So, pipeline problems? But hey, they now have a clockless 8051 and ARM,
so somebody figured it out...
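Since the thread keeps coming back to the 'async handshake' idea, here is a toy sketch of what a four-phase (return-to-zero) request/acknowledge transfer looks like in sequence. This is a minimal illustration only; the function and signal names are made up for this example, not taken from any real async design kit.

```python
# Toy four-phase (return-to-zero) request/acknowledge handshake: the
# signalling a clockless pipeline stage uses instead of a shared clock
# edge. Names here are illustrative, not from any real async toolkit.
def four_phase_transfer(sender_data, log):
    # 1. sender drives data and raises request
    data = sender_data
    req = 1
    log.append(("req", req))
    # 2. receiver latches the data and raises acknowledge
    latched = data
    ack = 1
    log.append(("ack", ack))
    # 3. sender sees the acknowledge and drops its request
    req = 0
    log.append(("req", req))
    # 4. receiver drops acknowledge; the channel is idle again
    ack = 0
    log.append(("ack", ack))
    return latched

log = []
print(four_phase_transfer(0x5A, log))  # -> 90
print(log)  # the four signal events, in order
```

The point of the four phases is that each stage advances only when its neighbour has answered, so timing adapts to the actual gate delays rather than a worst-case clock period.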
 
Eye problem? ;-)

Key features

* Clockless 32-bit RISC CPU core <-------------------------------

AFAICT it's clockless in the sense that you don't have a standard clock
generator on the PCB; it's in the chip itself.
Then you can also look up the pdf and read the 'advantages':
# Low power consumption (lowest-power ARM9E processor implementation)
# Low current peaks
# Ultra low electromagnetic emission

Everything standard for a chip that can lower its clock to, let's say, kHz.
There is a discussion on this in comp.arch.fpga now.

which thread?
Sure, with the ever stronger desire for lower power (say notebooks), if
even a PART of an existing processor design could use this technology,
it is worth every bit of attention.

Why? You could always underclock parts to lower power consumption.
I just don't get all the excitement about its not-so-clockless "clockless".
 
AFAICT it's clockless in the sense that you don't have a standard clock
generator on the PCB; it's in the chip itself.
No, it is a so-called ASYNC design (the chip).
Look here:
http://www.eet.com/news/design/showArticle.jhtml?articleID=179101800




which thread?
'Async processors'
Why? You could always underclock parts to lower power consumption.
I just don't get all the excitement about its not-so-clockless "clockless".

If the design is static you can even stop the clock.
The issue is that in an ASYNC processor ONLY those gates change state
when data on the input changes, not zillions of flip-flops toggling on every clock, everywhere.
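The power argument here can be quantified with the standard dynamic-power formula P = α·C·V²·f: a clocked design pays an activity factor of α ≈ 1 on the clock tree every cycle, while an async design's activity tracks actual data changes. The capacitance, voltage, and activity numbers below are illustrative assumptions, not figures from any real chip.

```python
# Back-of-envelope comparison using the standard CMOS dynamic-power
# formula P = alpha * C * V^2 * f. All numbers are illustrative.
def dynamic_power(alpha, cap_farads, vdd_volts, freq_hz):
    return alpha * cap_farads * vdd_volts**2 * freq_hz

C = 100e-12   # 100 pF of total switched capacitance (assumed)
V = 1.2       # supply voltage in volts (assumed)
f = 100e6     # 100 MHz clock, or equivalent async event rate (assumed)

clocked = dynamic_power(1.0, C, V, f)  # clock tree toggles every cycle
asynch  = dynamic_power(0.1, C, V, f)  # only ~10% of nodes see data change

print(f"clocked: {clocked * 1e3:.2f} mW, async: {asynch * 1e3:.2f} mW")
# -> clocked: 14.40 mW, async: 1.44 mW
```

With these assumed numbers the async design dissipates 10x less dynamic power, purely because fewer nodes toggle per unit time; the formula itself is the same for both.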
 
[snipped]
The issue is that in an ASYNC processor ONLY those gates change state
when data on the input changes, not zillions of flip-flops toggling on every clock, everywhere.

You'll have to come up with a better theory than that, as ONLY those flops
change state when data on the input changes.

As for "the issue", I wouldn't want to be involved in the design verification
effort trying to predict just how fast a clockless cloud of a zillion gates is
going to work across P/V/T ranges...
 
The issue is that in an ASYNC processor ONLY those gates change state
when data on the input changes, not zillions of flip-flops toggling on every clock, everywhere.

ok, now I get it, thants for the layman's version.
And there I was thinking that in this time and age processors do allready
put to slep not used units.
 
[snipped]
The issue is that in an ASYNC processor ONLY those gates change state
when data on the input changes, not zillions of flip-flops toggling on every clock, everywhere.

But they may not change only once for a given input change. Every
transition costs power.
You'll have to come up with a better theory than that, as ONLY those flops
change state when data on the input changes.

OTOH, you don't have to feed the clock trees.
As for "the issue", I wouldn't want to be involved in the design verification

Me neither. ;-)
effort trying to predict just how fast a clockless cloud of a zillion gates is
going to work across P/V/T ranges...

There are techniques for doing self-timed logic (e.g. the Muller C-element),
but they're still ugly and are pretty much in a "solution looking
for a problem" state. The world is still clocked.
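For readers who haven't met the Muller C-element mentioned above: it is the basic state-holding gate of self-timed logic. Its output follows the inputs only when they agree, and holds otherwise, which is what lets handshake signals wait for each other. A minimal behavioural sketch (assuming the output starts low):

```python
# Minimal behavioural model of a Muller C-element, the basic
# state-holding gate of self-timed (asynchronous) logic. The output
# follows the inputs only when both agree; otherwise it holds.
class MullerC:
    def __init__(self):
        self.out = 0  # assume the output starts low

    def step(self, a, b):
        if a == b:       # inputs agree -> output follows them
            self.out = a
        # inputs disagree -> output holds its previous state
        return self.out

c = MullerC()
print(c.step(1, 0))  # inputs disagree -> holds 0
print(c.step(1, 1))  # both high -> output rises to 1
print(c.step(0, 1))  # inputs disagree -> holds 1
print(c.step(0, 0))  # both low -> output falls back to 0
```

The hold behaviour is exactly why verifying a cloud of these across P/V/T is hard: correctness depends on event ordering, not on sampling at a clock edge.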
 
[snipped]
The issue is that in an ASYNC processor ONLY those gates change state
when data on the input changes, not zillions of flip-flops toggling on every clock, everywhere.

You'll have to come up with a better theory than that, as ONLY those flops
change state when data on the input changes.
Come again??

As for "the issue", I wouldn't want to be involved in the design verification
effort trying to predict just how fast a clockless cloud of a zillion gates is
going to work across P/V/T ranges...
This is discussed at length by rickman et al. in comp.arch.fpga.
Please also read the replies by 'fpga_toys'.

Yes, things are not simple.
Neither was designing the Opteron with a clocked design, I am sure.
I'd see it as a challenge,
not as a reason not to do it because it is difficult.
Otherwise you had better also stop eating peanut butter sandwiches.
 
hackbox.info said:
ok, now I get it, thants for the layman's version.
And there I was thinking that in this time and age processors do allready
put to slep not used units.

That's an impressive number of misspellings for such a short post.
 