Questions regarding IDE/ATA (building a system)


Panic

I am currently working on a device that shall sit on an IDE bus,
between the host and drives, where it shall do some specific tasks,
and:

- There will be a delay through the circuit, maybe as much as 10
blocks of 16 bits on the data bus (when transmitting data).
- The device shall be transparent as seen from both the host and the
device.

I plan on letting abort and reset signals and such go right through
without any delay, and for protocol data there will only be a short
delay. But for data that is going to be stored, there will be, as I
previously stated, a delay of approximately 10 blocks of 16-bit
"words".

And now to my questions:

1. What problems could arise from the fact that I introduce a delay?
2. How important would it (really) be to deal with those problems?
3. Is it possible to "stall" commands that are problematic when a
delay is introduced, so that an "answer" can be fetched from the
host/device, even though the protocol states stuff like "wait for
400ms for an answer, and then do this and that"? (See the polling
sketch right after this list for the handshake I mean.)
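
To illustrate what such a "stall" would interact with, here is roughly
the handshake a host performs around a PIO data transfer. A hedged
sketch only: the port is the legacy primary-channel status register,
inb() is assumed to be supplied by the platform, and the timeout bound
is purely illustrative.

    #include <stdint.h>
    #include <stdbool.h>

    /* Legacy primary-channel ATA status register and its bits. */
    #define ATA_STATUS_PORT 0x1F7
    #define ATA_ST_BSY  0x80   /* device busy              */
    #define ATA_ST_DRQ  0x08   /* data request (PIO ready) */
    #define ATA_ST_ERR  0x01   /* error                    */

    /* Platform-specific port input, assumed provided elsewhere. */
    extern uint8_t inb(uint16_t port);

    /* Roughly how a host waits for a drive before moving PIO data:
     * poll status until BSY clears and DRQ sets, giving up after
     * some bound (the bound here is purely illustrative). */
    static bool wait_for_drq(unsigned long max_polls)
    {
        while (max_polls--) {
            uint8_t st = inb(ATA_STATUS_PORT);
            if (st & ATA_ST_BSY)
                continue;       /* still busy, keep waiting */
            if (st & ATA_ST_ERR)
                return false;   /* command was rejected     */
            if (st & ATA_ST_DRQ)
                return true;    /* ready to transfer a word */
        }
        return false;           /* host gives up here       */
    }

The point is that a device in the middle can hold the host at this
step simply by continuing to present BSY=1, but only for as long as
the host's timeout allows.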

I have a system up and running that simply doesn't care to address any
problems introduced by the delay, and it works just fine. The question
is, will it keep on doing that if put through extensive "stress
testing", where all sorts of crazy command sequences are sent to the
drive?

-"Panic"

P.S. No need to reply with RTFM. I'm on it, but the bugger is huge...
;-)
 
Previously Panic said:
I am currently working on a device that shall sit on an IDE bus,
between the host and drives, where it shall do some specific tasks,
and:
- There will be a delay through the circuit, maybe as much as 10
blocks of 16 bits on the data bus (when transmitting data).
- The device shall be transparent as seen from both the host and the
device.
I plan on letting abort and reset signals and such go right through
without any delay, and for protocol data there will only be a short
delay. But for data that is going to be stored, there will be, as I
previously stated, a delay of approximately 10 blocks of 16-bit
"words".
And now to my questions:
1. What problems could arise from the fact that I introduce a delay?

Violation of the IDE specification, and of the accepted real-world
IDE implementation practices (which are not documented anywhere).
2. How important would it (really) be to deal with those problems?

Critical. The whole bus arbitration may break down.
3. Is it possible to "stall" commands that are problematic when a
delay is introduced, so that an "answer" can be fetched from the
host/device, even though the protocol states stuff like "wait for
400ms for an answer, and then do this and that"?

No idea. Have a look into the specification; it should tell
you what is acceptable and what is not. As far as I remember, all
the relevant timing and electrical details are in there.
I have a system up and running that just doesn't care to address any
problems introduced by the delay, and it works just fine. The question
is, will it keep on doing that, if put through extensive "stress
testing", where all sorts of crazy command sequences is sent to the
drive?

Well, given that some drives have problems with this, it is very hard
to tell. If this is not a general-purpose device, observably
correct behaviour is probably good enough. If it is a general-
purpose device, you will have to read the manual and, in addition,
will have to do extensive field tests. Not pretty, but that is the
only thing that works.
P.S. No need to reply with RTFM. I'm on it, but the bugger is huge...
;-)

Yes, I know. It is.

Maybe you could tamper with the capabilities the drive reports
and simply not allow complex things like queued commands?
That could make the setup a lot easier, though it could cost
some performance.
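
For instance, the pass-through could intercept the drive's 256-word
IDENTIFY DEVICE response and clear the queued-command capability bits
before forwarding it. A sketch only: the word/bit positions below are
from my reading of the ATA spec and should be checked against the
revision you actually target.

    #include <stdint.h>

    /* Word/bit positions per my reading of the ATA spec; verify
     * against the revision you target:
     *   word 75, bits 4:0 = maximum queue depth minus 1
     *   word 83, bit 1    = READ/WRITE DMA QUEUED supported
     *   word 86, bit 1    = READ/WRITE DMA QUEUED enabled        */
    #define ID_WORD_QUEUE_DEPTH  75
    #define ID_WORD_CMDSET_SUPP  83
    #define ID_WORD_CMDSET_ENAB  86
    #define ID_BIT_RW_DMA_QUEUED (1u << 1)

    /* Patch a 256-word IDENTIFY DEVICE block so the host never tries
     * queued commands. Note: later ATA revisions checksum the block
     * in word 255; if the drive reports one, recompute it here too. */
    static void hide_queued_commands(uint16_t id[256])
    {
        id[ID_WORD_CMDSET_SUPP] &= (uint16_t)~ID_BIT_RW_DMA_QUEUED;
        id[ID_WORD_CMDSET_ENAB] &= (uint16_t)~ID_BIT_RW_DMA_QUEUED;
        id[ID_WORD_QUEUE_DEPTH]  = 0;   /* queue depth of 1 */
    }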

Anyway, I have to say it is quite impressive that you have a
working prototype for this!

Arno
 