Partition Setup

  • Thread starter: Jay Peterman
I have two Maxtor HDDs (120GB and 200GB) that I want to partition. I
don't really need that much space, but I got the 200GB for nothing from a
friend. I also have 1GB of DDR memory. I use Windows XP Pro and am trying
to partition at least one of the drives.

The programs I use most are Newsleecher, with which I download large
(50MB) files. I'll be unraring and assembling the files. I also use
Agent and Photoshop CS. The Photoshop files get pretty big
sometimes. One other thing I use is Proshow, which is used for making
slideshows with audio.

My question is how should I partition a drive and which one?
I need one (partition or HDD) for a scratch disk for Photoshop and I'd
like to assign the memory to the fastest possible partition/HDD.

Thanks for any help.
 
Jay Peterman said:
... snip ...
The 200GB I would partition as at least 100GB + 100GB. The reason is that if you
want to clean up or defrag at any time, a single 200GB partition takes too long. As
for the 120GB, I'd go for 20-25GB and put the rest on partition #2. It's all a matter of
choice really; this is just a suggestion. Once up and running you can do a
speed test on your drives. Best wishes.. J
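One practical note when picking split sizes: drive makers rate capacity in decimal gigabytes (10^9 bytes), while partitioning tools usually report binary GiB (2^30 bytes), so the numbers never quite match. A quick Python sketch of the conversion (the function name is mine, just for illustration):

```python
def marketing_gb_to_gib(gb):
    """Convert a drive's advertised decimal GB (10^9 bytes)
    to the binary GiB (2^30 bytes) a partitioning tool reports."""
    return gb * 10**9 / 2**30

# The two drives discussed in this thread:
print(f"120GB drive: ~{marketing_gb_to_gib(120):.1f} GiB as reported by tools")
print(f"200GB drive: ~{marketing_gb_to_gib(200):.1f} GiB as reported by tools")
```

So the "200GB" drive will show up as roughly 186 GiB when you go to split it.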
 
Jay Peterman said:
... snip ...

Depending on the age of the drives, they might have similar
density platters, or one might be higher density than the
other (the 200GB probably has 2-3 platters, most likely 2).

The newer drive is likely the better one to devote to the OS, by
partitioning off the first partition to a limited, small
size, perhaps 10GB more or less, depending on how many apps
you'd install and how many of those go on the OS partition.

You don't mention how comfortable you are dealing with
multiple partitions either... that could vary the strategy
used. If you don't mind 4 partitions total, I suggest
partitioning off the back end of the 200GB drive for
large archival storage purposes, and a smaller front
partition for 1) a fixed minimum pagefile, perhaps 1GB
without an upper limit, 2) the remainder for a Photoshop et al.
scratch disk.

So far this leaves the remainder of the "1st" drive holding
the OS, unused. I still feel it is better to have a fixed
partition for the OS, separate from the rest of the drive, so the OS files
stay together on this faster part of the drive. Since you
do have the two drives, perhaps make redundant copies of
important data so that if one drive were to fail, the other
still holds this data.
 
... snip ...

Also, as a caveat: manipulating files between two partitions on the same
drive will be slower than between two hard drives on separate IDE
ports. I don't know about SATA drives etc.
 
Jay Peterman said:
... snip ...

Newest (most reliable + fastest) drive --> 2 partitions: a 15GB OS partition,
with the balance of the drive as another partition.
Other drive --> 3 partitions: first partition ~1-2GB for the paging file;
split the rest of the drive into 2 partitions.
Store your zips/rars on one drive and extract to the other.
I suggest the NTFS file system, especially on the OS partition. NTFS is much more
robust when it comes to error protection.

regards

Dud
 
... snip ...
Suggest NTFS file system especially on OS partition. NTFS is much more
robust when it comes to error protection.

Hmmm... Says who?
Let me guess. MS <grin> :)
 
Shep© said:
.... snip ...

Also as a caveat. Manipulating files between two partitions on the
same drive will be slower than between two hard drives on separate
IDE ports. I don't know for SATA drives etc.

It will always be slower. The heads can't be in two different
places at the same time on a single drive. They can on separate
drives.
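For anyone who would rather measure this than take it on faith, here is a rough Python sketch that times a copy between two directories; point it at two paths on the same drive, then at paths on different drives, and compare. The function name and the 64MB default size are arbitrary choices of mine, and OS caching will flatter small test sizes:

```python
import os
import shutil
import time

def copy_throughput(src_dir, dst_dir, size_mb=64):
    """Write a scratch file in src_dir, copy it to dst_dir,
    and return the copy speed in MB/s (caches not flushed)."""
    src = os.path.join(src_dir, "throughput_test.bin")
    dst = os.path.join(dst_dir, "throughput_test.bin")
    with open(src, "wb") as f:
        f.write(os.urandom(1024 * 1024) * size_mb)  # size_mb megabytes
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    os.remove(src)
    os.remove(dst)
    return size_mb / elapsed if elapsed > 0 else float("inf")
```

For example, compare copy_throughput(r"C:\tmp1", r"C:\tmp2") against copy_throughput(r"C:\tmp1", r"D:\tmp") (paths hypothetical): on the same-drive pair the heads must seek back and forth between source and destination, which is exactly the effect described above.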
 
Thanks everyone....a lot of good ideas. The 200 GB is only a couple of
months old. The other is about a year old. I'm quite comfortable with
multiple partitions.

Thanks again.
 
Suggest NTFS file system especially on OS partition. NTFS is much more
robust when it comes to error protection.

Well, if your system has logical (as in, primarily memory)
errors, you're screwed either way. If it's a physical disc
defect, also screwed. "Robust" may mean it makes MS's
wallet more "robust" by steering the industry towards
proprietary technologies.

NTFS is better for a couple of other reasons though: support for
files > 4GB and setting security permissions. IMO, these are
two things often better left to other partitions... the
drawback of losing the higher versatility of FAT32 can
in itself offset the robust error protection. When I hear
"robust error protection" I can't help but think MS is
admitting the OS is going to crap itself.
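On the file-size point: FAT32 stores a file's size in a 32-bit field, so its per-file cap is 4 GiB minus one byte, which is why anything assembling DVD images or long video needs NTFS. A trivial check (the helper name is mine, not from any library):

```python
FAT32_MAX_FILE = 2**32 - 1  # FAT32 keeps file size in a 32-bit field

def fits_on_fat32(size_bytes):
    """True if a file of this size can live on a FAT32 volume."""
    return 0 <= size_bytes <= FAT32_MAX_FILE

# The 50MB downloads in this thread are fine; a single-file DVD image is not.
print(fits_on_fat32(50 * 1024**2))  # True
print(fits_on_fat32(int(4.7e9)))    # False
```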
 
It will always be slower. The heads can't be in two different
places at the same time on a single drive. They can on separate
drives.

Before I built this new machine I used Drive Image Pro to create clone
archives. My boot disk had two equal-sized partitions, and a separate HD
had one partition of the same size. I would lay off the boot partition
onto both the second partition of the same drive and the single
partition on the separate drive. Sometimes I had the second disk
connected as a master on the secondary IDE channel, sometimes as a
slave on the primary IDE channel.

According to DIP, the time it took to create the cloned partitions was
almost exactly the same no matter how I had the disks configured. I
did require that DIP do a verify, but otherwise the setup was the
default configuration, e.g. smart sector skipping, etc.

Now how can it be that the clone creation time was the same regardless
of how my target drive was configured?


--

Map of the Vast Right Wing Conspiracy
http://home.houston.rr.com/rkba/vrwc.html

"Whatever crushes individuality is despotism."
--John Stuart Mill, "On Liberty"
 
... snip ...
Now how can it be that the clone creation time was the same regardless
of how my target drive was configured?

Likely it was a combination of the (lack of) CPU speed along
with a BIOS that didn't accommodate using DMA for drive
transfers (in DOS, without a driver loaded). This CPU load
comes in combination with the compression or decompression,
such that the bottleneck then may not be the source and
destination partitions' logical placement.
 
On Sun, 19 Jun 2005 07:38:59 GMT, kony wrote:

When I hear
"robust error protection" I can't help but think MS is
admitting the OS is going to crap itself.

LOL :D
 
Bob said:
... snip ...

Now how can it be that the clone creation time was the same regardless
of how my target drive was configured?

It probably depends on how the copying utility is doing its I/O.
It may well be synchronous, in which case each I/O operation waits until
the I/O is complete before returning. The point is that copying to
the same physical drive cannot be faster than using separate
drives. It doesn't have to be.

I use xxcopy /clone for these operations, which checks file
dates/sizes to avoid unneeded copies. This makes maintenance very
fast.
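The date/size check described here can be sketched in a few lines of Python. This is my own rough equivalent of that idea, not xxcopy's actual logic, and unlike xxcopy /clone it does not delete files that have vanished from the source:

```python
import os
import shutil

def mirror(src_root, dst_root):
    """Copy files from src_root to dst_root, skipping files whose
    size and modification time already match the source."""
    for dirpath, _, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        target_dir = os.path.join(dst_root, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(target_dir, name)
            s = os.stat(src)
            if os.path.exists(dst):
                d = os.stat(dst)
                # Whole-second mtime compare, to tolerate coarse
                # filesystem timestamp granularity (e.g. FAT).
                if d.st_size == s.st_size and int(d.st_mtime) == int(s.st_mtime):
                    continue  # unchanged: skip the copy
            shutil.copy2(src, dst)  # copy2 preserves the timestamp
```

Because copy2 carries the timestamp across, a second pass over an unchanged tree copies nothing, which is what makes this kind of maintenance run fast.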

--
Some informative links:
http://www.geocities.com/nnqweb/

http://www.caliburn.nl/topposting.html
http://www.netmeister.org/news/learn2quote.html
 
Likely it was a combination of the (lack of) CPU speed along
with a BIOS that didn't accommodate using DMA for drive
transfers (in DOS, without a driver loaded). This CPU load
comes in combination with the compression or decompression,
such that the bottleneck then may not be the source and
destination partitions' logical placement.

While I have no reason to doubt your explanation, I wonder why
something so computationally unintensive as sucking and spitting would
overwhelm even a 500 MHz K6-II (my old machine) to the extent you have
stated.

The speed given by DIP was around 120-140 MB/min. Maybe that will help
you figure out what the bottleneck was.

I can't use DIP on my new system - it brainfarts when confronted with
80GB drives. LOL. It whines about not enough (Caldera) DOS memory, and
that includes after loading MEMSYS.

Not that I want to use it - it's really a piece of complete shit. I am
glad Symantec now has it - they can give it a proper burial.

BTW, for those who might have been following the Enermax ES-352
RAID/Backup saga I got myself into, I think I may have it working well
enough to keep. I believe the disk corruption problem was caused by
Acronis True Image 8. As you know, it loads some agents that Acronis
claims are necessary even if you never use the Windows version (I use
only the CD version). The Enermax unit warns about "low-level" disk
operations (e.g., SMART) messing things up, so I yanked TI out. I used
the lengthy procedure for cleaning up the Registry that is posted on
the Acronis forum. What a bummer.

In any event I have not experienced a corrupted boot disk since and I
have been doing the same things as before. However, Enermax needs to
fix some problems, like not forcing a backup just because you insert
the target tray, like providing a new revision of their software with
error codes that make sense, like providing the user with the option
to buy extra trays for more than 2 HDs, etc.

Nevertheless, even with these limitations (which are certainly not any
different than most of the crap on the market), the unit does what it
is intended for - an automatic hardware daily backup device. It has
saved my arse on several occasions. The other day HP wrote me that
there was a printer update on their website so I tried to download it
but it would not execute. When I finally gave up, I could not delete
it. I even used NTFS4DOS and could not get rid of it. What a bummer.

But because I had an easy restore with the backup clone created just a
couple hours earlier, I did not have to fuss with anything. I just
swapped the disks around and I was back on the air. In the past when I
had some data I collected in between, I used XCOPY with the /D:
parameter to collect all the new stuff onto a removable disk and then
I put it onto the new boot disk. I could use NTBACKUP, but I like to
restore the data by hand because then I know exactly what is going on.
It's just Eudora mailbox stuff anyway, and some other ASCII files like
bookmarks etc. that I wrote in the period after the last backup.

So, unless something drastic happens, I am going to keep the Enermax
ES-352 because it is so convenient - and despite the nuisances listed
above, it is also very friendly. Set it, forget it, use it when needed
to restore a complete disk clone. I keep one per day and one per week
so I am covered for most contingencies.


 
While I have no reason to doubt your explanation, I wonder why
something so computationally unintensive as sucking and spitting would
overwhelm even a 500 MHz K6-II (my old machine) to the extent you have
stated.

It's not computationally unintensive at all.
Benchmark your drives in PIO mode and tell us how much CPU %
they use. Then consider compression or decompression tasks -
their completion time is largely dependent on CPU
speed. Take, for example, using WinRAR to compress a CD's
worth of files... it's only ~650MB of files, but on a K6-II it
could take several minutes if not over an hour (depending on
the file types).

I'd venture to guess that using a modern system that can run
the drives in DMA mode in DOS, the task would finish in
about 1/4th the time. That could be wildly inaccurate, but
it should be right within an order of magnitude. I vaguely
recall one box that had a BIOS setting to enable the DMA DOS
mode, and enabling it resulted in DI transfers at about
1GB/min, while disabling was around 350MB/min. Same drives,
same image, same CPU, etc, etc.
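The point that compression time is governed by the CPU is easy to see with Python's zlib: the same data takes longer at higher effort levels purely because of CPU work, with no disk involved at all. A rough sketch (the throughput figures will vary wildly from machine to machine, which is exactly the point):

```python
import time
import zlib

def compress_rate(data, level):
    """Compress data at the given zlib level; return (MB/s, ratio)."""
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    rate = (len(data) / (1024 * 1024)) / elapsed if elapsed > 0 else float("inf")
    return rate, len(out) / len(data)

# Highly repetitive sample data; real file mixes compress far less predictably.
sample = b"the quick brown fox jumps over the lazy dog\n" * 50000
for level in (1, 6, 9):
    rate, ratio = compress_rate(sample, level)
    print(f"level {level}: {rate:8.1f} MB/s, ratio {ratio:.3f}")
```

On a slow CPU the MB/s figures drop well below what the drive itself can stream, so the disk sits idle waiting on the processor, just as with a K6-II running an imaging tool.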
 
For cloning a HD?

What compression/decompression?

Cloning copies sectors - it doesn't know what's in the sectors.

OK, I mistook the task... some utilities "clone" by making a backup as
a regular task, then restoring that to the other drive... in
that case, they have both the clone and the backup (for routine
backup purposes). Even so, operating with DMA could
easily tax a K6-2, which didn't have a lot of muscle to begin
with.
 