Bo Hallgren
Please comment on our current DFS implementation; it might not have been the
best decision...
One AD domain, but two IP subnets. Basically, the idea is to have two more or
less identical server setups (i.e. DC, file servers, application servers)
for redundancy. We created a number of DFS roots between these servers
(DC-DC, FS-FS, AS-AS) and allowed the domain to publish these
replicated directory structures under "profiles", "resources" and "apps",
i.e. "[domain]\profiles", etc.
Users map a number of drives to each of these folders. The "resources" root
in particular has caused us problems, in that the two roots eventually get
out of sync.
Questions:
1. Should you use DFS the way we do, that is, for redundancy only?
2. What about the size of the Staging Area? I reckon the default of 300-
something MB is far too little, but even ten times that value doesn't remove
the "Staging Area full" problems. Is the recommendation to set the size to
"the largest file to be replicated" or to "the sum of all files to be
replicated"?
3. The "resources" DFS is used in a way that might not be suitable: users
create logfiles in the morning; these files stay open all day and are only
closed when the users log off the (in-house) application in the evening.
Does this prevent the files from being replicated?
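As a side note on question 2, one rule of thumb I've seen is to size the staging quota so it can hold at least the N largest files in the replicated tree (N = 32 is the figure often quoted for DFS-R). A quick sketch to estimate that lower bound, assuming you just walk the replicated folder on disk (the function name and N are my own, not from any Microsoft tool):

```python
import os
import heapq

def estimate_staging_quota(root, n_largest=32):
    """Return the combined size (in bytes) of the n_largest biggest files
    under root -- a rough lower bound for a staging-area quota."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append(os.path.getsize(path))
            except OSError:
                continue  # file vanished or is inaccessible; skip it
    # Sum only the n_largest biggest sizes found
    return sum(heapq.nlargest(n_largest, sizes))
```

Running this against the "resources" share and comparing the result to the configured quota might at least tell you whether "ten times the default" is still in the wrong ballpark.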
Thanx a bunch!
/Bo H.