AD Site Topology

Hi Folks

I wondered if I could get some input on this situation. We are looking at doing an AD design. We have pretty much a hub-and-spoke network architecture with a lot of good links between sites (1 Gbps/100 Mbps). Previously, in other designs, I have always separated poorly connected sites out into their own AD sites, depending on whether a local domain controller was required or not. If no local services were required, then that site's subnet was simply added to its parent site's AD site. Given that most sites here are connected via 1 Gbps links, I am leaning towards not creating individual sites but grouping these locations into a single site. With network speeds like these, authenticating against a domain controller in a different physical location (which could happen) should not be an issue, and with links this fast, who cares about the replication path the KCC creates or the replication traffic generated between the domain controllers? On the other hand, I guess it would be cleaner and tidier to create individual sites for each physical location. I am really undecided here; I don't think either way is right or wrong, but I would value any input anybody cares to add.
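
To illustrate what I mean by adding a spoke's subnet to its parent's AD site: a client gets mapped to a site by matching its IP address against the subnet objects, most specific match winning, and that mapping drives which DCs it prefers. A rough Python sketch of that lookup (the subnet and site names are made up, and this obviously isn't the real AD code):

# Rough sketch of AD's client-to-site mapping: the client's IP is matched
# against the subnet objects and the most specific match wins.
# Subnet and site names here are made up purely for illustration.
import ipaddress

SUBNET_TO_SITE = {
    "10.1.0.0/16":  "Hub-Site",      # hub location
    "10.1.50.0/24": "Spoke-Site-A",  # poorly connected spoke with its own site
    "10.2.0.0/16":  "Hub-Site",      # well-connected spoke folded into the hub's site
}

def site_for_client(ip):
    """Return the site whose subnet most specifically contains this IP,
    or None if nothing matches (the client then has no site affinity)."""
    addr = ipaddress.ip_address(ip)
    best_site, best_prefix = None, -1
    for subnet, site in SUBNET_TO_SITE.items():
        net = ipaddress.ip_network(subnet)
        if addr in net and net.prefixlen > best_prefix:
            best_site, best_prefix = site, net.prefixlen
    return best_site

print(site_for_client("10.1.50.23"))  # Spoke-Site-A
print(site_for_client("10.2.7.14"))   # Hub-Site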

Incidentally, Exchange 2007 will be in the mix, and that uses AD sites for routing purposes; however, the plan is to only have a couple of clusters in strategic locations, and the well-connected sites will simply access the centralised servers over the WAN.

Appreciate any input.

TIA

AJ
 
I would consolidate any child domains into one domain (the forest root domain), but I actually prefer to use Sites, even with such a high-speed backbone. This way, if any of the links goes down, at least the clients in their respective locations will still be looking at their local cached DC, rather than having problems authenticating (to print, for example) or Outlook chasing a cached DSProxy/GC outside its physical location and running into Outlook problems. Exchange, as well, discovers GCs based on Sites and will holler, shut services down, etc., if a GC is no longer accessible.
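
To show what I mean about GC discovery being Site-aware: a GC is found through the SRV records AD registers in DNS, and there is a site-scoped record that gets tried before the forest-wide one. A quick sketch using Python and dnspython; the site and forest names are placeholders, not anything from a real design:

# Sketch of site-scoped GC discovery via the SRV records AD publishes in DNS.
# "Hub-Site" and "corp.example.com" are placeholder names.
import dns.resolver  # pip install dnspython

def find_gcs(forest_dns_name, site_name=None):
    """Try the site-specific GC record first, then fall back to the forest-wide one."""
    candidates = []
    if site_name:
        candidates.append(f"_gc._tcp.{site_name}._sites.{forest_dns_name}")
    candidates.append(f"_gc._tcp.{forest_dns_name}")
    for name in candidates:
        try:
            answers = dns.resolver.resolve(name, "SRV")
            return [(rr.target.to_text(), rr.port) for rr in answers]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            continue  # nothing registered at this name; try the less specific one
    return []

print(find_gcs("corp.example.com", site_name="Hub-Site"))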

I hope that helps.

--
Ace

This posting is provided "AS-IS" with no warranties or guarantees and
confers no rights.

Please reply back to the newsgroup or forum for collaboration benefit among
responding engineers, and to help others benefit from your resolution.

Ace Fekay, MCT, MCITP EA, MCTS Windows 2008 & Exchange 2007, MCSE & MCSA
2003/2000, MCSA Messaging 2003
Microsoft Certified Trainer

For urgent issues, please contact Microsoft PSS directly. Please check
http://support.microsoft.com for regional support phone numbers.
 

Hi Ace

Thanks for your reply. The high-speed connected sites would still have local infrastructure, i.e. DCs and file & print, but not Exchange. Just to make sure I'm clear on your response: you mean that if a local client happened to authenticate against infrastructure outside of its physical location, these problems would occur if a link went down, i.e. the DC/GC would no longer be available?

Exchange is normally pretty good when a GC/DC goes down or is unavailable, as it will have knowledge of all the GCs in the site as well as out of the site, and should recover, so I don't really see that as being an issue. The services should only stop if there are no GCs available, and that won't be a problem as there will be quite a few GCs local to the Exchange servers and in other physical locations. The client side is a good point, though, although I thought MS improved the ability of Outlook to recover from a lost GC; I seem to recall reading something a couple of years back.

TIA
AJ
 
I see no reason to create a separate site with that type of connectivity. If a site has a large number of users or there is some critical app, you should consider placing a DC at the site.

--
Paul Bergson
MVP - Directory Services
MCTS, MCT, MCSE, MCSA, Security+, BS CSci
2008, 2003, 2000 (Early Achiever), NT4
Microsoft's Thrive IT Pro of the Month - June 2009

http://www.pbbergs.com

Please, no e-mails; any questions should be posted in the newsgroup. This posting is provided "AS IS" with no warranties, and confers no rights.
 
AJ said:
Hi Ace

Thanks for your reply. The high-speed connected sites would still have local infrastructure, i.e. DCs and file & print, but not Exchange. Just to make sure I'm clear on your response: you mean that if a local client happened to authenticate against infrastructure outside of its physical location, these problems would occur if a link went down, i.e. the DC/GC would no longer be available?
Correct.


Exchange is normally pretty good when a GC/DC goes down or is unavailable, as it will have knowledge of all the GCs in the site as well as out of the site, and should recover, so I don't really see that as being an issue.

Actually, if there are multiple GCs, it depends on which GC it has locked on
to. If your whole infrastructure is in one AD Site, you won't know which one
it is until it goes down.

I prefer to create a Site for each location.
The services should only stop if there are no GCs available, and that won't be a problem as there will be quite a few GCs local to the Exchange servers and in other physical locations. The client side is a good point, though, although I thought MS improved the ability of Outlook to recover from a lost GC; I seem to recall reading something a couple of years back.

Unfortunately, no, not yet.

I worked at one place as an Exchange engineer. We had 20 Exchange servers in a global infrastructure with 5,000 seats. The AD boss (I wasn't part of the AD team) ran Windows Update in our Site, which had 1,200 users, at around 9 am every Tuesday. We asked him not to, because Exchange pops up DSAccess errors if the DC it was using goes down. Once the DC came back up, the errors went away. However, the BES servers weren't as forgiving and all had to be restarted manually. The help desk received numerous calls about Outlook not working. They escalated the tickets to the second tier, and those guys escalated them to us. I simply told them to have the users restart Outlook, and if that didn't work, restart the machine. Later we had a meeting to discuss the DC Windows Updates. The schedule got changed to after hours; however, the BES servers still needed to be restarted.

Ace
 

Hi Ace

I value your input and this discussion. Outlook, especially newer clients, will eventually find another GC; it will take a little while for it to realise the GC is down, but the connection will be established again, providing DNS is all straight (Outlook 2000 and XP had problems with this, I remember). This is providing, of course, that no local registry changes have been made to the clients to hard-code them to a specific GC. Admittedly there will be a slight outage, which could cause the helpdesk calls you experienced. The same is true for Exchange, but as you point out there will be a timeframe when DSAccess is not happy; it won't cause the services to stop, though. In Exchange 2003 (as you know) you can actually see the domain controllers it is using in ESM. This will list all GCs and domain controllers and will also show you which domain controller is being used for the configuration information. If you had to, you could also manually override the automatic settings, though obviously that's not something you would normally want to do. I have seen problems with BES before; it can be a bit temperamental. I have seen this many times.

I'm still not convinced :) However, I guess I could always look at it the other way: what benefit am I getting in not splitting the physical locations into sites?

Quick directory updates.............

Thanks again for your input, Ace and Paul.

AJ
 
AJ said:

Hi Ace

I value your input and this discussion. Outlook, especially newer clients, will eventually find another GC; it will take a little while for it to realise the GC is down, but the connection will be established again, providing DNS is all straight (Outlook 2000 and XP had problems with this, I remember).

With DNS, if one of the DCs that went down happens to be the first DNS entry on the client machine or in the Exchange server's IP properties, then the client-side resolver algorithm is what determines whether the GC gets resolved. As you know, if the first entry doesn't respond (a NULL response, not an NXDOMAIN response), the resolver removes it from the eligible list and moves on to the next entry. The next entry may be a DC that is still up; however, if, say, the Exchange server was locked on to DC1 as its GC and that is the one that's down, it will keep trying to connect to it until it times out, before going through the list of DCs/GCs it has found automatically (in the server properties, directory services tab). So there's a wait period based on the client-side resolver, as well as on the automatic DS discovery Exchange uses.
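
Just to sketch where that wait comes from (a simplification in Python, not the actual Windows resolver or DSAccess logic; the server addresses and query name are placeholders): each entry that doesn't answer has to burn its full timeout before the next one is tried, so the delays stack up.

# Simplified illustration of the fallback delay: a dead first DNS entry has to
# time out before the next server is tried. Addresses and the query name are
# placeholders; this is not the real Windows resolver or Exchange DSAccess code.
import dns.exception
import dns.resolver  # pip install dnspython

DNS_SERVERS = ["10.1.0.10", "10.1.0.11"]  # first entry could be the DC that's down
TIMEOUT_SECONDS = 2

def resolve_with_fallback(qname, rdtype="SRV"):
    for server in DNS_SERVERS:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        try:
            return resolver.resolve(qname, rdtype, lifetime=TIMEOUT_SECONDS)
        except dns.exception.Timeout:
            # No response at all (as opposed to an NXDOMAIN answer): drop this
            # server and move on -- but only after waiting out the timeout.
            continue
    raise RuntimeError("no configured DNS server responded")

for rr in resolve_with_fallback("_gc._tcp.corp.example.com"):
    print(rr.target, rr.port)
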
This is providing, of course, that no local registry changes have been made to the clients to hard-code them to a specific GC. Admittedly there will be a slight outage, which could cause the helpdesk calls you experienced.

Yep, it's also known as 'impatience.' They want a fix, and they want it now. "Please restart" appears to be the standard answer at that company, and everyone apparently followed it. Not my preference to tell them that, but nonetheless they got an answer, and by the time the DCs were back up, the client had restarted and established a connection.
The same is true for Exchange, but as you point out there will be a timeframe when DSAccess is not happy; it won't cause the services to stop, though. In Exchange 2003 (as you know) you can actually see the domain controllers it is using in ESM. This will list all GCs and domain controllers and will also show you which domain controller is being used for the configuration information.

Yes, you can, but if there are multiples, it will either randomly pick one or round-robin between them.
If you had to, you could also manually override the automatic settings, though obviously that's not something you would normally want to do. I have seen problems with BES before; it can be a bit temperamental. I have seen this many times.

I'm still not convinced :) However, I guess I could always look at it the other way: what benefit am I getting in not splitting the physical locations into sites?

Well, that is up to you. I am not trying to convince you either way. I used to sell cars 20 years ago and I still have the knack, but there is no benefit in it for me. However, I think the benefits will help, which include such things as printer location attributes based on location (set by GPO), which is a neat feature; compressed inter-site replication traffic (which doesn't matter much anyway on a high-speed MPLS network); but most of all, controlling logon and authentication traffic so it goes to DCs in the clients' own physical location.
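
On that last point, the way the locator keeps logon traffic local is simply that a client with a Site asks DNS for the site-scoped DC records before the domain-wide ones. A tiny sketch of the SRV names involved (the site and domain names are placeholders):

# SRV names a client tries when locating a logon DC. With a Site defined, the
# site-scoped name comes first, which is what keeps authentication local.
# "Spoke-Site-A" and "corp.example.com" are placeholder names.
def dc_locator_queries(domain_dns_name, site_name=None):
    names = []
    if site_name:
        names.append(f"_ldap._tcp.{site_name}._sites.dc._msdcs.{domain_dns_name}")
    names.append(f"_ldap._tcp.dc._msdcs.{domain_dns_name}")  # any DC in the domain
    return names

for name in dc_locator_queries("corp.example.com", site_name="Spoke-Site-A"):
    print(name)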

Quick directory updates.............

Thanks again for your input, Ace and Paul.

AJ

You are welcome. Oh, I get a commission if you decide to configure AD Sites.
:-)

(just kidding... !!!!)

I am curious as to which way you would decide, and why.

Ace
 

:)

I have put a lot of thought into this, and I think I am going to go the AD sites route! :) The cheque is in the post; just remember we have a postal strike over here at the moment, so it might be some time arriving ;)

This is the more conventional design and one that I am used to doing. Generally speaking, I think most people would utilise sites even though things could probably tick along nicely without them. The reasons are as discussed, I would say. I don't think either way is right or wrong, but I can't see any compelling reason not to go with sites. Better to be safe than sorry as well.

Thanks again for the discussion; it's good to get someone else's opinion on this.

Have a good one, Ace

Cheers

AJ
 

You are welcome, AJ!

Ace
 