Herb Martin
[This message presents two solutions and a progress report
-- it is not a request for help, but all suggestions for improving
on these are appreciated. The biggest improvement for me
would be to have these work through Win2000+ DNS. <sigh>]
Looking into the issue of forwarding to a second (disjoint)
namespace caused me to spend some time with BIND and
returned my attention to a problem I had been wanting to
solve (and which others have previously requested on this
group): how to pre-load the "cache" with records to prevent
access to "ad display" (or other undesirable) sites.
Unfortunately, I can do neither with Win2000 DNS (and I
believe not with Win2003 either), so I was forced to use BIND
(which runs on Win2000 just fine).
Three issues I am pursuing:
1) Loading a large "blacklist" into DNS through the cache: SOLVED
2) Using a forwarder to check a separate (disjoint) namespace
while still using internal servers to check the internal namespace,
even when the forwarder cannot resolve the query: SOLVED
3) Building an RBL (real-time blackhole list) for email that
dynamically checks, synthesizes composite records, and caches
results from multiple RBL servers: proof of concept in Perl; it
still needs to be added to the DNS server.
The "hosts" file issue came up because there are various
large "hosts" file that resolve their records to 127.0.0.1
(or another essentially wrong or invalid address) to prevent
loading of advertisement graphics. Turns out that a large
but reasonable list can block a high percentage of ads that
derive from the same subset of the Internet.
Unfortunately, MS DNS won't let me add records other
than (true) "root hints" to the cache -- it seems to just ignore
them.
Loading such hosts files on EVERY PC in even a small network
is both a nuisance and a (temporary) performance problem:
every time the file is loaded, or EVEN ONE record is updated,
the PC spends up to an hour at near 100% utilization churning
through the entire hosts file (again). [We are talking 750K
hosts files with 17,000 records or so.]
Putting it on the DNS server (at the NAT/forwarding position)
solves this, and it loads without undue stress on the server in
under 10 seconds. Full reloads take less than a minute because
they require stopping/re-starting BIND -- but I am looking at
reducing that too.
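For reference, the full reload on the Windows box is just a
service bounce (this assumes the ISC port registered itself
under the service name "named"):
    net stop named
    net start named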
The hard part was figuring out how to make BIND 9 do
persistent caching -- it's not in the manual, but reading
the source code indicated that giving a NAME to the
cache file, e.g., as an 'options' or 'view' setting of
    cache-file "cache-file.dns";
...would save the cache and re-load it on the next start.
Works great.
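In context that is a single line in named.conf (only the
cache-file line comes from the source-code reading above; the
directory value here is a hypothetical placeholder):
    options {
        directory "C:/named";          // hypothetical working directory
        cache-file "cache-file.dns";   // persist the cache across restarts
    };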
(Two caveats: prevent the cache from being overwritten at
shutdown (setting the Read-Only attribute solved this), and
THEN deal with the "cache date" you need in the file, since it
never gets updated and eventually -- a really long time with a
32-bit integer -- it might expire. Setting the TTL on the
records to 2147483647 should handle this.)
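To illustrate getting a blocking "hosts" file into that cache
file, here is a sketch of the conversion in Perl (this assumes
the saved cache uses ordinary master-file record syntax, which
the "cache date" note above suggests; the script and file names
are hypothetical):
    #!/usr/bin/perl
    # hosts2cache.pl -- turn "127.0.0.1 hostname" lines into
    # A records carrying the maximum 32-bit TTL discussed above.
    use strict;
    use warnings;

    while (<>) {
        s/#.*//;                         # strip comments
        next unless my ($ip, $host) =
            /^(\d{1,3}(?:\.\d{1,3}){3})\s+(\S+)/;
        next if lc($host) eq 'localhost';  # keep the real localhost
        print "$host.\t2147483647\tIN\tA\t$ip\n";
    }
Run it as "perl hosts2cache.pl hosts.txt >> cache-file.dns", then
set the Read-Only attribute (attrib +R cache-file.dns) as above.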
Goal #2:
Arrange for a completely separate (DISJOINT) namespace
to have its DNS servers forward (to the Internet) for public
namespace entries, but STILL do internal recursion for
private names using a private root namespace.
Result: Success
Problem: For private names the Forwarder returns NXDOMAIN
and the internal DNS servers stop searching -- so the trick is
to make the forwarder REFUSE the requests (or, if desperate,
return Server Failure for internal names) so that the internal
DNS will KEEP LOOKING -- no answer at all would be worse,
because we would then have to wait internally for the timeout
to expire.
Method: Create "stub" zones for the internal zones on the forwarder
but also use an ACCESS list (ACL) to deny on those zones
so they never really get searched. (A view works best but isn't
absolutely necessary.) It also works with a 'Master' but 'stub'
is closer to the concept.
Note: "stub" zone is a technical term in BIND and although I am
using the "stub" zones to perform this function it can be done with
other types -- ideally there would just be a "refuse" or "constant"
Zone type...
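Here is a sketch of the named.conf fragment on the forwarder
(the zone name and master address are made-up placeholders; the
'allow-query { none; }' denial is what produces the REFUSED
response):
    zone "corp.internal" {
        type stub;
        masters { 192.0.2.53; };   // any reachable internal server
        allow-query { none; };     // queries here are REFUSED, so the
                                   // internal servers keep looking
    };
You can verify with dig against the forwarder: the reply header
for a private name should show "status: REFUSED" rather than
NXDOMAIN.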
Two improvements would be nice (I'll have to change the BIND
source for these):
1) Never "fail" but always refuse
2) A new zone type, "refuse", where only the zone name is
needed and the extra "stub" cruft can be skipped.
It's not worth changing the code JUST for these but I have another goal
or two:
Goal #3: A multiplexor RBL (real-time blackhole list) with
scoring; when a "blackhole spam server" test is requested, do a
lookup against a "group" of RBLs and use a factor (e.g., 0.7 or
0.5) with a threshold (e.g., 1.0) to determine whether an RBL
record should be "synthesized".
Purpose: This allows checking more than one RBL for (weighted)
concurrence before rejecting email from a source, and treating
'aggressive' RBLs differently from 'conservative' RBLs -- e.g., 3
aggressive server reports might equal one conservative RBL report.
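The scoring idea, sketched in Perl with Net::DNS (the RBL zone
names and weights are made up; weights of roughly 1/3 for the
aggressive lists give the "3 aggressive = 1 conservative"
behavior just described):
    #!/usr/bin/perl
    # Weighted multi-RBL check: sum the weights of the lists that
    # report the address, and call it "listed" past the threshold.
    use strict;
    use warnings;
    use Net::DNS;

    my %weight = (
        'conservative.rbl.example' => 1.0,    # hypothetical lists
        'aggressive1.rbl.example'  => 0.35,
        'aggressive2.rbl.example'  => 0.35,
    );
    my $threshold = 1.0;

    sub is_blackholed {
        my ($ip) = @_;                        # e.g., "192.0.2.25"
        my $rev  = join '.', reverse split /\./, $ip;
        my $res  = Net::DNS::Resolver->new;
        my $score = 0;
        for my $rbl (sort keys %weight) {
            # Any A answer under the RBL zone means "listed" there.
            $score += $weight{$rbl} if $res->query("$rev.$rbl", 'A');
            return 1 if $score >= $threshold;
        }
        return 0;
    }

    my $ip = shift || '192.0.2.25';           # sample address
    print is_blackholed($ip) ? "listed\n" : "clean\n";
In the real version this decision would synthesize (and cache) a
composite RBL record inside the server rather than print a verdict.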
RBLs have been the single most effective tool I have found against
spam -- don't get excited, they aren't complete, but I did knock
out 75-90% of MY SPAM by checking the 2 RBLs my email server
allows. If I can push that towards 95%, the remaining 5% can be
dealt with more easily by Bayesian and keyword filters.
Status: I have a working Perl DNS server as proof of concept,
but it needs to be converted to BIND 9 source and compiled in.
[I haven't modified BIND yet, but I have it compiling under VC in
VS.NET 2003. BIND 9 is a good-sized program, and I have yet to
find any significant "programmers' notes" other than header files
and comments.]
Thanks to anyone who helped, who tried to help, or is just interested
in reading this report.