jehugaleahsa
Hello:
We have some very generic libraries. For instance, we have classes
that map between database records and objects. Every class that does
this is placed in a Mapper library. All of the data objects are placed
in a DataObject library. Then _another_ library holds interfaces that
the data objects implement (separated interfaces).
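To make the layering concrete, here is a minimal sketch of the structure described above. All names (Customer, CustomerData, CustomerMapper) are invented for illustration and are not from our actual codebase:

```java
import java.util.Map;

// Separated-interface library: only the interfaces live here.
interface Customer {
    int getId();
    String getName();
}

// DataObject library: concrete data objects implementing the interfaces.
class CustomerData implements Customer {
    private final int id;
    private final String name;

    CustomerData(int id, String name) {
        this.id = id;
        this.name = name;
    }

    public int getId() { return id; }
    public String getName() { return name; }
}

// Mapper library: translates a database record into a data object.
class CustomerMapper {
    Customer map(Map<String, Object> record) {
        return new CustomerData(
            (Integer) record.get("id"),
            (String) record.get("name"));
    }
}
```

With this split, a consumer only needs the interface library at compile time, but any change to a mapper or data object still forces a rebuild of everything linked against those libraries.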
Despite their similar nature, these classes change at different times.
So, if I change any one of them, every code set that depends on the
library needs to be recompiled. I can see this becoming a major pain in my
neck. I would like to move these code sets as close to the code that
uses them as possible.
Originally, I thought I could just move these classes into the
libraries that used them. However, then there is the little problem
where a class is reused by multiple libraries. Do I create a small
library just for that one or two classes? This seems like major
overkill. I am not sure at this stage, but I am pretty sure that the
code for (pretty much) every table in the database is used in at least
two places (in the UI and in server-side applications). I would have
more libraries than I think I could manage. We have about 100 commonly
used database tables. I don't want 100 libraries!
How can I achieve the granularity I am looking for and not have an
explosion of libraries? I have a book by Bob Martin, and he has ways of
determining how to partition a system. However, those metrics don't
reflect well on the arrangement I'm comfortable with.
I am just wondering how other groups have dealt with slowly growing
code sets. I am, of course, referring to code sets that are composed
of a very large number of classes and multiple deployment sites.
Thanks for any direction,
Travis