I am aware that 100,100 are coordinates.
I didn't say they are coordinates, I said they are "SCREEN coordinates," and
that means something very specific in Windows. An example is in order. Let's
make things easy and assume you just have one monitor.
The pixel at the very top left of your screen is screen coordinate (0, 0).
As you go right, the X values increase. As you go down, the Y values
increase. I mention this only because people with a heavy math background
may expect Y to increase as you go UP, but in Windows this is not the case
unless you explicitly tell Windows to do it this way, and that doesn't apply
to this example.
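
If a little code makes that easier to see, here's a rough sketch (I'll use C# and WinForms here purely for illustration; the class name is made up and the same idea carries over to VB.NET). It just reports a couple of values that are expressed in screen coordinates:

// A rough C# sketch (WinForms types, so it needs references to
// System.Windows.Forms and System.Drawing). It only exists to show that
// screen coordinates start at the primary monitor's top-left corner and
// grow to the right (X) and down (Y).
using System;
using System.Drawing;
using System.Windows.Forms;

class ScreenCoordinateDemo
{
    static void Main()
    {
        // The primary monitor's bounds always begin at screen (0, 0).
        Rectangle primary = Screen.PrimaryScreen.Bounds;
        Console.WriteLine("Primary monitor: " + primary);

        // Cursor.Position is reported in screen coordinates: move the mouse
        // right and X increases, move it down and Y increases.
        Point cursor = Cursor.Position;
        Console.WriteLine("Cursor (screen coordinates): " + cursor);
    }
}
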
Now let's say you have Notepad open and that it's not maximized and is
located somewhere "middle-ish" on the screen. Look at the rectangle that it
occupies. Let's say the top-left pixel of Notepad's border is at the
physical screen location of (200, 100). Now look at the various pieces of
the Notepad window. There's the title bar, the menu bar, a big text box, and
surrounding the whole thing, a sizing border. Everything except the text box
is part of what is called the "non-client area" of the window, and this is
an area that your code generally doesn't touch (as far as drawing or placing
controls goes). The area occupied by the text box (i.e., everything that's
not the non-client area) is--surprise--the client area. This is where most
of the "action" occurs, that is to say, this is the area of the window that
your program can interact with the easiest. Because it is so common for a
program to interact with this area, Windows allows you to specify
coordinates that are local to this area and act as if it were the only area
on the screen. Therefore, the top-left corner of this area has a CLIENT
coordinate of (0, 0), just as if it were at the top-left of your monitor.
But it's not. Unless you have your window positioned in a really weird way,
the client coordinate (0, 0) almost never represents the same physical point
as screen coordinate (0, 0). In our case, this client coordinate may be
physical point (205, 125).

So if you right-click in this area and your
program wants to display a context menu at client coordinate (0, 0), but the
context menu only deals in screen coordinates, you've got to translate your
relative (logical) client coordinate to an absolute (physical) screen
coordinate. That's what PointToScreen() does. Windows knows where your
form's client area is at all times and can turn any client point into a
screen point for those things that need screen points to operate.
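
Here's a rough sketch of that translation (C#/WinForms again, and the form and handler names are made up for illustration; the only call that matters is PointToScreen()):

// A rough sketch of the translation described above. e.Location arrives in
// CLIENT coordinates, and PointToScreen() turns it into SCREEN coordinates.
using System;
using System.Drawing;
using System.Windows.Forms;

class CoordinateDemoForm : Form
{
    public CoordinateDemoForm()
    {
        Text = "Click anywhere in the client area";
        MouseDown += OnMouseDown;
    }

    private void OnMouseDown(object sender, MouseEventArgs e)
    {
        // e.Location is relative to the client area: (0, 0) is the client
        // area's top-left corner, not the top-left of your monitor.
        Point clientPoint = e.Location;

        // Ask Windows where that client point physically sits on the screen
        // (e.g. client (0, 0) might be screen (205, 125) in the example above).
        Point screenPoint = PointToScreen(clientPoint);

        Text = "client " + clientPoint + " -> screen " + screenPoint;
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new CoordinateDemoForm());
    }
}

There is also a PointToClient() method that goes the other direction, from a screen coordinate back to a client coordinate.
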
"PointToScreen() method" is new to me, for which thanks.
How to use the method in conjunction with a context menu strip is a
puzzle to me.
Might you lift the veil just a wee bit for me to have a peek?
I'm much more a believer in teaching a man to fish instead of giving him a
fish, so with what I told you above, see if you can't figure it out yourself
first.