As mentioned at the end of the last post, let’s have a look at implementing rotational transformations.
There are a lot of things I don’t understand about how rotational transformations work, both in general and specifically in matplotlib. And there is a gotcha in matplotlib transformations that I read about and promptly forgot/ignored. That was definitely the cause of some considerable grief.
Not too sure how I am going to go about this post. I would really like to show the grief, the misunderstandings, etc. But I didn’t keep notes or commit the code in small steps. So I am going to have to go by my faulty memory (an age-related thing) and try to recreate my mental and coding steps.
As I write this, I still have issues with properly sizing the images. Sometimes the data limits are too large and other times there is some clipping going on. I expect I need to explore the geometry of the rotations; but, that seems like way too much work. We shall see.
Initial Attempt
In my reading while working on linear transformations, I came across transforms.Affine2D().rotate_around(). As I was hoping to rotate the curve around the same centre points I was using in the linear translation, I figured this would do the trick. I set up a new plot type to start work on this image variation (if do_plt == 30:).
elif do_plt == 30:
    # 'gnarly' curve, a random or user selected single shape
    # going to play with rotational transform
    # for dev use single colour for each plot of curve
    clrs = ['r', 'b', 'g', 'y']
    if t_tl:
        m_ttl = get_plt_ttl(shp=splt.shp_nm[su], ld=r_skp, df=drp_f, lw=ln_w)
        fig.suptitle(m_ttl)
    # sort linear translation for point rotation is to be about
    if not (t_sv or t_hc):
        tsz = random.randint(1,3)
        if tsz == 1:
            tx = [-72, -72, 72, 72]
            ty = [72, -72, -72, 72]
        elif tsz == 2:
            tx = [-108, -108, 108, 108]
            ty = [108, -108, -108, 108]
        else:
            tx = [-144, -144, 144, 144]
            ty = [144, -144, -144, 144]
    tq = 0
    # rotation angle for each 'quadrant'
    d_rot = [45, 135, 225, 315]
    # rotate_around() uses radians
    r_rot = [math.radians(d_rot[i]) for i in range(4)]
    # for dev let's see the axis labels/values
    for spine in ax.spines.values():
        spine.set_visible(True)
    ax.tick_params(bottom=True, labelbottom=True, left=True, labelleft=True)
    # set up data for plot
    p_lw = 3
    p_alph = alph
    p_alph = 1
    ax.autoscale(True)
    # for dev use solid colours for each curve plot
    ax.plot(r_xs, r_ys, lw=p_lw, alpha=p_alph, color='k', zorder=pz_ord)
    ax.plot(m_xs, m_ys, lw=p_lw, alpha=p_alph, zorder=pz_ord)
    ax.plot(m2_xs, m2_ys, lw=p_lw, alpha=p_alph, zorder=pz_ord)
    print(f"DEBUG {do_plt}: translate (c) -> nbr qs 4 @ range(4) => {tx}, {ty} (lw: {ln_w})")
    # don't think transparency will be a good thing
    t_alph = 1
    # get/mark centre of axes box
    x_tr = (fig_sz * 100 // 2)
    y_tr = (fig_sz * 100 // 2)
    xc, yc = x_tr, y_tr
    print(f"\t half figure (points): ({x_tr}, {x_tr});")
    xc_0, yc_0 = ax.transData.inverted().transform((x_tr, y_tr))
    plt.plot(xc_0, yc_0, marker="*", markersize=20, markerfacecolor='r', clip_on=False, zorder=13)
    for tq in range(4):
        # get rotation around point based on linear translation value
        rcx, rcy = x_tr + tx[tq], y_tr + ty[tq]
        cptx, cpty = ax.transData.inverted().transform((rcx, rcy))
        plt.plot(cptx, cpty, marker="o", markersize=20, markerfacecolor=clrs[tq], clip_on=False, zorder=13)
        rot = transforms.Affine2D().rotate_around(cptx, cpty, r_rot[tq])
        shadow_transform = (ax.transData + rot)
        # now plot the same data with our offset transform;
        # use the zorder to make sure we are below the line
        # using clip_on=False so can see whatever is being plotted outside the bounding box
        ax.plot(r_xs, r_ys, lw=p_lw, alpha=t_alph,
                transform=shadow_transform, clip_on=False, color=clrs[tq],
                zorder=5)
And, here’s a look at a sample result. Excluding the scaling issues, not exactly what I expected.
My apologies for the background, I reduced the image to 16 colours to keep the size down. Not like the background matters at this point, eh.
I expected the yellow-green curve to be in the upper right quadrant. And I don’t even see the blue and green curves, which should be positioned similarly to the red one but in their appropriate quadrants.
I also don’t understand why the rotation points are so close together. I would expect that a 72 point shift at 100 dpi should show as a ¾ inch difference on the plot. Am I wrong?
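A rough sanity check of that expectation (my numbers, and assuming the display units I am offsetting by correspond to pixels at the figure’s dpi):

# rough sanity check of the expected offset size
shift_px = 72          # the smallest of the tx/ty offsets above
dpi = 100              # dpi used for these first images
print(shift_px / dpi)  # 0.72, i.e. roughly a 3/4 inch offset on the figure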
Some Debugging
I decided to print out the information for the rotations along with the resulting transformation matrix. Maybe that will help me figure out what is going on. I have also changed the dpi setting to 72 dpi while debugging. Here’s the modified code.
for tq in range(4):
    rcx, rcy = x_tr + tx[tq], y_tr + ty[tq]
    rot_cx, rot_cy = ax.transData.inverted().transform((rcx, rcy))
    plt.plot(rot_cx, rot_cy, marker="o", markersize=20, markerfacecolor=clrs[tq], clip_on=False, zorder=13)
    rcx1, rcy1 = ax.transData.transform((rot_cx, rot_cy))
    print(f"\titeration {tq} -> rcx, rcy: ({rcx}, {rcy}) <- ({rcx1}, {rcy1})")
    rot = transforms.Affine2D().rotate_around(rot_cx, rot_cy, r_rot[tq])
    print(f"\ttransforms.Affine2D().rotate_around({rot_cx}, {rot_cy}, {d_rot[tq]}) (translate: {tx[tq]}, {ty[tq]})")
    shadow_transform = (ax.transData + rot)
    st_m = shadow_transform.get_matrix()
    print(st_m)
    rcx2, rcy2 = ax.transData.inverted().transform((st_m[0][2], st_m[1][2]))
    print(f"translation from matrix: ({rcx2}, {rcy2})")
And the output.
DEBUG 30: rotate with translate -> nbr qs 4
plot limits
half figure (points): (504, 504);
iteration 0 -> rcx, rcy: (360, 648) <- (360.00000000000006, 648.0000000000001)
transforms.Affine2D().rotate_around(-0.019980019980019977, 0.019980019980019997, 45) (translate: -144, 144)
[[ 5.09625999e+03 -5.09625999e+03 8.27599525e-03]
[ 5.09625999e+03 5.09625999e+03 7.12783615e+02]
[ 0.00000000e+00 0.00000000e+00 1.00000000e+00]]
translation from matrix: (-0.0699289216345804, 0.0289687556132784)
iteration 1 -> rcx, rcy: (360, 360) <- (360.00000000000006, 360.00000000000006)
transforms.Affine2D().rotate_around(-0.019980019980019977, -0.019980019980019977, 135) (translate: -144, -144)
[[-5.09625999e+03 -5.09625999e+03 -7.12811871e+02]
[ 5.09625999e+03 -5.09625999e+03 -1.99800200e-02]
[ 0.00000000e+00 0.00000000e+00 1.00000000e+00]]
translation from matrix: (-0.16883281599945227, -0.06993284216061439)
iteration 2 -> rcx, rcy: (648, 360) <- (648.0000000000001, 360.00000000000006)
transforms.Affine2D().rotate_around(0.019980019980019997, -0.019980019980019977, 225) (translate: 144, -144)
[[-5.09625999e+03 5.09625999e+03 4.82360352e-02]
[-5.09625999e+03 -5.09625999e+03 -7.12783615e+02]
[ 0.00000000e+00 0.00000000e+00 1.00000000e+00]]
translation from matrix: (-0.0699233771734915, -0.16882889547341828)
iteration 3 -> rcx, rcy: (648, 648) <- (648.0000000000001, 648.0000000000001)
transforms.Affine2D().rotate_around(0.019980019980019997, 0.019980019980019997, 315) (translate: 144, 144)
[[ 5.09625999e+03 5.09625999e+03 7.12755359e+02]
[-5.09625999e+03 5.09625999e+03 1.99800200e-02]
[ 0.00000000e+00 0.00000000e+00 1.00000000e+00]]
translation from matrix: (0.0289648350872444, -0.0699272976995255)
The rotation point is not where I expect it. For the first iteration we have rcx, rcy: (360, 648). In the above example, we are shifting the rotation points 144 points off centre. For 72 dpi and a 14 inch square figure, that is a movement of roughly 14% of the total image, or about 28% of half the image. That is not where the points are being plotted. So, let’s sort that.
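Checking that arithmetic (14 inch square figure at 72 dpi):

# sanity check of the percentages above
fig_px = 14 * 72              # 14 inch square figure at 72 dpi -> 1008 display units per side
print(144 / fig_px)           # ~0.143 -> about 14% of the full width/height
print(144 / (fig_px / 2))     # ~0.286 -> about 28% of half the width/height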
Looking at the documentation, I see that the description for ax.transData reads:
The coordinate system for the data, controlled by xlim and ylim.
So, I decided to print out the plot limits after plotting the base image. I added print(f"\tplot limits: ({ax.get_xlim()}), ({ax.get_ylim()})") after the plot command. And voila!
Looks like the limits had to be fixed in some fashion rather than left to autoscaling. You will, I am sure, also notice that the rotated images are now further from the base image.
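The point being that ax.transData is rebuilt whenever the data limits change, so any data coordinates computed with its inverse go stale as soon as another plot call moves the limits. A minimal standalone sketch of the effect (not my plotting code):

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
ax.autoscale_view()   # make sure the view limits reflect the data plotted so far
print(ax.get_xlim())
print(ax.transData.inverted().transform((100, 100)))  # data coords of a fixed display point

ax.plot([0, 10], [0, 10])  # more data -> larger data limits
ax.autoscale_view()
print(ax.get_xlim())
print(ax.transData.inverted().transform((100, 100)))  # same display point, different data coords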
Center?
While generating images, I recalled that for many of the wheel shapes the centre of the image and the centre of the plot are not the same. So, I am going to switch to estimating the centre of the image and use that to determine the rotation points. Here’s the modified code, along with an example image and some of the command line output.
# get/mark centre of axes box
x_tr = (fig_sz * 72 // 2)
y_tr = (fig_sz * 72 // 2)
xc, yc = x_tr, y_tr
print(f"\thalf figure (points): ({x_tr}, {x_tr});")
# xc_0, yc_0 = ax.transData.inverted().transform((x_tr, y_tr))
# plt.plot(xc_0, yc_0, marker="*", markersize=5, markerfacecolor='r', clip_on=False, zorder=14)
pxn26, pxx26, pyn26, pyx26 = get_plot_bnds(t_xs, t_ys, x_adj=0)
x_tmp = (pxx26 + pxn26) / 2
y_tmp = (pyx26 + pyn26) / 2
x_tr, y_tr = ax.transData.transform((x_tmp, y_tmp))
xc_0, yc_0 = ax.transData.inverted().transform((x_tr, y_tr))
print(f"\testimated centre of image: ({xc_0}, {yc_0}) -> ({x_tr}, {y_tr})")
plt.plot(xc_0, yc_0, marker="*", markersize=20, markerfacecolor='r', clip_on=False, zorder=14)
DEBUG 30: rotate with translate -> nbr qs 4
plot limits
plot limits: ((-0.8898373410396552, 1.1945708683898366)), ((-1.0345287186464356, 0.5300720441270258))
half figure (points): (504, 504);
estimated centre of image: (0.15236676367509094, -0.25222833725970495) -> (504.0, 504.0)
You will note that the estimated centre is not (0, 0). But because autoscaling is enabled, that centre still ends up in the centre of the figure. That may not always be the case, especially when I start trying to scale the axes to fit the rotated images.
More Debugging
Okay, I still don’t see the other rotated images. Let’s try rotating them all by 45°. After all, the red rotation at 45° shows up, and so does the yellow one at 315°; and 315° is equivalent to -45°. So 45 looks like the magic number for now. Time will tell.
Well, it seems that each additional transformation affects the whole image, so in the end only the yellowish rotated image shows up.
I tried putting each individual rotation into a list, then adding them together to get a single transformation for plotting. That didn’t work either. The transformation matrix ended up looking like this.
[[-245.21821346 0. -493.68573524]
[ 0. -241.8451267 -504.8338641 ]
[ 0. 0. 1. ]]
Not too sure what that ended up doing.
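In hindsight, I suspect part of the issue is that adding transforms in matplotlib composes them: a + b means apply a, then b. So adding a list of rotations together just produces one combined affine, and any curve plotted with that transform gets the same single mapping. A minimal standalone sketch (not my plotting code):

import math
from matplotlib import transforms

# four rotations about different pivot points
rots = [transforms.Affine2D().rotate_around(px, py, math.radians(deg))
        for (px, py), deg in zip([(0, 1), (1, 0), (0, -1), (-1, 0)],
                                 [45, 135, 225, 315])]

# '+' chains transforms, so this is ONE combined affine, not four separate rotations
combined = rots[0] + rots[1] + rots[2] + rots[3]
print(combined.get_matrix())   # a single 3x3 matrix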
I decided to make the plotting axes much smaller in order to try to see what was happening with the various transformations, hoping that the rotations would become visible (remember, I am using clip_on=False when plotting). And I went back to my original set of rotation angles, [45, 135, 225, 315]. That didn’t really help. I still only see the red and yellowish transformations.
At this point, after a day or two of frustration, I was ready to give up.
Second Attempt
But, once again, a middle-of-the-night wakeful period got me thinking about how the rotation was being made. I made a guess that it was the lower left-hand corner of the image that was acting as the pivot for the rotation: that corner was being placed at my translated rotation points and the image rotated around it. So I figured, what if I took the bottom left corner of my curve and based my rotation points on 45, 135, 225 and 315 degree rotations of that corner about the estimated image centre? I would no longer use the linear translation values to generate the rotation pivot points.
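For reference, rotating a point (x, y) by an angle θ about a centre (cx, cy), which is what the list comprehensions in the code below implement, gives:

$$x' = (x - c_x)\cos\theta - (y - c_y)\sin\theta + c_x$$
$$y' = (x - c_x)\sin\theta + (y - c_y)\cos\theta + c_y$$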
Had to move some code around to make sure I had the necessary variables available in the right places.
lld_rot = [45, 135, 225, 315]
llr_rot = [math.radians(lld_rot[i]) for i in range(4)]
d_rot = [45, 45, 45, 45]
# rotate_around() uses radians
r_rot = [math.radians(d_rot[i]) for i in range(4)]
print(f"\tangles: {d_rot} -> {r_rot}")
pxn26, pxx26, pyn26, pyx26 = get_plot_bnds(t_xs, t_ys, x_adj=0)
x_tmp = (pxx26 + pxn26) / 2
y_tmp = (pyx26 + pyn26) / 2
x_tr, y_tr = ax.transData.transform((x_tmp, y_tmp))
xc_0, yc_0 = ax.transData.inverted().transform((x_tr, y_tr))
print(f"\testimated centre of image: ({xc_0}, {yc_0}) -> ({x_tr}, {y_tr})")
plt.plot(xc_0, yc_0, marker="*", markersize=20, markerfacecolor='r', clip_on=False, zorder=14)
# approach #2 used rotated lower left corner of image for rotation pivot points
i_blx, i_bly = pxn26, pyn26
print(f"\timage bottom left: ({i_blx}, {i_bly}) in data coord")
tq_xs = [(i_blx - x_tmp)*math.cos(llr_rot[i]) - (i_bly - y_tmp)*math.sin(llr_rot[i]) + x_tmp for i in range(4)]
tq_ys = [(i_blx - x_tmp)*math.sin(llr_rot[i]) + (i_bly - y_tmp)*math.cos(llr_rot[i]) + y_tmp for i in range(4)]
for i in range(4):
    print(f"\trotation pivots #{i}: ({tq_xs[i]}, {tq_ys[i]}) in data coord")
    plt.plot(tq_xs[i], tq_ys[i], marker="o", markersize=20, markerfacecolor=clrs[i], clip_on=False, zorder=13)
And I modified the code generating the rotation transformation to use those two new lists to get the values for the ‘around’ part of the rotation.
# attempt 2
plt.plot(tq_xs[tq], tq_ys[tq], marker="o", markersize=20, markerfacecolor=clrs[tq], clip_on=False, zorder=13)
rcx1, rcy1 = ax.transData.transform((tq_xs[tq], tq_ys[tq]))
print(f"\titeration {tq} -> rcx, rcy: ({tq_xs[tq]}, {tq_ys[tq]}) <- ({rcx1}, {rcy1})")
rot = transforms.Affine2D().rotate_around(tq_xs[tq], tq_ys[tq], r_rot[tq])
print(f"\ttransforms.Affine2D().rotate_around({tq_xs[tq]}, {tq_ys[tq]}, {d_rot[tq]})")
Not much change.
More Debugging
Let’s try rotating by 45° for each case.
That didn’t work either. If you look closely, you can likely see that all the rotated images are being plotted in virtually the same location.
iteration 0 -> rcx, rcy: (0.1504579821313181, -1.769858602094923) <- (511.78492655089906, 143.6400000000001)
transforms.Affine2D().rotate_around(0.1504579821313181, -1.769858602094923, 45)
[[ 143.97364816 -143.97364816 -17.36462112]
[ 143.97364816 143.97364816 695.98165577]
[ 0. 0. 1. ]]
translation from matrix: (-2.4483872802342725, 0.9428915109988503)
iteration 1 -> rcx, rcy: (1.8820819876484136, 0.03823459657782702) <- (864.3599999999999, 511.784926550899)
transforms.Affine2D().rotate_around(1.8820819876484136, 0.03823459657782702, 45)
[[ 143.97364816 -143.97364816 -15.57892523]
[ 143.97364816 143.97364816 695.28679093]
[ 0. 0. 1. ]]
translation from matrix: (-2.4396170803435684, 0.9394787778592755)
iteration 2 -> rcx, rcy: (0.07398878897566419, 1.769858602094923) <- (496.21507344910106, 864.3599999999999)
transforms.Affine2D().rotate_around(0.07398878897566419, 1.769858602094923, 45)
[[ 143.97364816 -143.97364816 -14.88406039]
[ 143.97364816 143.97364816 697.07248682]
[ 0. 0. 1. ]]
translation from matrix: (-2.4362043472039936, 0.9482489777499796)
iteration 3 -> rcx, rcy: (-1.6576352165414319, -0.038234596577826685) <- (143.64000000000004, 496.21507344910106)
transforms.Affine2D().rotate_around(-1.6576352165414319, -0.038234596577826685, 45)
[[ 143.97364816 -143.97364816 -16.66975628]
[ 143.97364816 143.97364816 697.76735166]
[ 0. 0. 1. ]]
translation from matrix: (-2.4449745470946977, 0.9516617108895549)
DEBUG (30): after plot -> (-1.6576352165414319, 1.8820819876484136), (-1.769858602094923, 1.769858602094923)
DEBUG (30): plot limits by x/ylim(): -0.05500000000000001, 0.05500000000000001, -0.05500000000000001, 0.05500000000000001
Okay, instead of data coordinates, let’s try using display coordinates.
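What I mean is something along these lines (a sketch reusing the variable names from the snippets above): since ax.transData + rot applies transData first, the rotation operates in display space, so the pivot handed to rotate_around() should be a display coordinate as well.

# sketch: convert the data-coordinate pivot to display coordinates before
# building the rotation, because the rotation is applied after ax.transData
piv_dx, piv_dy = ax.transData.transform((tq_xs[tq], tq_ys[tq]))
rot = transforms.Affine2D().rotate_around(piv_dx, piv_dy, r_rot[tq])
shadow_transform = ax.transData + rot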
That seems to be a little better. But, I expected the red rotation to be in the upper left quadrant. Why isn’t it? Let’s look at my debugging output.
tq1 xs, tq1 ys: (0.00014176955817279957, -1.6364633242344873) in data coords => ([504.01153198 -5.61446787]) in points
tq2 xs, tq2 ys: (1.63656806420105, 3.702959160989394e-05) in data coords => ([1013.63753132 504.01153146]) in points
tq3 xs, tq3 ys: (6.771037495301169e-05, 1.6364633242344875) in data coords => ([ 503.98846802 1013.61446787]) in points
tq4 xs, tq4 ys: (-1.6363585842679245, -3.70295916096719e-05) in data coords => ([ -5.63753132 503.98846854]) in points
plot limits: ((-1.6363585842679245, 1.63656806420105)), ((-1.6364633242344873, 1.6364633242344875))
iteration 0 -> rcx, rcy: (0.00014176955817279957, -1.6364633242344873) <- (504.0081541599099, 143.64)
transforms.Affine2D().rotate_around(504.011531984478, -5.614467874165825, 45)
[[ 155.70956947 -155.70956947 143.6352226 ]
[ 155.70956947 155.70956947 354.71291479]
[ 0. 0. 1. ]]
iteration 1 -> rcx, rcy: (1.63656806420105, 3.702959160989394e-05) <- (864.3599999999999, 504.0081541599098)
transforms.Affine2D().rotate_around(1013.6375313212465, 504.01153146260253, 45)
[[ 155.70956947 -155.70956947 653.26122194]
[ 155.70956947 155.70956947 143.61891413]
[ 0. 0. 1. ]]
iteration 2 -> rcx, rcy: (6.771037495301169e-05, 1.6364633242344875) <- (503.99184584009004, 864.3599999999999)
transforms.Affine2D().rotate_around(503.98846801552196, 1013.6144678741659, 45)
[[ 155.70956947 -155.70956947 864.33215937]
[ 155.70956947 155.70956947 653.25446718]
[ 0. 0. 1. ]]
iteration 3 -> rcx, rcy: (-1.6363585842679245, -3.70295916096719e-05) <- (143.64000000000004, 503.99184584009)
transforms.Affine2D().rotate_around(-5.63753132124657, 503.9884685373975, 45)
[[ 155.70956947 -155.70956947 354.70616003]
[ 155.70956947 155.70956947 864.34846784]
[ 0. 0. 1. ]]
The values shown for tq_xs and tq_ys at the top don’t match the rcx and rcy values for each iteration.
Okay, it turns out I had a second print(f"\tplot limits: ({ax.get_xlim()}), ({ax.get_ylim()})") after the first set of calculations and before the second. In between, the transformation parameters used to get the display coordinates had changed: the markers showing the positions of the translation pivot points were enough to alter the data limits. The stuff you just don’t think about! Sorting that fixed the number mismatch, but not the location of the rotations.
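Had I thought to check, a quick test like this (a hypothetical snippet reusing the pivot lists from above) would have shown the limits moving:

# limits before plotting the pivot markers
print(ax.get_xlim(), ax.get_ylim())
for i in range(4):
    plt.plot(tq_xs[i], tq_ys[i], marker="o", markersize=20,
             markerfacecolor=clrs[i], clip_on=False, zorder=13)
ax.autoscale_view()
# limits after -- the pivot points can lie outside the existing data, so autoscaling expands the limits
print(ax.get_xlim(), ax.get_ylim())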
Is it x or is it y?
After a lot of messing about, that’s when the gotcha article I mentioned at the start came back into my thoughts.
In matplotlib, the x-axis is the second axis of an array.
So, I tried swapping the values I was using for the x and y display coordinates.
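If memory serves, the change amounted to something like this (a sketch only, with the same stand-in variable names as before):

# sketch of the swap: pass the display y value where x went, and vice versa
piv_dx, piv_dy = ax.transData.transform((tq_xs[tq], tq_ys[tq]))
rot = transforms.Affine2D().rotate_around(piv_dy, piv_dx, r_rot[tq])
shadow_transform = ax.transData + rot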
Well, an improvement of sorts. But now the colours go in reverse. Another thing I don’t understand, apparently?
Scaling Image
I haven’t yet worked out how to set the data limits so that all the rotated images fit within the axes in which I am plotting the base image. Just in case I ever decide to print and frame one of these images.
But, I am just going to double the limits set by matplotlib after I plot the base image and see how things look.
xmn26, xmx26 = ax.get_xlim()
ymn26, ymx26 = ax.get_ylim()
ax.set_xlim(xmn26*2, xmx26*2)
ax.set_ylim(ymn26*2, ymx26*2)
And finally let’s generate one using a full colour scheme and the markers removed.
Clearly still have to sort the image scaling.
Done M’thinks
This is already a lengthy (438 lines and 26 Kb) and time-consuming post. Before I try plotting some high DPI images, I want to see if I can sort out properly scaling the axes so as to contain the full image. But I will leave that for the next post, as I expect it may take considerable time and effort.
‘Til then, enjoy your time coding. And debugging!
Resources
Some repetition, but…
- Affine transformation
- Rotating Points in Two-Dimensions
- matplotlib Transformations Tutorial
- matplotlib.transforms
- Using offset transforms to create a shadow effect
- matplotlib Zorder Demo
- Gotchas with Affine Transformations in Python
- random — Generate pseudo-random numbers