Yes, 45 deg or even more would have the shadow cover more pixels and might give more resolution. I have not really thought about what the drawbacks of that approach would be. It might be more sensitive to the angle at which the filament enters the sensor, which could change by a small amount as the filament moves. It might allow a simpler approach of just thresholding the pixels (non-sub-pixel), but with an MCU needed to read the sensor anyway, why not use a subpixel approach?

The subpixel approach improves upon a more basic approach. The basic approach is to essentially count the number of 'dark' pixels in the array to estimate the shadow size (proportional to the filament diameter). This requires thresholding the pixel values to come up with a definition of 'dark'. It's a little more complicated than a raw count, because you want to search for an edge - the transition from an 'illuminated' pixel to a 'dark' pixel - and then search for the other edge, dark to illuminated.
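The basic approach can be sketched roughly like this (a minimal Python illustration; the function name, threshold value, and simulated sensor readings are all made up for the example):

```python
# Hypothetical sketch of the basic (whole-pixel) approach: threshold the
# line-sensor readings and count the 'dark' pixels between the
# illuminated->dark edge and the dark->illuminated edge.

def shadow_width_pixels(pixels, threshold=128):
    """Count dark pixels between the two shadow edges.
    Returns 0 if no shadow is found."""
    first = last = None
    for i, v in enumerate(pixels):
        if v < threshold:          # 'dark' pixel
            if first is None:
                first = i          # illuminated -> dark transition
            last = i               # keeps updating until dark -> illuminated
    if first is None:
        return 0
    return last - first + 1

# Example: a simulated 16-pixel line sensor with a roughly 5-pixel shadow
readings = [250, 248, 251, 245, 60, 12, 10, 11, 55, 249, 250, 247,
            251, 250, 248, 249]
print(shadow_width_pixels(readings))  # -> 5
```

Note the resolution limit: the two 'grey' pixels (60 and 55) land on whichever side of the threshold they happen to fall, so the count can only ever change in whole-pixel steps.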
The subpixel approach takes the basic approach (the count of dark pixels) and adds corrections for each edge. Since the filament's shadow edge does not fall exactly on a pixel boundary, it requires looking at the 3 pixels around the edge (definitely dark, grey, definitely illuminated). The algorithm uses a quadratic fit of the 3 pixel values to estimate a more precise edge location. This is done for both edges, yielding a subpixel correction amount for each edge (somewhere between 0 and 1 pixel widths). Then add up the pixels counted in the basic approach and the subpixel amounts from each edge, and you get a better width estimate.
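Here is a hedged Python sketch of the subpixel refinement. Following the Devernay paper linked below, it fits a parabola through the gradient magnitude at the three pixels around each edge and takes the parabola's vertex as the subpixel edge position; the width is then the distance between the two refined edges, which is equivalent to the whole-pixel count plus a fractional correction at each edge. All names and the simulated readings are illustrative, not from the actual sensor firmware:

```python
# Sketch of subpixel edge refinement via a parabolic (quadratic) fit of
# the gradient magnitude at the three pixels around each edge.

def subpixel_edge(pixels, i):
    """Refine the edge at interior index i: fit a parabola through the
    gradient magnitudes at i-1, i, i+1 and return its vertex position."""
    g = lambda j: abs(pixels[j + 1] - pixels[j - 1]) / 2.0
    a, b, c = g(i - 1), g(i), g(i + 1)
    denom = a - 2.0 * b + c
    return i if denom == 0 else i + 0.5 * (a - c) / denom

def filament_shadow_width(pixels):
    """Subpixel shadow width: distance between the refined falling
    (illuminated->dark) and rising (dark->illuminated) edges."""
    grad = [(pixels[i + 1] - pixels[i - 1]) / 2.0
            for i in range(1, len(pixels) - 1)]
    fall = min(range(len(grad)), key=lambda i: grad[i]) + 1  # steepest drop
    rise = max(range(len(grad)), key=lambda i: grad[i]) + 1  # steepest rise
    return subpixel_edge(pixels, rise) - subpixel_edge(pixels, fall)

# Same simulated 16-pixel readings as above: the result is near 5 pixels
# but carries a fractional part the whole-pixel count cannot resolve.
readings = [250, 248, 251, 245, 60, 12, 10, 11, 55, 249, 250, 247,
            251, 250, 248, 249]
print(filament_shadow_width(readings))
```

The 'grey' pixels (60 and 55 in the example) are exactly what drives the fractional part: they pull the fitted parabola's vertex toward whichever side of the pixel boundary the true edge falls on.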

Here is a link to the paper I used for the math: http://dev.ipol.im/~morel/Dossier_MVA_2011_Cours_Transparents_Documents/2011_Cours1_Document1_1995-devernay--a-non-maxima-suppression-method-for-edge-detection-with-sub-pixel-accuracy.pdf

The pictures in the paper do a good job of showing what's happening.