It could be done with just one camera. The "pointiness" can be derived from the surface normal, which Unity's rendering pipeline already provides. Side cameras would miss an indentation in the front surface, but a front camera would see it. The drag at each point could then be scaled by the dot product of the surface normal with the drag camera's forward vector.

I'm not sure offhand whether an indentation in a front surface would produce more drag than a totally flat surface, but either way, indentations would need to be detected to model drag accurately this way. Say the front camera uses a shader that computes that dot product and transforms it into a per-pixel drag value (see the first sketch below). Indentations would then appear to the camera as regions of lower drag outlined in high drag. A modified flood fill algorithm could then be run over these areas to represent the drag of the indentation, as long as there's a reliable way to tell the inside of such a region from the outside (second sketch below).
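For concreteness, here's a minimal CPU-side sketch of that dot-product step in Python, standing in for the actual shader. It assumes the front camera's normals have been read back into an (H, W, 3) array of world-space unit normals; the function name, sign convention, and clamping are my assumptions, not anything Unity-specific:

```python
import numpy as np

def drag_map(normals: np.ndarray, cam_forward: np.ndarray) -> np.ndarray:
    """Per-pixel drag scaled by dot(surface normal, camera forward).

    Assumed convention: cam_forward points into the scene, so a flat
    surface squarely facing the camera has dot(normal, forward) = -1.
    Negating and clamping maps flat front-facing pixels to drag = 1
    and glancing, "pointy" pixels toward drag = 0.
    """
    fwd = cam_forward / np.linalg.norm(cam_forward)
    d = -(normals @ fwd)          # -dot(n, forward), one value per pixel
    return np.clip(d, 0.0, 1.0)   # back-facing or empty pixels contribute 0
```

In Unity itself this arithmetic would live in the shader (writing to a render texture the CPU can read back); the Python here just mirrors the math.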
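And here's one way the inside/outside distinction could be made reliable, which is really the crux of the flood-fill idea: seed the fill from the image border, so any low-drag region the fill cannot reach must be enclosed by high drag, i.e. an indentation candidate. This is a sketch under assumptions of my own (the threshold, the 4-connectivity, and the function name), not a definitive implementation:

```python
from collections import deque
import numpy as np

def enclosed_low_drag(drag: np.ndarray, low: float = 0.5) -> np.ndarray:
    """Flag low-drag pixels fully enclosed by high drag.

    Flood fill is seeded from every low-drag border pixel ("outside");
    any low-drag pixel the fill never reaches is enclosed ("inside"),
    so it's marked as belonging to an indentation.
    """
    h, w = drag.shape
    is_low = drag < low                      # threshold is an assumption
    reached = np.zeros((h, w), dtype=bool)
    seeds = [(y, x) for y in range(h) for x in range(w)
             if (y in (0, h - 1) or x in (0, w - 1)) and is_low[y, x]]
    for y, x in seeds:
        reached[y, x] = True
    queue = deque(seeds)
    while queue:                             # 4-connected BFS over low drag
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and is_low[ny, nx] and not reached[ny, nx]:
                reached[ny, nx] = True
                queue.append((ny, nx))
    return is_low & ~reached                 # enclosed low-drag pixels
```

Summing the drag over the flagged pixels (or whatever function of it turns out to be physically sensible) could then stand in for the indentation's contribution.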